The addBatch() method lets you group related SQL statements into a batch and submit them to the database with a single call. You can achieve a similar effect by starting a transaction, adding all your data, and then issuing a commit. How much either approach helps depends heavily on the load on your server and on record contention.
I would suggest first uploading without explicit transactions (leave autocommit on) and measuring performance. Then try using transactions with groups of records. Very large transactions can be problematic for several reasons.
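For reference, here is a minimal JDBC sketch of the batch approach with commits in groups; the connection details, table, and column names are illustrative, and the batch size is something you would tune against your own server load:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsert {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/mydb"; // hypothetical connection details
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            conn.setAutoCommit(false); // group records into explicit transactions
            String sql = "insert into a_table (col1, col2, col3) values (?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                int batchSize = 1000; // tune this against your workload
                for (int i = 1; i <= 10000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "text" + i);
                    ps.setString(3, "etc...");
                    ps.addBatch();
                    if (i % batchSize == 0) {
                        ps.executeBatch(); // submit the batch in one call
                        conn.commit();     // keep each transaction reasonably small
                    }
                }
                ps.executeBatch(); // flush any remaining rows
                conn.commit();
            }
        }
    }
}

Note that with MySQL Connector/J, adding rewriteBatchedStatements=true to the JDBC URL lets the driver rewrite such batches into multi-row inserts similar to the statement shown below.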
Also look at the parallel bulk loader in the newer MySQL Shell (mysqlsh).
In its simplest form, you could just write the insert statement like this to "bulk" insert records:
insert into a_table (col1, col2, col3) values
(1, 'text1', 'text'),
(2, 'text2', 'and more'),
(3, 'text3', 'etc...');
which will be faster than issuing n separate insert statements. For large datasets you will need to split the data across multiple queries, keeping each query's byte size below max_allowed_packet to avoid an error about exceeding the maximum allowed packet size. You can check the limit in bytes with:
mysql> show variables like 'max_allowed_packet';
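If you build the multi-row statement yourself, here is a rough sketch of the chunking logic, assuming the value tuples are already rendered as SQL text (table and method names are illustrative; real code should prefer parameterized values over string concatenation):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class ChunkedInsert {
    // Splits pre-rendered value tuples into multi-row inserts that stay under the packet limit.
    static void insertInChunks(Connection conn, List<String> valueTuples) throws SQLException {
        long maxPacket;
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("show variables like 'max_allowed_packet'")) {
            rs.next();
            maxPacket = rs.getLong(2); // the limit in bytes
        }
        // Character count is only a proxy for bytes; leave headroom, more so for multi-byte data.
        long budget = (long) (maxPacket * 0.9);
        String head = "insert into a_table (col1, col2, col3) values ";
        StringBuilder sql = new StringBuilder(head);
        try (Statement st = conn.createStatement()) {
            for (String tuple : valueTuples) { // e.g. "(1, 'text1', 'text')"
                if (sql.length() + tuple.length() + 1 > budget && sql.length() > head.length()) {
                    st.executeUpdate(sql.deleteCharAt(sql.length() - 1).toString()); // drop trailing comma
                    sql = new StringBuilder(head);
                }
                sql.append(tuple).append(',');
            }
            if (sql.length() > head.length()) {
                st.executeUpdate(sql.deleteCharAt(sql.length() - 1).toString());
            }
        }
    }
}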