Let me fix that for you:
We have found that backing up tables with millions of rows, such as those created by some plugins, is very slow with any database backup method, including MySQL's own mysqldump command line tool.
MySQL is slow iterating over tables, especially if there is no SERIAL (unsigned big integer, auto-incrementing, indexed) field on the table. It gets much, much slower when the table rows are of varying sizes, i.e. when they contain VARCHAR, TEXT, or BLOB columns, since that data is typically stored separately from the rest of the row, which means the OS needs to do double the I/O to retrieve a single record. That sounds a lot like AcyMailing tables, right?
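To make the difference concrete, here is a rough Python sketch. It is not Akeeba Backup's actual code; the connection details, the table name and the `id` column are made up for illustration. With a SERIAL-style key you can walk the table in batches by key range, which is a cheap index seek every time; without one you are stuck with LIMIT offset, count paging, which makes MySQL read and discard every row before the offset on each successive batch:

    import mysql.connector  # assumption: the mysql-connector-python driver

    conn = mysql.connector.connect(
        host="localhost", user="backup", password="secret", database="example_site"
    )
    cur = conn.cursor()

    BATCH = 1000   # rows per batch
    last_id = 0    # hypothetical `id` column: SERIAL, i.e. BIGINT UNSIGNED AUTO_INCREMENT, indexed

    while True:
        # Key-range batching: MySQL seeks straight to id > last_id via the index,
        # so every batch costs roughly the same no matter how deep into the table we are.
        cur.execute(
            "SELECT * FROM some_big_table WHERE id > %s ORDER BY id LIMIT %s",
            (last_id, BATCH),
        )
        rows = cur.fetchall()
        if not rows:
            break
        last_id = rows[-1][0]  # assumes `id` is the first column of the table
        # ... write `rows` out to the backup archive here ...

    # Without such a column the fallback is "LIMIT 250000, 1000" style paging:
    # MySQL has to walk and throw away the first 250000 rows on every single batch,
    # so the backup gets progressively slower -- exactly the behaviour described above.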
The best thing you can do is increase the "Number of rows per batch" setting to its maximum value of 1000 which, incidentally, is also the default value.
Let me tell you a secret about why the default value is set to the maximum. Akeeba Backup asks MySQL for the average row size of each table before starting that table's backup, and adjusts the batch size downwards if a full batch is likely to cause memory issues. As long as your table doesn't have any outliers, i.e. rows which are gigantic compared to the average, this works absolutely fantastically.
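If you are curious what that adjustment looks like in principle, here is a minimal sketch in the same vein as the previous one. It is illustrative only, not the actual implementation; the 16 MB memory budget and the function name are assumptions made for the example. MySQL exposes the average row length through information_schema, and the batch size is simply capped so that a full batch of average-sized rows stays within budget:

    def batch_size_for(cursor, schema, table, max_batch=1000, memory_budget=16 * 1024 * 1024):
        """Pick a batch size from the table's average row length (illustrative only)."""
        cursor.execute(
            "SELECT AVG_ROW_LENGTH FROM information_schema.TABLES "
            "WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s",
            (schema, table),
        )
        row = cursor.fetchone()
        avg_row_length = row[0] if row else 0
        if not avg_row_length:           # empty table, or statistics not collected yet
            return max_batch
        # Never exceed the maximum of 1000; shrink it when 1000 average-sized rows
        # would not comfortably fit in the assumed memory budget.
        return max(1, min(max_batch, memory_budget // avg_row_length))

A single outlier row that is, say, a hundred times larger than the average obviously throws this estimate off, which is exactly the case the configurable setting exists for.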
The only reason this is a configurable option is in case you do have a table with an outlier record, or a server so slow it will make a snail crawling on flypaper look positively supersonic. A value of 50 only makes sense if your server could run in a race by itself and still get lapped by the leader, if you follow my drift. It's not a value I'd ever use in production.
In general, you should not adjust this setting unless you have a very good reason to do so. In the vast majority of cases the default of 1000 works great and is the best compromise between speed and the risk of running out of memory or hitting a timeout.
Nicholas K. Dionysopoulos
Lead Developer and Director