Support

Akeeba Backup for WordPress

#41304 Akeeba backup with acymailing plugin

Posted in ‘Akeeba Backup for WordPress’
This is a public ticket

Everybody will be able to see its contents. Do not include usernames, passwords or any other sensitive information.

Environment Information

WordPress version
n/a
PHP version
n/a
Akeeba Backup version
n/a

Latest post by nicholas on Wednesday, 06 November 2024 01:56 CST

preflight

Thanks for a great plugin, Nicholas and crew.

These notes are for future Google searches.

We found that using Akeeba Backup with plugins that create millions of rows causes the default backup to be very slow.

AcyMailing in particular creates millions of empty rows in the wp_acym_history table to track the click history of sent emails.

The default for the Database backup engine's "Number of rows per batch" setting is 30 rows. Setting it higher, e.g. 500 (the maximum is 1000), speeds up the backup significantly.

Our 1 GB backup went from 55 minutes to 1 minute 50 seconds when we went from 30 to 500 rows per batch.

Keep in mind that the more rows per batch, the greater the potential for errors.

Thanks

nicholas
Akeeba Staff
Manager

Let me fix that for you:

We found that using any database backup method (including MySQL's own mysqldump command-line tool) with plugins that create millions of rows causes the default backup to be very slow.

MySQL is slow at iterating over tables, especially if there is no SERIAL (unsigned big integer, auto-increment, indexed) column on the table. It gets much, much slower when the table rows are of varying sizes, e.g. when they contain VARCHAR, TEXT, or BLOB columns, since this data is stored separately from the rest of the row data, which means the OS needs to do double the I/O to retrieve a single record. That sounds a lot like AcyMailing tables, right?
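As a rough illustration of why an indexed auto-increment column matters for iterating a huge table, here is a sketch comparing OFFSET-based paging with keyset paging. This is a hypothetical example using Python's built-in sqlite3 module for portability, not Akeeba Backup's actual queries; the `history` table and both functions are made up for demonstration, though the same principle applies to MySQL.

```python
import sqlite3

# Hypothetical table standing in for something like wp_acym_history.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO history (payload) VALUES (?)",
                 [("row %d" % i,) for i in range(10_000)])

def backup_offset(conn, batch=1000):
    """OFFSET paging: the database must scan and discard all skipped
    rows on every batch, so the cost grows with each page."""
    offset, total = 0, 0
    while True:
        rows = conn.execute(
            "SELECT id, payload FROM history LIMIT ? OFFSET ?",
            (batch, offset)).fetchall()
        if not rows:
            return total
        total += len(rows)
        offset += batch

def backup_keyset(conn, batch=1000):
    """Keyset paging: each batch seeks directly to its starting point
    via the indexed id column, so every page costs about the same."""
    last_id, total = 0, 0
    while True:
        rows = conn.execute(
            "SELECT id, payload FROM history WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch)).fetchall()
        if not rows:
            return total
        total += len(rows)
        last_id = rows[-1][0]

print(backup_offset(conn), backup_keyset(conn))  # both visit all 10000 rows
```

Both functions read the same 10,000 rows; without an indexed key, only the slower OFFSET-style iteration is possible.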

The best thing you can do is increase the "Number of rows per batch" setting to the maximum value of 1000 which, incidentally, is the default value.

Let me tell you a secret about why the default value is set to the maximum. Akeeba Backup asks MySQL for the average row size of the table before starting that table's backup, and will adjust the batch size downwards if it is likely to cause memory issues. As long as your table doesn't have any outliers, i.e. rows which are gigantic compared to the average, this works absolutely fantastically.
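The adaptive behaviour described above might be sketched like this. This is a simplified, hypothetical model, not Akeeba Backup's actual code; the 10 MB memory budget and the `information_schema` query mentioned in the comment are assumptions for illustration.

```python
def batch_size_for_table(avg_row_length: int,
                         memory_budget: int = 10 * 1024 * 1024,
                         max_batch: int = 1000) -> int:
    """Pick a rows-per-batch value likely to fit the memory budget.

    avg_row_length could come from a query such as
      SELECT AVG_ROW_LENGTH FROM information_schema.TABLES
      WHERE TABLE_NAME = '...'
    (hypothetical; the real engine may obtain it differently).
    """
    if avg_row_length <= 0:
        return max_batch                      # no statistics: use the maximum
    fits = memory_budget // avg_row_length    # rows that fit in the budget
    return max(1, min(max_batch, fits))       # clamp to the [1, max_batch] range

print(batch_size_for_table(200))        # small rows: stays at the 1000 maximum
print(batch_size_for_table(1_000_000))  # gigantic rows: adjusted downwards to 10
```

With typical small rows the ceiling of 1000 wins, which is why the maximum is a safe default; only unusually large average rows push the batch size down.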

The only reason this is a configurable option is in case you do have a table with an outlier record, or a server so slow it will make a snail crawling on flypaper look positively supersonic. A value of 50 only makes sense if your server could run in a race by itself and still get lapped by the leader, if you follow my drift. It's not a value I'd ever use in production.

In general, you should not adjust this setting unless you have a very good reason to do so. In the vast majority of cases the default of 1000 works great and is the best compromise between speed and the risk of running out of memory or hitting a timeout.

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

Support Information

Working hours: We are open Monday to Friday, 9am to 7pm Cyprus timezone (EET / EEST). Support is provided by the same developers writing the software, all of whom live in Europe. You can still file tickets outside of our working hours, but we cannot respond to them until we're back at the office.

Support policy: We would like to kindly inform you that when using our support you have already agreed to the Support Policy which is part of our Terms of Service. Thank you for your understanding and for helping us help you!