#41440 Optimizing Akeeba Backup Settings for a Large Website (~15GB Files, ~5GB Database)

Posted in ‘Akeeba Backup for Joomla! 4 & 5’

Environment Information

Joomla! version: 5.2.2
PHP version: 8.3
Akeeba Backup version: 9.9.11

Latest post by nicholas on Thursday, 19 December 2024 02:07 CST

keep

Hello,

I manage a large Joomla website that consists of approximately 15GB of files and a 5GB database. Within the database, the largest table contains over 30 million records, and there are several other tables with over 1 million records.

I’ve set up a command-line ZIP backup with Akeeba Backup, uploading to Google Drive. The setup works, but the backup process itself (excluding the upload) takes over 1.5 hours to generate the archive.

To improve performance, I increased the "Number of rows per batch" to 100,000, which reduced the time, but I am unsure if there are additional settings or best practices that I can apply to further optimize the process.


Since this is a CLI-only setup running as a late-night backup, timeouts, I/O limits, and memory limits are not major concerns.

Thank you for your help!

Best regards,

nicholas
Akeeba Staff
Manager

Increasing the number of rows per batch is actually the only thing you can do to speed up the backup. Akeeba Backup sets the PHP memory limit to 1 GiB. If you have the available physical memory (i.e. your server won't start swap-thrashing), increasing the rows per batch means a larger chunk of each table being backed up is loaded into memory at once. Don't worry about setting this too high; Akeeba Backup checks the average row length and the available PHP memory, and applies a conservative scaling factor to reduce the batch size if necessary. We created this protection and made sure it works in the real world before allowing you to increase the batch size to very high numbers. If I recall correctly, the maximum limit we've set is one million rows.

Beyond that, it's just down to the CPU's single-core performance and the I/O throughput of the server's storage. Remember that server-grade CPUs tend to have loads of cores, each with relatively weak single-core performance. So, of course, the speed at which data can be read, compressed, and written to the archive will be lower than on a desktop (and most high-end laptops, to be honest).

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷 Greek: native • 🇬🇧 English: excellent • 🇫🇷 French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!
