Hi team,
I don't want to back up locally to the server, but only to Dropbox and Drive externally.
Is it possible to do this and how?
Best regards
Latest post by nicholas on Friday, 31 May 2024 10:01 CDT
You can't. As documented in https://www.akeeba.com/documentation/akeeba-backup-joomla/how-to-cloud-backup-basic-config.html, the files are first created locally, then uploaded.
This is not a bug, nor a missing feature. It is the only sensible way to do it.
Taking a backup requires two things: random access to the file, and the ability to append to or overwrite data in it.
Most remote storage providers do not offer random access to a file; you can either download the entire file or none of it. Those which do offer random file access are very slow, on the order of several seconds per operation, compared to local filesystem access, which is on the order of nanoseconds. In other words, we're talking about access that is hundreds of thousands to millions of times slower.
Likewise for appending and overwriting data.
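To make that write pattern concrete, here's a minimal, generic sketch of how most archivers write an entry: the compressed size isn't known until after compression, so the writer has to seek back and patch the header it already wrote. This is an illustration only, not Akeeba Backup's actual archiver code.

```python
import zlib

def add_entry(archive, name: bytes, payload: bytes) -> None:
    """Hypothetical archiver: append one compressed entry, then seek back to
    patch its header with the real compressed size. This write pattern needs
    both appending AND random access within the file being produced."""
    header_pos = archive.tell()
    archive.write(len(name).to_bytes(2, "little"))    # name length
    archive.write(b"\x00\x00\x00\x00")                # placeholder: compressed size
    archive.write(name)

    data = zlib.compress(payload)
    archive.write(data)                               # append the compressed data
    end_pos = archive.tell()

    archive.seek(header_pos + 2)                      # jump back into the file...
    archive.write(len(data).to_bytes(4, "little"))    # ...and overwrite the placeholder
    archive.seek(end_pos)                             # resume appending the next entry

with open("example.archive", "wb") as f:
    add_entry(f, b"index.php", b"example file contents " * 100)
```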
On a blank Joomla! installation with approximately 6,000 files and just the sample blog data installed, we need to perform around a hundred thousand of these operations. With the local disk latency being on the order of nanoseconds (thanks to filesystem caching), the backup takes 15 seconds on our test server to generate a ~35MiB backup archive. If we were to try to write directly to remote storage -- let's say Amazon S3, which is pretty fast, all things considered -- the same backup would take TWO DAYS. A more substantial site which backs up to ~600MiB would take several years to back up.
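Here's the rough arithmetic behind those numbers; the per-operation remote latency is an assumption for illustration, not a measurement:

```python
# Back-of-the-envelope reproduction of the figures above.
OPERATIONS = 100_000        # write operations for the blank site

local_total_s = 15          # the whole backup finishes in ~15 seconds locally
print(f"local: ~{local_total_s / OPERATIONS * 1e6:.0f} µs per operation on average")

remote_op_s = 1.5           # assumed: one random-access round trip to S3-like storage
print(f"remote: ~{OPERATIONS * remote_op_s / 86_400:.1f} days for the same backup")
```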
It's impossible to reduce the number of operations; it's the nature of taking a backup. You go through the entire database and filesystem, dumping their contents in a structured manner. The theoretical minimum, given infinite memory and no CPU usage limits (which, by the way, are outright impossible in the real world!), would be about 10,000 operations for the nearly empty site, which would still mean that the backup takes several hours instead of just 15 seconds. Again, that's the theoretical best case, assuming you have infinite memory, no CPU usage limits, a network connection which will never fail, and a remote storage provider with zero downtime. None of that is even in the same universe as objective reality. In the real world, more often than not the backup would simply fail, and when it doesn't fail it will still take forever.
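Using the same assumed per-operation latency as above, that theoretical floor still works out to hours:

```python
best_case_ops = 10_000      # theoretical minimum number of operations
remote_op_s = 1.5           # same assumed per-operation remote latency as above
print(f"best case, writing directly to remote storage: "
      f"~{best_case_ops * remote_op_s / 3600:.1f} hours")
```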
And that's why we didn't implement it like that.
If you want to see how hard it sucks to do things this way, there's the DirectSFTP archiver engine. Instead of creating an archive, it uploads every file being backed up to a remote SFTP server, only creating small files locally (about 1MiB each) when backing up the database. The empty site takes about an hour to back up over a local gigabit network, because the latency introduced by the network stack is several orders of magnitude higher than local disk access. And that's just uploading whole files, without asking SFTP to do random access, which is dead slow.
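The per-file cost falls straight out of those figures (a rough estimate, assuming the numbers quoted above):

```python
files = 6_000               # files on the blank Joomla! site
total_s = 3_600             # ~1 hour over SFTP on a gigabit LAN
print(f"~{total_s / files * 1000:.0f} ms per file, dominated by round-trip latency")
# A local, cached filesystem writes the same small file in well under a millisecond.
```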
Realistically speaking, the only way you could reliably write a backup directly to remote storage is if you were using NFS and the remote server was a box sitting in the same network rack, connected to yours with a direct 10Gbps or faster fiber optic or CopperLink connection. Even then, you'd need plenty of RAM as a filesystem buffer. And yes, I have tried that with a 2.5Gbps network (I don't have access to a fiber link; that's as fast as I can go with our Cat6 cables) and it's still too slow to be practical. Considering how many orders of magnitude slower API-based file storage, like what you get with remote storage providers, is, there's no way to realistically back up to remote storage directly.
Your best bet to conserve space is to use a smaller part size for split archives and enable the option to upload each backup archive part file immediately after it is created. The maximum space you will need is twice the part size, plus the size of the log file (zero if you set the log level to None), plus 1MiB for the temporary files keeping the backup engine state while the backup is running. That's how we back up medium to very large sites (anything over 300MiB) on space-constrained servers.
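As a worked example of that space budget (the 10MiB part size is just an assumption; pick whatever fits your server):

```python
part_size_mib = 10          # assumed part size for split archives
log_mib = 0                 # 0 if the log level is set to None
state_mib = 1               # temporary files holding the backup engine state

peak_mib = 2 * part_size_mib + log_mib + state_mib
print(f"worst-case local disk space needed: ~{peak_mib} MiB")
```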
Nicholas K. Dionysopoulos
Lead Developer and Director
🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!
OK, it's clear as usual!
Best regards from France.
Stéphane.
Have a pleasant weekend!