Akeeba Backup for Joomla!

#41563 Pre-check script before Joomla Backup Task

Posted in ‘Akeeba Backup for Joomla! 4 & 5’

Environment Information

Joomla! version: 5.2.3
PHP version: 8.2.26
Akeeba Backup version: 9.9.11

Latest post by nicholas on Thursday, 06 March 2025 10:06 CST

dsic-se-support-web

Hello Akeeba support team,

We would like to know if there is a way to run a pre-check step before a backup task can be performed.

It could be in the form of an executable script, or reading a file at some path.

 

Our use case involves NFS shares as the backup target path. If the remote end is in outage, file system calls against the mount can block until they time out.

Our pre-check script would test the availability of the target output path against a timeout deadline, which I believe is not really possible at the PHP level.

 

As an aside, joomla.php is scheduled via cron to run every minute, so stuck invocations will pile up if something goes wrong.

 

Best Regards,

Marc

tampe125
Akeeba Staff

Hello,

I think you can easily fix your issue by creating a small bash script that performs this kind of check, and chaining it with the actual call to the backup process. Something like:

your_script.sh && path/to/php joomla.php akeeba:backup:take

Your script should return 0 on success, non-zero when it can't probe the backup output path.
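
For illustration, here is a minimal sketch of such a pre-check script, assuming a Linux host with the coreutils timeout command available; the script name, mount point path, and the 10-second deadline are placeholders you should adapt:

#!/bin/sh
# precheck-nfs.sh (hypothetical name): verify the backup target path is
# reachable before the real backup runs. Exits 0 on success, 1 on failure.

BACKUP_TARGET="/mnt/nfs/backups"   # placeholder: your NFS-mounted output path

# Probe the path under a hard wall-clock deadline, so a stuck NFS mount
# cannot block this script indefinitely. timeout(1) sends SIGTERM after
# 10 seconds and follows up with SIGKILL 5 seconds later if needed.
if timeout -k 5 10 ls "$BACKUP_TARGET" >/dev/null 2>&1; then
    exit 0
else
    echo "Backup target $BACKUP_TARGET is unreachable; skipping backup." >&2
    exit 1
fi

Regarding the pile-up of cron runs you mentioned: wrapping the cron entry in flock -n (a suggestion on my part, not something built into Akeeba Backup) makes an overlapping run skip instead of queue behind a stuck one:

* * * * * flock -n /tmp/akeeba-backup.lock -c '/path/to/precheck-nfs.sh && /path/to/php /path/to/joomla.php akeeba:backup:take'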

Davide Tampellini

Developer and Support Staff

🇮🇹Italian: native 🇬🇧English: good • 🕐 My time zone is Europe / Rome (UTC +1)
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

dsic-se-support-web

Thanks for the reply.

Would you mind if we suggested an improvement on the administrator backend side?

 

One of the biggest risks we would like to avoid is clogging the rest of the web server's activity with too many stuck PHP processes/threads, preventing more critical parts, such as the frontend or article editing, from operating normally.

Would it be possible for the plugin to somehow (via user session data?) remember that a file access to the storage backend is in progress, and limit concurrent access?

It would then mark the storage path as unavailable if the primary access appears to time out. This state would persist (its status would be cached) for a fixed period of time.

For the admin user, a callout message shown while browsing the admin area would remind them that this limitation is in effect.

It could be an individual setting in the options to toggle this behavior.

nicholas
Akeeba Staff
Manager

Trying to develop an integration with the operating system would require that our users only use very specifically configured servers. This is completely impractical for mass-distributed software like ours.

There are much simpler solutions.

First and foremost, you should be taking backups when your server is at low utilisation, which is usually in the middle of the night.

Stuck PHP threads are not really a problem, except for the fact that you seem to have a problematic NFS mount policy. Still, you can of course take backups using a user account which has a CPU usage limit (ulimit -t) that is enough for taking a backup, with a little bit of headroom, but not so big as to allow a stuck thread to become an issue.
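
As a sketch of that approach, here is a hypothetical wrapper you could invoke from cron instead of calling PHP directly; the 300-second CPU budget and all paths are placeholders you would size to your own backup times:

#!/bin/sh
# backup-wrapper.sh (hypothetical name): apply a CPU time cap before
# starting the backup, per the ulimit -t suggestion above.
ulimit -t 300    # placeholder: max CPU seconds for this shell and its children
exec /path/to/php /path/to/joomla.php akeeba:backup:take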

Regarding the NFS mounts, you can change the mount options to fail faster if connectivity to the remote end is lost for any reason. Off the top of my head, you can use soft (or softerr, which reports ETIMEDOUT instead of EIO) instead of the default hard, so that a timed-out operation returns an error to the caller instead of being retried forever, and set retry=2 for a maximum mount retry time of 2 minutes (instead of the default, which can be up to 10000 minutes, i.e. almost a week!), or even retry=0 for an immediate error (which might not be an option if you are connecting to the NFS mount over the Internet instead of an internal network). A failed disk operation will cause a backup failure which, of course, results in the thread terminating instead of lingering forever while potentially locking I/O in the kernel.
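
By way of example, a hypothetical /etc/fstab entry along these lines (server, export, and mount point are placeholders; option semantics are as documented in nfs(5)):

# Hypothetical NFS mount for the backup target. soft makes timed-out
# operations return an error instead of retrying forever; retry=2 makes
# the mount attempt itself give up after 2 minutes. timeo and retrans
# (not discussed above) can further tune how quickly soft gives up.
nfs-server:/export/backups  /mnt/nfs/backups  nfs  soft,retry=2  0  0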

Another option is, of course, to not go through NFS at all. Instead, have the backup transfer the archives to the remote server over SFTP (the Upload to Remote SFTP Server post-processing option). Along with the Process Each File Immediately option, you will be uploading each backup archive part as it is created, without taking up too much space on the web server.

Don't try to overengineer and overcomplicate a solution to what are garden-variety network issues affecting NFS connectivity. It won't work as well as the standard mount options, which have been around for decades and have been tried and tested by hundreds of thousands of IT professionals on countless servers.

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!
