OK, this is one of the causes I suspected. You have enabled multipart uploads to Amazon S3, which depend on two external factors: the response time of the Amazon servers and your PHP timeout. As you can read in the last three paragraphs of our troubleshooting instructions, some server configurations don't play nicely with that and it's advisable to disable it.
Generally speaking, it's best to use small part sizes (around 10MB) when uploading backups to remote storage. The exception is when using the native CLI script (cli/akeeba-backup.php) from a CRON job. In that case there are no execution time limits, so you can use big parts, up to 2GB, without any problems.
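For reference, a CRON entry for the CLI script typically looks something like the line below. The PHP binary path and site path are just examples and will differ on your server:

0 3 * * * /usr/local/bin/php /home/mysite/public_html/cli/akeeba-backup.php

Because this runs under the PHP CLI rather than the web server, the execution time limits discussed below don't apply to it.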
As for the execution time, it does make sense. What you actually need to do is set:
Minimum execution time: 1 second
Maximum execution time: 14 seconds
Runtime bias: 75%
This is the default and rather conservative setting. Akeeba Backup will work happily until around the 10 second mark (14 * 0.75 = 10.5 seconds, to be exact) and then it will start considering whether it should break the step. If it expects a long operation to take place, such as uploading another part to S3 or Dropbox, it will break the step, preventing the timeout error from occurring. You seem to have a high Maximum execution time, which explains why you had to lower the runtime bias a lot to get it working, and why this doesn't always happen: the runtime bias is the threshold for considering whether to break the step, not a hard limit at which the step is broken no matter what.
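To illustrate the idea, the decision boils down to something like the sketch below. This is only a rough illustration, not the actual engine code, and all of the variable and function names are made up:

<?php
// Rough illustration only — not Akeeba Backup's actual code; all names are invented.
$maxExecTime = 14;    // Maximum execution time, in seconds
$runtimeBias = 0.75;  // Runtime bias of 75%
$threshold   = $maxExecTime * $runtimeBias;  // 14 * 0.75 = 10.5 seconds

$elapsedSeconds         = 11.0; // example: time already spent in this step
$nextOperationLooksLong = true; // example: the next S3 part upload is expected to be slow

if ($elapsedSeconds >= $threshold && $nextOperationLooksLong) {
    // Break the step here and resume in a new request, so the slow upload
    // cannot push the current request past the PHP timeout.
    echo "Breaking the step before the upload\n";
}

In other words, a very high Maximum execution time pushes that threshold far out, which is why you had to compensate by lowering the runtime bias.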
Overall, I am very impressed that you grasped most of the minutiae of configuring the time limits in Akeeba Backup. Most clients throw in the towel much earlier and come here asking for a solution :)
Nicholas K. Dionysopoulos
Lead Developer and Director
🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!