Hi Davide,
Yes - I double checked and the integrity check was disabled - I typically only use the integrity check when I'm doing a critical archive before taking a site off-line, and only from the Akeeba control panel. I can't remember the last time I did that. Otherwise I have a cron job that takes a backup weekly in the middle of the night, and that was the last backup that fired (3 days ago). What triggered me to look was that I was getting 'out of disk' errors on the account. What appeared to be happening is that the process would write until a roughly 3 GB log file was created, then stop writing to that file and start a new one. They grew slowly over the hours until I got alarms.
*** I do have a code suggestion for Akeeba to consider - see the end of this post.
After studying a bit - I think I know the sequence that led to this.
The one part I don't understand is why it was running the integrity check at all. I checked the configs (I only have one configuration in Akeeba) and 're-saved' it to make sure it was configured right.
But the integrity check was running - and this is what I think happened.
-> My cron ran on Sunday night in the wee hours and created a jpa. The integrity check ran (for some reason).
-> Normally, I archive my jpa's to Google Drive and delete the server copy, but I had turned off the local delete for a migration.
-> Because of that, I noticed the disk space had dropped suddenly, realized the server jpa's (I have 3 sites using that directory) were still there - so I deleted the server copies.
-> An integrity check was running on one of the jpa's when I deleted it - so the integrity check started failing, because the jpa was gone. That makes sense and fits the error.
-> That caused an infinite loop in the integrity check - so the log file started growing. At some point the max log file size was hit and a new one started. This tripped my disk space alarms again - but this time because of the multi-gig log files, not the jpa's (which are hundreds of megs, not gigs).
To resolve
-> Got my host provider to find the process and kill it.
-> Enabled SSH so I can do this myself if it happens again.
-> Deleted the logs.
-------------------
I will watch after the next cron-driven backup to see if it repeats (that will happen in 3 days). As I said - I re-saved the profiles with the integrity check off and the server copy delete (after the Google Drive upload) on.
------------------
Code suggestion.
*** I think the infinite loop where you fail and write to the log file, forever, is bad. This should be trappable, even if it is just a matter of counting how many lines you write to the log file and eventually saying 'something is broken' and stopping. It should not be an infinite loop. For most people this would have crashed the system when the disk quota filled. I'm an advanced user with SSH access and good support, so I was able to kill the process.
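Just to illustrate the kind of guard I mean - a rough sketch in Python with made-up names and thresholds. I know Akeeba itself is PHP; this is only the idea, not a patch:

MAX_LOG_LINES = 100_000        # arbitrary ceiling; 40 million lines is clearly runaway
MAX_REPEATED_ERRORS = 1_000    # the same error over and over means give up

class RunawayLogError(RuntimeError):
    """Raised when a step keeps failing and only produces log noise."""

class GuardedLogger:
    def __init__(self, path):
        self.path = path
        self.lines_written = 0
        self.last_message = None
        self.repeat_count = 0

    def write(self, message):
        # Track total volume and how often the same message repeats.
        self.lines_written += 1
        if message == self.last_message:
            self.repeat_count += 1
        else:
            self.last_message = message
            self.repeat_count = 0
        # Bail out instead of looping and logging forever.
        if (self.lines_written > MAX_LOG_LINES
                or self.repeat_count > MAX_REPEATED_ERRORS):
            raise RunawayLogError("something is broken - stopping instead of logging forever")
        with open(self.path, "a") as fh:
            fh.write(message + "\n")

With something like that in place, the failing integrity check would have stopped after a few thousand identical "archive is missing" lines instead of filling the disk.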
The log files being created would run up to 3+ GB. When I managed to download one, I couldn't even open it in Notepad++. Chrome would open it enough for me to see the beginning, but then it would die. I did not try vi. But there was something on the order of 40 million lines in the file - that should be a flag to any process to stop.
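For what it's worth, the only practical way I found to look at a file that size is to stream it rather than open it in an editor. A quick Python sketch (the path is made up) that prints the first and last few lines plus the total line count without loading the whole file into memory:

from collections import deque

LOG = "/home/site/logs/akeeba_backup.log"   # made-up path

with open(LOG, errors="replace") as fh:
    head = [line for _, line in zip(range(20), fh)]   # first 20 lines

total = 0
tail = deque(maxlen=20)                               # keeps only the last 20 lines
with open(LOG, errors="replace") as fh:
    for line in fh:
        total += 1
        tail.append(line)

print("".join(head), end="")
print(f"... {total} lines total ...")
print("".join(tail), end="")

head, tail and wc -l over SSH would do the same job.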
-----------------
I'll update this ticket if the process repeats after the Sunday backup runs.
-Bob