Update the crawl condition when continuing the loop early #178
catalystfd wants to merge 1 commit into catalyst:MOODLE_310_STABLE from
Conversation
We've encountered pathological issues where the crons appear to get stuck inside these loops and never finish, which starves the rest of the site's tasks of resources.
$persistent = new url(0, $node);
$persistent->update();
$lock->release();
$hastime = time() < $cronstop; // Check if we need to break from processing
I suspect the flawed logic might be in this loop specifically: when stracing the stuck processes, they seemed to be spinning, calling update() and then checking the lock advisory table.
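For context, here is a minimal sketch of the loop shape being discussed, not the plugin's actual code. $cronstop, $hastime, $node, $lock and the url persistent appear in the diff above; get_next_node(), the 'tool_crawler' lock type and the 60-second budget are hypothetical stand-ins.

```php
<?php
// Sketch only: the point is that the time budget is re-checked on every pass,
// including the early-continue path, so the task cannot spin indefinitely.
$lockfactory = \core\lock\lock_config::get_lock_factory('tool_crawler');
$cronstop    = time() + 60;     // hypothetical per-run time budget
$hastime     = true;

while ($hastime && ($node = get_next_node())) {
    $lock = $lockfactory->get_lock('node_' . $node->id, 0);
    if (!$lock) {
        // Another process holds this node. Re-evaluate the budget before the
        // early continue, otherwise the loop can spin on lock lookups forever.
        $hastime = time() < $cronstop;
        continue;
    }

    $persistent = new url(0, $node);
    $persistent->update();
    $lock->release();

    $hastime = time() < $cronstop; // Check if we need to break from processing.
}
```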
Perhaps it is specifically courses with a lastcrawled of null: they are always eligible for queue selection. Setting needscrawl to lastcrawled will simply set it to null as well, which means those courses remain eligible for queue selection even if they are never in recent courses, and that gives us an infinite loop. And if all the available cron processes are tied up running this loop, there is never any time or space left to update the needscrawl field.
I think this change is still good, but perhaps we also need to skip this entire if block when the node's lastcrawled is null, as in the sketch below.
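A sketch of that extra guard, assuming a hypothetical in_recent_courses() helper in place of whatever condition the plugin really uses; only the lastcrawled/needscrawl field names are taken from the discussion above.

```php
<?php
// Sketch only: skip the reset entirely when the node has never been crawled,
// so a null lastcrawled is never copied into needscrawl.
if ($node->lastcrawled !== null && !in_recent_courses($node)) {
    // Defer the node by pushing needscrawl back to lastcrawled. Doing this
    // with a null lastcrawled would leave the node permanently eligible for
    // queue selection, which is the suspected infinite loop.
    $node->needscrawl = $node->lastcrawled;

    $persistent = new url(0, $node);
    $persistent->update();
}
```

Guarding on lastcrawled here keeps never-crawled nodes eligible for a normal crawl while preventing this reset path from recycling them forever.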