Many collections on our US2 platform last night did not complete in their normal timeframe due to an issue with our predictive scaling software. The scaling software attempted to restart itself to self-heal, but could not stay running long enough to actually produce any scaling.
WHEN DID IT HAPPEN
The issue appeared around 09:22 UTC and was resolved at 14:05 UTC.
WHY DID IT HAPPEN
Yesterday, our scaling software encountered an issue that forced it to restart, but it was unable to start up successfully and therefore could not scale out the applications that service the work in our job queues. The cause of the initial restart is still under investigation by our operations engineers, and we will update this notice once we have more information.
WHAT WE COULD HAVE DONE BETTER
Our alerts caught the issue in the scaling software and attempted to restart it, but the software became stuck in a continuous restart loop. It stayed up long enough to notify our monitoring systems that it was back online, but not long enough for any scaling to occur. We are looking into ways to detect issues like this more accurately, so that the software cannot report itself healthy while it is not actually performing its work.
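To illustrate the kind of detection gap described above: a liveness signal alone ("the process started") is not enough; a health check also needs evidence of recent work and awareness of crash loops. The following is a minimal sketch of that idea, not our actual monitoring code; all names and thresholds (RESTART_WINDOW, MAX_IDLE, etc.) are illustrative assumptions.

```python
# Sketch: classify a service as healthy only when it is both stable
# (not crash-looping) and productive (recently performed real work),
# rather than merely because it reported startup.
# All thresholds below are hypothetical, for illustration only.

RESTART_WINDOW = 300   # seconds: look-back window for crash-loop detection
MAX_RESTARTS = 3       # more restarts than this inside the window => crash loop
MAX_IDLE = 120         # seconds: longest acceptable gap since the last action

def in_crash_loop(restart_timestamps, now):
    """True if the process restarted too many times in the recent window."""
    recent = [t for t in restart_timestamps if now - t <= RESTART_WINDOW]
    return len(recent) > MAX_RESTARTS

def is_doing_work(last_action_timestamp, now):
    """True only if a real action (e.g. a scaling event) happened recently,
    so the service is not just alive but actually producing output."""
    return (last_action_timestamp is not None
            and now - last_action_timestamp <= MAX_IDLE)

def health_status(restart_timestamps, last_action_timestamp, now):
    """Combine both signals into a single status string."""
    if in_crash_loop(restart_timestamps, now):
        return "crash-looping"
    if not is_doing_work(last_action_timestamp, now):
        return "alive-but-idle"
    return "healthy"
```

With a check like this, a process that restarts repeatedly, or reports startup but never performs a scaling action, would be flagged instead of being treated as recovered.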
We apologize for the inconvenience this caused our customers last night, and we are continuing to improve our processes to prevent issues like this from recurring. Thank you for your understanding; we will update this notification once more information becomes available.