During a planned maintenance window on February 18, we increased the memory of our caching/queueing component in response to rising resource utilization. As part of the same maintenance, we also changed the component's CPU class for greater consistency with our other systems, including our UAT environment. The modified system initially performed normally, but as platform traffic rose on February 19 we began to see increased operation latency, degrading API performance.

Because modifying these components requires complete platform downtime to ensure consistent queue processing, we first attempted to add front-end servers to reduce load on the backend systems. Performance improved temporarily but began degrading again as the platform approached its daily peak. We then took an emergency platform outage to revert to the original CPU class, at which point performance returned to normal levels.