Service disruption
Incident Report for Castle
Postmortem

On Sunday, April 4th, 2021, beginning at 13:56 UTC, Castle's /authenticate endpoint was unavailable. Our teams promptly responded and service was restored at 14:09 UTC.

We've conducted a full retrospective and root-cause analysis and determined that the original cause of the incident was a hardware failure (confirmed by AWS Support) on the AWS host instance running Castle's managed cache service. This hardware failure caused an accumulation of timeouts, resulting in some application instances being marked unhealthy and restarted automatically in a loop.

Although rare, we do expect occasional hardware-level failures, and our system is designed to be resilient to these failures whenever possible. In this case, the accumulated timeouts caused the system to behave in a way we had not seen before. We have re-prioritized our engineering team to implement 'circuit breaker'-style handling around cache look-ups, which will prevent future cache-layer failures from impacting synchronous endpoints such as /authenticate.
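
For illustration only, the sketch below shows one common way to apply the circuit-breaker pattern to cache look-ups. The names (CacheCircuitBreaker, failure_threshold, recovery_timeout, cache_get) are hypothetical and do not reflect Castle's actual implementation; the idea is that after a run of consecutive cache errors or timeouts the breaker opens and look-ups fail fast until a cooldown elapses, so a degraded cache layer cannot stall synchronous request handling.

    import time

    class CacheCircuitBreaker:
        """Illustrative circuit breaker around cache look-ups (not Castle's code).

        After failure_threshold consecutive cache errors the breaker "opens"
        and look-ups are skipped for recovery_timeout seconds, so a failing
        cache cannot block synchronous endpoints with repeated timeouts.
        """

        def __init__(self, failure_threshold=5, recovery_timeout=30.0):
            self.failure_threshold = failure_threshold
            self.recovery_timeout = recovery_timeout
            self.failures = 0
            self.opened_at = None  # None means the breaker is closed

        def _is_open(self):
            if self.opened_at is None:
                return False
            if time.monotonic() - self.opened_at >= self.recovery_timeout:
                # Half-open: allow the next look-up through as a probe.
                return False
            return True

        def lookup(self, cache_get, key):
            """Return the cached value, or None to signal a miss/fallback."""
            if self._is_open():
                return None  # Fail fast: skip the cache while the breaker is open.
            try:
                value = cache_get(key)
            except Exception:
                self.failures += 1
                if self.failures >= self.failure_threshold:
                    self.opened_at = time.monotonic()
                return None  # Treat the error as a miss instead of blocking.
            self.failures = 0
            self.opened_at = None
            return value

A request handler using such a wrapper would treat None as a cache miss and continue without the cached data, rather than waiting on a cache layer that is already timing out.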

Posted Apr 08, 2021 - 14:29 UTC

Resolved
The system is back to normal. We will follow up with more details about this incident.
Posted Apr 04, 2021 - 14:26 UTC
Update
API endpoints are responding normally again. Queued requests are catching up. We are monitoring.
Posted Apr 04, 2021 - 14:13 UTC
Investigating
We're experiencing timeouts on API endpoints. We are investigating.
Posted Apr 04, 2021 - 13:56 UTC
This incident affected: Legacy APIs.