This week, as storms engulfed the East Coast, Amazon Web Services (AWS), the cloud-computing giant, lost power in one of its four availability zones, leading to roughly six hours of connectivity issues.
I read, though, that a monitoring firm – one that both runs on AWS and provides monitoring services to customers using AWS – measured the outage from multiple locations across the country and pegged it at 44 minutes and either 42 or 44 seconds, depending on location – more precise than anything Amazon itself disclosed.
Despite the downtime, observers gave Amazon points for its fast response. Maybe they were more forgiving due to the ‘act of God’ nature of the incident.
Interestingly, the monitoring service that tracked the outage noted that because its customers increasingly use Amazon, Rackspace, and their own hardware interchangeably, the risk of failures somewhere in the mix is higher. Another point is the more basic fact that interrupted Internet access is a far more common problem than AWS going down. In that sense, cloud providers are in the same spot enterprises were in when they shifted from frame relay connections to Internet-based connectivity between sites.
That’s a good analogy.
I can certainly understand the fickle nature of weather – especially given the snows and storms of winter on the East Coast of the U.S. – and how it can affect service. But again, I have to say that cloud providers must do a better job of keeping service uninterrupted and stable if they want to convince more businesses to move off internal servers.
I was glad to see, too, that a cloud-based monitoring service tracked AWS' downtime for its customers – and more accurately than AWS itself. That's just one of many ways in which monitoring proves its value.
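The core idea behind that kind of independent measurement – probing a service at regular intervals from several vantage points and summing the gaps where it was down – can be sketched roughly as follows. This is a minimal illustration of the technique, not the monitoring firm's actual method; the `Probe` structure, the location names, and the 15-second probe interval are all my own assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Probe:
    """One health-check result from a single monitoring location."""
    timestamp: datetime
    location: str
    up: bool

def outage_duration(probes, location):
    """Sum the downtime seen from one location.

    The gap between two consecutive probes counts as downtime when
    the earlier probe reported the service as down. Because each
    location probes on its own schedule, two locations can report
    slightly different totals for the same outage -- which is
    exactly why the measurements above varied by a couple of seconds.
    """
    samples = sorted(
        (p for p in probes if p.location == location),
        key=lambda p: p.timestamp,
    )
    down = timedelta()
    for prev, curr in zip(samples, samples[1:]):
        if not prev.up:
            down += curr.timestamp - prev.timestamp
    return down

# Hypothetical probe log: up, then down for 45 minutes, then up again.
readings = [
    Probe(datetime(2024, 1, 1, 3, 0, 0), "virginia", True),
    Probe(datetime(2024, 1, 1, 3, 0, 15), "virginia", False),
    Probe(datetime(2024, 1, 1, 3, 45, 0), "virginia", False),
    Probe(datetime(2024, 1, 1, 3, 45, 15), "virginia", True),
]
print(outage_duration(readings, "virginia"))  # 0:45:00
```

The resolution of the measurement is bounded by the probe interval, which is why a denser probe schedule (or more vantage points) yields a more precise outage figure than the provider's own coarse disclosure.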