When you trust your data to a cloud service provider, do you automatically assume it will remain safe? And if a service failure occurs and your data appears lost, do you assume the provider will have backed it up somewhere, saving the day?
Earlier this month, Amazon Web Services suffered three power outages, and people lost their data. One customer was so angered that they left a post on Amazon’s blog entitled “Amazon EBS sucks; I just lost all my data.” The complaint stemmed from a notification telling the user that their volume had “experienced a failure due to multiple failures of the underlying hardware components and was unable to be recovered. We recommend recovering from your most recent snapshot.”
But newcomers to cloud computing shouldn’t assume that AWS’s redundancy will automatically restore lost data. Amazon itself states that a volume’s durability depends on its size and on how much of its data has changed since the last snapshot. So the responsibility really falls on the user to plan for failure and take EBS snapshots regularly. The problem is that new users often don’t know this.
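To make the point concrete, here is a minimal sketch of the kind of policy check an administrator might run before triggering snapshots. The volume names, the 24-hour age limit, and the inventory data are all hypothetical; in practice the timestamps would come from the EC2 API (e.g. its DescribeSnapshots call), and stale volumes would be backed up via CreateSnapshot.

```python
from datetime import datetime, timedelta, timezone

# Assumed policy: a volume's newest snapshot should be no older than 24 hours.
MAX_SNAPSHOT_AGE = timedelta(hours=24)

def needs_snapshot(last_snapshot_time, now, max_age=MAX_SNAPSHOT_AGE):
    """Return True if the newest snapshot is missing or older than max_age."""
    if last_snapshot_time is None:
        return True  # never snapshotted: back it up immediately
    return now - last_snapshot_time > max_age

# Hypothetical inventory: volume ID -> timestamp of its newest snapshot.
volumes = {
    "vol-app-data": datetime(2011, 8, 1, 6, 0, tzinfo=timezone.utc),
    "vol-logs": None,  # no snapshot has ever been taken
}

now = datetime(2011, 8, 2, 12, 0, tzinfo=timezone.utc)
stale = [vol for vol, ts in volumes.items() if needs_snapshot(ts, now)]
# stale volumes would then be passed to the EC2 snapshot API
print(stale)  # → ['vol-app-data', 'vol-logs']
```

The point isn’t the code itself but the habit: checking snapshot age continuously, rather than discovering after an outage that the last backup is weeks old.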
I think using the cloud shouldn’t be this complicated. The cloud promises ease of use and fewer worries for IT administrators. So why not deliver, especially to smaller businesses that lack a robust internal IT infrastructure to guard against data loss?
The best thing is to be prepared, and that’s why it’s smart to employ monitoring tools that track the performance of cloud platforms. Monitis’s Universal Cloud Monitoring Framework gives users the confidence that comes from an independent, third-party view of their cloud infrastructure. Even when cloud providers offer their own monitoring services (like Amazon CloudWatch), there’s an inherent conflict of interest: they have an incentive to report higher uptime. You want a customized, independent audit of your SLAs (service level agreements)!
Monitis’s Cloud Monitoring Framework helps companies:
– Control Amazon Web Services costs. Companies often see their cloud bills escalate when auto-scaling mechanisms add extra Amazon EC2 virtual servers on demand and, due to bugs or faulty configurations, processes spin out of control;
– Stay on top of monitoring and notifications for newly launched URLs — via an automatic discovery process;
– Automate agent deployment on each newly launched virtual server, saving companies setup time and enabling deep, process-level monitoring and detailed performance analysis;
– Analyze historical data on each virtual server’s start and stop and performance data, enabling IT pros to examine failure and root causes;
– Monitor installed software on each virtual server, along with other parameters, smoothing the configuration management of large numbers of cloud servers.
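The first item above, runaway auto-scaling, is easy to picture with a small sketch. This is not Monitis’s actual API; the instance IDs, ages, and policy limits are invented for illustration. The idea is simply that an independent monitor compares the running fleet against limits the company sets itself:

```python
# Hypothetical fleet report: instance ID -> hours the instance has been running.
running_instances = {
    "i-web-01": 2.5,
    "i-web-02": 0.4,
    "i-worker-17": 96.0,  # likely a stuck process auto-scaling never reaped
    "i-worker-18": 0.2,
}

# Assumed policy limits for this auto-scaling group.
MAX_INSTANCES = 3
MAX_INSTANCE_AGE_HOURS = 48

alerts = []
if len(running_instances) > MAX_INSTANCES:
    alerts.append(f"fleet size {len(running_instances)} exceeds cap {MAX_INSTANCES}")
for inst, age in running_instances.items():
    if age > MAX_INSTANCE_AGE_HOURS:
        alerts.append(f"{inst} running {age:.0f}h, over the {MAX_INSTANCE_AGE_HOURS}h limit")

for a in alerts:
    print("ALERT:", a)
```

Either alert here would have caught the cost runaway before the monthly bill did, which is the whole argument for independent monitoring.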