Despite organizations’ readiness to embrace the cloud – and their awareness of its risks – too many are slow to recover from data loss. For example, research from Everbridge finds that organizations take an average of 27 minutes to rally the right team of experts once they declare an incident, and those minutes add up quickly in direct costs to the business. With unplanned IT downtime costing $8,662 per minute on average, companies today are spending an average of $233,874 just to get the right team in place and begin recovery efforts.
As more businesses move critical resources to the cloud, the best protection against downtime is to stay in control of cloud data. That is easier said than done given the pace of adoption: Gartner projects the worldwide public cloud services market will grow 18 percent this year to reach $246.8 billion, up from $209.2 billion in 2016. Additionally, more than half of business-critical data will reside in the cloud by 2019, according to a recent Teradata survey.
This trend puts more data at risk of a cloud breach or security incident. In the first six months of 2017 alone we saw five major cloud application and infrastructure data breach incidents. Most notably, Cloudflare’s notorious “Cloudbleed” bug exposed data from roughly 3,400 websites (including Uber, Fitbit, and OKCupid), and Deep Root Analytics left its Amazon S3 storage misconfigured, leaking personal and political information on more than 198 million people, or about 61 percent of the U.S. population.
Protecting cloud data from downtime or breach requires organizations to know which data is most critical, where it resides and how to bring it back quickly. Here are two steps to help protect data from downtime and breach in the cloud:
#1: Use strong data classification programs
Using technologies like cloud access security brokers (CASBs) and integrated data loss prevention (DLP) tools, organizations should not only classify their data (e.g., public, private and restricted), but they should also track where it’s kept, exactly who has access and why. Such information can be invaluable when an incident hits and every second of downtime results in a financial hit to the business.
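In practice, a classification program boils down to an inventory that answers three questions for every data asset: how sensitive is it, where does it live, and who can access it and why. The sketch below is only an illustration of that record-keeping idea in plain Python – it is not a CASB or DLP product, and all names (the `ClassificationRegistry` class, the `customer-pii` asset, the example bucket and user) are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    PRIVATE = 2
    RESTRICTED = 3

@dataclass
class DataAsset:
    name: str
    location: str                # where the data resides (bucket, share, region)
    sensitivity: Sensitivity
    owners: set = field(default_factory=set)          # who has access
    access_reasons: dict = field(default_factory=dict)  # and why it was granted

class ClassificationRegistry:
    """Tracks what data exists, how sensitive it is, and who can touch it."""
    def __init__(self):
        self._assets = {}

    def register(self, asset: DataAsset):
        self._assets[asset.name] = asset

    def grant(self, name: str, principal: str, reason: str):
        asset = self._assets[name]
        asset.owners.add(principal)
        asset.access_reasons[principal] = reason

    def restricted_assets(self):
        """The first records to pull when an incident hits."""
        return [a for a in self._assets.values()
                if a.sensitivity is Sensitivity.RESTRICTED]

# Hypothetical usage
registry = ClassificationRegistry()
registry.register(DataAsset("customer-pii", "s3://acme-prod/customers",
                            Sensitivity.RESTRICTED))
registry.grant("customer-pii", "jane@acme.example", "fraud investigation")
print([a.name for a in registry.restricted_assets()])  # ['customer-pii']
```

Real deployments would populate a registry like this automatically from CASB and DLP scan results, but even a simple inventory shortens the "where is our critical data?" phase of incident response.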
#2: Practice well-understood – and tested – backup processes
Organizations move to the cloud to gain scalability and resiliency, but when data breaches or other incidents strike, few have the backup processes in place to ensure they regain access to data quickly and efficiently. In fact, three out of four companies hit by a ransomware incident in 2017 suffered two days or more of downtime while they struggled to restore their files from backups, and one-third went five days or more without access.
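Testing a backup means more than confirming the job ran: it means actually restoring files and verifying they match what was backed up. The following minimal sketch, using only the Python standard library, shows one common approach – writing a checksum manifest at backup time and checking it after a test restore. The function names and the `manifest.json` filename are illustrative, not part of any particular backup product:

```python
import hashlib
import json
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(backup_dir: Path) -> Path:
    """Record a checksum for every file at backup time."""
    manifest = {str(p.relative_to(backup_dir)): checksum(p)
                for p in backup_dir.rglob("*") if p.is_file()}
    out = backup_dir / "manifest.json"
    out.write_text(json.dumps(manifest, indent=2))
    return out

def verify_restore(restore_dir: Path, manifest_path: Path) -> list:
    """Return the files that are missing or corrupted after a test restore."""
    manifest = json.loads(manifest_path.read_text())
    failures = []
    for rel, expected in manifest.items():
        target = restore_dir / rel
        if not target.is_file() or checksum(target) != expected:
            failures.append(rel)
    return failures
```

Running `verify_restore` against a scratch directory on a regular schedule turns "we have backups" into "we know our backups restore" – the difference between two days of downtime and a quick recovery.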
Fortunately, the cloud also holds the answer to this volatile mix of critical data, increased incidents, and expensive downtime. New disaster-recovery-as-a-service (DRaaS) offerings are taking the guesswork out of cloud data recovery, enabling organizations to offload such services to experts who know exactly how to use the cloud to recover data fast while minimizing loss.
For example, CCSI’s Disaster Recovery Cloud Service lets you implement scalable, flexible and secure disaster recovery without requiring changes to your existing IT infrastructure. Since it’s hosted in the cloud, it lets you get back online quickly no matter how or where disaster strikes.
Is your critical data residing in the cloud? Let us create an effective DRaaS plan tailored directly to your needs. Learn more.
*** This is a Security Bloggers Network syndicated blog from CCSI authored by CCSI Team. Read the original post at: http://www.ccsinet.com/blog/2-ways-speed-recovery-cloud-data-loss/