Misconfigurations remain one of the most common security risks in modern IT. Simply telling organisations to “fix” the problem, however, is harder than it sounds: modern infrastructure deployments involve a myriad of technologies, and each system brings its own, often complicated, mix of hardening approaches.

What is key, then, is to identify where hardening is required and then consider the methodology for each area. Even something as simple as data storage requires detailed planning to ensure that security controls provide robust protection not just on day one but throughout the data’s lifetime, regardless of where it resides.

Understanding the Cloud’s Security Risks

For starters, it’s important to recognise that both private networks and public (cloud-hosted) environments are susceptible to these risks. For data stored in the cloud, we continue to see inappropriate access controls applied to online storage, resulting in leaked data, as well as organisations storing credentials in insecure ways.
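To make the access-control risk concrete, the sketch below checks an S3-style bucket policy for one common misconfiguration: an Allow statement granting read access to everyone. This is an illustrative simplification only (real policy evaluation also involves conditions, ACLs, and account-level public-access blocks); the function name and sample policy are assumptions, not a real provider API.

```python
import json

def policy_allows_public_read(policy_json: str) -> bool:
    """Return True if any Allow statement grants object reads to Principal '*'.

    Crude illustrative check only -- real policy evaluation (conditions,
    NotPrincipal, ACLs, public-access blocks) is far more involved.
    """
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if is_public and any(a in ("s3:GetObject", "s3:*", "*") for a in actions):
            return True
    return False

# A hypothetical world-readable bucket policy for demonstration.
public_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})
print(policy_allows_public_read(public_policy))  # prints True
```

Cloud providers ship their own audit tooling for exactly this class of check; the point of the sketch is simply how little it takes for a policy to expose data to the world.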

Unfortunately, these problems are not unique to the online world. Incorrect permissions are an easy way for insider threats to become more costly by exposing more data, and credentials stored in insecure documents or scripts remain a common way for outsiders to expand their access. Ultimately, many of the same risks exist for data regardless of where it is.
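The credentials risk can be sketched with a crude scanner for secrets embedded in scripts or config files. The patterns and sample input here are assumptions for illustration; dedicated secret-scanning tools use far richer rule sets, entropy checks, and allow-lists.

```python
import re

# Illustrative patterns only -- real scanners cover many more credential shapes.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[=:]\s*['\"]?[^\s'\"]{4,}"),
    re.compile(r"(?i)\baws_secret_access_key\s*[=:]\s*['\"]?[A-Za-z0-9/+=]{20,}"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def find_hardcoded_secrets(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match a known secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# Hypothetical script fragment with a hardcoded password.
sample = "db_host = prod.example.internal\npassword = 'hunter2!'\n"
for lineno, line in find_hardcoded_secrets(sample):
    print(f"line {lineno}: {line}")  # prints: line 2: password = 'hunter2!'
```

Running a check like this in a pre-commit hook or CI pipeline is one pragmatic way to catch credentials before they reach shared repositories.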

There have been improvements in this area in recent years, but whilst “security by default” is a common approach for IaaS providers, these systems are not foolproof and are sometimes ignored outright, which further complicates things. And for those who are still deploying physical or virtual machines in their own data centres, there is a worrying lack of default hardening applied to templated server builds.