Cloud Security Challenges in 2020

Cloud computing has given organizations the ability to harness the power of advanced infrastructure without incurring the upfront costs traditionally required for on-premises networks. Provisioning IT resources requires little knowledge of the underlying infrastructure: users can create resources with a few simple configuration choices, and deployment takes little more than a few clicks of the mouse.

While beneficial for organizations, superficial knowledge of a specific IT resource can leave it vulnerable to myriad cybersecurity issues. For example, a poorly configured AWS S3 bucket can expose sensitive data; misconfigurations of this kind have been linked to data exposures affecting Netflix, Ford, TD Bank, Capital One and thousands of other organizations.

Understanding the Challenges of Securing the Cloud

Full traffic mirroring and packet analysis have long been the most reliable ways to ensure that no important event is missed and that complete forensic data is available in case of a security incident. For on-premises infrastructure, mirroring traffic and performing proper logging incurs no additional cost, but cloud providers such as AWS charge for each VPC traffic mirroring session as well as for the bandwidth needed to transfer the mirrored data.
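This trade-off is easiest to see as simple arithmetic. The sketch below models the monthly cost of mirroring traffic in the cloud; the per-interface hourly rate and per-gigabyte transfer price are illustrative assumptions, not quoted AWS pricing, so check the current price list for your region before budgeting.

```python
# Rough monthly cost model for cloud traffic mirroring.
# NOTE: eni_hourly_rate and transfer_rate_per_gb are illustrative
# assumptions, not actual AWS prices.

HOURS_PER_MONTH = 730  # average hours in a month

def mirroring_cost(enis: int, gb_transferred: float,
                   eni_hourly_rate: float = 0.015,
                   transfer_rate_per_gb: float = 0.02) -> float:
    """Estimate the monthly cost of mirroring `enis` network
    interfaces plus transferring the mirrored data."""
    session_cost = enis * eni_hourly_rate * HOURS_PER_MONTH
    transfer_cost = gb_transferred * transfer_rate_per_gb
    return round(session_cost + transfer_cost, 2)
```

Even with modest assumed rates, the session fees dominate as the number of mirrored interfaces grows, which is exactly the pressure that tempts organizations to mirror less.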

To reduce mirroring costs, organizations often stop mirroring traffic they consider unnecessary. Unfortunately, this is a significant risk: the discarded traffic may contain exactly the evidence needed for effective forensics after a cybersecurity event. Missing data can make it impossible to identify vulnerabilities or monitor day-to-day events.

Distributed denial-of-service (DDoS) attacks remain a threat to organizations as bad actors keep developing better offensive measures. Advanced persistent threats (APTs) involving eavesdropping, malware or ransomware can take months to detect and several weeks to contain; Stuxnet is an infamous example of how attackers can remain undetected for long periods of time.

Without proper traffic and logging data, intrusion detection suffers, and the resulting data breaches can cost far more than budgeting for mirroring and data aggregation would have.

However, even organizations that do integrate proper monitoring solutions face a barrage of false-positive alerts. Too many false positives lead to analyst fatigue, which runs the risk of missing signals of an actual attack.

How SIEM Evolved to Solve These Challenges

The term security information and event management (SIEM) was coined in the 2000s, but logging network traffic and identifying threats have been practiced for decades. SIEM is an essential part of cybersecurity defenses, giving organizations a way to quickly contain threats before attackers can exfiltrate large amounts of data or inject malware into systems.

Traditional SIEM systems worked with limited data and presented information to human analysts, who further researched each notification. SIEM systems built in the mid-2000s were much more powerful than their ’90s predecessors, but they were resource-intensive and scaled poorly. An organization would need to scale vertically, adding more CPU and RAM to continue managing data aggregation and reporting as the business grew.

With evolved SIEM 2.0 solutions, organizations no longer needed to worry about computing power, but a new issue emerged: an overwhelming volume of data. The built-in logging and reporting features of each solution gave IT only basic information that couldn’t be used for advanced forensics and threat detection.

What Makes the Perfect SIEM Solution?

The right SIEM solution addresses all of these challenges, including data aggregation and analysis. Not every IT administrator is a cybersecurity expert, yet administrators are the people responsible for allocating cloud resources while defending them against numerous cyberattacks. A SIEM solution should let these administrators monitor systems properly and easily, even without an analytical security background. Because large amounts of data are stored across many systems, a good SIEM will consume and analyze logs without retaining unnecessary data. Making data optimization a priority also helps companies reduce the cost of large storage silos.

The ideal next-generation SIEM must be easy to use while still providing advanced cybersecurity capabilities. It should also be flexible enough to work with any organization and its unique business characteristics, and comprehensive enough to cover all aspects of data and traffic analysis, all without breaking the bank.

Seamless Data Integration

The foundation of a good SIEM is data aggregation. Administrators must be able to easily configure a SIEM to retrieve raw data from any location, including cloud storage. The right SIEM can use data from multiple log sources, applications and data aggregation locations, and it should use as much raw data as possible in its analysis to produce the best insights.
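In practice, aggregation means normalizing logs from heterogeneous sources into one schema before analysis. The sketch below is a minimal illustration with two assumed source formats (a JSON application log and a syslog-style line); a real SIEM supports many more formats and handles malformed input gracefully.

```python
import json

# Minimal sketch: normalize log lines from different sources into a
# common schema. The "app-json" and "syslog" formats here are
# illustrative assumptions, not a real product's parser list.

def normalize(line: str, source: str) -> dict:
    """Return a {ts, host, msg} record for a raw log line."""
    if source == "app-json":
        rec = json.loads(line)
        return {"ts": rec["time"], "host": rec["host"], "msg": rec["message"]}
    if source == "syslog":
        ts, host, msg = line.split(" ", 2)
        return {"ts": ts, "host": host, "msg": msg}
    raise ValueError(f"unknown source: {source}")
```

Once every event lands in the same shape, downstream analysis and correlation no longer need to care where the data came from.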

Full packet inspection of mirrored traffic data is necessary for advanced cybersecurity defenses. It isn’t enough to simply analyze basic traffic metadata (e.g. source IP, destination IP and protocol); deep analysis must also include the raw payload. Without this depth of analysis, a SIEM could overlook attacks such as a malicious SQL query sent to a database server.

Leveraging Enriched Data

Cloud solutions provide the ability to log every aspect of network activity, but an organization needs a SIEM that can actually use that data. When searching for a new SIEM, the organization needs one that can work with existing collected data as well as any logging data that could be collected from infrastructure provisioned in the future.

The solution must work with any collected data, analyze it intelligently and alert administrators when necessary. An ideal SIEM collects much more precise information about a traffic source (e.g. WHOIS lookup data, IP geolocation and common strings used by domain generation algorithms) to enrich events with the right level of context.
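The enrichment step itself is straightforward to picture. In this sketch, two in-memory lookup tables stand in for real WHOIS and GeoIP services (which would be network calls in production); the table contents are invented for illustration.

```python
# Illustrative stand-ins for real WHOIS / GeoIP services.
GEO_DB = {"203.0.113.9": "DE", "198.51.100.4": "US"}
WHOIS_DB = {"203.0.113.9": "ExampleHost GmbH"}

def enrich(event: dict) -> dict:
    """Attach geolocation and WHOIS context to a raw connection event."""
    ip = event["src_ip"]
    return {
        **event,
        "geo": GEO_DB.get(ip, "unknown"),
        "whois_org": WHOIS_DB.get(ip, "unknown"),
    }
```

An analyst looking at the enriched event immediately sees not just an IP address but who owns it and where it originates, which is often the difference between dismissing and escalating an alert.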

Intelligent Insights Without Causing Analyst Fatigue

Imagine your analysts being bombarded with false positives. If they don’t trust a SIEM’s alerts, they grow cynical toward every notification. This fatigue leaves the organization open to attack when crucial events are ignored. The answer is a SIEM intelligent enough to understand a system’s behavior deeply and give analysts the context they need to decide whether to investigate or dismiss an alert.

AI has shown promise at reducing the number of false positives and identifying complex patterns indicative of an advanced persistent threat. It removes much of the analyst fatigue associated with older SIEMs and introduces an intelligent system whose insights genuinely strengthen cybersecurity defenses.

A SIEM that leverages AI consumes historical and current logs and performs user and entity behavior analytics (UEBA). Specific traffic patterns and user behavior are used to establish thresholds, and human analysts receive alerts only when those thresholds are crossed.

When analysts receive an alert, they know it must be taken seriously and investigated further. For instance, if file access spikes during off-peak hours on a resource that typically receives very few requests, the suspicious pattern triggers an alert.
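The thresholding idea behind that example can be sketched with basic statistics: learn a baseline from historical access counts, then alert only when a new observation deviates far from it. The 3-sigma cutoff below is an illustrative choice, not a recommendation; real UEBA models are considerably more sophisticated.

```python
from statistics import mean, stdev

def is_anomalous(history: list, observed: int, sigmas: float = 3.0) -> bool:
    """Flag `observed` if it deviates from the historical baseline by
    more than `sigmas` standard deviations (illustrative 3-sigma rule)."""
    baseline, spread = mean(history), stdev(history)
    return abs(observed - baseline) > sigmas * spread
```

With a baseline of roughly five accesses per hour, a normal reading stays silent while a sudden spike to fifty fires an alert, which is exactly the false-positive discipline the section describes.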

Reports That Make Sense to Any IT Ops Administrator

Intelligent insights make it easier for administrators to decipher reports and alerts. Not every administrator is a cybersecurity analyst, so the right SIEM must provide dashboards and monitoring alerts easily understood by administrators, operations and developers. These reports and dashboards become even more useful when the solution uses AI and machine learning to tell administrators what type of attack has occurred and which tools can mitigate and contain it.

It’s also important to monitor suspicious traffic without bad actors realizing they are being watched. At the moment, one of the best ways to achieve this is to monitor raw traffic directly via traffic mirroring sessions; the ideal SIEM uses this to its advantage to make the organization more secure.

Conclusion

The introduction of cloud computing has been a boon to just about every organization, large and small, but it brings a set of challenges unlike those of on-premises infrastructure. Cloud computing exposes the organization to numerous attacks, but a SIEM can be your best defense, detecting them before a critical data breach occurs.

Choosing the right SIEM is important: it integrates with your entire system and will be difficult to replace. The right solution scales with your organization and provides the features analysts need for real-time information in a fast-paced, highly dynamic cloud environment.

Ariel Assaraf

Ariel Assaraf is CEO of Coralogix. A veteran of the Israeli intelligence elite, he founded Coralogix to change how people analyze their operation, application, infrastructure, and security data — one log at a time.
