
MY TAKE: How advanced automation of threat intel sharing has quickened incident response

Threat intelligence sharing is a simple concept that holds enormous promise for stopping threat actors in their tracks. So why hasn't it made more of an impact in stopping network breaches?

Related: Ground zero for cybersecurity research

Having covered the cybersecurity industry for the past 15 years, it’s clear to me that there are two primary reasons. One is the intensely competitive nature of organizations, and the other has to do with the escalating digitalization of commerce.

I had an illuminating discussion about this with Jonathan Couch, senior vice president of strategy at ThreatQuotient. We spoke at Black Hat USA 2019. ThreatQuotient is a Reston, Va.-based security vendor in the thick of helping companies make more of their threat feeds.

The company launched in 2013, the brainchild of Ryan Trost and Wayne Chiang, a couple of buddies working as security analysts in a U.S. military complex, who got frustrated by their inability to extract actionable intel from a deluge of threat feeds. For a full drill down of my conversation with Couch, give a listen to the accompanying podcast. Here are key takeaways:

Ripe for badness

Let’s face it, for-profit enterprises, and even public agencies, are geared to keep their rivals in the rearview mirror. Sharing proprietary information, even from one in-house department to the next, is simply not in their DNA. At the same time, digital transformation has redoubled the complexity of company networks, catapulting us from Big Data to Very Big Data.

Consider that 90% of the data that exists in the world was created in two years — 2017 and 2018 — and that our digital universe is on track to swell from 3.2 zettabytes to 40 zettabytes, as the Internet of Things and 5G networks take hold.

Yet today as much as 73% of all the data stored or moving across the network of a typical enterprise goes untouched by any analytics tools. Threat actors couldn't ask for a riper environment. The huge swathes of overlooked or poorly analyzed data serve very nicely as cover, thank you very much.

Security information and event management systems (SIEMs) came along about 15 years ago as a tool security analysts could use to sift data logs for suspicious traffic. But complexities bogged things down almost from day one. Along came incident response (IR) teams to focus on mitigating actual malicious events, followed by vulnerability management teams to handle patching, and then threat hunting teams to proactively flush out badness.
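
To make that idea concrete, here is a minimal sketch of the kind of log sifting a SIEM automates at vastly greater scale. The log format, indicator list and field names are hypothetical stand-ins, not any vendor's schema:

```python
# Minimal sketch of SIEM-style log sifting: match log events against
# known-bad indicators. The log format and indicators are hypothetical.
import re

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # example indicators (documentation ranges)

LOG_LINE = re.compile(
    r"(?P<ts>\S+) (?P<src>\d+\.\d+\.\d+\.\d+) -> (?P<dst>\d+\.\d+\.\d+\.\d+) (?P<action>\w+)"
)

def sift(lines):
    """Yield events whose source or destination matches a known-bad IP."""
    for line in lines:
        m = LOG_LINE.match(line)
        if m and (m["src"] in KNOWN_BAD_IPS or m["dst"] in KNOWN_BAD_IPS):
            yield m.groupdict()

if __name__ == "__main__":
    sample = [
        "2019-08-07T12:00:01 10.0.0.5 -> 203.0.113.7 ALLOW",    # hits an indicator
        "2019-08-07T12:00:02 10.0.0.6 -> 93.184.216.34 ALLOW",  # benign traffic
    ]
    for hit in sift(sample):
        print("suspicious:", hit)
```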


“You have all these functions that have grown up over the years and they don’t necessarily interact all that well,” Couch told me. “There turned out to be a lot of limitations, as far as the amount of data coming in, what the teams were able to address, how the teams communicated and what collaboration actually looked like on those teams.”

Too much data

That bottleneck precipitated the arrival of threat intelligence platforms as a means to better correlate threat feeds coming in from disparate security systems. ThreatQuotient was in this vanguard and helped introduce the use of threat libraries — receptacles for intel coming in from different teams. The idea was to pool intel from all sources, and make it readily available to all teams, so everyone operated off a common knowledge base.

“It was really about setting up a central repository,” Couch says, “where anybody could come into the library and pull out a book on a given topic and have access to everything the organization knows about the topic.”
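
A threat library is, at heart, a shared, queryable store of intel keyed by topic. Here is a toy sketch of that idea as a simple in-memory structure; real platforms persist, deduplicate and enrich this data, and the class and field names below are illustrative only, not ThreatQuotient's design:

```python
# Toy sketch of a threat library: a central repository that pools intel
# from many sources and lets any team query it by topic. The structure
# and field names are invented for illustration.
from collections import defaultdict

class ThreatLibrary:
    def __init__(self):
        self._by_topic = defaultdict(list)  # topic -> list of intel records

    def contribute(self, topic, source, record):
        """Any team (IR, vuln management, threat hunting) files intel under a topic."""
        self._by_topic[topic].append({"source": source, **record})

    def pull(self, topic):
        """'Pull the book' on a topic: everything the org knows about it."""
        return list(self._by_topic.get(topic, []))

library = ThreatLibrary()
library.contribute("apt-x-campaign", "threat-feed-A", {"ioc": "203.0.113.7", "type": "ip"})
library.contribute("apt-x-campaign", "ir-team", {"note": "seen beaconing from finance subnet"})
print(library.pull("apt-x-campaign"))  # both records, regardless of who filed them
```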

However, for every step forward, in terms of improved sharing, organizations got shoved two steps back by the relentless rising tide of Big Data. SIEMs and threat intelligence platforms could collect and catalogue threat feeds much more efficiently, alright, but there was just so much more data than any human, or team of humans, could possibly be expected to correlate.

“Everybody soon realized there wasn’t enough people to go through that amount of threat intelligence,” Couch says. “There was so much information about what the bad guys were up to that the organization simply couldn’t deal with it.”

New security stacks

So along came two new security stacks: user and entity behavior analytics (UEBA) and security orchestration, automation and response (SOAR). Both solutions leverage machine learning and leading-edge data analytics techniques. The former focuses on the behavior of users and entities, such as terminals, applications, networks, servers and connected objects, while the latter focuses on helping analysts define orchestration, automation and response processes.
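
To illustrate the UEBA half of that equation: behavior analytics boils down to baselining what a user or entity normally does, then scoring deviations. Here is a toy sketch assuming a simple mean and standard-deviation baseline; production UEBA systems use far richer models, and the metric and threshold below are arbitrary illustrations:

```python
# Toy sketch of UEBA-style scoring: baseline an entity's normal behavior,
# then flag large deviations. Real UEBA products use far richer ML models;
# the metric and the cutoff here are arbitrary illustrations.
from statistics import mean, stdev

def anomaly_score(history, observed):
    """Return how many standard deviations 'observed' sits from the baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) / sigma if sigma else 0.0

# e.g. megabytes a given user typically uploads per day
baseline = [12, 9, 15, 11, 10, 14, 13]
today = 480  # sudden bulk upload

score = anomaly_score(baseline, today)
if score > 3:  # arbitrary cutoff for illustration
    print(f"flag for review: {score:.1f} sigma above normal upload volume")
```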

Building on its foundation in threat intelligence platforms, ThreatQuotient has pivoted to supplying advanced SOAR technologies, as well. “Our big play is really on the automation side,” Couch says. “There are a lot of orchestration platforms out there that manage things like incident response playbooks . . . we’re on the automated response side; as you get intelligence in, and as you see things happening in your network, we help automate getting that information out to all the key technologies and teams, to make sure that they’re all aware of what’s going on.”
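
The dissemination pattern Couch describes, taking intelligence in and fanning it out to every relevant tool and team, looks roughly like a publish/subscribe loop. A minimal sketch follows; the subscriber names and payload shape are invented for illustration, not ThreatQuotient's implementation:

```python
# Rough sketch of automated intel dissemination: as intelligence arrives,
# push it to every registered technology and team. Subscriber names and
# the payload shape are invented for illustration.
from typing import Callable, Dict, List

subscribers: List[Callable[[Dict], None]] = []

def subscribe(handler: Callable[[Dict], None]):
    """Register a technology or team to receive new intelligence."""
    subscribers.append(handler)
    return handler

@subscribe
def update_firewall(intel: Dict):
    print(f"firewall: blocking {intel['ioc']}")

@subscribe
def notify_ir_team(intel: Dict):
    print(f"IR team: new {intel['type']} indicator {intel['ioc']}")

def on_new_intel(intel: Dict):
    """Fan incoming intelligence out to all key technologies and teams."""
    for handler in subscribers:
        handler(intel)

on_new_intel({"ioc": "203.0.113.7", "type": "ip"})  # both subscribers react
```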

As Couch described it, enterprises today face an unceasing sorting and prioritizing challenge. UEBA and SOAR solutions help identify malicious activity very quickly and with a high degree of certainty, while also equipping companies to respond appropriately, quickly and at scale.

Lofty upside

More sophisticated intel sharing systems are very cool. Yet productive, centralized sharing with so much at stake, in such a dynamic environment, remains a delicate dance. Company leaders must still do the hard work of aligning three foundational components: people, processes and technology.

“You can’t just throw technology at it,” Couch observes. “There is no silver bullet technology solution that you can put in to just solve all of your problems. Yes, you need to leverage the technology that’s out there, but you also need to train your people, and develop the right processes in order to leverage that technology.”

As companies get better at centralized sharing and automated detection and response, the effectiveness of today’s leading-edge DDoS, ransomware and APT attacks ought to decline. Threat actors will innovate, of course. But as the sharing of threat intelligence advances, internally and externally, companies and industry sectors could actually get one step ahead, for once.

“We’re allowing the analysts to say, ‘alright, here’s some important, relevant intelligence coming into the system that we’re constantly finding,’” Couch says. “Then the analysts can start to take a look at, ‘OK, how do we become more proactive?’ ”

Threat intelligence sharing is still in a nascent phase. Massive data breaches are still commonplace. But there is a potentially lofty upside. The path to proactive defense of company networks has been set forth. Let’s see where it takes us. Talk more soon.


Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)