A lot of people ask me how Chronicle is doing inside Google Cloud (TL;DR: doing well), and I wanted to share some good news. I also wanted to share some of the lessons we learned while building the threat detection capabilities we just released.
If you recall, we announced our YARA-L detection language at RSA 2020. Naturally, many people loved it, and our capabilities have grown since then. Here is what we learned and then built as a result:
- Our initial YARA-L implementation laid the foundation for Detect: the data foundation (where we stitch events together into stateful timelines) and, of course, the scale for historical and real-time detections. Now we have multi-event operations; we added sequence awareness, plus aggregation and windowing that work well on the petabytes of data we hold.
- We started with ATT&CK mappings for our detection content, but now our rules have additional magic as well: a rule can reference low-prevalence artifacts directly, without any additional work by the client; this works really well for some tricky detections.
- Another common question was about existing rule-creation approaches like Sigma, and now we have a way to convert Sigma rules into YARA-L. In fact, this allows us to take public Sigma code and bring some of that community content into Chronicle (following this vision).
- Obviously, some people asked to see how we use our unique threat intelligence for detections. This is now accomplished by a detection feed built by our threat research team, Uppercase.
- Finally, our approach conceptually follows the idea I covered here as “detection as code.” Specifically, it is much easier to version our YARA-L detection content, map it to frameworks, create and reuse modules, and convert to/from Sigma (for cross-vendor/cross-tool use).
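To make the multi-event, window-aware matching mentioned above more concrete, here is a toy Python sketch of the idea: find a pair of events in sequence on the same host within a time window. This is a minimal stand-in, not Chronicle's actual engine; the event shape and field names are invented for illustration.

```python
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class Event:
    ts: float   # epoch seconds
    host: str
    kind: str   # e.g. "login_fail", "login_ok"

def match_sequence(events, first, then, window_s):
    """Flag hosts where a `first` event is followed by a `then` event
    on the same host within `window_s` seconds -- a toy stand-in for
    multi-event, window-aware detection."""
    pending = defaultdict(deque)   # host -> timestamps of `first` events
    hits = []
    for e in sorted(events, key=lambda e: e.ts):
        if e.kind == first:
            pending[e.host].append(e.ts)
        elif e.kind == then:
            q = pending[e.host]
            while q and e.ts - q[0] > window_s:
                q.popleft()        # drop events that fell out of the window
            if q:
                hits.append((e.host, q[0], e.ts))
    return hits
```

The per-host deque is what keeps this single-pass and cheap even over long timelines, which is the property you need before any of this can run at petabyte scale.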
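The “low prevalence” enrichment above boils down to counting how many distinct hosts have ever seen an artifact. A toy sketch of that computation (the real system maintains this at fleet scale with no user action; the threshold and data shape here are assumptions):

```python
from collections import defaultdict

def low_prevalence(sightings, max_hosts=1):
    """Return artifacts (domains, hashes, ...) seen on at most
    `max_hosts` distinct hosts, from (host, artifact) sighting pairs.
    A toy version of the prevalence enrichment described above."""
    hosts = defaultdict(set)   # artifact -> set of hosts that saw it
    for host, artifact in sightings:
        hosts[artifact].add(host)
    return {a for a, h in hosts.items() if len(h) <= max_hosts}
```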
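Conceptually, a Sigma-to-YARA-L converter maps Sigma's log-source field names onto the destination schema and re-emits the conditions in the target syntax. Here is a deliberately tiny sketch of that translation step; both the field mapping and the emitted syntax are illustrative, not the real converter:

```python
def sigma_to_yaral(rule_name, selection, field_map):
    """Emit a YARA-L-style rule body from a (pre-parsed) Sigma
    selection block: each Sigma field is renamed via `field_map`
    and written as an event condition. Illustrative sketch only."""
    lines = [f"rule {rule_name} {{", "  events:"]
    for sigma_field, value in selection.items():
        udm_field = field_map.get(sigma_field, sigma_field)
        lines.append(f'    $e.{udm_field} = "{value}"')
    lines += ["  condition:", "    $e", "}"]
    return "\n".join(lines)
```

The hard part in practice is not the syntax but the schema mapping: every Sigma log source field has to land on a well-defined field in the destination data model.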
Here is one detection example (as a narrative and not as raw YARA-L):
Give me all the documents opened through outlook.exe that were followed by a child process that made a network connection to a low-prevalence domain and then created and launched a process with a low-prevalence hash.
Note that it references a “low prevalence domain” and a “low prevalence hash”; these are created automatically by the system and don’t require any user action. This lets us run powerful rules that gracefully mix “known bad” detection with anomaly detection.
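As a toy illustration of that narrative rule's logic, here is a Python sketch over pre-joined chain records. The field names (`parent`, `net_domain`, `child_hash`, `doc`) are invented for illustration, and the prevalence sets stand in for the system-maintained prevalence data:

```python
def detect_chain(events, low_prev_domains, low_prev_hashes):
    """Toy version of the narrative rule: documents opened via
    outlook.exe whose child process reached a low-prevalence domain
    and then launched a process with a low-prevalence hash."""
    return [e["doc"] for e in events
            if e.get("parent") == "outlook.exe"
            and e.get("net_domain") in low_prev_domains
            and e.get("child_hash") in low_prev_hashes]
```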
Together this makes Chronicle Detect work well for the kinds of advanced, complex, and subtle threats our customers face today.
So, we now have a solid argument that our detection engine and rule approach are better and help clients adopt a modern “detection as code” practice if they desire. And we still have an unbeatable argument that our scale and performance are the best.
Calls to action:
- Now, go read the official news on Chronicle Detect on the Chronicle blog and on the Google Cloud blog.
- Watch a Chronicle Detect demo video here.
- For broader context around detection, check out a recent SANS webinar we did on this topic.
- Hear from clients in the Chronicle Detect panel session at Google Cloud Security Talks.
- For a YARA-L whitepaper (and blogs 1, 2, 3, 4), go here.
*** This is a Security Bloggers Network syndicated blog from Stories by Anton Chuvakin on Medium authored by Anton Chuvakin. Read the original post at: https://medium.com/anton-on-security/chronicle-detect-is-here-63a779679e56?source=rss-11065c9e943e------2