We’re facing an application cybersecurity crisis. We’re shipping code faster than we can secure it, and that has handed criminals an offensive advantage. The fix? Move to a more autonomous application security pipeline. You can build an autopilot for appsec, but not with the same old tools you use today.
We know attackers are constantly looking for new vulnerabilities and creating new exploits. Google Project Zero reported that defenders face a new zero-day exploit in the wild once every 17 days, on average. Thinking you’re going to ship “unbreakable” software is just as fanciful as thinking we’ll all start driving idiot-proof cars. We need to stop thinking of an application as either safe or not and instead think in terms of moving faster than the attacker. If we could find and fix vulnerabilities faster than attackers can exploit them, we would win.
In 2016, DARPA ran a $60 million research competition to determine whether such an autopilot for appsec was possible. One surprising conclusion: None of the competitive entries used traditional industry tools like static application security testing (SAST) or software bill of materials (SBOM) tools. Instead, every competitive entry realized that fuzzing must be the foundation of any autonomous appsec program.
Designing Autonomous AppSec
There are three parts to a successful autonomous appsec program:
- Find and prove vulnerabilities first. Simply listing every possible “maybe” is useless. We must be able to accurately identify exactly where the problems are and how to trigger them.
- Propose a specific remediation such as a software patch.
- Perform assurance testing to determine whether the proposed remediation breaks existing functionality. Fixing the security problem itself is only half the battle; you must also show the fix doesn’t break existing business functionality, which is often the biggest barrier to fixes getting fielded.
Fuzzing addresses both finding and proving vulnerabilities (step 1) and providing assurance on fixes (step 3).
What exactly is fuzzing? Fuzzing is the process of automatically sampling an application’s input space and testing those inputs to elicit new program behaviors. You can think of it as the robotic equivalent of a human appsec penetration tester.
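To make the idea concrete, here is a minimal sketch of a mutation-based fuzzer in Python. The target `parse_header` is a hypothetical toy parser invented for illustration; it hides a classic out-of-bounds bug that the fuzzer uncovers by repeatedly mutating a valid seed input.

```python
import random

def parse_header(data):
    """A toy parser with a hidden bug: it assumes at least 4 bytes are present."""
    if data[:2] == b"HD":
        return f"version={data[3]}"  # bug: no length check before indexing
    return "not a header"

def mutate(rng, data):
    """Apply one random mutation: flip a byte, truncate, or append a byte."""
    data = bytearray(data)
    choice = rng.randrange(3)
    if choice == 0 and data:
        data[rng.randrange(len(data))] ^= rng.randrange(1, 256)
    elif choice == 1 and data:
        del data[rng.randrange(len(data)):]
    else:
        data.append(rng.randrange(256))
    return bytes(data)

def fuzz(target, seed, rounds=10_000):
    """Mutate the seed, feed it to `target`, and return the first crashing input."""
    rng = random.Random(1337)  # fixed seed so the run is reproducible
    for _ in range(rounds):
        candidate = mutate(rng, seed)
        try:
            target(candidate)
        except Exception:
            return candidate  # this exact input triggers the bug
    return None

# Start from a well-formed input and let the mutations explore the neighborhood.
crash = fuzz(parse_header, seed=b"HD\x00\x01")
```

Real fuzzers like AFL++ or libFuzzer add coverage feedback and far smarter mutation strategies, but the loop above is the essential idea: generate inputs, run the target, and keep any input that provokes a crash.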
If you think of an application like a maze, then finding an exploit is like finding a secret path through the maze. Fuzzers autonomously and continuously run through the maze, taking different twists and turns to discover those hidden, vulnerable behaviors. Note that today’s fuzzers are not the random fuzzers of the 1980s. Modern fuzzing tools are enterprise-ready, use advanced algorithms to navigate the maze of software intelligently, and can be automated.
Modern fuzzing creates a proof of vulnerability whenever a vulnerability is found, which is like a lightweight exploit that triggers the vulnerable line of code. Modern fuzzers also help provide assurance by continually expanding and improving code coverage (the amount of code actually tested) the longer they run. Indeed, Google reported that 40% of the bugs it found were regressions, where previously working code broke. Google has successfully employed wide-scale fuzzing on its core products to find and fix tens of thousands of bugs. Google isn’t alone: Microsoft also uses fuzzing as part of its SDLC, and NIST recommends fuzzing as part of its Minimum Standards for Vendor or Developer Verification.
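One common way to turn proofs of vulnerability into ongoing assurance is to save every crashing input and replay the whole corpus against each new build, so a fix that regresses is caught immediately. The sketch below assumes a hypothetical crash-corpus directory of `.bin` files; the directory layout and the `parse_header` target are illustrative, not any particular tool's convention.

```python
import pathlib

def parse_header(data):
    """The patched target: length is validated before indexing."""
    if len(data) >= 4 and data[:2] == b"HD":
        return f"version={data[3]}"
    return "not a header"

def replay_crashes(target, crash_dir):
    """Re-run every saved proof-of-vulnerability input; return those that still crash."""
    still_failing = []
    for path in sorted(pathlib.Path(crash_dir).glob("*.bin")):
        try:
            target(path.read_bytes())
        except Exception:
            still_failing.append(path.name)
    return still_failing

# In CI, a non-empty result fails the build: a previously fixed bug has returned.
```

Because each proof of vulnerability is a concrete input rather than a vague warning, this regression check is cheap, deterministic, and directly tied to real bugs.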
Fuzz Testing Successes With Zero-Days
What are some interesting examples of zero-days found because of fuzzing? Let’s look at two examples from just this year.
In April 2021, researchers presenting at CanSecWest reported they could hack a Tesla car with a drone using a zero-click exploit: a remote, drive-by compromise through the car’s Wi-Fi stack. The exploit was discovered by fuzzing ConnMan, the connection-manager software used by the vehicle.
In September 2021, attackers successfully launched an attack on the Ethereum digital currency that caused a blockchain split, meaning the network could be processing two parallel chains simultaneously, opening the door to a double-spend attack. The underlying vulnerability, present in both the Go and Rust Ethereum clients, was found via fuzzing.
Clearly, we need to solve the application security crisis and build more autonomous appsec pipelines to beat attackers. What’s interesting is not that fuzzing works—it does—it’s that so many enterprises still do not use it. To quote Google engineer Jonathan Metzman, “It’s important to fuzz for vulnerabilities in your code—because if you don’t, attackers will.”