Why fuzzing is your friend for DevSecOps

Leaders proactively mitigate risk. One large risk they can mitigate is being blindsided by an unknown software vulnerability. Attackers who find an unknown vulnerability can potentially exploit all of an agency’s systems. When agency IT teams find a vulnerability first, they can remediate it before an attack occurs. With the increasing number of remote workers, it’s even more critical to make sure the software agencies develop and use is secure.

How are big tech companies managing this risk? By incorporating a quality assurance technique called fuzzing into their software vulnerability testing and assurance processes to uncover coding errors and security loopholes.

The Google Chrome web browser, for example, is used on billions of devices and is completely open source, allowing any attacker to review exactly how the software works. So how does Google check and protect Chrome’s millions of lines of code? With fuzzing, a dynamic and nondeterministic security testing technique that allows developers to continuously and automatically check the ever-evolving web browser, including supply chain dependencies. In 2019, Google reported finding over 20,000 vulnerabilities automatically with its in-house fuzzing toolchain. 

Google isn’t alone. Microsoft, for example, lists fuzzing as one of the steps in its Security Development Lifecycle (SDL), using it not just to find vulnerabilities, but also to improve the robustness of its own products.

Perhaps surprisingly, the Department of Defense includes fuzzing in many of its requirements. For example, the DOD Enterprise DevSecOps Reference Design requires fuzz testing, as does the Application Security and Development Security Technical Implementation Guide. 

Fuzzers are different from most software security tools. They don’t just identify problems; they also show how to trigger them. For example, fuzzing a common web server may output an HTTP request that allows the tester to crash or hack the server. As a result, fuzzing has proven much more actionable than many competing techniques.

Indeed, many are choosing fuzzing over competing technologies for three reasons:

  1. Actionable. Every fuzzing finding comes with an input that proves the vulnerability is present. As a result, users can identify real problems and not waste time chasing false positives.
  2. Automated. After a one-time configuration step per app, users can set up an automated platform that fuzzes their apps on each new release.
  3. Developer-friendly. Developers get paid primarily to develop features and improve functionality. Traditional security tools only point out flaws, but fuzzers add value by automatically building a test and evaluation suite that goes beyond security.

Ten years ago, fuzzing could only be conducted by security experts, but the technology has matured to the point that even novice developers can get up to speed quickly. Test and evaluation teams with a basic understanding of Linux can also use fuzzers.

How to get started?  

Those just starting out should try open source tools. The two most popular today are AFL and libFuzzer, both primarily targeted at developers who have source code access (more on what to do without developer participation later). Both focus on compiled applications, such as those written in C and C++.

Some fuzzers, predominantly commercial products, offer the ability to analyze compiled code, even without developer participation. For example, the Defense Advanced Research Projects Agency ran a Cyber Grand Challenge to see if fully autonomous cybersecurity (both offense and defense) was possible, without any developer involvement or source code. Tools derived from that competition can now analyze production applications written in Ada, Go, Rust and JOVIAL, as well as compiled binaries.

One limitation today is that most tools focus on code that runs (or can be compiled for) Linux. Unfortunately, good fuzzing tools are hard to find for non-Linux based systems, such as Windows or embedded operating systems. Developers working primarily on such platforms would need to set up a toolchain for testing within Linux as well as their production environment.

Is fuzzing right for you?  For agencies running software critical to the enterprise based upon compiled code (e.g., a binary), the answer is yes.

Like it or not, software will get fuzzed – whether organically in production, maliciously by attackers or proactively by developers. If attackers are willing to put in the effort of fuzzing, agencies should too, so they can find critical problems first.

*** This is a Security Bloggers Network syndicated blog from ForAllSecure Blog authored by David Brumley. Read the original post at: