October is National Cybersecurity Awareness Month (NCSAM) and this week’s theme is “Today’s predictions for tomorrow’s internet”.
Naked Security asked me for a “from the trenches” prediction – a prediction rooted in something practical, where I’m already preparing to spend some time and energy in the next six months.
I’m expecting fuzzing to remain an important technique in security testing, and for the sophistication of fuzzing to improve significantly.
What is fuzzing?
Fuzzing is fundamentally an automated code testing technique. It can be applied to find security problems by throwing vast amounts of tweaked and permuted (fuzzed) inputs at an application and monitoring for conditions with known security implications.
People can write clever tests, but not very many in one day. Fuzzing automates test creation, so it can produce vastly more tests than a person ever could. Typically, though, each test is quite stupid – perhaps attempting to provoke the code into an exception or crash with nothing more than random input.
The raw speed of fuzzing compensates for the low odds of an individual test actually finding anything.
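To make that loop concrete, here’s a minimal “dumb” fuzzer sketched in Python. The target parser and its planted bug are invented for illustration – the point is only the shape of the process: generate random input, run the target, and record anything that fails in an unexpected way.

```python
import random

def parse_record(data: bytes) -> int:
    """Toy target: a tiny 'parser' with a planted bug for the fuzzer to find."""
    if len(data) < 2:
        raise ValueError("too short")  # graceful rejection, not a bug
    declared_len = data[0]
    # Bug: a declared length of zero causes a division by zero.
    return sum(data[1:1 + declared_len]) // declared_len

def dumb_fuzz(target, iterations: int = 100_000, seed: int = 1):
    """Throw random bytes at the target; keep any input that crashes it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except ValueError:
            pass  # expected, handled rejection
        except Exception as exc:  # anything else is a finding
            crashes.append((data, type(exc).__name__))
    return crashes
```

Any single iteration is very unlikely to hit the bug, but at tens of thousands of iterations per second the crash falls out almost immediately – which is exactly the trade the next paragraph describes.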
If you want to run millions of tests (or more – I try to test our engine for billions of iterations in each area I consider), then you need dedicated hardware, ideally lots of it.
Fuzzers that individuals can easily get running have also been rapidly improving, with the open source American Fuzzy Lop (AFL) being the standout player for me.
AFL describes itself as:
…a security-oriented fuzzer that employs a novel type of compile-time instrumentation and genetic algorithms to automatically discover clean, interesting test cases that trigger new internal states in the targeted binary. This substantially improves the functional coverage for the fuzzed code.
Fuzzing can be used as a black box technique (working without access to an application’s source code), so as it becomes more accessible to you, it becomes more accessible to your adversaries too. That alone is reason enough to start.
One way to make fuzzing more accessible and efficient is to make it less stupid. This normally involves using knowledge of how a program works and how bugs can occur to influence the process of automated test creation.
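The coverage-guided, genetic idea behind AFL can be sketched in miniature. Everything below is an invented simplification, not AFL’s actual instrumentation: a toy target guarded by “magic byte” checks, line coverage gathered with Python’s trace hook in place of compile-time instrumentation, and single-byte flips as the only mutation. The key trick survives the simplification – any input that reaches a line the fuzzer hasn’t seen before is kept and mutated further, so the fuzzer climbs through the checks one branch at a time, something purely random input would almost never manage.

```python
import random
import sys

def target(data: bytes):
    """Toy target: the bug is only reachable behind three magic-byte checks."""
    if len(data) > 0 and data[0] == ord("F"):
        if len(data) > 1 and data[1] == ord("U"):
            if len(data) > 2 and data[2] == ord("Z"):
                raise RuntimeError("bug reached")

def run_with_coverage(data: bytes):
    """Execute the target, recording which of its source lines ran."""
    lines = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is target.__code__:
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        target(data)
        crashed = False
    except RuntimeError:
        crashed = True
    finally:
        sys.settrace(None)
    return lines, crashed

def coverage_fuzz(iterations: int = 150_000, seed: int = 7):
    """Mutate corpus members; keep any mutant that lights up a new line."""
    rng = random.Random(seed)
    corpus = [b"AAAA"]
    seen = set()
    for _ in range(iterations):
        parent = bytearray(rng.choice(corpus))
        parent[rng.randrange(len(parent))] = rng.randrange(256)  # one byte flip
        child = bytes(parent)
        lines, crashed = run_with_coverage(child)
        if crashed:
            return child  # a crashing input is the prize
        if not lines <= seen:  # new coverage: keep for future mutation
            seen |= lines
            corpus.append(child)
    return None
```

A blind random search would need on the order of 256³ attempts to produce the `FUZ` prefix; the coverage feedback reduces that to three much cheaper single-byte discoveries in sequence.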
Automatic exploration of code is hard though. Sophisticated computer programs have so many possible execution paths that attempting to trace them all causes a rapid “explosion” in complexity (known as a combinatorial explosion). There are simply too many possibilities even for a computer to cope with. (How the code is explored is a detailed topic beyond the scope of this article, but if you want to go down that rabbit hole, start with symbolic execution and then perhaps compiler transformation).
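The scale of the problem is easy to demonstrate: each independent branch doubles the number of execution paths, so the path count grows as 2ⁿ. This tiny illustration enumerates the outcomes directly for a small branch count:

```python
from itertools import product

def count_paths(num_branches: int) -> int:
    """Enumerate every True/False outcome of n independent branches."""
    return sum(1 for _ in product([False, True], repeat=num_branches))

print(count_paths(10))  # 1024 paths from just ten if/else statements
print(2 ** 30)          # thirty branches: over a billion paths
```

At a few hundred branches the path count exceeds the number of atoms in the observable universe – hence the need for the hybrid techniques described next.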
Hybrid techniques try to balance the speed of stupid tests with the greater efficiency of smarter ones, while avoiding getting lost in too many choices.
The recent winner of a $2 million cybersecurity prize used one such approach: concolic execution. That work, however, was sponsored at least in part by the USA’s Defense Advanced Research Projects Agency (DARPA), and is not likely to be released publicly anytime soon (the goal of the challenge was to automate the writing of exploits…).
As code gets harder to understand, as the volume of code written each year increases, and as more and more of our lives touch computers in some way, the use of automation to find bugs will only increase in importance.
A number of promising approaches to improving fuzzing have already been demonstrated and it feels to me that we’re almost at a breakthrough where those different techniques are combined and made public – providing any developer with the opportunity to efficiently find bugs during development, before they cause problems.
The most promising tools that I know of come from Shellphish, but I don’t think they’re yet accessible enough to count as the breakthrough I’m hoping for.