In 2005, Chris “Weldpond” Wysopal and I were colleagues working with cutting-edge tools to spot vulnerabilities in software. Back in the 1970s and 1980s, both of our fathers had worked in quality control for jet engine manufacturers. Just as our dads had used dye-penetrant testing techniques to find cracks in turbine blades, decades later we were using static analysis and fuzzing to find vulnerabilities in various software packages.
So when I saw that Sarah Zatko was giving a presentation at Black Hat in Las Vegas last month, titled “Quantifying Risk in Consumer Software at Scale,” it caught my eye. I recalled a video from 2016 in which Sarah and her husband Peiter “Mudge” Zatko talked about co-founding the non-profit Cyber Independent Testing Lab (CITL). I also remembered that Mudge and Weldpond had testified to Congress about security risks on the Internet nearly twenty years ago.
Given their brilliant insights and prior contributions to the security community, I felt compelled to learn more.
The Problem in 2017
Software development tools and processes have vastly improved over the last decade, and software has been eating the world. Yet consumers still lack objective, meaningful data for comparison shopping based on the relative security and build quality of the software in the products they purchase.
Consumers should be able to get quantifiable answers to a variety of important questions. Some are very basic: what’s the safest operating system? Or what’s the safest browser on a given operating system?
It’s difficult to get unbiased data to answer these questions. Vendors spin their own narratives, pay off reviewers, or create fake reviews. And user communities are filled with tribalism, which blinds them to their own side’s weaknesses.
Getting a good handle on more sophisticated questions is even harder. For example, which auxiliary parts of an OS or a browser …
This is a Security Bloggers Network syndicated blog post authored by Alan Krassowski. Read the original post at: Cylance Blog