Security bugs and flaws: Both bad, but in different ways
Security flaws are different from bugs, but they endanger the security of applications and systems just the same. Here’s how to find and fix design flaws.
Not all software defects are equal.
That should be self-evident. Given that millions of lines of software code are written by thousands of humans, all working under pressure, it is inevitable that the code will be littered with different types of mistakes, some more severe than others.
Those mistakes range from functional, compilation, runtime, syntax, and logic errors to missing commands, communication problems, and so on. They can make an app malfunction or crash, and they can also make it vulnerable to attack.
But we’re not just talking differences between individual defects. There is also an entirely different class of defects that occur in the design of an app or other product built with software. These are not simple mistakes in a line of code that can be found with an automated tool and fixed with a few keystrokes. Instead, they’re mistakes in the functional structure.
At Synopsys, we call the coding mistakes “bugs” and the design mistakes “flaws.” While these are not standard industry terms, they are useful, in part because bugs and flaws create different risks and because bugs get most of the attention while design flaws tend to get overlooked.
An example of flaws vs. bugs
A physical illustration of the difference is the notorious Tacoma Narrows Bridge in Washington State, which spanned a strait of Puget Sound. It opened in July 1940 but collapsed Nov. 7 of that same year in a 40 mph wind. Engineers said moderate winds produced “aeroelastic flutter” that was “self-exciting and unbounded.”
And that was due to a catastrophic design flaw. Even if the bridge had been constructed exactly according to specifications, which it probably was, it was doomed to fail because of faulty design.
“It’s not that someone forgot to pour a section of concrete or accidentally forgot to put one bolt on, which would’ve been the equivalent of bugs,” said Sammy Migues, principal scientist at Synopsys. “The problem was that the bridge was not built for the design parameters required.”
The lesson for software: It is crucial to address both security flaws and bugs if you want your networks, systems, and applications to be secure.
“Not all the software in your program is an application,” said Migues, co-author of the BSIMM (Building Security In Maturity Model) for the past decade. “So if you worry only about the code in your application, you probably aren’t paying attention to a bunch of your software. That’s bad.”
Indeed, a rough estimate is that half of all software security defects are design flaws. If you ignore them—and hackers hope you do—they provide a fertile attack surface.
Why it’s harder to fix a flaw than a bug
So why do security flaws get so much less attention? Probably because it is more difficult, time consuming, and expensive to find and fix them.
Even if your code contains thousands of bugs, automated tools (static, dynamic, and interactive analysis, along with software composition analysis, or SCA) can help your developers find and fix them, sometimes even in real time as they work.
That means fixing bugs is, relatively speaking, quick, easy, and inexpensive. “If we didn’t catch an error when it occurred and it made the application crash, if we just change a line of code, then poof, it will work correctly,” Migues said.
A flaw, by contrast, is often much more subtle than an “off-by-one” error in an array reference or the use of an incorrect system call.
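To make the bug side of that contrast concrete, here is a minimal sketch of the classic off-by-one, in Java. The class and method names are illustrative only; the point is that a tool can flag it and a keystroke can fix it.

```java
// A minimal sketch of the classic "off-by-one" array bug.
// Names are hypothetical; the pattern is what matters.
public class OffByOne {
    public static int sum(int[] values) {
        int total = 0;
        // Buggy version: for (int i = 0; i <= values.length; i++)
        // The "<=" reads one element past the end of the array and
        // throws ArrayIndexOutOfBoundsException at runtime.
        // The fix is a single character:
        for (int i = 0; i < values.length; i++) {
            total += values[i];
        }
        return total;
    }
}
```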
“A design is a protocol between two things,” Migues said. “It could be how a file is built or the methodology for logging.”
“A design flaw would be saying, ‘I’m going to allow this application or this microservice to accept any number of requests at any speed from any source. There will be no velocity checker, no identity and access control, no access management.’ That’s a design flaw. It’s not just screwing up a line of code.”
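To see why that is a design decision rather than a coding one, here is a minimal sketch, with hypothetical names, of the kind of velocity checker such a design omits: a fixed-window rate limiter that caps requests per source. No single line of the flawed service is wrong; what is missing is the decision to have this component at all.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a "velocity checker": a fixed-window rate limiter that
// caps how many requests each source may make per time window.
// All names here are hypothetical, not from the article.
public class VelocityChecker {
    private final int maxRequestsPerWindow;
    private final long windowMillis;
    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    public VelocityChecker(int maxRequestsPerWindow, long windowMillis) {
        this.maxRequestsPerWindow = maxRequestsPerWindow;
        this.windowMillis = windowMillis;
    }

    // Returns true if a request from this source is within limits.
    public boolean allow(String sourceId) {
        long now = System.currentTimeMillis();
        Window w = windows.compute(sourceId, (id, cur) ->
            (cur == null || now - cur.start >= windowMillis)
                ? new Window(now)
                : cur);
        return w.count.incrementAndGet() <= maxRequestsPerWindow;
    }

    private static final class Window {
        final long start;
        final AtomicInteger count = new AtomicInteger();
        Window(long start) { this.start = start; }
    }
}
```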
Unfortunately, finding design flaws is more labor intensive than finding bugs, and it takes significant expertise. Which explains why organizations are still not doing it nearly enough.
Finding security flaws with threat modeling
Migues said organizations have made incremental progress in doing design review, but only with the most basic version, called threat modeling (TM). The “deeper dive” version, called architecture risk analysis (ARA), not so much.
Migues said on a scale of 1–10, TM would be in the 1–3 range, a mix of TM and ARA would be 4–6, and an intense ARA would be in the 7–10 range.
“Many more organizations have started doing TM, which is basically saying, ‘Based on what we know, does anything look wrong with this design?’ By and large it doesn’t break designs—it uses the threat modeler’s experience to see if anything is missing or if anything is being done in a way where it’s made things go wrong before,” he said.
That is worth something. But as Migues puts it, threat modeling is a bit like determining that little rocks or medium rocks won’t break your window, but it doesn’t find out whether big rocks will break it.
Finding security flaws with architecture risk analysis
Architecture risk analysis could tell you more about the big rocks, but there is not much of that going on.
Beyond basic threat modeling, “even if you move into the 4–6 range, the amount of effort being spent tails off drastically,” he said. “And when you start getting into 8 and above, that’s for special things like building an implantable medical device or a self-driving car.”
That is because it takes a very deep design review to find deep security flaws. “An ARA on an implantable medical device or a Tesla could take two to nine months,” Migues said.
“ARA is a human-intensive process. No matter how far we’ve gone down the CI/CD or DevOps yellow brick road, that doesn’t work for TM and ARA. We can do some data gathering, we can help automate little tiny pieces of the process, but unless you have people with skills, you can’t even do TM, never mind ARA. This is for the big-boy SMEs who wear the long pants.”
Of course, that takes more time and money. Which means that, as Migues puts it, design review is still hard. And given the pressure for “feature velocity,” it means that when it comes to finding design flaws, “we are probably stuck at threat modeling. There is no technology to turn this into automation, so it was a problem before, it’s a problem now.”
But that doesn’t mean the only thing to do is throw up your hands and ignore half of your software security vulnerabilities. There is help out there, for individuals, teams, and organizations.
Individuals
A good place to start is with the security commandments of whichever platform you’re using, such as the OWASP Top 10 list. These cover areas such as HTTPS cookie settings, input sanitization, and weak access control lists (ACLs), all of which developers can take the lead in securing.
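As one illustration of the cookie item, here is a minimal sketch using the standard Servlet API (the cookie name and helper method are hypothetical): marking a session cookie Secure and HttpOnly so it travels only over HTTPS and is hidden from client-side scripts.

```java
import javax.servlet.http.Cookie;              // jakarta.servlet in newer containers
import javax.servlet.http.HttpServletResponse;

// Sketch of basic cookie hygiene. The cookie name and helper method
// are hypothetical; the Secure/HttpOnly flags are the point.
public class CookieHygiene {
    static void addSessionCookie(HttpServletResponse response, String sessionId) {
        Cookie cookie = new Cookie("SESSIONID", sessionId);
        cookie.setSecure(true);   // send only over HTTPS
        cookie.setHttpOnly(true); // invisible to client-side JavaScript
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}
```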
There is also a white paper published by the IEEE Center for Secure Design titled Avoiding the Top 10 Software Security Design Flaws. If you follow the directives, you will “build security in” to your design, and leave fewer security flaws to find. They include:
- Earn or give, but never assume, trust.
- Use an authentication system that is tamper-proof and cannot be bypassed.
- Authorize after you authenticate.
- Strictly separate data and control instructions, and never process control instructions received from untrusted sources. (A sketch of this directive follows the list.)
- Define an approach that ensures all data are explicitly validated.
- Use cryptography correctly.
- Identify sensitive data and how they should be handled.
- Always consider the users.
- Understand how integrating external components changes your attack surface.
- Be flexible when considering future changes to objects and actors.
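To ground one of those directives, here is a minimal sketch of “strictly separate data and control instructions” using plain JDBC. A parameterized query keeps untrusted input in the data channel, where it can never rewrite the SQL control structure. The table and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of "strictly separate data and control instructions" with
// JDBC. Table and column names are hypothetical; the caller is
// responsible for closing the statement and result set.
public class UserLookup {
    public static ResultSet findUser(Connection conn, String untrustedName)
            throws SQLException {
        // Never build SQL by concatenating untrusted input:
        //   "SELECT id, name FROM users WHERE name = '" + untrustedName + "'"
        PreparedStatement stmt =
            conn.prepareStatement("SELECT id, name FROM users WHERE name = ?");
        stmt.setString(1, untrustedName); // bound as data, not as SQL
        return stmt.executeQuery();
    }
}
```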
Teams and organizations
Teams can “bring their experience together and do threat modeling whenever they are making big changes, especially when they are changing the attack surface, like adding a new API, breaking a monolithic app into microservices and so on,” Migues said.
Finally, organizations can help themselves by building a stock “foundational” application, doing an intensive ARA on it, and then requiring developers to use it as a framework for building other applications.
“That way you don’t have to do threat modeling and ARA for 100 applications that are all built in Java,” he said. “You use the same framework using the same secure-by-design libraries, using the same output protocols, using the same input validation. The foundational application includes layers 1 to 5 and then your developers just need to do layers 6 and 7.”
“That is one way to reduce the load.”
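What that might look like in code: a minimal sketch, with entirely hypothetical names, of a foundational base class that has already been through an intensive ARA. Application teams extend it and inherit its vetted input validation, implementing only the business logic on top.

```java
// Hypothetical sketch of a "foundational" base that has been through
// an intensive ARA once. Every application built on it inherits the
// vetted validation instead of reinventing it.
public abstract class FoundationalHandler {
    // Reviewed during the ARA; shared by every application.
    protected final String validateInput(String raw) {
        if (raw == null || raw.length() > 1024) {
            throw new IllegalArgumentException("input rejected");
        }
        return raw.trim();
    }

    // Application teams supply only the business logic
    // (the "layers 6 and 7" in Migues's framing).
    protected abstract String process(String validated);

    public final String handle(String rawInput) {
        return process(validateInput(rawInput));
    }
}
```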
This is a Security Bloggers Network syndicated blog from Software Integrity Blog authored by Taylor Armerding. Read the original post at: https://www.synopsys.com/blogs/software-security/security-flaws-vs-bugs/