Metrics are critical for measuring and expanding an application security program. There are many important numbers to track to gauge your program’s progress, but sometimes you need one number that sums it all up. Executives don’t always want to see a slew of complicated charts and graphs – they want one simple number that answers, in a nutshell: is this working, and are we getting a return on our investment?
When that’s the case and you need to report a single metric, we recommend a “single” metric that is actually a combination of two. Maybe that’s cheating, but we feel strongly that this is the only way to get an accurate view of an AppSec program without examining a long list of variables.
With that said, our recommended “single” AppSec metric is the total number of apps in the program, alongside the percentage of apps in compliance with your AppSec policy.
This gives you a clearer and more accurate picture of your AppSec progress than the compliance percentage alone. Why? Because you might start your program with a small number of apps and then grow over time. Reporting just the compliance percentage for that initial small subset of apps would give a very skewed view of your application security posture. If the number is high, executives might say: great, well done, no need for expansion. If it’s low, they might question why the security level is so poor.
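As a back-of-the-envelope sketch, the paired metric is simple arithmetic over an app inventory. The app names and compliance flags below are invented for illustration; they don’t come from any real scan data:

```python
# Hypothetical inventory: (app_name, passed_policy) pairs.
apps = [
    ("billing-portal", True),
    ("hr-intranet", False),
    ("mobile-api", True),
    ("legacy-crm", False),
    ("storefront", True),
]

total_apps = len(apps)
compliant = sum(1 for _, passed in apps if passed)
pct_compliant = 100 * compliant / total_apps

# Report both numbers together: scope of the program AND its health.
print(f"Apps in program: {total_apps}")
print(f"In compliance:   {pct_compliant:.0f}%")
```

Reporting “3 of 5 apps compliant (60%)” alongside program growth avoids the skew described above: 100% compliance across 5 apps reads very differently from 60% across 500.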
Keep in mind that if this is the metric you are reporting up, you want it to be as accurate as possible. That means updating your policy annually to match your program’s goals, your particular environment and threat landscape, and your maturity level for addressing application security risk. For instance, if you hold every app to the same stringent policy, your chances of success are pretty low, and you will discourage your development teams with an overload of security findings. That stringent policy may be something to work toward over time, as your development teams become more familiar with addressing security findings and developing secure code. Our overall philosophy of AppSec policy is that it should be only as complicated as it needs to be to deliver the necessary results, and no more. Set the bar too high and your policy will be unattainable, and most likely ignored; set it too low and you’re leaving your organization vulnerable.
A few of our best practices on AppSec policy creation:
Implement achievable policies first: If security is being introduced or enforced for the first time, start with achievable, simple policy standards. Don’t make a team that has never had security built into its daily cycle try to meet PCI or every OWASP requirement; they will not pass, will feel defeated, and will give up before they start. Your development velocity will also suffer, and security will be seen as slowing down the business rather than protecting it.
Start with a simple, easy-to-explain policy: no high or very-high severity flaws. This makes it easy for your development teams to understand both the requirement and the risk associated with those flaws. Then get more stringent over time as developers adopt security into their daily routine. However, be clear with your development teams that you are taking this incremental approach to software security requirements, and that you will incorporate their feedback on what they view as achievable and what security views as necessary. You want development bought into the approach; otherwise developers will view security as a moving target that can never be hit.
Not all apps are created equal: We recommend creating different requirements for different apps. For instance, a public-facing application that holds confidential data and uses third-party components may require all medium through very-high severity flaws to be fixed, while an internal-only, temporary site may only require high and very-high flaws to be fixed.
Nor are all vulnerabilities: It’s important to distinguish flaws that represent a theoretical risk from those that represent substantial, real-world risk. In some cases, the likelihood of a vulnerability being exploited may be low but the potential damage great; in other cases, the chance of exploit may be high but the damage limited.
For instance, SQL injection, a very serious vulnerability that could allow an attacker to steal or destroy data through your application, is present in about 30 percent of all applications on average (40 percent if you’re in government). So it may make sense to start by focusing your program on eradicating a vulnerability that, while not completely pervasive, can cause a lot of damage. On the other hand, you may want to focus on other vulnerability categories based on industry requirements. When we analyzed the applications we scanned in 2017, 50 percent of all healthcare applications had cryptographic weaknesses. That’s a problem if the cryptography in the application is protecting sensitive personal health information, which is covered under HIPAA. So you may want to focus on eradicating the vulnerabilities that put you at risk of compliance violations.
Finally, consider which testing type uncovered the vulnerability. A high-severity finding discovered through penetration testing has been proven exploitable and should be a high priority for the business, while a high-severity finding from static analysis is a flaw that has not yet been proven exploitable.
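A tiered policy like the one described above can be encoded as a simple severity threshold per application class. This is a minimal sketch under assumed names: the tier labels, severity scale, and function are illustrative, not part of any real scanning product’s API:

```python
# Assumed severity ordering: higher number = more severe.
SEVERITY = {"low": 1, "medium": 2, "high": 3, "very-high": 4}

# Hypothetical policy tiers: the minimum severity that must be fixed.
POLICY_THRESHOLD = {
    "public-confidential": SEVERITY["medium"],  # fix medium and above
    "internal-temporary": SEVERITY["high"],     # fix high/very-high only
}

def is_compliant(open_flaw_severities, tier):
    """An app passes if no open flaw reaches its tier's fix threshold."""
    threshold = POLICY_THRESHOLD[tier]
    return all(SEVERITY[s] < threshold for s in open_flaw_severities)

# An open medium flaw fails the stricter public-facing policy...
assert not is_compliant(["medium"], "public-confidential")
# ...but the same flaw passes on an internal, temporary site.
assert is_compliant(["medium"], "internal-temporary")
```

Ratcheting the thresholds down over time, as the incremental approach above suggests, is then a one-line policy change rather than a renegotiation with every team.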
Metrics Beyond the “Single” Number
The number of apps in the program plus the percentage in compliance is a good “single” AppSec metric to report up. But don’t neglect the other metrics you’ll need to keep your program on track, for instance:
Fix rate: The ultimate goal of your program is to fix the flaws you find. Your fix rate shows where you need remediation consulting and developer training for the kinds of flaws your developers struggle with. Fix rate = fixed flaws ÷ (fixed flaws + open flaws).
Flaw density (for instance, flaws per MB of code): As your scans increase, the number of flaws increases, too. Flaw density, measured as the number of flaws divided by the size of the application, makes it easier to compare apples to apples across different teams or business units.
Flaw prevalence: This metric spotlights how common a risk is within a particular industry or business. It helps an organization prioritize threats such as SQL injection, Cross-Site Scripting (XSS), cryptographic issues and CRLF injection based on real-world impact.
Business and goal-specific metrics: These depend on organizational goals and objectives, so they vary across organizations. One example is the percentage of applications where security testing has been fully integrated; others may track developer education or the number of applications assessed or retired.
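The first two metrics above reduce to one-line formulas. Here is a minimal sketch with invented numbers (the counts and sizes are purely illustrative):

```python
def fix_rate(fixed, open_flaws):
    """Fix rate = fixed flaws / (fixed + open flaws)."""
    total = fixed + open_flaws
    return fixed / total if total else 0.0  # guard against no findings

def flaw_density(flaw_count, size_mb):
    """Flaws per MB of code, so differently sized apps compare fairly."""
    return flaw_count / size_mb

# Illustrative numbers only.
print(f"Fix rate: {fix_rate(120, 80):.0%}")             # 120/(120+80) = 60%
print(f"Density:  {flaw_density(45, 9):.1f} flaws/MB")  # 45/9 = 5.0
```

Note that raw flaw counts alone would make the larger app look worse; normalizing by size is what makes cross-team comparison meaningful.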
What Gets Measured Gets Done
Metrics and policy are critical elements of an effective application security program – they’re key to improving your program, getting buy-in, and winning support to expand. And you’ll need all the proof points you can get: we recently surveyed more than 1,000 business leaders about their knowledge and understanding of cybersecurity and found that most are woefully uninformed about the risk that software introduces to their businesses.
Find out more about application security metrics in our guide, Proving Performance: Using Metrics to Build a Strong Case for Application Security.
Find out more about application security policies in our guide, Policy Pointers: A Best Practice Approach to Application Security Governance.
This is a Security Bloggers Network syndicated blog post authored by anielsen. Read the original post at the Veracode Blog.