Seven Winning DevSecOps Metrics Security Should Track

Last week’s DevOps Connect event at RSA Conference brought together some of the leading minds in the DevOps and AppSec communities to discuss DevSecOps. Given the audience, a lot of the discussions focused on awareness themes for security folks still wrapping their heads around the idea of embedding their people into cross-functional DevOps teams. But with each year of these DevOps confabs at RSAC, the security audience grows savvier about continuous delivery principles, and the programming is trending more toward the real nuts and bolts of instituting DevSecOps.

Among the many solid pieces of practical advice doled out by the speakers was this: security practitioners struggling to get themselves included in the DevOps collaborative process may be able to jump-start that inclusion with the right kind of metrics. DevOps veterans live and die by metrics, so it only makes sense that the language of cooperation between devs and security pros should be spoken in numbers.

As Shannon Lietz, director of DevSecOps for Intuit, explains, even as security teams dole out more security responsibility to developers, cybersecurity pros will always have plenty of room to remain relevant.

“As developers take over security are we (security professionals) all going to somehow eventually become extinct?” says Lietz. “The answer is no, and it all comes from what you measure.”

Her point is that in DevSecOps the day-to-day work may be done by the developers, but security professionals are meant to add value through their expertise in a consultative role. And one of the best ways security pros are able to provide meaningful advice to developers about their daily implementation of security principles is by using metrics. The following seven measures are some of those that Lietz and other DevSecOps speakers pointed to as the most valuable in keeping DevOps teams continuously improving their security mojo.

Defect Density

One of the simplest benchmarks to set, defect density can be used to measure progress across an organization, within teams and within specific applications or services. According to Caroline Wong, vice president at Cobalt.io, it’s a matter of recording the number of bugs and dividing that count by the size of the codebase, typically expressed as defects per thousand lines of code. From there, the security team and developers can start negotiating a reasonable set of goals for improving that density over time.
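The arithmetic is straightforward. As a minimal sketch with purely hypothetical numbers, the calculation and a negotiated reduction goal might look like:

```python
# Minimal sketch of a defect-density calculation (hypothetical figures).
# Density is conventionally reported per thousand lines of code (KLOC).

def defect_density(defect_count: int, lines_of_code: int) -> float:
    """Return defects per thousand lines of code."""
    return defect_count / (lines_of_code / 1000)

# Example: 42 open security defects in a 120,000-line service.
baseline = defect_density(42, 120_000)
print(f"baseline density: {baseline:.2f} defects/KLOC")  # 0.35

# A negotiated 20% reduction goal, like the one described below:
target = baseline * 0.8
print(f"target density:   {target:.2f} defects/KLOC")  # 0.28
```

The function name and sample counts are illustrative only; the point is that the metric is simple enough to compute from any bug tracker export.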

For example, in her past life in the security team at eBay, her team’s collaboration with devs resulted in a 20% reduction goal for defect density on customer-facing websites. It was an achievable goal and one which didn’t demoralize the developers to the point where they’d simply keep ignoring their security brethren.

Defect Burn Rate

While metrics like defect density and cyclomatic complexity are good temperature checks on the state of any given piece of software, Paula Thrasher, director of digital services for General Dynamics, a large federal IT integrator, says they’re “not necessarily insightful” for the developer organization, because of course legacy applications will have a lot of defects and newer applications will have fewer. She believes that as DevOps organizations get better at embedding application security scanning into their toolchains, one of the more useful things they can do is focus less on the quantity of defects and more on how quickly the team addresses them.

“Answer how quickly are you burning those down as a team?” she says. “That tells you something about the team’s productivity in getting to a more secure place.”
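One way to operationalize that question is to track, sprint over sprint, how many defects the team closes versus how many new ones arrive. A hypothetical sketch, with made-up sprint data:

```python
# Hypothetical sketch: defect burn rate as closed defects per sprint,
# computed against a running security-defect backlog.

sprints = [
    {"sprint": 1, "opened": 12, "closed": 5},
    {"sprint": 2, "opened": 7,  "closed": 9},
    {"sprint": 3, "opened": 4,  "closed": 11},
]

backlog = 30  # hypothetical starting security-defect backlog
for s in sprints:
    backlog += s["opened"] - s["closed"]
    print(f"sprint {s['sprint']}: burned {s['closed']}, backlog now {backlog}")
```

A backlog that shrinks over successive sprints (here 30 → 37 → 35 → 28 after an initial spike) is the productivity signal Thrasher describes, independent of how many defects the application started with.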

Critical Risk Profiling

One of the biggest acts of self-sabotage that security folks can engage in is tossing laundry lists of vulnerabilities to DevOps teams without any prioritization. Security pros looking to add value in the DevSecOps model should start doing the analysis to characterize defects by criticality and start putting together matrices for developers to give them easy visibility into the order in which they burn down those defects. According to Wong, one way to accomplish that is a risk profile that on the y-axis might have the bug criticality value and on the x-axis the value of that vulnerability to attackers.

“You’ll be able to go in and say, ‘Not all applications are created equal, not all bugs are created equal, we’re actually going to help you to prioritize,'” she explains, saying that this is a huge way for security teams to build trust with developers, who will see that security understands the constraints they have when it comes to making time for bug fixes and is willing to do the work to enable devs in their security efforts.
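The two-axis profile Wong describes can be reduced to a simple ranking. This is a hypothetical sketch, with invented bug IDs and scores, of scoring each defect on criticality and attacker value and sorting by the product:

```python
# Hypothetical sketch of a two-axis risk profile: each bug gets a
# criticality score (y-axis) and an attacker-value score (x-axis);
# priority is the product of the two.

bugs = [
    {"id": "BUG-101", "criticality": 5, "attacker_value": 3},  # e.g. SQLi on a low-traffic page
    {"id": "BUG-102", "criticality": 3, "attacker_value": 1},  # e.g. verbose error page
    {"id": "BUG-103", "criticality": 4, "attacker_value": 5},  # e.g. auth bypass
]

for bug in bugs:
    bug["priority"] = bug["criticality"] * bug["attacker_value"]

# Highest-priority defects first: the order in which devs burn them down.
ranked = sorted(bugs, key=lambda b: b["priority"], reverse=True)
for bug in ranked:
    print(bug["id"], bug["priority"])
```

The scoring scale and the multiplication are design choices, not a standard; the value is that developers get a single ordered list instead of an undifferentiated laundry list.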

SLA Performance

Not only should security teams be organizing defects by criticality, but they should also be setting up service level agreements (SLAs) based on criticality and tracking the SLA performance religiously.

“SLA performance is a real measure,” says Lietz. “If you’re not in SLAs, you should ask yourself why.”
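Tracking SLA performance amounts to comparing each defect's time-to-fix against a severity-keyed remediation window. A minimal sketch, assuming an illustrative SLA policy and made-up remediation data:

```python
# Hypothetical sketch: remediation SLAs keyed by severity, plus a
# compliance check over closed defects (the policy and the
# days_to_fix values are illustrative, not a recommendation).

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

closed_defects = [
    {"id": "BUG-201", "severity": "critical", "days_to_fix": 5},
    {"id": "BUG-202", "severity": "high",     "days_to_fix": 45},
    {"id": "BUG-203", "severity": "medium",   "days_to_fix": 60},
]

met = [d for d in closed_defects if d["days_to_fix"] <= SLA_DAYS[d["severity"]]]
compliance = len(met) / len(closed_defects)
print(f"SLA compliance: {compliance:.0%}")  # 2 of 3 -> 67%
```

Reported over time and by severity band, this is the “real measure” Lietz refers to: it surfaces exactly which class of defect is routinely blowing past its window.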

Top Vulnerability Types and Top Recurring Bugs

Security teams that start tracking top vulnerability types will be in a much better position to help developers make long-term improvements in the way they code.

“To know a specific organization’s top vulnerability types can help an organization do things like customize training accordingly,” says Wong, who says that tracking which types of vulnerabilities are most likely to recur can also help in training efforts. “It’s great to know what bugs are out there, but which ones keep coming back?”
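Both questions — which vulnerability types dominate, and which keep coming back — fall out of a simple tally over scan findings. A hypothetical sketch with invented scan data:

```python
# Hypothetical sketch: tally findings by vulnerability type across
# scans, and flag the types that recur in more than one scan.

from collections import Counter

findings = [  # illustrative scan output: (scan_id, vuln_type)
    (1, "xss"), (1, "sqli"), (2, "xss"), (2, "xss"),
    (3, "csrf"), (3, "xss"), (3, "sqli"),
]

totals = Counter(v for _, v in findings)
# A type "recurs" if it shows up in more than one distinct scan.
scans_per_type = {v: len({s for s, t in findings if t == v}) for v in totals}
recurring = sorted(v for v, n in scans_per_type.items() if n > 1)

print(totals.most_common())  # xss dominates -> tailor training there
print(recurring)
```

The type labels here are placeholders; in practice they would come from a scanner's CWE or category field, which is exactly what makes the training customization Wong describes possible.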

Number of Adversaries per Application

According to Lietz, security teams that want to improve their developers’ risk IQ should be asking them how many adversaries they think an application actually has.

“And I’m going to tell you, it’s a turning point for every developer I’ve worked with because it’s going to have them looking for the adversary and interested in what they’re actually doing to their application,” says Lietz. “That was my epiphany moment, and it’s led to a whole bunch of people that are really interested in security at the last two companies that I’ve worked for.”

Adversary Return Rate

Along those same lines, Lietz says adversary return rate is another good metric that gets developers invested in thinking about how applications are being attacked. It measures how often an adversary comes back using the same tactics, techniques and procedures (TTPs). Doing so gives developers yet another tool for prioritizing both bug fixes and training around the types of weaknesses that adversaries are hammering within an organization’s portfolio.

“As a developer, that’s much like a customer return rate, so why can’t we measure it better?” she says. “With DevSecOps, you can now do that by getting enough instrumentation into your applications to be able to associate it with adversary return rate.”
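Given that instrumentation, the metric itself reduces to grouping attack events by a TTP fingerprint and counting repeat sightings. A hypothetical sketch, with an invented fingerprint scheme and event log:

```python
# Hypothetical sketch: an "adversary return rate" derived from
# instrumented application logs, grouping attack events by a TTP
# fingerprint (the fingerprint labels and days are illustrative).

from collections import defaultdict

events = [  # illustrative: (day_observed, ttp_fingerprint)
    (1, "sqli-union-probe"), (2, "sqli-union-probe"),
    (3, "path-traversal"),   (5, "sqli-union-probe"),
]

sightings = defaultdict(list)
for day, ttp in events:
    sightings[ttp].append(day)

for ttp, days in sightings.items():
    returns = len(days) - 1  # visits after the first count as "returns"
    print(f"{ttp}: first seen day {days[0]}, {returns} return(s)")
```

A TTP that keeps returning (here the SQL injection probe) points developers at exactly which fix, or which piece of training, would pay off first — the customer-return-rate analogy Lietz draws.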

*** This is a Security Bloggers Network syndicated blog from Business Insights In Virtualization and Cloud Security authored by Ericka Chickowski. Read the original post at: http://feedproxy.google.com/~r/BusinessInsightsInVirtualizationAndCloudSecurity/~3/0-z2F6X_jfU/seven-winning-devsecops-metrics-security-should-track

Ericka Chickowski

An award-winning freelance writer, Ericka Chickowski covers information technology and business innovation. Her perspectives on business and technology have appeared in dozens of trade and consumer magazines, including Entrepreneur, Consumers Digest, Channel Insider, CIO Insight, Dark Reading and InformationWeek. She's made it her specialty to explain in plain English how technology trends affect real people.
