Security User Stories suck, here’s Why – Product Security in Agile Organizations: part 3

A year or two ago, one of my teammates made me aware of a thing called “Security User Stories” from SAFECode. I read the paper and have thought about it often since. For a while I considered writing up some documentation on our internal wiki so that development teams could use these stories in their products or process. I never did, however, because these Security User Stories never felt quite right to me. Having grown and learned over the past two years, I now hold the strong opinion that while the content of these Security User Stories is good, their implementation is deeply flawed and not Agile at all. Instead, I advocate building relationships with the development teams and helping them write Acceptance Criteria inside their User Stories that prevent security vulnerabilities.

Before moving further into this post, I want to make clear that this is not about features that positively impact the security of a product (such as having 2FA), but purely about how to integrate security perspectives into User Stories in general, so that whatever is built is developed more securely.

The paper describes a set of 36 Security User Stories, from which the Product Owner and Security Team select the relevant ones and add them to the Product Backlog. This is the first problem: it assumes that we’re not building a Product, whose User Stories can be shipped to the customer at any given time, but a Project with a clear beginning and a clear end. In that scenario, the Security User Stories on the backlog only need to be brought into the last sprint before the project is delivered to the customer. This is fundamentally wrong: none of the regular User Stories are done and shippable until that “validation sprint” happens, because the Security User Stories on the backlog haven’t been done yet. So if your customer asks you halfway through the project to stop and deliver what you have, you have to tell them that whatever has been built so far wasn’t built securely.

In the case of a Product, the development team builds potentially shippable items every sprint. This means that if you did some form of security validation in a previous Sprint, any items delivered afterwards would need to be re-vetted.

Okay, so what if we do the relevant Security User Stories in every sprint, so they only apply to the items in that Sprint? While already much better, this still carries a significant amount of delivery risk. User Stories don’t always fit neatly in a sprint, so once the regular User Stories are done, you may find yourself without the time to do the Security User Stories.

So, it looks like we need to go through these Security User Stories after each and every regular User Story, to make sure that at the end of each sprint the finished User Stories can be shipped to the customer, right? Still wrong. The authors don’t appear to grasp what finishing a User Story really means.

Marking a User Story as done means it is releasable, shippable: no more testing or development needs to happen on it, and that includes security work. For a development team to validate that they’ve built something securely, that validation has to be part of the User Story itself.

Within a User Story, there are a few places where security requirements can live.

The Acceptance Criteria are where specific security functionality or safeguards should be described. For a User Story describing some functionality in a website’s admin panel, one Acceptance Criterion might be “The admin panel is only accessible from the corporate network”. For a login page, there might be criteria on how to prevent SQL injection and clickjacking.
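To make that concrete, here is a minimal sketch of what those three criteria could look like once implemented. Flask, sqlite3, the corporate network range and the database schema are all my assumptions for illustration; none of them come from the paper or the teams I describe.

```python
# Hypothetical sketch: security Acceptance Criteria expressed in code.
# Assumes a Flask app with a sqlite3 "users" table; adjust to taste.
import ipaddress
import sqlite3

from flask import Flask, abort, request

app = Flask(__name__)

CORPORATE_NETWORK = ipaddress.ip_network("10.0.0.0/8")  # assumed range


@app.before_request
def restrict_admin_panel():
    # Acceptance Criterion: "The admin panel is only accessible
    # from the corporate network."
    if request.path.startswith("/admin"):
        if ipaddress.ip_address(request.remote_addr) not in CORPORATE_NETWORK:
            abort(403)


@app.after_request
def add_clickjacking_protection(response):
    # Acceptance Criterion: the login page cannot be embedded in a
    # frame on another site (clickjacking).
    response.headers["X-Frame-Options"] = "DENY"
    return response


@app.route("/login", methods=["POST"])
def login():
    # Acceptance Criterion: credentials are looked up with a
    # parameterized query, preventing SQL injection.
    username = request.form["username"]
    conn = sqlite3.connect("users.db")
    row = conn.execute(
        "SELECT password_hash FROM users WHERE username = ?",
        (username,),
    ).fetchone()
    conn.close()
    return "ok" if row else ("invalid", 401)
```

The point is that each criterion is a concrete, pass/fail statement the team can verify inside the User Story itself, rather than a separate security story sitting on the backlog.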

The Definition of Done (which is consistent for every User Story of a Product) is where some of the “Operational Security Tasks” (as described in the paper) can be included, such as making sure the latest compiler versions are used, that dependencies are up to date, and that critical findings from static analysis tooling are resolved.
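Checks like these can be automated so that every User Story is held to them without extra ceremony. The sketch below is one way to do that in CI; it assumes a Python project using pip and bandit, which are my choices for illustration, not SAFECode’s.

```python
# Hypothetical sketch of Definition-of-Done checks run in CI:
# fail the build on outdated dependencies or high-severity
# static-analysis findings. Assumes pip and bandit are installed.
import json
import subprocess
import sys


def outdated_dependencies() -> list:
    # pip can report outdated packages as JSON.
    out = subprocess.run(
        ["pip", "list", "--outdated", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [pkg["name"] for pkg in json.loads(out)]


def high_severity_findings() -> int:
    # bandit exits non-zero when it finds issues, so no check=True here.
    out = subprocess.run(
        ["bandit", "-r", "src/", "-f", "json"],
        capture_output=True, text=True,
    ).stdout
    results = json.loads(out)["results"]
    return sum(1 for r in results if r["issue_severity"] == "HIGH")


if __name__ == "__main__":
    stale = outdated_dependencies()
    critical = high_severity_findings()
    if stale or critical:
        print(f"outdated: {stale}, critical findings: {critical}")
        sys.exit(1)  # Definition of Done not met; fail the pipeline
```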

As I mentioned earlier, the security information in SAFECode’s paper is great. The method they intend to use to deliver it into a product’s development lifecycle is flawed.

Lastly, a big gap remains: growing the security awareness and skillset of the development team. Giving people a set of items to simply go through doesn’t teach them much beyond which checkboxes to tick. What you want is to build knowledge in a development team (and the content of that document is a good starting place) on how to write their product securely, and to help teams understand which activities, checks and tests they should include in their Acceptance Criteria and Definition of Done to raise the security bar.

With my previous posts about Empowering Product Teams and practicing gemba in mind, I’m going to spend some time in 2018 researching how teams currently conduct their Sprint Planning sessions, and what the mechanics would be to integrate security perspectives into their User Stories without it becoming a time-consuming frustration.

A short description of some of the terminology used in this post:
A User Story describes a feature, written from the end-user’s perspective, deliverable to the end-user once done.
Definition of Done is a uniform set of activities/criteria that must be completed for almost every User Story of a given Product. This usually includes items such as: integration tests passed, regression tests passed, and deployed to a staging or production environment.
Acceptance Criteria are a set of functionality statements that either pass or fail. They are unique to each User Story and indicate whether the User Story is complete.
A Sprint is a timeboxed window in which a full cycle from planning to potentially shippable product takes place, usually lasting one to four weeks. For a typical development team, the number of User Stories per Sprint is in the range of 4-10.
