
The Importance of Validating Fixes – Lessons from Google


Zohar Shachar, an Israeli security researcher, recently revealed the details of a bounty that he received from Google approximately a year ago. The security issue that he found was an advanced cross-site scripting (XSS) vulnerability in Google Maps.

There was one detail about this case that stood out: Shachar actually received two bounties for the same vulnerability. After Google awarded the first bounty, Shachar decided to check the fix and found that he could work around it. He also admitted that this was not the only such case – he was likewise able to exploit a previously fixed SMTP injection vulnerability in G Suite.

This case raises a major question: how many reported vulnerabilities, believed to be fixed, are actually not fixed at all? It also shows that even the most renowned web application giants such as Google make rookie mistakes and either forget to test their fixes or don't test them well enough.

The Consequences of Not Validating Fixes

Not everyone is as lucky as Google was in this case – not every vulnerability is found by a responsible penetration tester like Shachar. Most bounty hunters would simply take the bounty and move on.

This is also often the case with internal, manual penetration testers, especially those overworked due to the cybersecurity skills gap – they simply don't have time to retest fixes. It is therefore quite possible that many vulnerabilities that were found and reported are still there. Worse, the business is confident that they are gone.

Why Are Fixes Not Effective?

One of the reasons why fixes are ineffective is an attitude that is still quite common among developers. Some developers do not see vulnerabilities as real issues, because security teams and development teams often work in silos – except in companies that have managed to fully shift left. Developers may perceive penetration testers in a negative light, as people who create unnecessary work. With such an attitude, a developer will simply aim to satisfy the penetration tester's request without truly thinking about fixing the vulnerability. For example, they will filter out just the string used in the reported exploit, ignoring other payloads that can have the same effect.
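
To illustrate the difference, here is a minimal sketch in Python (the function names and payloads are invented for this example) contrasting a "fix" that blacklists only the reporter's exact payload with one that encodes output properly:

```python
import html

def broken_fix(user_input: str) -> str:
    # "Fix" that blocks only the exact payload from the pentest report.
    # Trivially bypassed with <SCRIPT>, <img src=x onerror=...>, and so on.
    return user_input.replace("<script>alert(1)</script>", "")

def proper_fix(user_input: str) -> str:
    # Encode all HTML metacharacters before the value reaches the page,
    # so no payload variant can break out into markup.
    return html.escape(user_input, quote=True)

print(broken_fix("<SCRIPT>alert(1)</SCRIPT>"))  # payload passes through unchanged
print(proper_fix("<SCRIPT>alert(1)</SCRIPT>"))  # &lt;SCRIPT&gt;alert(1)&lt;/SCRIPT&gt;
```

The first function satisfies the specific report; only the second addresses the vulnerability class.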

Another reason is that not every type of vulnerability is easy to fix. SQL injections are easy to get rid of – all common back-end programming languages let you use parameterized queries to eliminate them. Things are not that simple with, for example, advanced cross-site scripting (as in the case of Google Maps). Avoiding XSS in application code can be difficult and, in some situations, demands a lot of attention from the developer.
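
As a reminder of how straightforward the SQL injection fix is, here is a minimal sketch using Python's built-in sqlite3 module (the table, column, and payload are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_supplied = "alice' OR '1'='1"  # classic injection attempt

# Vulnerable pattern: string concatenation lets the payload rewrite the query.
# conn.execute("SELECT email FROM users WHERE name = '" + user_supplied + "'")

# Safe pattern: the ? placeholder sends the value separately from the SQL text,
# so the payload is treated as a literal string, never as SQL code.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_supplied,))
print(rows.fetchall())  # [] – no match, the injection is neutralized
```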

Lack of retesting, as in the case of Google, makes the situation even worse, because developers don't learn from their own mistakes. Late testing and late retesting are just as bad. Imagine a situation where developer A makes an error and introduces a vulnerability. Because testing happens only in staging, it is developer B who is tasked with fixing the vulnerability several weeks later. Developer A has no idea what they did wrong and will introduce the same vulnerability the next time they write similar code.

As if this weren't bad enough, consider late manual retesting. If the fix proves ineffective, it will be developer C who is tasked with correcting the fix introduced by developer B. Developer B therefore has no idea their fix failed and will probably make the same mistake in a future fix. As a result, two of the three developers think they did everything right and will keep introducing the same vulnerabilities.

This is definitely not a good way to create secure web applications.

Be Smarter than Google – Automate Well

You would think that vulnerability scanning is the best way to ensure that all fixes are automatically retested, but this is not a given either. Most web vulnerability scanners are manual tools: you point them at the web application, run the scan, save the report, and send it to the developers. There is no process to make sure that the vulnerabilities are ever retested.

However, advanced business-class vulnerability scanners like Acunetix have built-in vulnerability management functionality. This means that once a vulnerability is identified, you can automatically create tickets, even in external issue trackers such as Jira, and then, after the issue is closed by the developer, you can retest the vulnerability to see if the fix was effective. With enterprise-class solutions like Acunetix 360, you can even automatically start retesting once the issue is closed in Jira.
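
A minimal sketch of what such a close-then-retest loop might look like (the Jira search call uses Jira's real REST API, but the scanner retest endpoint, custom field ID, and credentials are hypothetical placeholders, not an actual Acunetix API):

```python
import requests

JIRA = "https://example.atlassian.net"
AUTH = ("bot@example.com", "api-token")  # placeholder credentials

def closed_security_issues():
    # Jira REST search: security tickets resolved within the last day.
    jql = "labels = security AND status = Done AND resolved >= -1d"
    resp = requests.get(f"{JIRA}/rest/api/2/search", params={"jql": jql}, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["issues"]

def trigger_retest(vulnerability_id: str):
    # Hypothetical scanner endpoint – substitute your scanner's real retest API.
    requests.post(f"https://scanner.example.com/api/retest/{vulnerability_id}")

for issue in closed_security_issues():
    # Assumes the ticket stores the scanner's vulnerability ID in a custom field.
    vuln_id = issue["fields"].get("customfield_10042")
    if vuln_id:
        trigger_retest(vuln_id)
```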

Even better, modern vulnerability scanners such as Acunetix work within CI/CD pipelines. This means that the original developer cannot introduce a vulnerability at all, because the build fails if one is found. They have to immediately address what they did wrong and re-run the build, thus automatically retesting the fix and learning from their mistake. There is no situation where three different developers are involved and the issue lingers on for months.
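
What such a build gate boils down to is a scan step that exits with a nonzero status when findings are present. A minimal sketch, assuming a scanner that can export its findings as JSON (the file name and report schema are assumptions for this example, not a specific scanner's format):

```python
import json
import sys

# "scan-report.json" and its schema are assumed for this sketch;
# a real integration would read your scanner's actual export format.
with open("scan-report.json") as f:
    report = json.load(f)

high = [v for v in report.get("vulnerabilities", [])
        if v.get("severity") in ("high", "critical")]

for v in high:
    print(f"BLOCKING: {v.get('name')} at {v.get('url')}")

# A nonzero exit code is what makes the CI/CD pipeline fail the build.
sys.exit(1 if high else 0)
```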

Of course, vulnerability scanners won't be able to handle every single vulnerability, and there will be cases where new vulnerabilities are discovered by manual testers. However, the majority of problems resulting from a lack of retesting will be addressed if you introduce efficient automation and shift security left as much as possible.

THE AUTHOR
Tomasz Andrzej Nidecki
Technical Content Writer

Tomasz Andrzej Nidecki (also known as tonid) is a Technical Content Writer working for Acunetix. A journalist, translator, and technical writer with 25 years of IT experience, Tomasz has been the Managing Editor of the hakin9 IT Security magazine in its early years and used to run a major technical blog dedicated to email security.


*** This is a Security Bloggers Network syndicated blog from Web Security Blog – Acunetix authored by Tomasz Andrzej Nidecki. Read the original post at: http://feedproxy.google.com/~r/acunetixwebapplicationsecurityblog/~3/-YPT2MA0bPI/