Whenever a big data breach happens – like the Equifax one – there is almost always a predictable order of subsequent events:
- The breach happens
- The affected company announces it
- The news outlets pick up the story and make it known to the general public
- Security researchers wonder how the breach might have happened and investigate further
Then comes the aha moment: security researchers stumble upon a catastrophic lack of security practices, countless vulnerabilities and breaches of well-established protocols.
Does It Have to Be Like This?
In the end, the public often knows more about the dangerous vulnerabilities in the company’s website than the actual attacker did. Given enough eyeballs, all bugs are shallow – particularly once an organisation is under public scrutiny.
Going back to the series of events, you might conclude that events one to three could be eliminated entirely if companies had more security researchers examining their own products. So what would have happened if someone had warned Equifax about vulnerabilities in their websites before the breach? Would they have listened to concerned researchers?
In 2016 Equifax Was Notified That Their Website Was Vulnerable To a Cross-site Scripting Vulnerability
— x0rz (@x0rz) September 8, 2017
It seems that – even though they were notified about a vulnerability on their website in 2016 – they did not address the issue. In this instance, XSS (Cross-site Scripting) is probably not how the Equifax website was breached. However, the incident does shed light on the company’s security practices.
The consensus about the Equifax breach is that the company was vulnerable to another kind of web application flaw – one that, unlike XSS, requires no interaction from a privileged user to gain access to administrative functions. It usually results in the complete compromise of a web server and the applications running on it: Remote Code Execution (RCE).
Equifax Website Hacked Through the Exploitation of CVE-2017-5638
On March 6, 2017, The Apache Software Foundation published a security advisory about a new vulnerability affecting the Apache Struts 2 framework. By manipulating certain HTTP headers, an attacker could easily execute system commands on affected systems.
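To illustrate why this flaw was so easy to exploit at scale, here is a rough sketch of the payload shape used in public proofs of concept. The helper name and the exact OGNL expression below are illustrative simplifications, not a working exploit:

```python
# Simplified illustration of the CVE-2017-5638 payload shape (based on
# public proofs of concept; details vary and the names here are
# illustrative). Struts 2's Jakarta Multipart parser evaluated a crafted
# Content-Type header as an OGNL expression, so a single HTTP request
# could run arbitrary system commands on the server.

def build_malicious_content_type(command: str) -> str:
    """Return a Content-Type header in the general shape used by public
    CVE-2017-5638 proofs of concept (simplified for illustration)."""
    return (
        "%{(#_='multipart/form-data')."
        "(#cmd='" + command + "')."
        "(#p=new java.lang.ProcessBuilder(#cmd.split(' ')))."
        "(#p.start())}"
    )

header = build_malicious_content_type("id")
# A legitimate Content-Type never begins with an OGNL marker like "%{",
# which is also why such requests are easy for bots to mass-produce.
print(header.startswith("%{"))  # True
```

Because the attack is just a header in an ordinary HTTP request, scanning the entire web for vulnerable hosts is trivial to automate.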
As often happens with this kind of vulnerability, it did not take long for attackers to take advantage of the flaw, using bots to crawl the web for vulnerable hosts. Organisations that take security seriously were largely unaffected, because they immediately followed the recommended remediation steps. However, many did not, as reported by security researcher David Hoyt.
— David Hoyt (@h02332) June 30, 2017
He posted screenshots of the vulnerability being exploited on annualcreditreport.com, a website owned jointly by Equifax, Experian and TransUnion. Hoyt notified them about their vulnerable website four days after the Apache Struts advisory was released, but never heard back.
According to Equifax, the data breach occurred between mid-May and July. Had they acknowledged and reacted to the security researcher’s report, the vulnerability could have been closed and other systems checked for it as well, avoiding the breach and the mess they are in right now.
David Hoyt on the State of Security of the AnnualCreditReport Website
We asked David Hoyt for his thoughts on the vulnerability and Equifax’s decision not to respond. He sent us a detailed report about the Apache Struts deserialization vulnerability in www.annualcreditreport.com, with a few surprising statements:
The Form on annualcreditreport.com accepts the PII of Consumers, then connects via API to the 3 Credit Reporting Agencies, and other Fraud and Loss Control Partners. The typical Form containing 1 SSN may be parsed and distributed to many, many more 3rd parties. Any successful Attack may expose millions of PII Records from a Database.
This seems to suggest that if the attackers had chosen to target annualcreditreport.com, they could have compromised a far greater number of records and breached Experian and TransUnion as well. Given the easily exploitable vulnerability that David Hoyt found in the website, it is surprising that it had not already been exploited before the breach was announced.
No WAFs or IDS/IPS Were Installed in Front of the AnnualCreditReport Website
Dissecting some of the Indicators of Poor Judgement, my own research indicated _no_ Web Application Filter in front of annualcreditreport.com or consumer.experian.in and neither Site had any IDS/IPS to block Command Line Injection.
This is surprising for such a large website. While many Web Application Firewalls can be bypassed, they should still be deployed, since they act as a first line of defense when a new vulnerability is made public. It is doubtful whether they can stop seasoned attackers completely, but they are effective at blocking automated attacks and can slow down hackers until a patch is applied. This should be a basic security measure for anyone handling Personally Identifiable Information such as SSNs, and the fact that Equifax did not apply it put consumers’ data at unnecessary risk.
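As a sketch of the kind of first-line filtering a WAF provides – a deliberately minimal example assuming a Python WSGI stack; real WAFs such as ModSecurity use far richer rule sets – a middleware could reject any request whose headers contain an OGNL expression marker:

```python
import re

# A minimal, illustrative WAF-style filter as WSGI middleware. It blocks
# the header pattern used by automated CVE-2017-5638 scans: OGNL
# expression markers ("%{" or "${") appearing in HTTP headers.
OGNL_PATTERN = re.compile(r"[%$]\{")

def waf_middleware(app):
    def wrapper(environ, start_response):
        # Inspect every header the client sent (WSGI exposes them as
        # HTTP_* keys, plus CONTENT_TYPE).
        for key, value in environ.items():
            if key.startswith("HTTP_") or key == "CONTENT_TYPE":
                if OGNL_PATTERN.search(str(value)):
                    start_response("403 Forbidden",
                                   [("Content-Type", "text/plain")])
                    return [b"Request blocked"]
        return app(environ, start_response)
    return wrapper

# Demo application and usage:
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello"]

protected = waf_middleware(app)

statuses = []
def capture(status, headers):
    statuses.append(status)

protected({"CONTENT_TYPE": "%{(#_='multipart/form-data')}"}, capture)
protected({"CONTENT_TYPE": "multipart/form-data"}, capture)
print(statuses)  # ['403 Forbidden', '200 OK']
```

A crude rule like this would not stop a determined attacker, but it is exactly the kind of stopgap that buys time against mass scanning while a patch is rolled out.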
The Information Security Industry Vendors and Service Providers Can Do More
Hoyt also thinks that Information Security vendors and service providers can do more to ensure that websites do not remain unpatched for days after such announcements:
The InfoSec Industry should work on mitigating the exposure of their Clients in the first 24 hours after major Bug announcements, not promoting SEO Campaigns. Alerting an Organization to this Apache Struts Vulnerability should have received priority as the business day came to a close on March 10, 2017 at 4pm Eastern Time. At that point in time, at least 72 hours had elapsed since the Public Announcement.
Fixing serious vulnerabilities that affect a wide range of customers should be a top priority on any IT security professional’s agenda. In recent years there have been several security issues with serious consequences for numerous companies – vulnerabilities such as Shellshock and Heartbleed. Within a relatively short time of each announcement, hackers had already devised automated exploits.
Corporations Should Have a Vulnerability Disclosure Process or Bug Bounty
While a few years ago you could patch your applications days after an announcement, nowadays you cannot let more than a few hours elapse. The chances of someone exploiting a vulnerability within minutes of disclosure are very high. This is especially true for web application vulnerabilities, where exploits are generally fast and easy to build.
But there were other problems that came to light following Hoyt’s discovery. There was no obvious way for him to report his findings about annualcreditreport.com to Equifax.
The fact that none of the Credit Agencies currently have a Coordinated Vulnerability Disclosure Process or Bug Bounty indicates to me they don’t understand the big picture of Bug Reporting.
A good way to ensure that researchers can report the possible flaws they identify in your web applications is to have an advertised and coordinated Vulnerability Disclosure or Bug Bounty Program.
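One lightweight way to advertise such a channel is a well-known contact file served by the website itself – for example, the security.txt convention, which was an emerging draft proposal around the time of the Equifax breach. A hypothetical file at /.well-known/security.txt might look like this (the addresses and URLs below are placeholders, not real Equifax resources):

```
# Hypothetical security.txt for a credit reporting agency's website
Contact: mailto:security@example.com
Encryption: https://www.example.com/pgp-key.txt
Policy: https://www.example.com/responsible-disclosure
```

A published file like this tells a researcher exactly where to send a report, instead of leaving them to guess at an email address or give up entirely.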
The Federal Trade Commission (FTC) Should be More Proactive
Another thing that David Hoyt found disturbing was the lack of control from the FTC.
The FTC should take steps to monitor the Security Posture of such an important Website in Real Time and not rely on executives to Notify for a Breach after their Stock Sales have Settled.
The fact that the Equifax breach was announced months after it happened put consumers at unnecessary risk. Had the FTC had insight into the company’s IT security procedures, the general public would have known about the breach much earlier and could have reacted appropriately.
This is a Security Bloggers Network syndicated blog post authored by Sven Morgenroth. Read the original post at: Netsparker, Web Application Security Scanner