Preload Saves Lives
Thanks to Google and projects such as Let’s Encrypt, far more websites are served over SSL/TLS now than just a few years ago, which means the internet in general is becoming more secure.
The HTTP Strict Transport Security (HSTS) Preload List is a key element of SSL/TLS security in web browsers. The problem it addresses is that if a website makes traffic encryption optional, the encryption can be bypassed by Man-in-the-Middle (MitM) attacks.
Moxie Marlinspike (a pseudonym), founder of Open Whisper Systems, is an American security researcher who demonstrated at Black Hat in 2009 how he could prevent victims from using secure HTTPS connections and force them onto unencrypted, plain HTTP connections instead. To do this, he leveraged his SSLStrip tool. In theory, an established secure connection ensures both security and privacy. In practice, however, a truly secure connection requires HSTS. Websites that have HSTS configured instruct users’ browsers to convert all future HTTP links to HTTPS.
Perhaps you’re thinking: “But I could just disable port 80. I could set up a redirect process on the server side.” The problem is that this still won’t be enough to emulate the features HSTS provides. Let’s take a closer look. Assume that you disable port 80 and start accepting connections only through port 443. An attacker could establish one secure connection between himself and the user’s browser, and another between himself and your site, presenting a fake certificate to the browser – a classic MitM attack. There is one obstacle, though: browsers have mechanisms to detect this type of attack.
For instance, if the certificate is invalid or expired, or uses a weak cipher, browsers warn the user that something went wrong. The problem is that users can simply choose to ignore these warnings by clicking the Add Exception or Go Anyway buttons. Users are not always technically savvy: if you are unfamiliar with computers, a warning full of technical details such as ‘ERR_CERT_AUTHORITY_INVALID’ makes it close to impossible to figure out what’s going on. And if there is a button on the page that lets you continue to the site despite the error, why not just click it?
When the HSTS header is set for a web application, the user’s browser converts HTTP links that reference the web application to their secure HTTPS equivalent, at least for the time specified in the max-age option of the HSTS header. But more importantly, if a browser encounters invalid certificates, the Add Exception and Go Anyway buttons are disabled, which means that there is no way for users to ignore these errors. You might argue that the TLS warnings in most browsers are clear indicators that something is wrong and that you shouldn’t continue. And that’s probably true for a majority of users. However, sometimes you need to make sure that users don’t even have the option to ignore the warnings in security-critical applications. Banking, insurance and e-commerce web applications are just a few examples of the types of websites that can benefit from HSTS.
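As a minimal sketch of what setting this header involves, the helper below builds a Strict-Transport-Security header value. The one-year max-age and the function names are illustrative choices, not taken from the article:

```python
# Minimal sketch: build an HSTS response header to attach to every response.
# The one-year max-age and the helper name are illustrative assumptions.

ONE_YEAR = 31536000  # seconds

def hsts_header(max_age=ONE_YEAR, include_subdomains=True):
    """Build a (name, value) pair for the Strict-Transport-Security header."""
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

print(hsts_header())
```

In a real deployment you would attach this header to every HTTPS response, so that the browser’s max-age timer is refreshed on each visit.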
This should account for any mistakes that users could possibly make, but what if the mistake is a little bit higher up the chain of trust? Let’s say one of the certificate authorities in the browser trust chain is compromised, and a certificate is signed on behalf of your website? It is obvious that none of the security benefits of the HSTS header apply here, since the attacker is in possession of a valid certificate. This scenario seems far-fetched, but in 2011, Dutch certificate issuer DigiNotar was hacked and around 500 certificates were signed in its name by attackers. Google quickly discovered the fake certificates via the Public Key Pinning feature they built into Chrome for Google domains. In the process, they removed DigiNotar from their list of trusted certificate issuers, and the company was declared bankrupt the very same month! This highlights a few salient points about TLS: browser vendors take vulnerabilities regarding encryption very seriously, they hold certificate issuers to high standards, and they won’t fail to punish those who don’t comply – by no longer trusting the certificates they’ve signed.
When a Certificate Authority (CA) signs a certificate for a site, currently it is not required to notify the owner of that site. This is obviously a problem and shouldn’t be the standard. However, there is now a mechanism that will submit each issued TLS certificate to a public log. It’s called Certificate Transparency (CT). While purely voluntary for now, it will become mandatory in the near future. The CT program will provide an open and almost real-time monitoring system for TLS certificates, making it more difficult both for CAs to erroneously issue them and for hackers to illegitimately acquire them.
HTTP Public Key Pinning
For now, though, how do we get ahead of this problem? How do we prevent hackers from signing certificates on behalf of our web pages – without our permission?
Unfortunately, we can’t! However, what we can do is prevent the use of these certificates, thanks to HTTP Public Key Pinning (HPKP) technology – at least for now. The problem with HPKP is that it’s incredibly difficult for the average webmaster to achieve. If you do something wrong, you might render your website useless for a very long time and there is nothing you can do about it. To understand why, we need to take a look at how HPKP works.
Since Google Chrome 13, websites can send their certificate’s public key fingerprint (a hash of the server’s public TLS key) to the browser using the Public Key Pinning HTTP response header. Browsers store these fingerprints locally, together with the hostname to which they belong. When a user establishes a connection to the website and the browser encounters a certificate other than the pinned one, it refuses to establish a secure connection, and can even report the URL to a designated endpoint on the server using the report-uri field. The Public Key Pinning feature allows us to protect our users and websites in a circumstance in which the authorities required to secure communications are somehow duped into signing a fake certificate.
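As a sketch of how such a pin is derived, HPKP uses the base64-encoded SHA-256 hash of the certificate’s SubjectPublicKeyInfo. The snippet below assumes you already have the DER-encoded SubjectPublicKeyInfo bytes (normally extracted with a TLS library or `openssl x509 -pubkey`); dummy bytes stand in for a real key here, and the helper names are illustrative:

```python
# Sketch: derive an HPKP pin and build the Public-Key-Pins header.
# `spki_der` is assumed to hold DER-encoded SubjectPublicKeyInfo bytes;
# the dummy byte strings below are placeholders, not real keys.

import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """Base64-encoded SHA-256 hash of the SubjectPublicKeyInfo, as HPKP expects."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def hpkp_header(primary_pin: str, backup_pin: str, max_age: int = 5184000) -> tuple:
    # HPKP requires at least one backup pin so you can rotate keys safely.
    return ("Public-Key-Pins",
            f'pin-sha256="{primary_pin}"; pin-sha256="{backup_pin}"; '
            f"max-age={max_age}")

pin = spki_pin(b"dummy-subject-public-key-info")
print(hpkp_header(pin, spki_pin(b"dummy-backup-key")))
```

The backup pin is essential: without a second, offline key to fall back on, losing the pinned private key locks your returning visitors out.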
So what happens if you lose access to your private key? The simple answer is: every user who has already visited your website will lose access to it for as long as the public key is pinned. Whenever they want to visit your website again, they will receive an error message, because the new public key does not match the pinned one. This is why browser vendors need an alternative to HPKP that doesn’t put users and site owners at such risk. With HPKP deprecated, you can either:
- Rely on the CA to respect the choice of authority allowed in the CAA DNS entry, or
- Check the CT log to determine whether there is a certificate signed on behalf of your site through one of the CT websites such as crt.sh
The Certificate Transparency program becomes mandatory in April 2018: Google has announced that every newly signed certificate must be included in the CT logs, otherwise the connection will be refused by the browser. Certificates signed before that deadline are exempt, however – if, say, a ten-year certificate was signed on your behalf prior to 2018, you can block such cases by configuring the Expect-CT header in enforce mode.
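As a sketch, an Expect-CT response header with the enforce directive might look like this (the max-age value and the report endpoint are illustrative, not taken from the article):

```
Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-report"
```

With enforce set, the browser refuses connections using certificates that do not satisfy the CT requirements, rather than merely reporting them.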
The advantage of these security headers is that they enforce the specified browser behaviour for the time (in seconds) given in the max-age directive. For this reason, it is necessary to refresh max-age by sending the HSTS and HPKP headers with every response.
If any of the local HSTS or HPKP records that browsers store on the user’s file system are deleted or expire, these strict security mechanisms will obviously have no effect. Did we say they can be deleted? Yes!
According to a presentation delivered by Eleven Paths at Black Hat EU 2017, it is possible to wipe or disable the cache of HSTS records in all major browsers (Firefox, Chrome, Edge and IE). This is done by exceeding the space available for these lists.
In Google Chrome, we can query the HSTS and HPKP lists via chrome://net-internals/#hsts. Firefox has no built-in way to view these lists, but you can access them using the PinPatrol addon, developed by Eleven Paths.
Firefox manages its HSTS and HPKP lists in a text file limited to 1024 lines; the PinPatrol addon simply displays this file.
You might think that 1024 entries are more than enough for a regular user. However, for an attacker who wants to push the limits, it is exactly this limit that can be used as a vector for new attacks.
How Can Hackers Use Firefox’s Score Value?
When more than 1024 entries accumulate, Firefox deletes old entries in the list to make space for new ones. To decide which, Firefox uses an interesting detail: the Score value. The Score value indicates how often a site has been visited by a user on different days. For example, if a site is visited by a user for the first time (or if the values set by the site have expired), this value is set to ‘0’. If it is visited again the next day, the value is updated to ‘1’. On a subsequent visit on another day, it is updated to ‘2’. This value is updated every time a user visits the site on a new day. When the list reaches 1024 lines, the record with the lowest Score is deleted to make space for new entries.
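The eviction behaviour described above can be sketched as follows; the field names and helper are illustrative, not Firefox’s actual code:

```python
# Sketch of Firefox's score-based eviction as described above: the list holds
# at most 1024 entries, and when it is full, the entry with the lowest Score
# (fewest distinct days visited) is dropped. Names are illustrative.

MAX_ENTRIES = 1024

def record_visit(hsts_list, host, today):
    entry = hsts_list.get(host)
    if entry is None:
        if len(hsts_list) >= MAX_ENTRIES:
            # Evict the entry with the lowest Score to make room.
            victim = min(hsts_list, key=lambda h: hsts_list[h]["score"])
            del hsts_list[victim]
        hsts_list[host] = {"score": 0, "last_day": today}
    elif entry["last_day"] != today:
        # Visited again on a different day: the Score goes up by one.
        entry["score"] += 1
        entry["last_day"] = today

hsts = {}
record_visit(hsts, "example.com", 1)
record_visit(hsts, "example.com", 2)
print(hsts["example.com"]["score"])  # → 1
```

This is exactly why an attacker who floods the list with fresh (Score 0) entries can push out legitimate records with low scores.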
Can website records with a Score value of ‘1’ or higher also be deleted? Researchers repeated the same attack the next day, raising the Score of the subdomains of cloudpinning.com by one point. They were able to delete records where the Score value was ‘0’ or ‘1’.
But how realistic is it that they were able to repeat the same attack on another day? Rather than waiting, the researchers recommend Delorean, a Network Time Protocol (NTP) MitM tool. NTP is a relatively old protocol – it predates SSL by about ten years – so it is easy to manipulate the time on some Mac and Linux machines just by intercepting NTP traffic and sending back wrong timestamps. For more information about Delorean, see Bypassing HTTP Strict Transport Security, a study written by Jose Selvi for Black Hat 2014.
What Happens When Records are Deleted?
HSTS and HPKP headers are valid starting from the date they were added to the list, and expire after the time specified in the max-age directive. But once they are deleted from the list, there is no way for the browser to remember whether or not the site previously sent an HSTS header. Therefore it becomes possible to conduct a MitM attack.
What About the Chrome and Edge Browsers?
There is no concept similar to the Score value, or any site record limit, in Chrome. Instead, Chrome simply stores the HSTS and HPKP values in a JSON file: C:\Users\USERNAME\AppData\Local\Google\Chrome\User Data\Default\TransportSecurity. In theory, you could enter an unlimited number of records into this file, either through a MitM attack or simply using your own server, as explained above. However, in practice, limitations are imposed by the available memory on the victim’s machine.
Currently, what an attacker can do is make the browser issue thousands of requests, each one containing the maximum amount of public key pins and an HSTS header. During their tests, researchers found that after about ten minutes, the JSON file reached an approximate size of 500 MB, the browser froze and it was rendered useless. Even restarting the browser could not return it to its usable state, and their only option was to delete the JSON file.
The HSTS list for the IE and Edge browsers is managed by a function called HttpIsHostHstsEnabled, exported by WININET.DLL. Unfortunately, there is no formal documentation for it.
Microsoft stores the HSTS data in a database called the Extensible Storage Engine (ESE). The data used by this database is stored in the WebCache directory, under the user profile directory, with the name WebCacheV01.dat. However, as with its counterparts in Chrome and Firefox, the storage mechanism is far from perfect.
For some reason, HSTS does not work as expected in the IE/Edge browser: the table only contains data for the most popular domains.
When the researchers sent 131 requests to their test site (cloudpinning.com), they noticed that there was no change in the HSTS table, even after restarting both the browser and the computer.
What is the Solution?
Were it not for the above-mentioned vulnerabilities, HSTS would be a great invention. Just a few years ago, almost every connection your browser established with a website was completely unencrypted; today, browser vendors take TLS bugs very seriously. So it comes as no surprise that they have already taken precautions against the shortcomings of HSTS. The solution comes in the form of Preload Lists.
An HSTS Preload List is a file that is delivered together with your browser when you download it. Instead of relying on a dynamic list, like the ones that the researchers showed to be vulnerable, the sites to be protected by HSTS are included directly in the browser’s source. This removes the reliance on the Trust On First Use (TOFU) model: the browser immediately knows that the site wants HSTS enabled. However, to qualify for inclusion in an HSTS Preload List, your site must meet the following criteria:
- A valid TLS certificate
- Use of the same host when redirecting from HTTP to HTTPS
- All subdomains must be served over a secure connection, including www
- The max-age value in the HSTS header must be set to at least 18 weeks (i.e. 10886400 seconds), and the includeSubDomains and preload directives must be present in the HSTS header:
Strict-Transport-Security: max-age=10886400; includeSubDomains; preload
For further information, see the slides the researchers published for Black Hat EU 2017, Breaking Out HSTS (and HPKP) on Firefox, IE/Edge and (Possibly) Chrome.
Two Critical Vulnerabilities in vBulletin
vBulletin is a very popular forum script which is also commonly found on websites in the Alexa Top 1 Million.
According to an independent researcher’s report, vBulletin contains both a local file inclusion (LFI) vulnerability and an arbitrary file deletion vulnerability. The most striking aspect of the report is that the researchers had been trying to reach vBulletin’s developers since November 21, 2017, but were unable to secure a response! Consequently, there is no published patch for these vulnerabilities.
In this section, we will only focus on the details of the LFI vulnerability.
The Cause of the Local File Inclusion Vulnerability
The GET parameter vulnerable to LFI is called routeString. Whenever you pass that parameter to the index.php file, vBulletin conducts a variety of checks on the value: it checks whether the supplied value contains one or more forward slashes, and whether you’ve attempted to pass the path to a gif, png, jpg, css or js file. To detect the latter, it simply examines the value following the final period (.) character.
The table below shows how the check behaves for some example values.

| routeString value | Allowed? | Reason |
| --- | --- | --- |
| index.php | Yes | No ‘/’ or forbidden extensions |
| test.gif | No | The file has the extension ‘gif’ after the final period |
| dir/test | No | There are forward slashes |
| test.gif. | Yes | No forbidden extensions, or forward slashes |
| ..\something | Yes | No forbidden extensions after the final period |
As you see, the first three rows hold no surprises: index.php is allowed, as expected, and the other two are blocked. The last two, however, are unintended. Let’s start with the test.gif. input. Why does it pass the check?
As mentioned, vBulletin only checks whether there is a forbidden extension after the final period. Since there is another period right after the gif extension, the extension check yields an empty string. This would probably be a correct check on a Linux system, but it doesn’t account for a Windows quirk: when Windows encounters a file name with one or more trailing dots, it simply strips them out. So, while vBulletin sees a file called ‘test.gif.’ with a trailing dot, Windows returns the content of ‘test.gif’ instead. This means that the extension check is bypassed.
But why does ‘..\something’ also pass the check? Once again, vBulletin has forgotten to take the Windows file system into consideration. While banning forward slashes might be enough to prevent LFI in a Linux environment, in Windows, backslashes serve the exact same purpose as directory separators. That’s why the LFI vulnerability is restricted to Windows machines and why this particular input bypasses the filter. If you can include a file such as the server’s access log, you can turn a simple file inclusion into remote code execution.
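The flawed check described above can be sketched like this; the function and variable names are illustrative, not vBulletin’s actual code:

```python
# Sketch of the flawed routeString check described above (names are
# illustrative, not vBulletin's actual code). The extension is taken as
# whatever follows the final period, so "test.gif." yields an empty string,
# and backslashes are never treated as path separators.

FORBIDDEN = {"gif", "png", "jpg", "css", "js"}

def is_allowed(route: str) -> bool:
    if "/" in route:                      # only forward slashes are blocked
        return False
    ext = route.rsplit(".", 1)[-1] if "." in route else ""
    return ext.lower() not in FORBIDDEN   # empty "extension" passes

print(is_allowed("test.gif"))       # False – blocked as intended
print(is_allowed("test.gif."))      # True  – empty "extension" slips through
print(is_allowed(r"..\something"))  # True  – backslashes are not checked
```

On Windows, both bypasses become exploitable: the OS strips trailing dots and accepts backslashes as directory separators.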
To learn more about the details of this LFI, see SSD Advisory – vBulletin routestring Unauthenticated Remote Code Execution.
Security Researchers Need to be More Creative!
Among web security professionals there is an important rule: Do not trust data from the user!
But user data doesn’t necessarily mean a POST parameter or some JSON data. Instead, user-controlled values can occur in the strangest of places. We generally refer to these as ‘second order’ vulnerabilities, and Robert Salgado has shown a few unexpected sources of user input that lead to real-life vulnerabilities.
In the first example in the report, the researcher showed how an XSS payload was injected into the PowerDNS web interface – through DNS queries! The blog post demonstrates how an attacker might issue a DNS query containing an XSS payload, and shows that the payload sent via the DNS query was executed in the PowerDNS console.
In the second example, the researcher explained the impact of a vulnerability in the SSL Tester tool, which Robert Salgado reported to Symantec three years ago. If you uploaded an SSL certificate whose common name contained an XSS payload, the website would reflect it back to you without sanitizing the output – a classic XSS vulnerability. The screenshot in the researcher’s write-up shows the payload being executed: the content of the user’s cookies is displayed in an alert popup.
The final example provided by the researcher is the Rough Auditing Tool for Security (RATS), a static code analysis program developed by CERN’s Computer Security Department that has not been updated since 2013. The program can generate a report in HTML format. Unfortunately, this feature can also be used as an attack vector for XSS payloads. As illustrated in the article, the attacker uses XSS payloads as file names in the operating system (visible with the ls -l command). To exploit this, you have to scan a malicious application that contains an XSS payload in one of its file names. The screenshot in the write-up shows that the XSS payload was executed successfully: the alert popup displays the text specified in the payload.
The moral of the story is:
- Code injections are not always executed via HTTP requests
- Practically every input point (e.g. log files, API messages or database records) can be attacked, and these points should all be taken into consideration
- It is necessary to apply sanitization depending on the context
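As a minimal sketch of the last point, values such as file names must be escaped for the context in which they end up – here, an HTML report like the one RATS generates. The helper name and the table-cell wrapper are illustrative assumptions:

```python
# Sketch of context-dependent sanitization: file names destined for an HTML
# report are HTML-escaped at output time. The helper name is illustrative.

import html

def report_row(filename: str) -> str:
    # Escaping neutralises payloads like <script>...</script> hidden in a
    # file name before they reach the HTML context.
    return f"<td>{html.escape(filename)}</td>"

print(report_row('<script>alert(1)</script>.c'))
```

The same value would need a different escaping routine if it ended up in a shell command, a SQL query or a log line – the context determines the sanitization.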
This is a Security Bloggers Network syndicated blog post authored by Netsparker Security Team. Read the original post at: Netsparker, Web Application Security Scanner