GUEST ESSAY: Why online supply chains remain at risk — and what companies can do about it

The SolarWinds hack has brought vendor supply chain attacks, and enterprises' lack of readiness to tackle them, to the forefront.

Related: Equipping Security Operations Centers (SOCs) for the long haul

Enterprises have long operated in an implicit trust model with their partners. This simply means that they trust, but don’t often verify, that their partners are reputable and stay compliant over time. Given the dynamic nature of websites today and the constantly changing integrations to a site, this implicit trust model no longer suffices.

So what does the average modern website look like? More than 70 percent of the content that loads in an end user's browser does not come from the website's own server at all. Enterprises are building client-heavy applications that execute as JavaScript at runtime, and the browser has become a modern-day operating system.

Let’s discuss how the SolarWinds hack relates to a typical website supply chain. Web architecture over the past decade followed a trend in which most web applications were server-heavy and enterprises’ data centers handled the bulk of the processing. The web browser was little more than a graphical interface or rendering engine.

Due to improved network speeds and computing capacity on client devices, the architecture has evolved over the last few years. Today’s websites integrate dozens of third-party service providers, from user analytics to marketing tags, CDNs, ads and media, and these third-party services load their code and content directly into the browser.
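Concretely, a modern page's markup often pulls executable code straight from several unrelated origins, and each of those scripts runs with the same access to the page as the site's own code. A minimal sketch (the hostnames below are made up for illustration):

```html
<!-- Each third-party script below executes with full access to the page's
     DOM, cookies and form fields. Together they form the site's JavaScript
     supply chain. Hosts are illustrative placeholders. -->
<script src="https://cdn.example-cdn.com/framework.min.js"></script>
<script src="https://tags.example-marketing.io/pixel.js"></script>
<script src="https://analytics.example-metrics.com/collect.js"></script>
```

A compromise of any one of these origins compromises every page that loads it.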

Supply chain attack tactics


Attackers have been using various techniques to exfiltrate sensitive data from websites. One common methodology used across a number of recent attacks (e.g., British Airways, Ticketmaster, NutriBullet, Focus Camera) has the following signature:

The attacker tries to get visibility into your third-party dependencies and JavaScript supply chain.

Once the attacker has identified a suitable target, they use methods such as credential theft, SQL injection or RDP attacks to gain access to the third party’s servers.

Once the attacker has control over the third-party server, they inject their own malicious code. In many cases, this code is designed to avoid detection and the attacker can now deface your website, inject inappropriate ads or even mine for bitcoin.

Over the last few years, attackers have been using such attacks to steal sensitive customer data such as payment and login information.

A UK-based ticketing website was recently attacked using the techniques described above: attackers targeted a third-party service called Inbenta and compromised its servers. By injecting their own JavaScript into Inbenta’s code, which was then loaded onto the ticketing website, the Magecart group of attackers was able to steal over 60,000 credit card numbers.

Controls and guardrails

One of the core principles of the Zero Trust security model is that organizations should not trust any external or internal agent by default and should validate every connection before granting access. Everything is assumed to be malicious until verified.

This might sound obvious when it comes to security, but traditionally it has not been the case. As in the SolarWinds example, organizations have adopted an implicit trust model, relying on approved IP addresses, ports and protocols to validate applications, third-party services and users.

The first and most immediate step toward gaining control over your website supply chain is getting visibility into all of your third-party dependencies. Once you understand the architecture and your website inventory, you can implement appropriate controls to restrict third-party access.
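A first pass at that inventory can be sketched with a small script that lists the external origins a page's static markup pulls scripts from. This is only a sketch under stated assumptions: the hostnames are made up, and dynamically injected scripts will not appear in static HTML, so a real audit also needs runtime inspection (e.g., browser DevTools or a crawler).

```javascript
// Inventory the external script origins a page depends on -- a first-pass
// view of its JavaScript supply chain. Illustrative sketch only.
function externalScriptOrigins(html, siteHost) {
  const origins = new Set();
  const re = /<script[^>]*\bsrc=["']([^"']+)["']/gi;
  let m;
  while ((m = re.exec(html)) !== null) {
    try {
      // Resolve relative URLs against the site itself, then keep
      // only hosts that differ from the first-party host.
      const host = new URL(m[1], `https://${siteHost}`).host;
      if (host !== siteHost) origins.add(host);
    } catch (_) {
      // Skip malformed URLs rather than failing the whole scan.
    }
  }
  return [...origins];
}

// Example page: one first-party script, two third-party dependencies
// (all hostnames are placeholders).
const html = `
  <script src="/app.js"></script>
  <script src="https://cdn.example-analytics.com/tag.js"></script>
  <script src="https://widgets.example-chat.net/v2/loader.js"></script>
`;
console.log(externalScriptOrigins(html, "www.example-shop.com"));
// -> [ 'cdn.example-analytics.com', 'widgets.example-chat.net' ]
```

Each origin this surfaces is a vendor whose compromise would flow straight into your users' browsers.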

In the long term, organizations must take steps to implement standards-based controls for robust and future-proof security that can respond effectively to zero-day threats. Companies like Google, Dropbox and Twitter have successfully adopted W3C and HTML5 security standards such as Content Security Policy (CSP) and Subresource Integrity (SRI) to harden their security postures and defend against client-side attacks. These browser-based controls have been around for years and are continuously updated to keep pace with evolving browser functionality and emerging threats.
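As a sketch of what those standards look like in practice (hostname and hash below are placeholders, not taken from any real deployment): a CSP header restricts which origins the browser may load scripts from, and an SRI `integrity` attribute pins a third-party script to a known hash, so the browser refuses to execute it if the file ever changes on the vendor's server.

```html
<!-- Sent as an HTTP response header (shown here as a comment),
     this CSP allows scripts only from the site itself and one vetted CDN:
     Content-Security-Policy: script-src 'self' https://cdn.example-analytics.com -->

<!-- Subresource Integrity: the browser hashes the fetched file and
     refuses to run it unless the hash matches the value below. -->
<script src="https://cdn.example-analytics.com/tag.js"
        integrity="sha384-PLACEHOLDER_BASE64_HASH_OF_EXPECTED_FILE"
        crossorigin="anonymous"></script>
```

Together, these two controls would have blunted the Magecart-style attacks described above: injected code from an unapproved origin is blocked by CSP, and a tampered file from an approved origin fails its SRI check.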

Lastly, even after implementing these controls, it’s important to actively monitor web application behavior, data transfers and the connections made at runtime. True defense-in-depth for website security is achieved by analysis, prevention and monitoring layers that work together to provide airtight security and detection mechanisms.
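One lightweight way to get that runtime visibility (a sketch, with a placeholder reporting endpoint) is CSP's report-only mode: the browser enforces nothing, but sends a report to an endpoint you control every time the page violates the policy, surfacing unexpected scripts and connections as they happen.

```http
Content-Security-Policy-Report-Only: script-src 'self' https://cdn.example-analytics.com; report-uri /csp-violation-reports
```

Note that `report-uri` is deprecated in favor of the newer `report-to` directive, but it remains widely supported in browsers; many deployments send both.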

About the essayist: Aanand Krishnan is the founder and chief executive officer of Tala Security, which protects websites and web applications from threats such as cross-site scripting (XSS), Magecart, website supply chain attacks, clickjacking and more.

*** This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by bacohido.