
Limited Impact of Phishing Site Blocklists and Browser Warnings

The life of a phishing site is brief but impactful. A study published earlier this year found the average time span between the first and last victim of a phishing attack is just 21 hours. The same study observed that the average phishing site appears in industry blocklist feeds nearly 9 hours after the first victim visit. By that time, most of the damage is done.

Blocklists are an important part of how the security community fights phishing attacks. Many email filtering tools, browser-blocking services (such as Google Safe Browsing and Microsoft SmartScreen), and other controls use industry blocklists to limit user exposure to phishing sites.

However, it is important to recognize that blocklists and controls that depend on them do not fully mitigate the impact of phishing attacks. In fact, an estimated 65 percent of phishing victims are compromised prior to the URL being listed in anti-phishing blocklists. Additionally, the unsafe site warnings presented in browsers are often ignored. An estimated 37 percent of traffic to phishing sites happens after those warnings are in place. 

 
[Figure: Lifecycle of a Phishing Site (Image Source)]

A more successful strategy is a layered approach that prioritizes early detection and rapid takedown while using anti-phishing blocklists and browser-blocking services as an additional safeguard.  

Early Detection in the Phishing Lifecycle

The earlier a phishing site is detected, the faster it can be mitigated. There are multiple ways to detect phishing sites in the wild:

  • Monitoring domain registrations – a significant portion of phishing sites are hosted on domains registered by the threat actor. Domain registration data can be mined for look-alike domains, and content hosted on newly registered domains can be analyzed for phishing content (a look-alike matching sketch follows this list).

  • Monitoring newly-issued SSL certificates – more and more phishing sites use SSL certificates to appear legitimate. Newly issued certificates, visible in Certificate Transparency logs, can be analyzed to identify phishing sites.

  • Analyzing referer logs – phishing sites often link to images and other resources hosted on the real site, and they frequently redirect victims to the real site after they have been phished. The referring URLs in the legitimate site's referer logs can be analyzed to detect phishing sites (a referer-log sketch follows this list).

  • Installing beacon code – threat actors often scrape target websites when building phishing sites. Adding code to the legitimate site that sends an alert whenever it runs on an unauthorized host can detect those copies (a beacon sketch follows this list).

  • Analyzing email spam – the vast majority of phishing sites rely on email for distribution. Phishing sites can be detected by sourcing large volumes of spam messages and analyzing the URLs they contain. 

  • Analyzing SMS abuse – outside of email, SMS text messages are also often used to distribute phishing lures. URLs parsed from SMS spam and abuse reports can be analyzed to detect phishing sites.

  • Investigation and pivoting – there are often elements of a phishing attack that, when investigated and pivoted on, uncover additional phishing sites. A basic example is finding additional phish hosted on the same website or domain.
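
As a concrete illustration of the first method, here is a minimal TypeScript sketch that scores newly registered domains against a list of protected brand names using edit distance. The brand list, domain feed, and threshold are all placeholder assumptions; a production system would also handle homoglyphs, keyword variants, and far larger volumes.

```typescript
// Minimal look-alike matching sketch. BRANDS and newDomains are placeholders;
// a real pipeline would consume an actual registration feed.

// Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

const BRANDS = ["examplebank", "examplepay"]; // brands to protect (placeholder)
const newDomains = ["examp1ebank.com", "secure-examplepay.com", "unrelated-shop.net"];

for (const domain of newDomains) {
  const label = domain.split(".")[0]; // second-level label only
  for (const brand of BRANDS) {
    // Flag labels that embed the brand or sit within two edits of it.
    if (label.includes(brand) || editDistance(label, brand) <= 2) {
      console.log(`Possible look-alike: ${domain} (matches ${brand})`);
      break;
    }
  }
}
```

The edit-distance check catches single-character swaps such as examp1ebank, while the substring check catches brand names embedded in longer labels such as secure-examplepay.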
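
In the same spirit, the sketch below scans a web server access log for requests whose Referer points at a host the organization does not control, a common sign that a cloned page is hot-linking images or links from the real site. The log path, the set of owned hostnames, and the assumption of combined log format are all placeholders.

```typescript
// Minimal referer-log sketch. OWN_HOSTS and the log path are placeholders, and
// the combined log format ("request" status bytes "referer" "user-agent") is assumed.
import { readFileSync } from "node:fs";

const OWN_HOSTS = new Set(["example.com", "www.example.com", "cdn.example.com"]);

const lines = readFileSync("/var/log/nginx/access.log", "utf8").split("\n");
const suspects = new Map<string, number>(); // external referring host -> hit count

for (const line of lines) {
  const quoted = line.match(/"[^"]*"/g); // quoted fields: request, referer, user-agent
  if (!quoted || quoted.length < 3) continue;
  const referer = quoted[quoted.length - 2].replace(/"/g, "");
  if (referer === "-" || referer === "") continue;
  try {
    const host = new URL(referer).hostname;
    if (!OWN_HOSTS.has(host)) {
      // An external page is pulling our links/images: possibly a cloned login page.
      suspects.set(host, (suspects.get(host) ?? 0) + 1);
    }
  } catch {
    // Malformed referer value; skip it.
  }
}

suspects.forEach((hits, host) => console.log(`${host}: ${hits} referred requests`));
```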
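
Beacon code can be as simple as a small script, served with the legitimate pages, that reports where it is actually running. The authorized hostnames and the alerting endpoint below are placeholders, and navigator.sendBeacon is used only as one convenient way to fire the report.

```typescript
// Minimal beacon sketch shipped with the legitimate site's pages.
// AUTHORIZED_HOSTS and the alert endpoint are placeholders.
const AUTHORIZED_HOSTS = new Set(["example.com", "www.example.com"]);

if (!AUTHORIZED_HOSTS.has(window.location.hostname)) {
  // The page is running on a host we do not control: likely a scraped copy.
  const report = JSON.stringify({
    host: window.location.hostname,
    url: window.location.href,
    referrer: document.referrer,
    ts: new Date().toISOString(),
  });
  // sendBeacon fires a small asynchronous POST without blocking the page.
  navigator.sendBeacon("https://alerts.example.com/phish-beacon", report);
}
```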

These methods can detect phishing sites very early in the attack lifecycle, in some cases as they are being staged and tested by the threat actor. They are more proactive than blocklists (it’s often these methods that eventually lead to URLs being added to anti-phishing blocklists). 

The sources used in these methods are far more “raw” than blocklists. Using them for detection requires a robust collection process that is capable of efficiently sourcing, ingesting, and parsing high volumes of data in diverse formats. It also requires a scalable curation process that filters out noise and isolates potential phishing attacks. 

Mitigating Phishing Sites and Minimizing Impact

Browser-based blocking services such as Google Safe Browsing and Microsoft SmartScreen can help mitigate the impact of phishing sites by warning potential victims when they try to visit them. Defenders should submit phishing sites they detect to these services. But as previously mentioned, most victims visit a phishing site before it is listed by browser-based blocking services, and many users ignore the warnings: almost 37 percent of traffic to phishing sites takes place after browser-based warnings have been put in place.
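
One way to track that listing gap is to check whether a detected URL has shown up in Google Safe Browsing yet. The sketch below uses the Safe Browsing v4 Lookup API (threatMatches:find); the API key, clientId, and example URL are placeholders, and quotas and usage terms follow Google's documentation.

```typescript
// Hedged sketch: check a URL against Google Safe Browsing via the v4 Lookup API.
// The API key, clientId, and example URL are placeholders.
const API_KEY = process.env.SAFE_BROWSING_API_KEY ?? "";

async function isListed(url: string): Promise<boolean> {
  const resp = await fetch(
    `https://safebrowsing.googleapis.com/v4/threatMatches:find?key=${API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        client: { clientId: "example-org", clientVersion: "1.0" },
        threatInfo: {
          threatTypes: ["SOCIAL_ENGINEERING"], // Safe Browsing's phishing category
          platformTypes: ["ANY_PLATFORM"],
          threatEntryTypes: ["URL"],
          threatEntries: [{ url }],
        },
      }),
    }
  );
  const data = await resp.json();
  // The API returns { matches: [...] } for listed URLs and an empty object otherwise.
  return Array.isArray(data.matches) && data.matches.length > 0;
}

isListed("http://phish.example.net/login").then((listed) =>
  console.log(listed ? "Already listed" : "Not listed yet; keep pushing for takedown")
);
```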

To minimize impact, phishing sites need to be taken down as quickly as possible. This prevents the attack from claiming further victims. Paired with early detection, rapid takedown can mitigate much of a phishing attack’s impact. For example, detecting a phishing site as it is being staged and taking it offline prior to distribution fully mitigates the impact. 

Taking down a phishing site quickly can be challenging. There are no universal standards for how hosting providers, ISPs, registrars, and other entities receive and act on abuse reports. Each requires varying degrees of evidence and has its own procedures for handling reported phishing sites.

To achieve consistently fast takedown of phishing sites, organizations need to develop efficient reporting processes, trusted relationships, and automated integrations with hosting companies, registrars, and others. 
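
What those reporting processes look like varies by provider, but at minimum they involve packaging consistent evidence. The sketch below assembles a templated abuse report from hypothetical evidence fields; the abuse contact, case details, and delivery channel (email, web form, or a provider-specific API) are all assumptions.

```typescript
// Hedged sketch: assemble a templated abuse report. All evidence values, the
// abuse contact, and the delivery channel are hypothetical.
interface PhishEvidence {
  url: string;
  brandImpersonated: string;
  firstSeen: string;   // ISO 8601 timestamp
  ipAddress: string;
  evidenceRef: string; // screenshot link or internal case ID
}

function buildAbuseReport(e: PhishEvidence, abuseContact: string): string {
  return [
    `To: ${abuseContact}`,
    `Subject: Phishing content hosted on your infrastructure - ${e.url}`,
    ``,
    `A site impersonating ${e.brandImpersonated} is live at:`,
    `  URL:            ${e.url}`,
    `  IP address:     ${e.ipAddress}`,
    `  First observed: ${e.firstSeen}`,
    `  Evidence:       ${e.evidenceRef}`,
    ``,
    `Please suspend this content and confirm the action taken.`,
  ].join("\n");
}

console.log(
  buildAbuseReport(
    {
      url: "http://phish.example.net/login",
      brandImpersonated: "Example Bank",
      firstSeen: "2021-06-01T09:30:00Z",
      ipAddress: "203.0.113.10",
      evidenceRef: "case-12345",
    },
    "abuse@hosting-provider.example"
  )
);
```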

In summary, phishing site blocklists and browser-based warnings are not enough to mitigate the impact of phishing attacks. They are useful as part of a layered strategy that incorporates robust early detection and rapid takedown capabilities. For most organizations, it is more effective to partner with service providers (like PhishLabs) for these capabilities than to build them in-house. Taking this approach allows organizations to benefit from economies of scale, focused technology investments, and specialized expertise that would otherwise be unattainable.

*** This is a Security Bloggers Network syndicated blog from The PhishLabs Blog authored by Stacy Shelley. Read the original post at: https://info.phishlabs.com/blog/limited-impact-of-blocklists-and-browser-warnings