Red Sift Taps GPT-4 to Better Identify Cybersecurity Threats

Red Sift today announced it is employing the GPT-4 generative artificial intelligence (AI) platform in a new Relevance Detection capability that better determines whether a suspicious online entity should be monitored.

Previously, Red Sift applied machine learning algorithms to analyze domain name system (DNS) records, Secure Sockets Layer (SSL) certificates and the WHOIS database in real time to determine whether an online entity might be suspicious. The company’s OnDOMAIN database ingests 10 million new certificates a day and checks more than 1.9 billion hostnames to drive Hardenize, the company’s attack surface management platform.
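The kind of multi-signal analysis described above can be illustrated with a minimal, hypothetical scoring heuristic. The feature names, weights and thresholds below are illustrative assumptions, not Red Sift's actual model, which the company says is based on trained machine learning algorithms over far more signals:

```python
from dataclasses import dataclass

@dataclass
class DomainSignals:
    """Illustrative features drawn from DNS, SSL and WHOIS lookups."""
    cert_age_days: int           # age of the newest SSL certificate
    domain_age_days: int         # registration age from WHOIS
    resolves_to_known_asn: bool  # DNS-derived hosting reputation

def suspicion_score(s: DomainSignals) -> float:
    """Toy heuristic: fresh certificates and registrations on unknown
    hosting look riskier. This only shows how such signals might be
    combined into a single score for triage."""
    score = 0.0
    if s.cert_age_days < 7:
        score += 0.4   # freshly issued certificate
    if s.domain_age_days < 30:
        score += 0.4   # freshly registered domain
    if not s.resolves_to_known_asn:
        score += 0.2   # hosted somewhere with no track record
    return score

# A day-old domain with a day-old cert on unknown hosting scores 1.0
print(suspicion_score(DomainSignals(1, 1, False)))  # -> 1.0
```

In practice, each signal would be continuously refreshed as new certificates and hostnames are ingested, which is why the real-time aspect matters.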

That capability makes it possible for Red Sift not only to proactively identify threats across email, domains, brand and the network perimeter but also to provide tools that shut down phishing sites and ensure ongoing compliance with email and web security protocols.

Red Sift CEO Rahul Powar said GPT-4 extends those capabilities by enabling Red Sift to surface, in natural language, recommendations that are worthy of additional investigation.

Cybercriminals are increasingly using fake websites and other tools to impersonate organizations and collect a wide range of data from unsuspecting end users. A survey conducted by MIT Technology Review Insights also found that more than half of respondents had experienced a cybersecurity attack originating from an unknown, unmanaged or poorly managed digital asset.

Organizations can, of course, request that a malicious website be taken down, but they first need to know it exists. The challenge is that, even once a site is identified, assessing the threat requires significant manual effort, especially if the language used needs to be translated. GPT-4 makes it possible to automatically create those translations and accelerate the review process, noted Powar.
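A translation step like the one Powar describes might be wired up as follows. This is a hypothetical sketch assuming the OpenAI chat completions API; the model choice and prompt wording are invented for illustration and are not Red Sift's production configuration:

```python
def build_translation_request(page_text: str, target_lang: str = "English") -> dict:
    """Build a chat-completion request payload that translates suspected
    phishing content so an analyst can review it in their own language."""
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "system",
                "content": (
                    f"Translate the following web page text into {target_lang} "
                    "so a security analyst can assess whether it is a phishing lure."
                ),
            },
            {"role": "user", "content": page_text},
        ],
    }

req = build_translation_request("Ihr Konto wurde gesperrt. Klicken Sie hier.")
# The payload could then be sent with openai.OpenAI().chat.completions.create(**req)
```

Keeping payload construction separate from the API call makes the prompt easy to test and audit before any content is sent to the model.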

Using GPT-4, OnDOMAIN can now continuously surface any undiscovered domain assets owned by an organization, as well as uncover any potential lookalikes that might exist. These capabilities not only enable organizations to identify malicious activity that could damage their brand but also discover digital assets they may have forgotten about, said Powar.
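Lookalike discovery of this kind often boils down to comparing newly observed hostnames against a protected brand name. Below is a minimal sketch using string similarity; the homoglyph table, threshold and `looks_like` helper are assumptions for illustration, not OnDOMAIN's actual logic:

```python
from difflib import SequenceMatcher

# Common digit-for-letter swaps attackers use to mimic a brand name.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s", "@": "a"})

def looks_like(candidate: str, brand: str, threshold: float = 0.8) -> bool:
    """Flag a hostname label as a potential lookalike of `brand`.

    Normalizes common homoglyph substitutions, then compares string
    similarity. Real systems layer on IDN homoglyphs, keyboard-distance
    typos and trained classifiers."""
    if candidate.lower() == brand.lower():
        return False  # the brand's own name is not a lookalike
    normalized = candidate.lower().translate(HOMOGLYPHS)
    return SequenceMatcher(None, normalized, brand.lower()).ratio() >= threshold

print(looks_like("examp1e", "example"))  # -> True (homoglyph swap)
print(looks_like("google", "example"))   # -> False
```

Running a check like this against every newly ingested hostname is what lets lookalikes surface continuously rather than through periodic manual sweeps.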

While there is obviously a lot of concern over generative AI’s potential use for malicious purposes, Powar said there would also be plenty of future examples where generative AI can make organizations more secure. In effect, generative AI is only the latest advance in an ongoing technology arms race between cybersecurity teams and malicious actors that increasingly rely on automation to launch a wide range of attacks, noted Powar. The only difference is the toolbox that everyone has access to just got a lot bigger, he added.

In the longer term, Red Sift will also take advantage of the large language models (LLMs) used to build generative AI platforms to address a wider range of attacks with more narrowly tailored capabilities, he added.

It’s still early days as far as the use of AI to combat cybersecurity attacks is concerned, but Powar said there is always going to be a need for cybersecurity professionals to be in the loop. The only difference going forward is that the scale at which attacks are launched and thwarted is likely to be much higher.


Michael Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
