Traffic is a vital commodity in the cybercrime ecosystem that enables criminals to monetize their campaigns in various ways, whether by hijacking traffic from ad networks, carrying out phishing attacks, distributing malware to vulnerable computers or sending victims to vast networks of scam sites.
Many attackers protect this source of revenue by utilizing traffic and device filtering techniques to block out security researchers and optimize the type of traffic they get. In this post, we’ll examine a tactic we see more and more in the wild—obfuscated code on pages that redirect users to malicious pages. We’ll also take a look at why scam networks that burn through huge swathes of cheap, disposable infrastructure are a destination of choice for traffic captured by these campaigns.
The redirector below, which we call CaesarV for its use of a Caesar cipher to obfuscate the redirection code on its pages, is in this case sending traffic to what RiskIQ's models identified as fake tech support pages.
Where does this traffic come from?
RiskIQ observes campaigns with CaesarV using spam techniques to build their traffic, typically by sending malicious URLs and attachments to a large number of contacts that may have been stolen from address books, harvested from websites, collected from data breach dumps or purchased from various sellers and marketing database suppliers. When a recipient clicks the URL embedded in the spam email, they are usually sent to a page on a compromised web server which then distributes the traffic among different scam pages.
One of the first things to recognize about CaesarV is the portion highlighted in blue, showing how the redirector uses a Caesar cipher to obfuscate the method of redirection as well as the address to which the traffic is going.
Shown above, the DOM from one of the CaesarV pages detected by RiskIQ contains a few interesting elements, including the highlighted script, which acts as the cipher. The script sets a shift value of 69 for the seekinga array, meaning each number in the array is a character code with 69 added to it. After deobfuscation, the character code array resolves to this redirection:
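The mechanics can be sketched in a few lines of JavaScript. This is a minimal illustration of the scheme, not the campaign's actual code: the encoded array below is a placeholder we build ourselves from the string "window.location", and the real sample's variable names and payload differ.

```javascript
// Illustrative sketch of the CaesarV scheme. The observed sample used a
// fixed shift of 69; the encoded array here is invented for demonstration.
const shift = 69;

// Hypothetical encoded payload: each character code shifted up by the key.
const encoded = Array.from("window.location", c => c.charCodeAt(0) + shift);

// Deobfuscation: subtract the shift from each code and rebuild the string.
const decoded = String.fromCharCode(...encoded.map(n => n - shift));
// decoded is now "window.location"
```

Because the shift is a simple constant, recovering the redirect target from a captured page is trivial once the key is known, which is why inspecting the resulting DOM changes (as below) is often faster than reversing each variant by hand.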
In RiskIQ’s DOM changes tab, we can see what changes the obfuscated code brings without having to deal with de-obfuscating the charCode.
From the obfuscated location change all the way to the payload, the Causes tab shows the user how this redirector eventually takes them to a fake tech support scam.
Due to their ease of use and relative effectiveness, scams such as scareware and fake rewards have become a go-to for criminals looking to accrue as much web traffic as possible, potentially for monetary gain. Each click and background request counts as a minuscule but significant drop in a vast pool of monitored, tracked and often commoditized data points.
We’ve covered massive scam campaigns before, but new ones such as the scareware example above pop up every day. Below is an example using fake rewards, another popular type of scam that taps into a different type of emotion. By offering a prize (free iPhone!) in exchange for an easy action such as filling out a brief survey or clicking through content, these actors hope to leverage a user’s excitement to draw a click (Spoiler alert: You will not be getting the iPhone).
These scam actors tend to rely on highly disposable infrastructure, often maintaining domain names that last only days. This actor's current infrastructure follows two simple naming patterns, either "come-here-now##(.)loan" or "time-to-live##(.)loan," with a rewards-themed subdomain attached, such as "competition" or "prize."
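Patterns this rigid are easy to turn into a detection rule. The regex below is a hedged sketch built on two assumptions not spelled out in the naming convention: that "##" stands for a short numeric counter, and that the defanged "(.)" is a literal dot; the optional leading label covers subdomains like "competition" or "prize."

```javascript
// Sketch of an indicator check for the observed .loan naming scheme.
// Assumptions: "##" is a 1-3 digit counter; "(.)" is a defanged dot.
const caesarvLoanDomain =
  /^(?:[a-z]+\.)?(?:come-here-now|time-to-live)\d{1,3}\.loan$/;

console.log(caesarvLoanDomain.test("prize.come-here-now12.loan")); // true
console.log(caesarvLoanDomain.test("example.com"));                // false
```

A rule like this only catches the current naming scheme, of course; when the actor rotates to a new pattern, the regex has to rotate with it.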
For a closer look at the shifting nature of this campaign's infrastructure, consider one of the domains we captured, which resolved to an IP address for only a single day.
Although these domains don’t resolve to an IP for long, these actors can be lazy and continually reuse their infrastructure. By looking at two IPs from the same scam campaign, we can see that they continually reuse hosting infrastructure to deliver their content, making it a bit easier for analysts to track them:
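The pivot analysts make here, grouping passive-DNS-style resolution records by hosting IP to surface reuse, can be sketched as follows. The records below are invented placeholders using documentation IP ranges, not real campaign data.

```javascript
// Hedged sketch: group domains by hosting IP to spot infrastructure reuse.
// All records are fabricated examples (203.0.113.0/24 is a TEST-NET range).
const resolutions = [
  { domain: "come-here-now11.loan", ip: "203.0.113.10", firstSeen: "2019-03-01" },
  { domain: "time-to-live12.loan",  ip: "203.0.113.10", firstSeen: "2019-03-02" },
  { domain: "come-here-now13.loan", ip: "203.0.113.20", firstSeen: "2019-03-02" },
];

const byIp = new Map();
for (const r of resolutions) {
  if (!byIp.has(r.ip)) byIp.set(r.ip, []);
  byIp.get(r.ip).push(r.domain);
}

// IPs hosting more than one short-lived scam domain are tracking candidates.
const reused = [...byIp].filter(([, domains]) => domains.length > 1);
```

Even when individual domains live for a day, a reused IP like the first one above becomes a durable thread for tracking the campaign.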
While relatively simple, scam campaigns are a challenge for those in charge of the security of ad networks. Their constantly shifting infrastructure means simply blocking domains and IPs isn't enough. Often, scam campaigns spread so far and wide that blocking one piece of their infrastructure is akin to playing whack-a-mole: No matter how many you hit, another will pop up. Also, the scale at which these groups likely operate means identifying scams in time to block their impact is not easy.
As a result, digital advertising ecosystems will remain a desirable target for threat actors for the foreseeable future. It is our hope that fellow threat analysts continue to dig into these scammers' tactical approaches and share them with the broader analyst community. Information sharing is necessary, at the very least, to more proactively exploit these actors' laziness and infrastructure reuse.
This article was co-authored by Ian Cowger.