
New Research From Imperva Bot Management Tracks Gift Card Abuse

Researchers at Imperva Bot Management (formerly Distil Networks) have been tracking online bots that target the e-commerce gift card systems of major online retailers. The threat actors they’ve studied show remarkable resourcefulness and adaptability. In a recent podcast, Imperva Bot Management’s Jonathan Butler joined CyberWire’s Dave Bittner to discuss the findings.

Listen to the podcast here.

Below is a transcript of their conversation, edited into a Q&A format.

Dave Bittner (Q):  Jonathan Butler is the technical account team manager at Distil Networks, which is now part of Imperva. The research we’re discussing is titled, “GiftGhostBot Attacks E-commerce Gift Card Systems Across Major Online Retailers.”

Jonathan Butler (A):  Typically, when you get a gift card, there will be a registration process. Once it's registered, it's more or less money in your pocket to go and buy products or services from a particular retailer. Just like a credit card, these gift cards have a number on the back that identifies the card and ties it to the actual money sitting behind it. There will usually also be a PIN associated with it to further validate those funds. When you go to check the funds on the card, the system reads those digits and validates them against the PIN. That's how the validation to access the funds works.
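
To make that lookup flow concrete, here is a minimal sketch of a hypothetical check-balance routine that only reveals funds when both the card number and the PIN match. The card store, field names, and hashing choice are illustrative assumptions, not Imperva's research or any retailer's actual implementation.

```python
import hashlib
import hmac

# Hypothetical in-memory card store: card number -> (PIN hash, balance in cents).
# A real retailer would keep this in a database behind additional controls.
_CARDS = {
    "6006491234567890123": (hashlib.sha256(b"4821").hexdigest(), 5_000),
}

def check_balance(card_number: str, pin: str):
    """Return the balance only if both the card number and PIN are correct."""
    record = _CARDS.get(card_number)
    if record is None:
        return None  # unknown card number
    pin_hash, balance = record
    supplied = hashlib.sha256(pin.encode()).hexdigest()
    # Constant-time comparison so response timing doesn't hint at PIN validity.
    if not hmac.compare_digest(pin_hash, supplied):
        return None  # wrong PIN
    return balance

print(check_balance("6006491234567890123", "4821"))  # 5000
print(check_balance("6006491234567890123", "0000"))  # None
```

The important property for the rest of the conversation is that the endpoint answers "valid or not" for every submission, which is exactly the oracle the bots described below exploit.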

Q:  How are hackers going about cracking an e-commerce site’s gift card system?

A:   An attacker who sees a gift card as a target won't necessarily know the number that's associated with the funds. So it ends up forcing the adversary to effectively guess those numbers. And this is where bots come in. That adversary will write a bot script that targets the check-balance service on a retailer's site and makes hundreds, thousands, upwards of millions of guesses if they have the scale and support to do that. They can just start brute-force guessing with no real rhyme or reason. But eventually, if they make enough guesses, the probability increases drastically that they'll guess right. Once that happens, they'll have full access to the card and the funds.
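
A back-of-the-envelope calculation shows why the probability increases drastically with volume. The numbers below (how densely live cards sit in the enumerable number space) are invented for illustration and are not figures from the research.

```python
# Rough model: each guess is an independent draw from the enumerable number space,
# of which `active_fraction` corresponds to live, funded cards.
def hit_probability(guesses: int, active_fraction: float) -> float:
    """Probability of landing on at least one live card after `guesses` tries."""
    return 1.0 - (1.0 - active_fraction) ** guesses

# Illustrative assumption: 1 in 100,000 candidate numbers maps to a live card.
active_fraction = 1e-5
for guesses in (1_000, 100_000, 1_000_000):
    p = hit_probability(guesses, active_fraction)
    print(f"{guesses:>9,} guesses -> {p:.2%} chance of at least one hit")
```

Under these assumptions, a thousand guesses gives the attacker roughly a 1% chance of a hit; at a million guesses it is a near certainty, which is why the scale that bots provide matters.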

Q:  Are they also guessing the PIN as well?

A:  Yes, they're going to run the same enumeration process over both the card number and the PIN. So they'll have the card number, and then they can just randomize and start guessing the PIN at scale. Eventually, they crack it.

Q:  From the retailer point of view, you put this functionality on your website as a good gesture of customer service to the folks who are buying gift cards. What are they going to see on their end?

A:  On their end, they'd probably start seeing a bunch of validation requests coming in. If they're looking at their traffic logs, they're going to see a huge spike on the particular application call that handles the gift card balance lookup. When we see these attacks, that's typically what gives it away: the traffic logs show a large surge specifically on those calls. For retailers, it's really important to have heightened visibility into the critical application functionality that is known to be a high-value target for bot writers. What you're really looking for is that surge in traffic on those particular validation requests or balance-check requests.
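
A minimal sketch of that kind of log check might bucket requests to the balance-lookup call into time windows and flag windows that jump far above the running baseline. The endpoint path, log format, window size, and threshold below are assumptions for illustration.

```python
from collections import Counter
from datetime import datetime
from statistics import mean

ENDPOINT = "/giftcard/balance"   # assumed path of the balance-lookup call
WINDOW_SECONDS = 300             # 5-minute buckets
SPIKE_FACTOR = 5                 # flag windows 5x above the average window

def spike_windows(log_lines):
    """Yield (window_start_epoch, count) for windows that spike on the endpoint.

    Each log line is assumed to look like 'ISO8601_timestamp<space>path',
    e.g. '2019-03-01T12:00:01 /giftcard/balance'.
    """
    buckets = Counter()
    for line in log_lines:
        timestamp, path = line.strip().split(" ", 1)
        if path != ENDPOINT:
            continue
        epoch = datetime.fromisoformat(timestamp).timestamp()
        buckets[int(epoch // WINDOW_SECONDS)] += 1

    if not buckets:
        return
    baseline = mean(buckets.values())
    for window in sorted(buckets):
        if buckets[window] > SPIKE_FACTOR * baseline:
            yield window * WINDOW_SECONDS, buckets[window]
```

Real monitoring tooling does this with streaming baselines and per-client breakdowns, but the signal is the same: the balance-check call suddenly dwarfs its normal volume.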

Q:  And that would be pretty clear – if the bots started targeting you, and you were looking at these logs, chances are you would know it?

A:   I would say so. Typically you're not seeing a ton of traffic on those types of pages relative to what a bot writer is going to throw at them. You would expect a relatively low and stable volume, and the traffic patterns are usually very predictable, rising and falling with the on- and off-peak hours of the website. Whereas when a bot writer runs their script against the site, you're going to see that traffic shoot up very drastically and anomalously.

Q:   Specifically with the GiftGhostBot, how are attackers going at these things?

A:   In the GiftGhostBot scenario, what we found is that this was a very coordinated attack that targeted more than one retailer. That alone implies that there was research and coordinated effort behind these attacks. When we dug into it, we realized that vendors – particularly those not being protected by Imperva – were actually having to shut down the functionality to look up card balances on the application because it was becoming such a costly affair for them.

Q:  Now, are retailers effectively being DDoS’d by the number of requests that they’re getting, or is it that so many gift cards are being compromised? Or a little of both?

A:  It's a little bit of both. In the bot world, when you're talking about defending an application, the way attackers respond is very human in nature. If they're having success and you put a defense in front of them, it's very likely they're going to spin that botnet up to generate even more traffic. That's what we saw throughout the course of the GiftGhostBot attack. As we put more and more incremental defenses in front of it across all the different properties, it was actually evolving throughout the course of the attacks.

Very early on in these observations it was very primitive. It wasn't doing much to obfuscate itself. However, as it started to have marginal success, we ended up having to step up our defenses and put increasingly advanced and sophisticated signatures in front of it. As a result, we saw it evolve: it distributed itself over more and more IPs, it started spoofing the browsers it identified itself as, and it even moved from desktop browsers over to mobile.

Interestingly, what we saw is that there were channels within the broader attack suggesting there was more than one player involved. Over the evolution of the attack, we saw simplistic efforts come and go. The sophistication levels fluctuated and grouped into a few distinct core behaviors over the course of the attack. So it was really interesting to see that not only was this a researched and coordinated attack targeting retailers, particularly in the clothing and fashion space, but there might have even been multiple players involved, with everyone bringing their own tactics to the table.
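
The evolution described above, spreading across more IPs and rotating spoofed user agents, also leaves traces in the logs. One simple way to see it is to tally distinct source IPs and user-agent families per time window; the record layout below is an assumption for illustration.

```python
from collections import defaultdict

def distribution_per_window(records, window_seconds=300):
    """Count distinct source IPs and user-agent families per time window.

    `records` is assumed to be an iterable of dicts with 'epoch' (float),
    'ip' (str), and 'user_agent' (str) keys for the balance-check endpoint.
    """
    windows = defaultdict(lambda: {"ips": set(), "agents": set()})
    for r in records:
        bucket = int(r["epoch"] // window_seconds)
        windows[bucket]["ips"].add(r["ip"])
        # Crude user-agent family: keep the leading product token only.
        windows[bucket]["agents"].add(r["user_agent"].split("/", 1)[0])
    return {
        bucket: {"distinct_ips": len(v["ips"]), "agent_families": len(v["agents"])}
        for bucket, v in sorted(windows.items())
    }
```

A sudden jump in distinct IPs, or the appearance of mobile user agents on a call that normally only sees desktop browsers, is the kind of shift the researchers watched as the defenses tightened.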

Q:  Explain the significance of switching to iPhone and Android user agents. What’s the background on that, and why does that matter?

A:   It matters because the most important and fundamental concept to remember when you get into [malicious] organized bots is that it's all incentivized by money. It becomes an actual operation that involves investment in time, effort, and research. And in the defense against really advanced and sophisticated actors, it's not always about stopping every single request; it becomes more about how you thwart their ability to operationalize and make a business out of this.

What we saw is that as the defenses were put in place, [the attackers] actually had to invest more time, more effort, and more research into figuring out the detection tactics on our side. But more importantly, it forced them to evolve and move from desktop to mobile. And that actually increases the cost of operations for them, because those are more expensive devices to get a hold of.

And so, what ends up happening is, as they evolve, you're actually forcing the cost of their operations to go up. For very advanced and persistent actors, if you can force that bottom line to a point where it makes the whole effort or operation pointless, you discourage the motivation to the point where they're going to go away.

It’s a pretty interesting phenomena that we see often in the bot space. If there is enough of a financial incentive behind these things, they’re never going to go away. And there’s correlations to why that could happen. If you’re the only person who has that particular dataset, or you’re just a high value target that happens to hold very valuable datasets, you start to correlate the persistence and advanced nature of these attacks to that type of thing. In this case, with the GiftGhostBot, this was a direct pipeline into being able to validate very real money that can be in turn either resold or leveraged in financial transactions as a real medium to get very real goods and services in the world.

Q: Can you give some insights: When you all are protecting an organization against bots, what’s going on there? How are you blocking the bots but still allowing the normal legitimate users to get through?

A:  For Imperva, the way our bot detection system is built is, when a client makes a request to an application, we're doing a multilayered series of interrogations against that client to ultimately make a decision around, hey, are you human or not? Some of those interrogation steps get down to very simple things like, hey, is your user agent legitimate? Are you coming from a valid source, or are you coming from a hosting center? Are you doing something you otherwise shouldn't? All the way up to more advanced things like, are you running a JavaScript engine? And as the space has evolved and progressed, we're doing more and more algorithmic and probabilistic decision making via machine learning around whether the behaviors themselves are suspect.

And all of this decision making is happening in real time, on every request, very seamlessly. So, when our customers are leveraging our platform and technology to protect their applications and endpoints, we're more or less running those interrogations and making very real-time programmatic decisions that siphon out the bot traffic while still allowing through someone who's just visiting the site non-maliciously. We help promote and generate revenue for that business by making sure that non-malicious users won't be impacted.
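
As a loose sketch of that layered decisioning, and emphatically not Imperva's actual pipeline, the toy function below stacks a few of the signals Butler mentions (user-agent plausibility, hosting-center origin, evidence of a JavaScript engine, a behavioral flag) into a single verdict. The signal names, weights, and thresholds are all assumptions.

```python
def classify_request(signals: dict) -> str:
    """Toy layered bot check combining a few boolean signals into a verdict.

    Assumed keys in `signals`:
      'valid_user_agent'    - the UA string parses as a known browser
      'from_hosting_center' - the source IP belongs to a data-center network
      'ran_javascript'      - the client executed a served JavaScript challenge
      'behavior_suspect'    - a behavioral/ML layer flagged the session
    """
    score = 0
    if not signals.get("valid_user_agent", False):
        score += 2
    if signals.get("from_hosting_center", False):
        score += 2
    if not signals.get("ran_javascript", False):
        score += 3
    if signals.get("behavior_suspect", False):
        score += 3

    if score >= 5:
        return "block"
    if score >= 3:
        return "challenge"  # e.g. serve a harder client-side check
    return "allow"

print(classify_request({"valid_user_agent": True, "ran_javascript": True}))        # allow
print(classify_request({"valid_user_agent": False, "from_hosting_center": True}))  # block
```

A production system makes these calls per request, in real time, and the later layers are probabilistic rather than simple booleans, but the structure of cheap checks first and behavioral analysis last is the same idea.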

Q:  What are your recommendations for retailers in order to best protect themselves? What sort of steps can they put in place?

A:   I think first things first, it comes down to sitting down and looking at all of the functionality of the web application and making sure that the business units are joined at the hip with the security teams of those organizations. Even today, I think a lot of organizations see security as secondary to the growth of the business. Security has always kind of taken a back seat, short of those early adopters and pioneers in the space. More and more, we're starting to see that organizations are realizing the severity and true damage of these cybersecurity attacks.

It’s just sitting down and taking a mature posture on security practices within your web applications and mobile applications, and making sure that when you roll out these new functionalities, that they’re being really considered and understood at that cybersecurity layer. The people behind the functionality that enabled the GiftGhostBot attack were probably thinking, hey, this is a huge win for our team! No more do people have to call in and ask a person at the support desk what their gift card balance is. I can just go to the website and very seamlessly interact with the application to get a validation of my balance and move on.

But when you introduce that functionality on the website, you end up allowing someone to talk directly to your database of gift cards and, more or less, get creative and come up with scripts to guess those balances, cash out, and fraudulently steal money from your customers. So I think it starts with having a mature cybersecurity posture and making sure that the business teams are in lockstep with the security team.

More tactically, I would make sure that the security teams are constantly scanning the web applications, looking for anomalous behavior in the logs they have available, and making sure their tooling gives them insight into those types of attacks. And obviously, as the security space evolves and new problem sets arise, doing some education around them and talking with vendors is always a really healthy way to stay on top of this stuff.

Q:   Is there anything to be gained by doing any kind of rate limiting or things like that to keep it within the range of normal requests you would expect, but to keep these high volume requests from being able to go through?

A:   I think that's really where it gets interesting and where the problem set starts to get complex. A person looking at this who may not have boots on the ground sees it as, hey, this is a huge flood of traffic – how come we can't just rate limit it or put barriers around how many requests a client or a user can make? The reality is that with a WAF, a web application firewall, it all boils down to how the system identifies an individual user. If the adversary can spoof and obfuscate their identity with relative ease, rate limiting against these types of attacks gets really hard. That's where a bot detection system comes in, able to do more granular identification and truly say, I know you're doing all this stuff to obfuscate your behavior, but I still know that you are you – and then the rate limiting becomes a lot more effective.

It is good practice to have rate limiting in place, particularly around these types of application functionality. But when you get into advanced bot attacks, these are people who have done their research and reconnaissance on your applications and more or less know how to beat and circumvent rate-limiting measures. It's a constantly evolving space, and I think in the next five years, the bot space will continue to evolve and it's going to be a very interesting sector to be in. Any company that has serious revenue invested in its online presence and web applications should be legitimately concerned and should make sure its security practices, protocols, and tools stay up to par with what is, every day, an evolving space.
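
To illustrate why identity is the crux of rate limiting here, the sketch below keys a simple token-bucket limiter on whatever client identifier is available. If that key is just the source IP, an attacker who rotates IPs gets a fresh bucket every time; a fingerprint that survives IP and user-agent rotation makes the same limiter bite. The class, parameters, and key format are assumptions for illustration.

```python
import time

class TokenBucketLimiter:
    """Per-client token bucket: each request spends one token."""

    def __init__(self, rate_per_minute: float = 10, burst: int = 10):
        self.rate = rate_per_minute / 60.0   # tokens refilled per second
        self.burst = burst                   # maximum bucket size
        self.buckets = {}                    # client_key -> (tokens, last_seen)

    def allow(self, client_key: str) -> bool:
        now = time.monotonic()
        tokens, last = self.buckets.get(client_key, (float(self.burst), now))
        tokens = min(float(self.burst), tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[client_key] = (tokens, now)
            return False
        self.buckets[client_key] = (tokens - 1, now)
        return True

limiter = TokenBucketLimiter(rate_per_minute=10, burst=10)
# Keying on the raw IP is easy to evade by rotating addresses; keying on a
# cross-IP fingerprint from the bot detection layer is much harder to evade.
print(limiter.allow("fingerprint:abc123"))  # True until the bucket drains
```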

The security world is a really interesting one, in that defense can be relative, especially in the bot space. If you build your defenses just slightly better than the competitor down the street, you've made it that much more difficult to go after you. We see this behavior where bots tend to take the path of least resistance that still lets them accomplish their goal. So even putting in medium-effort, medium-level defenses makes your company less of a target for those bot writers.

If you would like to learn more about how Imperva Bot Management can help mitigate loss caused by bad bots you can read up on that here.

 



*** This is a Security Bloggers Network syndicated blog from the Imperva blog. Read the original post at: https://www.imperva.com/blog/new-research-from-imperva-bot-management/