Social Media Countermeasures – Battling Long-Running Scams on YouTube, Facebook, Twitter and Instagram

For the past few years, I’ve been documenting, screenshotting, and sharing examples of criminal campaigns on the three big social media platforms: Facebook, YouTube and Twitter. I’m not that interested in speculating whether or not something is fake content falsely amplified by nation-state-sponsored threat actors (i.e. coordinated inauthentic behavior); instead, I’ve been focusing on two (a lot less media-sexy) themes:

  1. low-tier criminals using these platforms to promote their services
  2. so-called “support scams” targeting mainly Facebook page owners

What is common across these two is that they keep getting through the social media platforms’ automatic filtering. I call this filtering – the good-willed type, not the censorship type – social media countermeasures. It’s a term I think I picked up from Destin, who runs the Smarter Every Day YouTube channel, but I haven’t really seen it used elsewhere. In a nutshell, social media platforms are trying to create countermeasures to prevent malicious behavior on their platforms, while at the same time cyber criminals are developing counter-countermeasures to bob and weave their way around detection and filtering. Sometimes these criminals simply operate in a grey area not covered explicitly by a platform’s Terms of Service, which makes developing effective countermeasures even harder. Let’s take a look at a few examples.

Instagram hack promotions

Apparently, people getting locked out of their Instagram accounts is a pretty common occurrence.

Back in 2020, this type of Instagram hacking service advertisement was quite rampant in YouTube comments. Looking at the samples I’ve saved, a few things spring to mind:

  • The accounts that leave these copypaste comments could be either hijacked or completely fake. However, there’s no clear pattern that would fit all of them, so I suspect it’s a mix of both.
  • Sometimes the comments are long and elaborate “real life stories”. I don’t think I’ve seen these recently though, so that style probably wasn’t worth the time and effort for the criminals.
  • The next step in adding legitimacy, in an effort to avoid countermeasures, was to use another account to automatically reply to the first fake promo comment. This technique has been popping up with multiple different variations of the reply comment. At first these fake replies were almost instant, but eventually the malicious actors introduced an artificial delay and then even built reply threads where accounts “discussed” how great and reliable this hacker was and how he had totally helped them get their Instagram accounts back.
  • Sometimes the name of the hacker has been written as a hashtag or included as the commenter’s name, again most likely to avoid basic text-match-based detection (see the sketch after this list).
  • For screenshots and more examples, see this continuously updating thread on Twitter.
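
To make the text-matching point concrete, here’s a minimal sketch – my own illustration, not any platform’s actual filter – of why a naive blocklist misses the promo once the hacker’s handle moves into a hashtag or the commenter’s display name. The handle “hackerman” and both function names are made up for the example:

```python
import re

# Hypothetical blocklist of known promoted handles.
BLOCKLIST = {"hackerman"}

def naive_filter(comment_body: str) -> bool:
    # Only looks at the comment body, as literal text.
    return any(name in comment_body.lower() for name in BLOCKLIST)

def normalized_filter(comment_body: str, commenter_name: str) -> bool:
    # Fold every place the handle can hide into one searchable string:
    # body text, hashtags, and the commenter's display name.
    text = f"{comment_body} {commenter_name}".lower()
    # Strip hashtag and separator tricks like #hacker_man or Hacker.Man.
    text = re.sub(r"[#_.\s]+", "", text)
    return any(name in text for name in BLOCKLIST)

print(naive_filter("great service, he recovered my account!"))       # False
print(normalized_filter("great service #hacker_man", "Hacker.Man"))  # True
```

Real filters are of course far more sophisticated, but the asymmetry is the same: every new field or obfuscation trick the scammers adopt is another normalization step the platform has to bolt on.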

Although I started seeing this type of promotion mainly on YouTube, it has since moved to Facebook and Twitter comments as well. I suspect the reason it started and is still prevalent on YouTube is that its countermeasures for text aren’t as sophisticated as Facebook’s or Twitter’s. YouTube naturally does a better job at detecting all sorts of suspicious things in videos (as those are its main source of revenue). Also, what’s the “script-kiddie” equivalent term for these? There must be one.

Phishing targeting Facebook Business Manager accounts and Facebook Page admins

This scamming scheme has stayed relatively unchanged over the past few years, which could indicate that it has been working for the criminals. In a nutshell, criminals either create a new Facebook page or rename a hijacked one to something like “Page Temporary Blocked” or “Page Violated”. They use official Facebook iconography as the profile picture and mass-send private messages to Pages or Page admins. The messages vary, but the theme is always the same: your Page has been flagged for doing something wrong, and Facebook will shut it down unless you click this link to resolve the issue. Usually there’s also some sort of sense of urgency added, like “this notice is valid only 24 hours” and so forth. Textbook stuff. The links themselves have used either homograph attacks or Facebook’s official m[.]me URL shortener.
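
Since homograph attacks are the more technical of the two link tricks, here’s a minimal sketch of how a lookalike URL could be flagged. This is purely illustrative: the allowlist, the confusables map, and the function name are my own assumptions, and a real system would use the full Unicode confusables data rather than three hand-picked characters:

```python
from urllib.parse import urlparse

# Assumption: a tiny allowlist of legitimate domains for the example.
TRUSTED = {"facebook.com", "m.me"}

# Fold a few Cyrillic lookalikes to their Latin counterparts.
CONFUSABLES = str.maketrans({"\u043e": "o", "\u0430": "a", "\u0435": "e"})

def flag_homograph(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Internationalized hostnames travel as punycode ("xn--" labels);
    # decode them back to Unicode before comparing.
    if any(label.startswith("xn--") for label in host.split(".")):
        try:
            host = host.encode("ascii").decode("idna")
        except UnicodeError:
            return True  # malformed punycode is suspicious on its own
    folded = host.translate(CONFUSABLES)
    # Suspicious: folds to a trusted domain without being that domain.
    return folded in TRUSTED and host not in TRUSTED

# A Cyrillic 'о' in place of the Latin 'o' is visually identical:
print(flag_homograph("https://faceb\u043eok.com/appeal"))  # True
print(flag_homograph("https://facebook.com/appeal"))       # False
```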

For screenshots and more examples, see this continuously updating thread on Twitter.

The good thing is that while these attacks do come through (i.e. they’re not caught by automated filtering), they seem to be short-lived. Sometimes the fake page has been banned before I’ve even had the chance to review the message. But this is just one way criminals are targeting Facebook admins: my ex-colleagues at WithSecure discovered malware specifically designed to hijack Facebook Business accounts. Read more about the “Ducktail” operation here.

In the summer of 2022, I also saw this same type of scam on Twitter. This time the criminals had taken over verified accounts and then used those to spam private messages to who knows how many random accounts. The messages followed the same theme as the ones described above.

Note that going for verified accounts comes with an obvious downside for the criminals: they can’t change the account handle without losing the verified status, which has been Twitter’s policy for a decade. To be honest, I’d think a more legitimate-sounding name like “Twitter Support”, even without a Verified badge, would work better for this purpose than a verified but completely unrelated account name.

Layered attack to counter layered defence on Instagram

As a final curiosity, I’d like to share this interesting Instagram attack from 2020. I haven’t seen this type since, so it might be that Instagram’s countermeasures now prevent it, but it’s still worth a look, as I think it illustrates how criminals can layer their attack to scale it. Here’s the breakdown:

  • First, you get a notification that you have been tagged in a photo. Intrigued, you’ll check it out.
  • The photo has been posted by a random account (in my case it didn’t have a profile picture and the name was gibberish). I assume this is because the attacker knows this type of behavior will be flagged, so it’s not worth their effort to make this first account seem legitimate. It also makes it faster to create multiple accounts to spam this first step of the attack.
  • Tapping the photo, you can see that a lot of accounts have been tagged in it. However, the real trick is this: this random – and frankly quite fake and suspicious-looking – account isn’t itself sharing the malicious link. Instead, the photo encourages you to click on another tagged profile.
  • This second profile is the one that actually includes the bit[.]ly link leading to the next phase of the scam. Judging by the profile’s name, more than one of these had been created as well.
  • To my understanding, the gist of this whole scheme was to run the spamming phase through multiple low-effort, low-value accounts, so that even when those are taken down, the real money-maker accounts stay up.

So, as you can see, social media countermeasures are easier said than done. There’s a lot more for the engineers at these companies to figure out than how to prevent copypaste spamming or block known malicious URLs. It’s a cat-and-mouse game, and inevitably, every now and then, the criminals will be one step ahead. This is why sharing awareness of these scams matters. Practice good security hygiene. Be suspicious. Think before you click.

And most importantly, share your learnings!

*** This is a Security Bloggers Network syndicated blog from Privacy & Security – Joel Latto authored by Joel Latto. Read the original post at: https://joellatto.com/2022/09/01/social-media-countermeasures-battling-long-running-scams-on-youtube-facebook-twitter-and-instagram/