Abusing Communities

I may need to give up social media altogether. I can’t seem to avoid seeing scams in all directions, and I can’t seem to ignore them, even though writing about this stuff is no longer my living.

Perhaps it’s a curse, or the result of a misspent life. Either way, I remind myself of a quotation from Howards End, in which Margaret Schlegel says:

“You remember ‘rent’? It was one of father’s words – Rent to the ideal, to his own faith in human nature. You remember how he would trust strangers, and if they fooled him he would say, ‘It’s better to be fooled than to be suspicious’ – that the confidence trick is the work of man, but the want-of-confidence trick is the work of the devil.”

Well, if I’m doomed to whichever circle of hell is most appropriate for the incorrigible inspector of gift horse dentition, I guess I can at least claim to have alerted a few people to threats to their security.

Facebook, of course, is a hotbed of malicious activity: spam, scams, hoaxes, impersonation, even malware. I wrote about that many times when I worked with ESET, and I don’t plan to go over that ground again here, but here are three scams that have yanked my chain just recently.

Shaggy Dog Story (but not at all funny)

I’ve recently come across examples of one of those scams that strike me as particularly unpleasant, because they subvert the best intentions of people who repost the scammer’s ‘appeal’ and/or follow a malicious link.

One of the saving graces of Facebook is that it encourages communities – groups with a shared historical interest, or cultural interest, or geographical interest, or all three. I often see appeals posted to local groups from people who’ve lost keys or money, or to whom goods have somehow not been delivered, or who are appealing for some essential service, and often I see other people come to the rescue. But here’s a message that cynically exploits that public-spirited behaviour, and that has been posted (with minor variations) from various accounts to various parts of the UK and elsewhere.

At the moment, most of the examples I see are posts claiming to have found an injured or lost dog, sent to community sites requesting that the post be redistributed (or ‘bumped’) so that the owner can be found. However, other examples have been reported apparently pertaining to a stolen dog, or even a hurt, ill or lost child. Claims involving ill or lost children have long been used in social media and chain emails to trick victims into forwarding/reposting/retweeting a malicious message, so that more people are drawn into disclosing their credentials or falling for survey scams and other malicious activity. Anything involving children can attract massive volumes of reposts on media not specific to one region: in the mid-2000s, for example, NHS mail services were periodically hit by mailstorms from email users tricked into forwarding misinformation about the 2004 Indian Ocean tsunami. However, here are a couple of examples involving dogs that are clearly aimed at specific communities, though I’m seeing near-identical posts in geographically widespread communities, as far away as Australia.

  1. “Hello. If anyone is looking for this sweet boy, found him lying on the side road in [hashtag location of targeted community]. He was hit by a car in a hit and run incident. I took him to the vet he is not chipped I know someone is looking for him. He definitely misses his family, I’ll continue to take care of him in the meantime. Please bump this post to help me find his owner.”
  2. “Hello, I haven’t found the owner of this sweet pup we picked up on the road in [hashtag location of targeted community] She’s really depressed and she’s not eating. We took her to the vet she’s not chipped. Please bump this post to help me find the owner.”

Clearly, this is cynical social engineering aimed not only at dog lovers but at anyone with a sense of community responsibility. Who wouldn’t want to help get an injured or lost dog back to its owner, especially knowing that the owner probably lives in the same area? Even more so if the lost individual is not a pet but a child who is autistic and has communication difficulties. That’s a real example, I’m afraid, and it’s straight out of the 2004 tsunami spam playbook. Yet another example combines the emotional blackmail element of the lost autistic child with the child’s (also missing) dog.

What do the scammers gain in this case, you may wonder?

Usually, once the post has been shared, the scammer will edit the message into something substantially different (a form of “bait and switch”) that includes a malicious link, so that the innocent reposter appears to their friends to be promoting that link. While they’re unlikely to believe that their friend is deliberately endorsing a fake message or link, it may put a strain on the relationship, and it may damage the credibility of the community. Not that the scammer will care about any of that.

The two examples above, like others I’ve seen, are pretty stereotyped, so anyone who is aware of the scam can easily recognize scam messages with minor variations. I should point out, though, that the scammers are likely to adapt their approach as it becomes less effective. Still, at the moment, there are similarities worth pointing out.

  • The use of the trigger word ‘sweet’ as in ‘sweet boy’, ‘sweet girl’, ‘sweet pup’. I guess ‘snarly’ or ‘nervous’ just wouldn’t be so appealing.
  • Use of a regional hashtag.
  • Indication that the dog is missing his or her family and needs to be returned in the interests of its emotional/mental health (“depressed and not eating”).
  • Indication that the dog isn’t chipped, so the owner can’t be identified that way – hence the ‘need’ for widespread reposting.
  • “Please bump this post to help me find the owner.” In this case, the wording is identical across examples, which is certainly a red flag, even though a genuine appeal would also probably want you to ‘bump’ a post to keep it at the top of the list of posts, broadcast it on chat, or repost it in other groups or on your own feed. It’s what is known in the world of marketing as a “call to action”. However, one of the characteristics of this kind of scam post is that the scammer turns off commenting, so it can’t be bumped back to the top simply by commenting. (If you see an old post return to the top of a discussion list with the simple comment “bump”, that’s what is happening: it’s just a way of getting it back to the top so that more people will see it or be reminded of it.) Wouldn’t a legitimate poster want people to comment anyway, so that they can ask questions, or tag other people who might be able to help? Of course, but what the scammer doesn’t want is people noting in the comments that this is a known scam and linking to articles with reliable information. So he or she disables comments. However, they can’t stop people using an angry emoji to [dis]Like the post, posting a warning about the scam post, or alerting the owners/moderators of the group to the fact that it’s false information. Taken together, these recurring phrases are stereotyped enough to be matched mechanically, as the sketch after this list illustrates.
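None of these tell-tale signs is conclusive on its own, but together they make the posts distinctive enough to match mechanically. Here’s a minimal sketch of the idea in Python: the phrase patterns are taken from the examples quoted above, but the list and the scoring threshold are illustrative assumptions on my part, not a tested detector.

```python
import re

# Phrases drawn from the scam variants quoted above; this list and the
# threshold below are illustrative assumptions, not a tested detector.
RED_FLAGS = [
    r"\bsweet (boy|girl|pup)\b",              # the 'sweet' trigger word
    r"#\w+",                                  # a regional hashtag
    r"\b(depressed|not eating|misses (his|her) family)\b",
    r"\bnot chipped\b",                       # owner untraceable via microchip
    r"\bplease bump this post\b",             # the identical call to action
]

def scam_score(post_text: str) -> int:
    """Count how many of the stereotyped red flags appear in a post."""
    text = post_text.lower()
    return sum(bool(re.search(pattern, text)) for pattern in RED_FLAGS)

post = ("Hello, I haven't found the owner of this sweet pup we picked up "
        "on the road in #Anytown. She's really depressed and she's not "
        "eating. We took her to the vet she's not chipped. Please bump "
        "this post to help me find the owner.")

if scam_score(post) >= 3:  # arbitrary illustrative threshold
    print("Post matches several known scam patterns - treat with suspicion.")
```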

What else can you do to check, identify and warn about scam posts?

  • It’s not unusual to find that Facebook fact checkers have flagged the post as containing false information. They don’t always get this right, and flagging of posts with political content can be particularly contentious. Still, if a post like this is flagged, it’s obviously worth investigating further, or just ignoring.
  • You can use a search engine to see if the same text turns up elsewhere. Sooner or later, it’s likely that a site like fullfact.org will come across a widely-broadcast scam message and flag it.
  • Check the poster’s profile. The likelihood is that it will turn out to be a page rather than an individual’s profile, and that it will have been created recently, suggesting that it exists purely to promote the particular scam that has attracted your attention. Even if it’s a personal profile, it will probably have no friends or followers, and a bare minimum of photos – a profile picture likely stolen from a celebrity site. In any case, if you think the profile is fake, consider reporting it to Facebook. FB often takes its time about taking action, but it doesn’t take long to report a fake profile, and the more people do it, the sooner FB is likely to act.
  • You can’t post to a profile that is really a page, but you can post a review of the page explaining why you don’t recommend it. I’d be careful about doing this, though: it may make you a target for other malicious activity.
  • Check the photo used in the scam post. A reverse photo search may turn up other examples of the same photo, possibly showing where the scammer stole the image from. However, social media sites tend to restrict access to the images they show so that web crawlers like TinEye can’t find them, so not finding other examples certainly doesn’t prove that the post is genuine. (The sketch after this list illustrates the image-fingerprinting idea on which such searches are based.)
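For the curious, reverse image search services work by comparing compact fingerprints of images rather than raw pixels, which is why they can match a photo even after it has been resized or re-encoded. The sketch below computes a simple ‘average hash’ using the Pillow library; it’s a toy illustration of the principle, not how TinEye itself works, and the filenames are hypothetical.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image, convert it to grayscale, and record which pixels
    are brighter than the mean: a crude but resilient fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

# Hypothetical filenames, purely for illustration.
if hamming_distance(average_hash("scam_post.jpg"),
                    average_hash("original_photo.jpg")) <= 5:
    print("Probably the same photo, perhaps resized or re-encoded.")
```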

I can get angry about just about any kind of scam. But there’s something about scams that exploit the good intentions of people within a community – damaging those who follow the scam links, those who innocently share appeals that are later edited into scam links, and the whole altruistic ethos of the community or group – that really causes me to lose my rag. Perhaps the worst aspect of this is that people who have the best interests of others (human or canine) at heart may start to feel that they’re wrong to believe that other people also want to help as best they can. They shouldn’t: most people do have good intentions. The fault lies with sociopathic scammers, and with social media platforms that don’t always go out of their way to protect the rest of us from those scammers. The trick is not to lose your faith in humanity while staying aware of the sort of tricks that scammers play.

Here are a couple of other ongoing Facebook scam issues that I’ve mentioned before.

Account Cloning

I’ve posted previously about account cloning (and, less often, hacking) in Facebook, so I won’t bang on about it again here. However, I see so many people warning their friends that their account has been ‘hacked’ (or more likely cloned) that last year I wrote a lengthy article about it: Clone Wars Revisited – Facebook Friend Requests. As I noted in that article, while it’s a good idea to warn your friends if you think your account has been cloned, so that they don’t accept invites to connect from people pretending to be you, you may well find that such a warning almost immediately attracts comments from people (more likely bots picking up on the trigger word ‘hacked’ than real people) who offer ‘help’ to recover your ‘hacked’ account, either themselves or with links to some pseudonymous ‘good’ hacker. I noticed during a recent episode of the BBC programme Scam Interceptors that they were highlighting similar bot activity on other platforms such as Twitter. (I have some thoughts on Scam Interceptors that will be surfacing shortly in another, longer article.)

Fake Videos in Chat

Then there are the links that sometimes turn up in chat messages from your friends, usually with a hook along the lines of “Is this you in this video?” or “Did you make this?” or “I can’t believe you took part in this!” I remember malware that used hooks like this from many years back, in email as well as on social media. (In fact, many attacks, from malware to hoaxes to scams, have translated all too easily from email to social media.) In the last couple of years, though, I’ve been seeing it a lot on Facebook. Usually, when you click on the video, you are taken to a fake Facebook (or possibly YouTube, Vimeo or TikTok) page where you are required to confirm your identity by re-entering your credentials. You might think that people would find this suspicious, even if they didn’t notice that the site they’re taken to has a URL that looks nothing like FB, YT etc. Why would you need to log in again to see a Facebook or YouTube video? But it evidently works. People give up their credentials, and then find themselves facing a survey scam or a suspicious download, having already lost control of their account.
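One habit that defeats this particular trick is checking the actual hostname of a link before re-entering credentials anywhere. Here’s a minimal sketch using only Python’s standard library; the allow-list of domains is an illustrative assumption (real services use many legitimate subdomains and country variants).

```python
from urllib.parse import urlparse

# Illustrative allow-list; real services use many legitimate domains.
EXPECTED_DOMAINS = {"facebook.com", "youtube.com", "vimeo.com", "tiktok.com"}

def looks_plausible(url: str) -> bool:
    """True only if the link's hostname belongs to an expected service."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in EXPECTED_DOMAINS)

# The first URL mimics the style of phishing link described above.
for link in ("https://facebook.com.verify-login.example/video",
             "https://www.youtube.com/watch?v=abc123"):
    print(link, "->", "plausible" if looks_plausible(link) else "SUSPICIOUS")
```

Note that the fake hostname merely starts with ‘facebook.com’: what matters is the domain it ends with.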

Why does it work? Well, sometimes people just aren’t paying attention to what they’re clicking. But sometimes, perhaps, because the video they think they’re going to see is something unusual or downright pornographic.

[Reminiscence Alert]

Long before Facebook reared its often less-than-pretty head, I was responsible, among other things, for dealing with, and trying to forestall, computer virus attacks for a medical research organization. One day, my line manager passed me an email from the manager of an external unit complaining that his laptop had been infected with the worm W32/MTX and IT SHOULDN’T HAVE HAPPENED! Of course, she demanded that I supply her with an appropriate response, since it was My Fault. Well, I don’t think I whined much about the special difficulties of trying to keep viruses out of a multitude of autonomous sites across the South of England, or the fact that I hadn’t even been in the country at the time, but I certainly pointed out politely that no-one – well, definitely not me – had ever claimed that antivirus software would instantly block every new virus. In fact, I doubt if any reputable researcher would claim that any such software offers complete protection, despite the development of very efficient technologies that recognize a high proportion of previously unseen malware, or that simply create a restrictive environment in which it’s (almost) impossible for malware to execute effectively. I think it was A. Padgett Peterson who coined the acronym TOAST (The Only Antivirus Software That…) to describe overhyped security software. Not altogether amusingly, I notice that one of the market leaders still uses exactly that phrase. I can only say that such claims for security software, operating systems or hardware should be taken with large pinches of salt.

As a corollary, I believe I pointed out as tactfully as possible that, given that it was impossible to guarantee that no malware would ever get through the mail system, it wasn’t too unreasonable to expect users, even heads of unit, to be reasonably cautious about emails carrying attachments with names like Me_nude.AVI.pif or NEW_playboy_Screen_saver.SCR. This might sound like very crude social engineering, but by late 2000 MTX may have been the most prevalent malware in the world, and I don’t think I can take the blame for all those infections. Anyway, I didn’t hear another word from my line manager and the guy who made the complaint actually gave me an apology of sorts…

[End Reminiscence]
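As a footnote to that reminiscence: the ‘double extension’ trick that MTX and its contemporaries relied on – a harmless-looking media extension sitting in front of a genuinely executable one – is simple to describe mechanically. Here is a minimal sketch of such a filename check; the extension lists are illustrative, not exhaustive.

```python
# Illustrative extension lists, not exhaustive.
DECOY_EXTENSIONS = {".avi", ".jpg", ".mpg", ".doc", ".txt"}
EXECUTABLE_EXTENSIONS = {".pif", ".scr", ".exe", ".com", ".bat", ".vbs"}

def is_deceptive_attachment(filename: str) -> bool:
    """Flag names like Me_nude.AVI.pif: a decoy media extension
    immediately followed by a genuinely executable one."""
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False
    decoy, final = "." + parts[1], "." + parts[2]
    return decoy in DECOY_EXTENSIONS and final in EXECUTABLE_EXTENSIONS

for name in ("Me_nude.AVI.pif", "holiday_photos.jpg.exe", "holiday.jpg"):
    print(name, "->", "deceptive" if is_deceptive_attachment(name) else "ok")
```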

David Harley

*** This is a Security Bloggers Network syndicated blog from Check Chain Mail and Hoaxes authored by David Harley. Read the original post at: https://chainmailcheck.wordpress.com/2023/05/13/abusing-communities/