GUEST ESSAY: How to detect if a remote job applicant is legit — or a ‘Deepfake’ candidate

Technology provides opportunities to positively impact the world and improve lives.

Related: Why facial recognition ought to be regulated

It also delivers new ways to commit crimes and fraud. The U.S. Federal Bureau of Investigation (FBI) issued a public warning in June 2022 about a new kind of fraud involving remote work and deepfakes.

The making of deepfakes

By some estimates, roughly half of all workers are on track to transition to sustained, full-time telecommuting. Conducting job interviews online is here to stay, and deepfakes may be part of that new normal.

The term refers to an image, video or audio clip in which the subject’s likeness or voice has been manipulated to make it appear they said or did something they didn’t.

The deepfake creator uses “synthetic media” applications powered by machine-learning algorithms, training them on two sets of videos and images. One shows the target’s likeness as they move and speak in various environments. The second shows faces in different situations and lighting conditions. The application encodes these faces as “low-dimensional representations” that can be decoded back into images and video.
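To make that architecture concrete, here is a minimal sketch in PyTorch of the shared-encoder, two-decoder design described above. It is an illustration under assumptions, not any specific tool’s implementation: the layer sizes, training loop and variable names are all hypothetical. One encoder compresses faces of both people into the same low-dimensional space; each identity gets its own decoder, and swapping decoders at inference time produces the face swap.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face crop into a low-dimensional representation."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face image from the shared low-dimensional code."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters())
)
mse = nn.MSELoss()

# Dummy stand-ins for the two training sets of 64x64 RGB face crops.
faces_a = torch.rand(8, 3, 64, 64)  # the target's likeness in many poses
faces_b = torch.rand(8, 3, 64, 64)  # other faces, situations, lighting

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own identity from the shared code.
    loss = (mse(decoder_a(encoder(faces_a)), faces_a)
            + mse(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: encode B's pose and expression, decode with A's decoder.
with torch.no_grad():
    swapped = decoder_a(encoder(faces_b))  # A's face, B's movements

Real tools add adversarial losses, much larger networks and careful face alignment, but this encode-then-swap-decoders idea is the core of the technique.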

The result is a video of one individual convincingly overlaid with the face of another. The voice is more difficult to spoof. However, faked images grow ever more convincing as the algorithms learn to mimic both general human mannerisms and the specific characteristics of the target.

Some bad actors also use this technology to create synthetic audio. In one high-profile case, criminals used a deepfaked voice to impersonate a high-level executive over the phone and successfully authorize large fund transfers. The losses totaled $243,000, and the fraud tricked employees who knew the real person.

Even deepfake examples designed to educate the public, like the doctored video of Richard Nixon delivering a moon-disaster speech he never actually gave, fool observers without meaning to.

The FBI’s warning

The FBI announced that its Internet Crime Complaint Center (IC3) had observed an uptick in employment-related fraud involving stolen personally identifiable information (PII) and deepfakes. These fraudsters frequently use ill-gotten PII to create synthetic images and videos to apply for work-at-home positions. Some of the roles include:

•Information technology (IT)

•Database design and maintenance

•Computer programming and app design

•Finance- and employment-related technology

Some of these roles involve handling intellectual property as well as employee, patient or client PII. The stakes are not as simple as lying one’s way into a new job. The larger goal is to use the stolen and synthesized likenesses to secure a position with access to valuable company data or personal information.

Protecting organizations

Deepfakes are convincing, but there are signs to look for. Machine learning isn’t flawless, and it often leaves telltale artifacts such as:

•The subject blinks too frequently or not enough (a simple automated check for this is sketched after this list).

•The eyebrows or hair, or portions of them, don’t match the subject’s face or movements.

•The skin appears overly wrinkled or too flawlessly smooth.

•The voice’s pitch does not match the speaker’s appearance or lip movements.

•Reflections in the eyes or glasses don’t match the speaker’s surroundings.

•Other aspects of the speaker’s movement or appearance don’t obey the physics or lighting one would expect in the video.
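As an illustration of the first sign on the list, here is a minimal sketch of an automated blink-rate check using the well-known eye-aspect-ratio (EAR) measure. The landmark input is assumed to come from any face-landmark detector (such as dlib or MediaPipe), and the thresholds are illustrative assumptions, not tuned values.

import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, in the standard EAR ordering."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)          # drops sharply when the eye closes

def count_blinks(ear_series, closed_thresh=0.2):
    """Count dips of the per-frame EAR below the threshold; each dip is one blink."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not closed:
            closed = True
        elif ear >= closed_thresh and closed:
            blinks += 1
            closed = False
    return blinks

def blink_rate_suspicious(ear_series, fps=30, low=8, high=40):
    """Flag blink rates far outside the typical human range."""
    minutes = len(ear_series) / (fps * 60)
    rate = count_blinks(ear_series) / max(minutes, 1e-9)
    return rate < low or rate > high

Humans blink roughly 15 to 20 times per minute, so an interview video whose blink rate falls far outside that range warrants a closer look.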

Overlaying one individual’s likeness onto someone else’s is seldom a seamless process, and spoofing a voice is likewise imperfect.

Even so, the losses from deepfake abuse are already staggering. A single “deep voice” scam led to a loss of $35 million in fraudulent bank transfers.

Best defense: awareness

The Nixon example was an attempt to educate the public through exposure, as was Jordan Peele’s deepfake of President Obama. Elon Musk once described artificial intelligence as “summoning the demon,” a warning that captures how dangerous deepfakes can be.

Beyond cultivating awareness, experts recommend companies and individuals take practical actions:

•Come up with a secret question or code word to exchange at the beginning of all online or phone conversations (a simple challenge-response sketch follows this list).

•Partner with a biometrics-focused security company and ensure its authentication technologies are up to the challenge.

•Educate employees and partners about deepfakes using the same techniques as general cybersecurity awareness.
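Expanding on the first item: a shared code word is stronger if it is never spoken aloud. Here is a minimal challenge-response sketch using Python’s standard hmac library; the secret value, function names and parameters are illustrative assumptions, not a prescribed protocol.

import hmac
import hashlib
import secrets

SHARED_SECRET = b"pre-agreed code word"  # exchanged out-of-band, e.g. in person

def make_challenge() -> bytes:
    """Caller sends a fresh random nonce at the start of the conversation."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes = SHARED_SECRET) -> str:
    """The other party proves knowledge of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """Recompute the expected response and compare in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# Usage: the caller generates the challenge; the callee answers it.
c = make_challenge()
assert verify(c, respond(c))

Because only a hash of a random challenge crosses the wire, an eavesdropper, or a deepfake operator replaying a recorded call, never learns the code word itself.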

Using technology to fight technology can take people only so far. The best defense against any new attack vector is vigilance, awareness and a willingness to ask for confirmation whenever a request raises suspicion.

About the essayist: Zac Amos writes about cybersecurity and the tech industry, and he is the Features Editor at ReHack. Follow him on Twitter or LinkedIn for more articles on emerging cybersecurity trends.

*** This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by bacohido. Read the original post at: https://www.lastwatchdog.com/guest-essay-how-to-detect-if-a-remote-job-applicant-is-legit-or-a-deepfake-candidate/