2019 Predictions for AI
In our first segment, we discussed IoT in 2019. In our second segment, we focused on mobile threats in the new year. And now, in the final segment of our three-part series on 2019 cybersecurity predictions, we’re looking at how artificial intelligence (AI) will shape the coming year’s threats.
No area of security carries more mystery and cachet than the field of AI. We are now increasingly seeing adversarial AI algorithms doing battle with security AI algorithms. Avast has invested heavily in developing artificial intelligence algorithms to combat the forces of adversarial AI, and our learning in this space has led us to research threats we know exist but do not yet fully understand. One of those new areas of interest is a class of attacks known as DeepAttacks.
The Age of DeepAttacks is coming
We define “DeepAttacks” as “malicious content automatically generated by AI algorithms,” and it’s a class of attack we expect to see with growing frequency.
In 2018, we saw many examples of researchers using adversarial AI algorithms to fool humans. One is the fake Obama video created by BuzzFeed, in which President Obama convincingly delivers sentences he never spoke. This is commonly called a “deepfake,” and it uses AI to trick people. We have also seen adversarial AI fool even the smartest object detection algorithms, as in research where an algorithm was tricked into classifying a stop sign as a 45 mph speed-limit sign. But real-life examples of AI-generated “fake news” remain rare, and for that, we are fortunate.
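To make the stop-sign example concrete, here is a minimal sketch of the adversarial-perturbation idea behind such attacks, using the Fast Gradient Sign Method (FGSM). The model, input tensor, and epsilon value are illustrative assumptions, not the technique used in any specific published attack.

```python
# A minimal FGSM sketch (assumed setup, not a specific real attack):
# nudge each pixel slightly in the direction that increases the model's loss.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()  # placeholder classifier

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Return an adversarial copy of `image` that may flip the prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # A tiny, human-imperceptible step along the gradient sign is often
    # enough to change the classifier's output.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# x: a (1, 3, 224, 224) tensor of a stop-sign photo, y: its correct class id
# x_adv = fgsm_perturb(x, y)  # may now be classified as a speed-limit sign
```

The perturbation is small enough that a human still sees a stop sign, which is exactly what makes these attacks hard to spot by eye.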
At the same time, DeepAttacks can manifest at scale in the form of fake URLs or HTML web pages, and they can be used to generate fake network traffic in botnets. In 2019, we expect to see DeepAttacks deployed more commonly in attempts to evade both human eyes and smart defenses. We are also working hard to hone dedicated detections for DeepAttacks, so we can identify and block them before they become widespread.
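As a rough illustration of one way such detections might work (not Avast’s actual pipeline), the sketch below trains a character n-gram classifier to score how machine-generated a URL looks. The toy training data and feature choices are assumptions made for illustration.

```python
# A hedged sketch of flagging suspicious URLs with character n-grams.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = suspected machine-generated, 0 = legitimate
urls = ["login-secure-paypa1-account.example.com", "github.com",
        "xj3k9q-bank-verify.example.net", "wikipedia.org"]
labels = [1, 0, 1, 0]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # char 2-4 grams
    LogisticRegression(),
)
clf.fit(urls, labels)

# Probability that a new URL belongs to the "generated" class
print(clf.predict_proba(["secure-paypa1-verify.example.org"])[:, 1])
```

A real system would train on far more data and many more signals, but the core idea is the same: generated content leaves statistical fingerprints.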
Smart attacks on home networks
Attackers now have sophisticated algorithms at their disposal that can identify and target homeowners with specific profile traits (e.g., homes with many Apple devices, or with at least 10 vulnerable devices). They can then automate the next stage of the focused attack against a desired target device (e.g., one suitable for cryptomining), using password crackers that adapt to the specific device types. In this way, the entire malicious chain can be automated.
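To illustrate the targeting logic (and only the targeting logic) in this chain, here is a simplified, hypothetical sketch. The device traits, thresholds, and ranking are invented for illustration, and no scanning or exploitation code is shown.

```python
# A defensive-minded sketch of automated target selection on a home network.
from dataclasses import dataclass

@dataclass
class Device:
    vendor: str
    model: str
    firmware_patched: bool
    cpu_cores: int  # a crude proxy for cryptomining suitability

def select_targets(devices, min_vulnerable=10):
    """Filter an inventory of discovered devices by an attacker's profile."""
    vulnerable = [d for d in devices if not d.firmware_patched]
    if len(vulnerable) < min_vulnerable:
        return []  # home doesn't match the profile; automation moves on
    # Rank by how useful each device would be for the attacker's goal
    return sorted(vulnerable, key=lambda d: d.cpu_cores, reverse=True)
```

The point is that each step, from profiling to prioritization, can run without a human in the loop.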
AI against clone phishing
We predict that AI will play a large role in ending the practice known as clone phishing, where an attacker creates a nearly identical replica of a legitimate message to trick the victim into thinking it is real. The email is sent from an address resembling the legitimate sender, and the body of the message looks the same as a previous message. The only difference is that the attachment or the link in the message has been swapped out with a malicious one.
We predict AI will become quite effective at detecting the short-lived phishing websites associated with these clone phishing attacks. AI can move faster than traditional algorithms in two ways: first, by accurately identifying domains that are new and suspicious, and second, by applying fast algorithms from the visual detection domain to match the layout of phishing sites and identify fake ones. AI also learns over time, follows trends, and monitors malware advancements.
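As one hedged sketch of the visual-matching idea, a simple difference hash (dHash) of a page screenshot can be compared against hashes of known legitimate login pages. The library choice (Pillow), the threshold, and the file names below are illustrative assumptions, not a production phishing detector.

```python
# A sketch of perceptual layout matching via difference hashing (dHash).
from PIL import Image

def dhash(path, size=8):
    """Hash a screenshot by whether each pixel outshines its right neighbor."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    return [pixels[r * (size + 1) + c] > pixels[r * (size + 1) + c + 1]
            for r in range(size) for c in range(size)]

def hamming(a, b):
    """Count differing bits between two hashes; small = visually similar."""
    return sum(x != y for x, y in zip(a, b))

# A small distance to a legitimate page's hash, on a freshly registered
# domain, is a strong clone-phishing signal (file names are hypothetical).
# if hamming(dhash("suspect.png"), dhash("bank_login.png")) < 10: ...
```

Combining a visual-similarity score with the domain’s age gives a fast signal that works even when the phishing site is only hours old.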
Sadly, targeted spear phishing will continue to be successful, as attackers spend time and money gathering target-specific information and crafting emails that purport to come from your child’s school, your company’s CEO, and so on. In these instances, as in many others, a highly motivated attacker will often find a way in, and it is up to other detection technologies, like behavioral engines, to stop the threat.
End of text captchas
For over a decade, humans proved their humanity on the web by reading distorted letters and transcribing them correctly. At the time, it was the most effective way to show you weren’t a bot.
Well, not anymore.
Work first done by Vicarious in late 2017 showed that captchas, even complex ones, can be broken by algorithms. That has led to the rise of behavioral analysis, which identifies bot activity by assigning a risk score to how suspicious an interaction is, removing the need to challenge users to type distorted text into a box to prove they are a person. Even reCAPTCHA, the biggest provider of captchas, is moving on from text-based challenges. Hackers have assimilated the technology, and as a result we expect to see the end of text captchas, at least on security-minded sites, in 2019.
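As a toy sketch of what behavioral scoring can look like, the function below combines a few interaction signals into a single risk score. The features, weights, and threshold are invented for illustration and are not how reCAPTCHA or any specific product scores users.

```python
# A toy behavioral risk score: signals, weights, and threshold are assumed.
def risk_score(mouse_path_points, keystroke_intervals_ms, time_on_page_s):
    score = 0.0
    if len(mouse_path_points) < 5:            # bots often skip mouse movement
        score += 0.4
    if keystroke_intervals_ms and \
            max(keystroke_intervals_ms) - min(keystroke_intervals_ms) < 5:
        score += 0.4                          # inhumanly uniform typing cadence
    if time_on_page_s < 1.0:                  # form submitted near-instantly
        score += 0.2
    return score  # e.g., challenge the user only when score exceeds 0.5

print(risk_score(mouse_path_points=[],
                 keystroke_intervals_ms=[50, 52, 51],
                 time_on_page_s=0.4))  # 0.4 + 0.4 + 0.2 = 1.0
```

Because scoring happens passively in the background, legitimate users are never interrupted, while suspicious sessions can be escalated to a harder challenge.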
AI will only grow more common in the coming year and beyond. And while we believe there is much more good than bad to be gained from artificial intelligence, tools are only as helpful as those who wield them. Stay close to this blog through 2019 to keep up with the state of AI. And for a more complete picture of 2019 cybersecurity risks, download and read the full Avast Predictions for 2019 report.