Security: Using AI for Evil

Artificial intelligence (AI) is positively impacting our world in previously unimaginable ways across many different industries. AI is particularly interesting in the cybersecurity industry because of its unique ability to scale and to prevent previously unseen, aka zero-day, attacks.

But remember: just as drug cartels built their own submarines and cellphone towers to evade law enforcement, and the Joker arose to fight Batman, so too will cybercriminals build their own AI systems to carry out malicious attacks.

An August 2017 survey commissioned by Cylance found that 62% of cybersecurity experts believe weaponized AI attacks will begin occurring in 2018. AI has been heavily discussed in the industry over the past few years, but many people do not realize that AI is not just one thing; it is made up of many different subfields.

This article will cover what AI is and isn't, how it works, how it is built, how it can be used for evil and even deceived, and how the good guys can keep the industry one step ahead in the fight.

What is AI?

We must first develop a basic understanding of how AI technology works. The first thing to understand is that AI comprises a number of subfields. One of these subfields is machine learning (ML), which works much like human learning, except at a far larger scale and higher speed.

To achieve this type of in-depth learning, large sets of data must be collected to train the AI, with the goal of producing a high-quality algorithm: essentially a math equation that accurately recognizes an outcome or characteristic. That algorithm can then be applied to text, speech, objects, images, movement, and files. Doing this well takes vast amounts of time, skill, and resources.
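To make that pipeline concrete, here is a minimal sketch of the collect-data, train, then apply loop described above. The article names no specific tools, so everything here is an illustrative assumption: Python with scikit-learn and NumPy, synthetic "file" feature vectors (e.g., size, entropy, count of suspicious API calls), and a toy labeling rule standing in for real malware labels.

```python
# A minimal sketch of supervised machine learning, assuming Python,
# NumPy, and scikit-learn. The features, labels, and labeling rule are
# all hypothetical stand-ins for a real malware dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=42)

# Synthetic features for 1,000 "files": three numeric traits per file,
# e.g., normalized size, entropy, and count of suspicious API calls.
X = rng.random((1000, 3))

# Toy labeling rule: high entropy plus many suspicious calls => malicious (1).
y = ((X[:, 1] + X[:, 2]) > 1.0).astype(int)

# Hold out 20% of the labeled data to check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# "Training" fits a function that maps features to benign/malicious labels.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Apply the trained model to files it has never seen before.
preds = model.predict(X_test)
print(f"Accuracy on held-out samples: {accuracy_score(y_test, preds):.2f}")
```

The training call itself is short; as the author notes, the hard and expensive part in practice is assembling the large, accurately labeled datasets that make the resulting model trustworthy.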


This is a Security Bloggers Network syndicated blog from the Cylance Blog, authored by Josh Fu. Read the original post at: https://threatvector.cylance.com/en_us/home/security-using-ai-for-evil.html