Artificial intelligence in cyber security: The savior or enemy of your business?

The role of AI in cyber security and how it’s reinventing cyber security
and cybercrime alike

Artificial intelligence is both a blessing and a curse to
businesses, customers, and cybercriminals alike.

AI is what gives us speech recognition (think Siri), Google’s search engine, and Facebook’s facial recognition software. Some credit card companies
are now using AI to help financial institutions prevent
billions of dollars in fraud annually. But what about its applications in cyber
security? Is artificial intelligence an advantage or a threat to your company’s
digital security?

On one hand, artificial intelligence in cyber security is
beneficial because it improves how security experts analyze, study, and
understand cybercrime. It enhances the cyber security technologies that
companies use to combat cybercriminals and help keep organizations and
customers safe. On the other hand, artificial intelligence can be very resource-intensive and may not be practical in all applications. More importantly, it
also can serve as a new weapon in the arsenal of cybercriminals who use the
technology to hone and improve their cyberattacks.

The discussion about artificial intelligence in cyber
security is nothing new. In fact, two years ago, we were writing about how
artificial intelligence and machine learning would change the future of cyber security. After all, data is at the core of cyber
security trends. And what better way to analyze data than to use computers that can think and, in nanoseconds, complete tasks that would take people significantly longer?

Artificial intelligence is a growing area of interest and
investment within the cyber security community. We’ll discuss advances in
artificial intelligence security tools and how the technology impacts
organizations, cybercriminals, and consumers alike.

Let’s hash it out.

How artificial intelligence cyber security measures improve digital
security

Ideally, if you’re like many modern businesses, you have multiple
levels of protection in place — perimeter, network, endpoint, application, and
data security measures. For example, you may have hardware or software
firewalls and network security solutions that track and determine which network
connections are allowed and block others. If hackers make it past these defenses,
then they’ll be up against your antivirus and anti-malware solutions. Then
perhaps they may face your intrusion detection/intrusion prevention systems (IDS/IPS), and so on.

But what happens when cybercriminals get past these
protections? If your cyber security is dependent on the capabilities of human-based
monitoring alone, you’re in trouble. After all, cybercrime doesn’t follow a set
schedule — and your cyber security response capabilities shouldn’t either. You need
to be able to detect, identify, and respond to the threats immediately —
24/7/365. Regardless of holidays, non-work hours, or when employees are
otherwise unavailable, your digital security solutions need to be up to the
task and able to respond immediately. Artificial intelligence-based cyber
security solutions are designed to work around the clock to protect you. AI can
respond in milliseconds to cyberattacks that would take humans minutes, hours, days, or even months to identify.

What cyber security executives think about AI

Capgemini Research Institute analyzed the role of AI in cyber security, and its report “Reinventing Cybersecurity with Artificial Intelligence” indicates that building up cyber security defenses with AI is imperative for organizations. This is, in part, because the survey’s respondents (850 executives from cyber security, IT information security, and IT operations across 10 countries) believe that
AI-enabled response is necessary because hackers are already using the
technology to perform cyberattacks.

Some of the report’s other key takeaways include:

  • 75% of surveyed executives say that AI allows
    their organization to respond faster to breaches.
  • 69% of organizations think AI is necessary to
    respond to cyberattacks.
  • Three in five firms say that using AI improves
    the accuracy and efficiency of cyber analysts.

The use of artificial intelligence can help broaden the
horizons of existing cyber security solutions and pave the way to create new
ones. As networks become larger and more complex, artificial intelligence can
be a huge boon to your organization’s cyber protections. Simply put, the
growing complexity of networks is beyond what human beings are capable of
handling on their own. And that’s okay to acknowledge — you don’t have to be
prideful. But it does leave you with a critical question to answer: What are
you going to do to ensure your organization’s sensitive data and customer
information are secure?

Artificial intelligence in cyber security: how you can add AI to your
defense

Effectively integrating artificial intelligence technology
into your existing cyber security systems isn’t something that can be done
overnight. As you’d guess, it takes planning, training, and groundwork
preparation to ensure your systems and employees can use it to its full
advantage.

In an article for Forbes, Allerin CEO and founder Naveen Joshi shares that there are
many ways that AI systems can integrate with existing cyber security functions.
Some of these functions include:

  • Creating more accurate, biometric-based login techniques
  • Detecting threats and malicious activities using
    predictive analytics (see the sketch after this list)
  • Enhancing learning and analysis through natural
    language processing
  • Securing conditional authentication and access
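
To make the predictive-analytics bullet a bit more concrete, here’s a minimal sketch of anomaly-based threat detection using scikit-learn’s IsolationForest. The connection features (bytes sent, session duration, failed logins), the simulated data, and the numbers are all hypothetical stand-ins rather than anything from a real product; the point is simply the pattern of learning what “normal” looks like and flagging deviations.

```python
# A minimal sketch of anomaly-based threat detection via predictive analytics.
# Features and numbers below are hypothetical; this is not a production detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" connection records: [bytes_sent, duration_seconds, failed_logins]
normal_traffic = np.column_stack([
    rng.normal(50_000, 10_000, 1_000),   # typical volume of bytes sent
    rng.normal(30, 10, 1_000),           # typical session duration
    rng.poisson(0.1, 1_000),             # the occasional failed login
])

# Learn what "normal" looks like for this (made-up) network.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# Score new events: a routine session vs. a suspicious exfiltration-like pattern.
new_events = np.array([
    [52_000, 28, 0],       # looks like business as usual
    [900_000, 400, 7],     # huge transfer, long session, repeated login failures
])
labels = model.predict(new_events)   # +1 = normal, -1 = anomaly

for event, label in zip(new_events, labels):
    status = "ALERT: anomalous" if label == -1 else "normal"
    print(f"{event} -> {status}")
```

In a real deployment the model would be trained on the organization’s own telemetry, and its alerts would feed an analyst queue or SIEM rather than a print statement, but the core loop of learning normal behavior and scoring new events against it is the same.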

Once you’ve integrated AI into your cyber security
solutions, your cyber security analysts and other IT security employees need to
know how to effectively use it. This takes both time and training. Be sure not to neglect investing in your organization’s human element.

Companies that integrate artificial intelligence in their cyber security
solutions

If you look around the industry, there are many heavy hitters
that are already using AI as part of their solutions and services. Examples of
businesses already integrating artificial intelligence cybersecurity tools
include major industry players like:

  • Check Point
  • CrowdStrike
  • FireEye
  • Fortinet
  • LogRhythm
  • Palo Alto Networks
  • Sophos
  • Symantec

The downsides of artificial intelligence in cyber security: cost,
resources, and training

Although there are many advantages to integrating artificial
intelligence in cyber security, there are also disadvantages to be aware of. Chief among the challenges of implementing AI in cyber security is that it requires more resources and money than traditional, non-AI cyber security solutions.

In part, that’s because these cyber security solutions are built on AI frameworks — and those aren’t cheap. As such, they’ve historically
been prohibitively expensive for many businesses — small to midsize businesses
(SMBs) in particular. However, there are new security-as-a-service (SaaS)
solutions that are making AI cyber security solutions more cost-effective for
businesses. And, let’s just be realistic, it’s a lot cheaper to pay for
effective cyber security solutions than it is to pay the fines, downtime, and
other costs
associated with successful cyberattacks.

Dealing with the vulnerabilities that artificial intelligence cyber
security tools create

The use of artificial intelligence in cyber security creates
new threats to digital security. Just as AI technology can be used to more
accurately identify and stop cyberattacks, the AI systems also can be used by
cybercriminals to launch more sophisticated attacks. This is, in part, because
access to advanced artificial intelligence solutions and machine learning tools is increasing as the cost of developing and adapting these technologies decreases. This means that more complex and adaptive
malicious software
can be created more easily and at lower cost to
cybercriminals.

This combination of factors creates vulnerabilities for
cybercriminals to exploit. Let’s consider the following example:

Imagine that one of your finance employees receives a
phone call from “you.” In the call, “you” instruct them to transfer more than
$2 million from the company’s account to a vendor or partner company. When they
ask for verification, “you” assure them that it’s fine and for them to perform
the transfer immediately so as to not hold up an important project. 

However, the problem is that you haven’t called them — nor
did you tell them to send millions of dollars to another account. In fact, as
it turns out, a cybercriminal used a combination of social engineering and
“vishing,” or a voice phishing call, to target your employee while pretending to be you. However, they took their attack to the next level by using
artificial
intelligence-based software
that “learns” to mimic and “speak” using
your voice. This means that even if the victim knows what you sound like,
they’re more likely to fall for the scam because it actually sounds like you
making the call.

But how is this possible? XinhuaNet reports
that there are AI software programs that, after just 20 minutes of listening to
your voice, are capable of “speaking” any typed message in your voice.

The hidden danger of artificial intelligence in cyber security

One of the less-acknowledged risks of artificial
intelligence in cyber security concerns the human element of complacency. If your
organization adopts AI and machine learning as part of its cyber security
strategy, there’s a risk that your employees may be more willing to lower their
guard. We don’t need to re-state the dangers of complacent
and unaware employees
as we’ve already talked about the importance of cyber
security awareness many times.  

Adversarial AI: how hackers use your AI against you

Another risk of artificial intelligence in cyber security
comes in the form of adversarial AI, a term used to refer to the development
and use of AI for malicious purposes. Accenture identifies adversarial
AI
as something that “causes machine learning models to misinterpret inputs into
the system and behave in a way that’s favorable to the attacker.” Essentially,
this occurs when an AI system’s neural networks are tricked into misidentifying or misclassifying objects due to
intentionally modified inputs. Let’s consider the example of a pair of sunglasses sitting on a table. A human eye would be able to see the sunglasses in the image. To an AI system fed an adversarially modified version of that image, however, the sunglasses effectively aren’t there; the model no longer recognizes them.
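
To make “intentionally modified inputs” concrete, here’s a minimal PyTorch sketch of the fast gradient sign method (FGSM), one well-known way such inputs are crafted. The toy model, random “image,” and epsilon value are assumptions for illustration only, not anyone’s production attack code.

```python
# Minimal sketch of crafting an adversarial input with the fast gradient sign
# method (FGSM). The untrained toy model and random input are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "image" classifier: 3x32x32 input, 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)   # the original input
true_label = torch.tensor([3])                     # its correct class

# Compute the loss of the model's prediction against the true label.
loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

# FGSM: nudge every pixel a tiny step in the direction that increases the loss.
epsilon = 0.03
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("prediction on the original input: ", model(x).argmax(dim=1).item())
print("prediction on the perturbed input:", model(x_adv).argmax(dim=1).item())
print("largest per-pixel change:         ", (x_adv - x).abs().max().item())
```

Because the toy model here is untrained, the prediction flip isn’t guaranteed in this sketch; against a real trained image classifier, though, perturbations on this scale often change the predicted class while leaving the image visually unchanged to a human.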

What’s the purpose of doing that? Let’s replace the
table-and-sunglasses scenario with a self-driving vehicle. Imagine what would
happen if a hacker decided to create adversarial images of stop signs or red
lights. The AI would no longer see these traffic signals and would risk maiming
or killing the vehicle’s occupant(s) along with other drivers, pedestrians,
etc. Or, imagine that a cybercriminal creates an adversarial image that can
bypass facial recognition software. For example, the iPhone X’s “Face ID” access
feature uses neural networks to recognize faces, making it susceptible to adversarial AI attacks. This would
allow hackers to simply bypass the security feature and continue their assault
without drawing attention. 

Without the right protections or defenses in place, the
applications for cyber criminals could be virtually limitless. Thankfully,
cyber security researchers recognize the risks associated with adversarial AI.
They’re donning their white hats and are “building defenses and creating
pre-emptive adversarial attack models to probe AI vulnerabilities,” according
to an article in IBM’s Security Intelligence research blog.
IBM’s Dublin lab is also involved in the effort and has developed an adversarial AI library called the IBM Adversarial Robustness Toolbox (ART).
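
Defenders can use the same attack techniques to probe their own models. The sketch below shows roughly how ART might be pointed at a PyTorch classifier to generate FGSM adversarial examples and measure the accuracy drop; the model, data, and eps value are placeholders, and the exact API may differ between ART versions, so treat this as an outline rather than a recipe.

```python
# Rough sketch of probing a model with IBM's Adversarial Robustness Toolbox (ART).
# The model and data are placeholders; check ART's docs for your installed version.
import numpy as np
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder model and data standing in for a real, trained classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x_test = np.random.rand(16, 3, 32, 32).astype(np.float32)
y_test = np.random.randint(0, 10, size=16)

# Wrap the model so ART's attacks can talk to it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Generate adversarial versions of the test inputs and compare accuracy.
attack = FastGradientMethod(estimator=classifier, eps=0.05)
x_adv = attack.generate(x=x_test)

clean_acc = (classifier.predict(x_test).argmax(axis=1) == y_test).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
print(f"accuracy on clean inputs:       {clean_acc:.2f}")
print(f"accuracy on adversarial inputs: {adv_acc:.2f}")
```

The gap between the two accuracy numbers gives a rough measure of how exposed the wrapped model is to this particular attack.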

Final thoughts

Even with the negative aspects of the increasing use of
artificial intelligence in cyber security, we still think the good outweighs
the bad. After all, a human being simply can’t process the amount of data — at
the necessary speed — that’s needed to keep your network and data safe. AI can —
and it can do it without needing to sleep, eat, or take a vacation.

Of course, all of this isn’t to say that people aren’t
still needed in cyber security. The human element is still integral to cyber
security. This is why more and more industry experts are arguing that AI should
be integrated into the systems within each business’s cyber security operations
center (CSOC). The main message we want to drive home is that it’s imperative
to ensure you have the appropriate systems, training, and resources in place to
effectively manage and use AI cyber security solutions. This will help you to
reduce the risks associated with using artificial intelligence security tools.


*** This is a Security Bloggers Network syndicated blog from Hashed Out by The SSL Store™ authored by Casey Crane. Read the original post at: https://www.thesslstore.com/blog/artificial-intelligence-in-cyber-security-the-savior-or-enemy-of-your-business/