Sixty-four percent of the more than 1,200 senior security executives from around the world, whom we surveyed for the 2018 Thales Data Threat Report (DTR), believe artificial intelligence (AI) “increases data security by recognizing and alerting on attacks,” while 43% believe AI “results in increased threats due to use as a hacking tool.”
They’re both right.
Strengthening Digital Security
On the one hand, security executives can use AI and its subset technology, machine learning (ML), to enhance digital security. For example, they can use AI to look for unusual security events and find those needles in a haystack faster, or to detect malware. In this context, AI/ML is no more than training a system to learn to find things faster. If you train the system well, by giving it a great deal of good sample data to learn from, it can recognize patterns and spot those unusual, potentially dangerous data access events that are difficult to find in a sea of data. AI/ML is really good at that. So digital security vendors, such as Cylance and others, are looking at how they can use this powerful tool to more quickly and efficiently solve the data security issues we all wrestle with.
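To make the idea concrete, here is a minimal sketch of learning "normal" from historical event data and flagging outliers. A production system would train a real model over many features; this toy version uses a simple statistical baseline (sample data and the three-sigma threshold are illustrative assumptions, not anything from the report):

```python
# Toy anomaly detection: learn a baseline from normal event counts,
# then flag values that fall far outside it.
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn the normal range (mean and spread) from historical counts."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hourly failed-login counts observed during normal operation (made up).
normal_counts = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
baseline = fit_baseline(normal_counts)

print(is_anomalous(5, baseline))    # a typical hour: False
print(is_anomalous(250, baseline))  # a burst of failures worth alerting on: True
```

The principle is the same one the vendors apply at scale: the better the sample of "normal" you train on, the more reliably the unusual event stands out.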
Threatening Digital Security
On the other hand, adversaries are already using AI/ML and neural networks to more quickly identify the security weaknesses of their targets. And, once they’ve breached a target platform, they can take down its systems faster by tying them up with hacking tools, because they’ve already identified the vulnerabilities and know how and where to run the attacks. The really frightening aspect of all this is that with AI, no human has to be involved: the algorithm can learn how to penetrate the target system, and then do so, faster than a human could.
Viewing the Double-Edged Sword
So there are really two edges to the same sword: one is going to help you, and the other is going to hurt you. Both are true. We should look at AI/ML as just a piece of technology that, in a security context, is both an opportunity and a threat.
What Are the Security Implications of Using AI/ML in My Business?
A third very interesting question regarding AI/ML, which the DTR doesn’t address, is: “My company is using AI in its business, so what does that mean for data security?”
Enterprises are leveraging AI/ML to make business decisions in numerous use cases, but the process is still driven by data, it still involves data-processing systems, and there are still applications that need to be governed and controlled. Just having AI and ML doesn’t magically make security easier. It presents yet another data landscape or kingdom that enterprises have to manage and secure. Trust is a key part of this, too. How do organizations make sure all the nodes in their neural network can be trusted and are not running unapproved code that might compromise the decision? There are many pieces that make up an AI, neural-net or ML platform, and they all need to be protected.
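One common building block for that trust question is integrity checking: a node is trusted only if the artifact it runs (code, model weights) hashes to a value on an approved allowlist. The sketch below is a hypothetical illustration, not a Thales product API; the artifact contents and names are invented:

```python
# Toy integrity check: trust a node only if its artifact's SHA-256 hash
# appears on an allowlist published by a trusted build pipeline.
import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

# In practice the allowlist itself would be signed and distributed securely.
approved_artifact = b"model weights v1.2 (approved build)"
allowlist = {digest(approved_artifact)}

def node_is_trusted(running_artifact: bytes) -> bool:
    """A node is trusted only if its artifact's hash is on the allowlist."""
    return digest(running_artifact) in allowlist

print(node_is_trusted(approved_artifact))          # True
print(node_is_trusted(b"tampered model weights"))  # False
```

Even one flipped byte in the artifact changes the hash, so unapproved code fails the check.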
So the question is, if you’re using an AI, neural-net or ML platform, how do you operate it securely? This is the missing piece. From our perspective at Thales eSecurity, the answers are about trust and protecting your data in the platform through encryption and tokenization, identity and access management, security intelligence logs, and so forth.
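Tokenization, one of the protections mentioned above, swaps a sensitive value for a meaningless stand-in so downstream AI/ML pipelines never see the real data. This is a deliberately simplified sketch (an in-memory vault with random tokens, invented names throughout); real tokenization products work differently and keep the mapping in a hardened service:

```python
# Toy tokenization vault: replace sensitive values with random tokens
# and keep the reverse mapping only inside the vault.
import secrets

class TokenVault:
    def __init__(self):
        self._forward = {}  # sensitive value -> token
        self._reverse = {}  # token -> sensitive value

    def tokenize(self, value: str) -> str:
        """Return a stable random token for a sensitive value."""
        if value not in self._forward:
            token = "tok_" + secrets.token_hex(8)
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        """Recover the original value; only the vault can do this."""
        return self._reverse[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
# Analytics and ML pipelines see only the token, never the card number.
print(token.startswith("tok_"))          # True
print(vault.detokenize(token))           # original value, inside the vault
```

The same value always maps to the same token, so analytics on tokenized data still work, while a breach of the pipeline exposes nothing usable.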
More and more organizations are hopping on the AI/ML bandwagon, and there are many ways to do it, just as there were when Big Data and the Cloud came along. Because the field is new and growing, there is still a lot to learn about how organizations take advantage of AI/ML, protect and govern it, and make sure it provides more upside reward than downside risk. We at Thales can help you reduce that downside risk.
This is a Security Bloggers Network syndicated blog post authored by Sol Cates. Read the original post at: Data Security Blog | Thales e-Security