Armorblox Applies AI to Prevent Data Loss

Armorblox announced a namesake platform that combines a natural language text engine with deep learning algorithms to make it easier to apply cybersecurity policies to specific documents.

Fresh off raising an additional $16.5 million in funding, Armorblox CEO Dhananjay Sampath said the cloud-based platform enables organizations to secure email and documents based on the sensitivity of the data they contain. That approach will ultimately prove more effective, he said, than the alternative of applying the same level of security to every document and email regardless of content.

The capabilities enabled by Armorblox will prove more critical over time as cybercriminals become more adept at employing social engineering techniques such as phishing to trick end users into sharing sensitive data, added Sampath.

While training end users to be more circumspect about sharing documents and emails that contain sensitive data helps, it’s not practical to expect every user to be equally diligent every day. Augmenting humans with algorithms that identify sensitive data should make it easier for cybersecurity teams to reduce the number of security incidents that occur, Sampath noted.

Cybersecurity professionals have been advocating for risk-based approaches to cybersecurity for decades. Trying to secure everything equally spreads resources so thin that cybersecurity becomes ineffective. The Armorblox platform employs artificial intelligence (AI) in the form of deep learning algorithms, also known as neural nets, to prevent data loss.
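The article describes this approach only at a high level. As a rough sketch of the general idea, the toy example below uses a small feed-forward neural network to score the sensitivity of a message and gate a DLP decision on that score. The training data, labels, threshold and policy actions are illustrative assumptions, not a description of Armorblox's actual models or policies.

```python
# Illustrative sketch only: a toy content-sensitivity classifier gating a DLP
# policy decision. Training data, labels and threshold are invented for the
# example and do not reflect Armorblox's actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set: 1 = sensitive, 0 = routine.
documents = [
    "Attached is the payroll file with employee SSNs and bank details",
    "Q3 board deck: unreleased revenue figures and acquisition targets",
    "Customer export containing names, emails and credit card numbers",
    "Lunch is in the break room at noon, first come first served",
    "Reminder: the parking garage closes early on Friday",
    "Team offsite agenda and travel logistics for next month",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features feeding a small feed-forward neural network.
model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(documents, labels)

def dlp_action(message: str, threshold: float = 0.7) -> str:
    """Return a policy action based on the estimated sensitivity score."""
    score = model.predict_proba([message])[0][1]
    if score >= threshold:
        return f"block-and-review (sensitivity={score:.2f})"
    return f"allow (sensitivity={score:.2f})"

if __name__ == "__main__":
    print(dlp_action("Here are the customer credit card numbers you asked for"))
    print(dlp_action("Cake in the kitchen for Dana's birthday"))
```

The point of the sketch is the policy shape: sensitive content gets stricter handling, routine content flows freely, rather than every message being treated the same way.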

Armorblox doesn’t replace the need for layers of defense most organizations have already implemented. But it does bring AI into the realm of data loss prevention (DLP).

Naturally, many cybersecurity professionals are skeptical of the degree to which AI platforms can effectively take on any task. Concerns range from models that can’t yet recognize a specific type of threat to platforms that inundate cybersecurity professionals with alerts. However, machine and deep learning algorithms continuously learn about the environments they monitor and, just as critically, never take a sick day or leave for a better-paying job. Cybersecurity teams, therefore, will be challenged to strike a balance between when to rely on machines versus humans.

The chances that AI technologies will replace cybersecurity professionals anytime soon are slim to none. At the same time, however, a chronic shortage of cybersecurity expertise means that most organizations are finding it difficult to hire and retain cybersecurity professionals. The only practical way to address that shortage is to rely more on AI, because the rate at which new cybersecurity professionals are entering the field has increased only modestly in recent months.

Of course, the goal is to prevent cybersecurity incidents from happening in the first place. As every cybersecurity professional already knows, end users don’t make achieving that goal especially easy. Those same users are also likely to be troubled by an AI service that continuously scans documents and emails. But given that those documents and emails technically belong to the organizations they work for, users are just as likely to appreciate how much trouble that AI service keeps them out of.

Michael Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
