Fortinet Adds Data Loss Prevention Capability Following Acquisition of Next DLP
Fortinet has added a data loss prevention (DLP) platform to its portfolio that is based on the technology it gained with the acquisition of Next DLP earlier this year.
Nirav Shah, vice president of products for Fortinet, said FortiDLP uses machine learning algorithms to enforce policies and is now integrated with the Fortinet Security Fabric, a framework that enables the various cybersecurity tools and platforms Fortinet provides to share alerts.
That capability is especially critical at a time when more employees are sharing sensitive data with generative artificial intelligence (AI) platforms, noted Shah. With the rise of those services, many organizations are revisiting their data-sharing policies, he added.
To aid in that effort, FortiDLP makes it possible, for example, to create a policy that reminds employees about proper data handling practices when using these tools, Shah noted. It can also prevent employees from disclosing sensitive data by identifying, analyzing and capturing actions, behaviors and other indicators of activity involving sensitive data, regardless of whether their endpoint is connected to a corporate network.
Additionally, FortiDLP creates an inventory of risk scores for software-as-a-service (SaaS) applications based on data classifications and insights into data ingress and egress, as well as the credentials used to access those applications.
Finally, FortiDLP provides access to an AI assistant that makes it possible to summarize incidents and risk levels using the MITRE Engenuity Insider Threat Tactics, Techniques and Procedures (TTP) Knowledge Base.
The overall goal is to make it simpler for cybersecurity teams to coach end users who are not always aware of how sensitive data might be exposed, said Shah.
FortiDLP is the latest addition to an expanding Fortinet portfolio that relies on a single agent to streamline workflows. That approach makes it simpler for organizations to rely on one vendor to integrate those workflows in a way that reduces the total cost of cybersecurity, Shah noted.
It’s not clear how many organizations are re-evaluating how they secure data, but just about everyone from interns to senior business executives has more access to data than ever. Not every organization, however, has made it clear to every employee that they are stewards of sensitive data, such as customer information, that should not be shared outside the organization. In many cases, organizations don’t realize how extensively that data is being shared until either a data breach occurs or an audit finds sensitive data stored in a way that almost anyone can easily access.
In either event, the penalties for mishandling that data can be stiff. More challenging still, it might be years before it’s discovered that sensitive data has been used to train a generative AI platform that is now incorporating that data into its output.
In an ideal world, employees wouldn’t make those types of mistakes in the first place. In reality, however, employees are inevitably going to mishandle sensitive data. The challenge is finding a way to minimize the number of opportunities for those mistakes to be made.