Prompt Security Adds Ability to Restrict Access to Data Generated by LLMs
Prompt Security today extended its platform to enable organizations to implement policies that restrict which types of data surfaced by a large language model (LLM) employees are allowed to access.
Originally developed to secure LLMs, the Prompt Security platform can now analyze user identities and the context of the request being made to determine whether an end user is authorized to access sensitive data that might be included in the output generated by an LLM.
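To illustrate the general idea, the sketch below shows how an output-side check might combine a user's group memberships with the sensitivity of data detected in an LLM response. It is purely hypothetical and not Prompt Security's actual implementation; the names, policy table and classification logic are assumptions made for illustration.

```python
# Hypothetical sketch (not Prompt Security's API): authorize LLM output
# based on the caller's identity and the sensitivity of the detected data.
from dataclasses import dataclass


@dataclass
class User:
    id: str
    groups: set[str]  # e.g., group memberships pulled from an identity provider


# Illustrative policy: which groups may view which data classifications
POLICY = {
    "payroll": {"hr", "finance"},
    "pii": {"hr", "legal"},
}


def detect_classifications(llm_output: str) -> set[str]:
    """Stand-in for a real classifier that tags sensitive data in the output."""
    tags = set()
    if "salary" in llm_output.lower():
        tags.add("payroll")
    return tags


def authorize_output(user: User, llm_output: str) -> str:
    """Return the output unchanged, or redact it if the user lacks access."""
    for classification in detect_classifications(llm_output):
        allowed_groups = POLICY.get(classification, set())
        if not (user.groups & allowed_groups):
            return "[Redacted: you are not authorized to view this information]"
    return llm_output


# Example: an engineer asking about executive pay gets a redacted answer
print(authorize_output(User("u123", {"engineering"}), "The CEO's salary is ..."))
```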
Prompt Security CEO Itamar Golan said the authorization capability significantly reduces the inherent compliance risks that organizations encounter when using generative artificial intelligence (GenAI) tools.
Additionally, it prevents employees from using prompts to surface information, such as the salary of executives, that may have been inadvertently exposed to an LLM, he added.
Security teams can also leverage that capability to create an audit log that can be integrated with third-party platforms to ensure compliance mandates are met and maintained, noted Golan.
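As a rough illustration of what such an audit trail might capture, the sketch below serializes a single authorization decision as a JSON line that could be forwarded to a SIEM or compliance platform. The field names are assumptions, not a documented Prompt Security schema.

```python
# Hypothetical audit-log entry for one output-authorization decision;
# field names are illustrative only.
import json
import time


def audit_record(user_id: str, decision: str, classification: str) -> str:
    """Serialize one decision as a JSON line suitable for shipping to a SIEM."""
    return json.dumps({
        "timestamp": time.time(),
        "user": user_id,
        "classification": classification,
        "decision": decision,  # e.g., "allowed" or "redacted"
    })


print(audit_record("u123", "redacted", "payroll"))
```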
Prompt Security previously developed a lightweight agent designed to be deployed on endpoints, enabling organizations to restrict access to LLMs. That capability, delivered via integrations with identity management platforms such as Okta and Microsoft Entra, is now being extended to cover the actual content generated by those LLMs.
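A minimal sketch of how identity context from such a provider might feed that decision follows; the claim names and shapes are assumptions rather than a specific Okta or Microsoft Entra response format.

```python
# Hypothetical mapping from validated identity-token claims to an allow/deny
# decision for a data classification; claim names are assumptions.
ALLOWED_GROUPS_BY_CLASSIFICATION = {"payroll": {"hr", "finance"}}


def is_allowed(claims: dict, classification: str) -> bool:
    """Check whether the groups in a decoded, validated token grant access."""
    user_groups = set(claims.get("groups", []))
    return bool(user_groups & ALLOWED_GROUPS_BY_CLASSIFICATION.get(classification, set()))


print(is_allowed({"sub": "jane.doe", "groups": ["engineering"]}, "payroll"))  # False
print(is_allowed({"sub": "sam.lee", "groups": ["finance"]}, "payroll"))       # True
```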
The usage of LLMs within most organizations is already widespread regardless of what official policies are in place. Shadow usage of these platforms is all but impossible to control, so organizations would be better served by implementing controls that restrict access to sensitive data rather than attempting to ban generative AI outright, which simply isn't feasible, said Golan.
Unfortunately, many cybersecurity teams are still catching up with yet another technology that is being widely employed without much consideration of the implications. In addition to potentially exposing sensitive data, LLMs are now being attacked by cybercriminals who are looking either to steal them outright or to poison them with false data in the hope of eventually corrupting the outputs they generate.
Each organization will need to determine what level of risk it is willing to assume, but the more sensitive data is exposed to LLMs, the more likely it becomes that a breach will occur when that data is inadvertently shared by an LLM. Many cybercriminals have already become adept at using prompts to tease out information that many organizations don't even realize has been exposed to an LLM.
Of course, in theory, an LLM can be directed not to share that information, but the ability to apply that level of control to the outputs being generated has been inconsistent. Many LLMs have been programmed to be as helpful as possible, which can conflict with the policies being applied.
Ultimately, it will be cybersecurity teams that are tasked with cleaning up any breach caused by the usage of an LLM. Given the number of prompts being used every day to generate some type of data, it's all but inevitable that those types of breaches either have already occurred or will very soon.