In Minority Report, police use three mutated humans, called Precogs, who can previsualize crimes, to stop murders before they take place, reducing the Washington, D.C. murder rate to zero. The Philip K. Dick short story (brought to the big screen by Steven Spielberg, starring Tom Cruise) is set in 2054.
Yet here we are in 2018 with large data sets collected and stored on each individual who uses a phone and/or computer. Not only that, it’s possible to apply machine learning processes to these vast stores of data, create detailed behavior profiles of each consumer or worker and, well, do something along the lines of what Precogs do in Dick’s imagined future.
I had the privilege of leading a thought-provoking panel discussion drilling down on how this capability is just beginning to be introduced in the workplace. It is emerging in the context of detecting network intruders, but it could also be extended to help companies reduce workplace violence.
Matt Moynahan, CEO of Forcepoint, supplied the technical backdrop, while Elizabeth Rogers, a privacy and data security partner at Michael Best & Friedrich, supplied expert insights on the legal and social implications. You can view the full panel discussion in the accompanying YouTube video.
It’s clear companies will increasingly use technology to monitor employees’ behaviors on the company network. The primary driving force is obvious: network breaches and data theft continue to run rampant, and machine learning can help automatically distinguish legitimate workers doing their jobs from an impostor doing harm. Not only are companies moving to proactively defend themselves, by leveraging machine learning, behavior analytics and automation in smart new ways; they are also being compelled to do so by rising data handling and privacy regulations at the state level in the U.S. and at the EU level in Europe.
So what does this portend? Moynahan used Forcepoint as an example. The company looks at everyone – and everything – that touches data, whether it’s an authorized individual, a machine, a hacker using stolen credentials or a piece of malware behaving like an individual.
Essentially, every user-to-machine and machine-to-machine interaction with the potential to access and manipulate critical data gets scrutinized. Machine learning can be used to understand each worker’s unique cyber behavior as he or she moves across the applications used for work tasks, he explained.
By understanding how people interact with data and where information travels, a company can build a baseline understanding of normal behavior, putting it in position to respond more quickly when unusual activity occurs, such as potential violence or an attempt to steal trade secrets.
Behavior analytics acts like the brain, understanding identity and intent, Moynahan says. You can analyze data from travel systems, HR systems, customer databases, security door badge readers, chat, web and email: all the channels of interaction between the user, the data and the network.
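The baselining idea described above can be illustrated with a minimal sketch. This is not Forcepoint’s actual method; it is a hypothetical example assuming each user’s activity is reduced to a daily event count (badge swipes, emails sent, files accessed and so on), with a simple standard-deviation threshold used to flag outliers.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Compute a per-user baseline (mean, stdev) from past daily event counts."""
    return mean(history), stdev(history)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag a day whose count deviates more than `threshold` standard deviations."""
    mu, sigma = baseline
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

# Hypothetical daily event counts for one user over two work weeks
history = [42, 38, 45, 40, 44, 39, 41, 43, 40, 42]
baseline = build_baseline(history)

print(is_anomalous(41, baseline))   # → False (a typical day)
print(is_anomalous(120, baseline))  # → True (e.g., a bulk file-download spree)
```

A production system would of course learn far richer features per channel and per user, but the core loop is the same: model normal, then surface deviations for a human to investigate.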
As with anything involving access to human preferences and patterns, the potential for abuse is always there. This is where constitutional law and regulators responsive to public mores push back. Rogers pointed out that there are expectations of privacy, and companies have an obligation to set policy and expectations, clearly communicating what is private and what is not.
Companies can monitor everything but should have controls over who has access to information on human behaviors, and there needs to be a hierarchy of responses to suspicious activities.
She emphasized that employees have a right to privacy in the workplace, but those rights must be balanced against the employer’s right to protect its intellectual property. Any company policy will also be shaped by industry standards, such as PCI DSS, not to mention federal rules under HIPAA, FISMA and Sarbanes-Oxley. And then there are rising state and regional regulations, most notably the New York Department of Financial Services’ new cybersecurity certification rules and Europe’s GDPR, which gives individuals the right to request erasure of their data.
There must be balance between the employer’s right to protect that data and workers’ right to privacy. Rogers says it’s analogous to parenting – companies can have rules, and if any are broken, then a decision has to be made on enforcement.
One thing to keep in mind is that not everyone is malicious. Employees can make honest mistakes. But that doesn’t mean they have to be treated as if they’re malicious or stupid. Yes, IT will want to flag bad behaviors, but it is also possible to understand people’s intent to some degree.
Moynahan and Rogers agreed on this point: cyber monitoring in the workplace is going mainstream, but the critical point for companies is to avoid discrimination. If you monitor, you must monitor all. It’s critical that companies preserve the sanctity of the relationship with employees by having a strong governance model and by respecting the identity of people.
This is a Security Bloggers Network syndicated blog post authored by bacohido. Read the original post at: The Last Watchdog