The pandemic paved the way for expanded remote work, but the desire to ensure employees remain on the job while at home has led some companies to consider technologies that digitally monitor worker activity, in some cases through AI.
Those initiatives come laden with thorny privacy concerns, legal landmines and, more than likely, stiff resistance from employees themselves—a recent report from IT research firm Gartner indicates 10% of workers would try to trick AI-driven tracking systems.
These monitoring systems range from tools that provide basic activity logging with alerts to more sophisticated AI-aided systems that attempt to detect positive actions or misbehavior through multivariable analysis.
Elizabeth Crooks, privacy consultant at Coalfire, a provider of cybersecurity advisory services, said organizations should consider how they will abide by generally accepted privacy principles—especially transparency, data minimization, purpose limitation and necessity—and how employee consent will be gathered and managed.
This is true across the board, and particularly for organizations that must abide by the General Data Protection Regulation (GDPR), which gives data subjects (in this case, employees) the right not to be subject to a decision based solely on automated processing.
“Organizations must be very cautious about whether or not decisions are going to be made based on the results of any AI processing, and give employees sufficient notice and ability to opt out,” she said. “Explicit consent should also be obtained from employees to process sensitive personal information, as these systems may include audio or video surveillance.”
She cautioned that even then, due to the power imbalance in the relationship between employers and employees, regulatory authorities may not find the consent valid.
Crooks also pointed out that workers will likely push back on remote work tracking, much as students during the pandemic have pushed back against remote proctoring software, which is invasive and often described as “creepy.”
She said forcing employees to use software that tracks their every move, minute by minute, may in fact make them more likely to rebel, as they will acutely feel a loss of autonomy and perceive the software as a sign that their employer does not trust them.
Employees may be especially wary of AI-based tracking tools, as AI researchers have uncovered how large data sets can be encoded with sexist and racist stereotypes, which then are carried forward into the software.
“There have been many very public examples of bias in AI, and employees are unlikely to trust these systems, especially if decisions are being made based on algorithmic results into which employees may not have any visibility,” Crooks warned.
Programmatic Bias Baked In
John Bambenek, threat intelligence advisor at Netenrich, agreed; beyond the obvious privacy problems inherent in such systems, he said, they are essentially spyware. The fact that they often use AI or machine learning is not only highly suspect but also likely to introduce programmatic bias.
“For instance, facial recognition systems historically have had problems with accuracy in minority communities,” he said. “Using AI to monitor productivity will struggle with accuracy when it comes to those who have productive but atypical working habits. No one likes a toxic micromanaging boss; it makes no sense to replace them with an even more toxic and micromanaging algorithm.”
He said there would likely be a pause in using remote employee monitoring as organizations decide the extent to which they will return to the office now that (hopefully) the pandemic is coming to an end.
“Odds are the companies most inclined to such micromanaging tactics are the same ones who will insist people come back to the office full time,” he said. “It wouldn’t be technology I’d invest in right now,” he added.
Crooks agreed that while interest in remote employee monitoring has certainly surged during the pandemic with the expansion of remote work, implementation won’t necessarily accelerate, owing to multiple factors.
Among them: employee pushback, the cost of implementation, the additional security and privacy compliance obligations that employee data carries and caution about regulatory fines.
“For instance, Barclays is currently facing a probe from the UK’s Information Commissioner’s Office regarding its employee monitoring tools,” she said.
Bambenek explained that, as we’ve seen in other supply chain attacks, any tool that has full privileges on many machines will become a target for attackers looking to take over an entire organization.
“That said, the key risks remain HR risks: good employees will quit, bad employees will evade the system and whoever is left will be demoralized and demotivated. That environment, in turn, always greatly accelerates cybersecurity risk,” he said.
Bambenek pointed out that certain problematic, low-productivity employees game the system now, and that taking a human supervisor out of the mix just means they’ll be able to game the system more effectively.
“It is likely that highly capable and sought-after employees would quit if forced to use such systems,” he said. “I know I would.”