Develop a Game Plan to Protect Your Robot’s Identity

Verifying one’s digital identity is tough. If someone has your credentials, it’s very difficult to distinguish legitimate access from a breach. And that verification process gets even harder when the identity in question belongs to a robot in your cloud system.

Robots serve a useful purpose. There is even a robot-as-a-service offering that, as ZDNet explained, “leverages the cloud, and makes it possible for organizations to integrate robots and embedded devices into the web and cloud computing environments.” With robots, you can improve computational and storage functions—a must for industries that rely on complicated big data processing. Often, these robots come with privileges similar to those required by humans, so we know the robot has permission for its data collecting and computing.

However, robots can be used for nefarious purposes as well: if programmed by a hacker, they can just as easily compromise your data. And it isn’t easy to tell when a robot is behaving strangely.

“Does the robot have access to sensitive information? In that case, their risk level as an insider threat goes up,” said Sarah Squire, senior technical architect with Ping Identity, speaking to an audience at Identiverse 2018. Robots have the capability to pivot their permissions from the cloud to your automated applications.

Robot Identity Theft

If you have robots in your system, you need to come up with a game plan to learn how to identify those posing a threat to your data. According to Squire, this isn’t something you can, or should, do in one shot. Your game plan should be developed in increments to protect you from rogue robots now and in the future.

Software statements are one of the more sophisticated ways to securely identify—as well as secure—robots in the cloud. Software statements are part of the OAuth 2.0 Dynamic Client Registration Protocol (RFC 7591) and are loosely defined. Squire proposed creating a JWT (JSON Web Token) that would be compliant with this protocol. It would allow a robot to tell us details about itself: who it is, how it can prove that it is presenting the proper identity and how it verifies it has access to certain endpoints. Finally, if the robot is who it says it is, it should be able to answer a challenge using its cryptographic key.

“This is the ID card a robot should be able to give you, and you can design this into your system today,” Squire said. This is a step your team can begin immediately.
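Squire’s talk doesn’t prescribe an implementation, but the “ID card” idea can be sketched in code. RFC 7591 defines a software statement as a signed JWT carrying client metadata claims such as `software_id`, `software_version` and `client_name`. The minimal sketch below builds and verifies such a statement using only the Python standard library; the claim values and the shared signing key are made up for illustration, and a production deployment would typically use an asymmetric algorithm such as RS256 rather than the HMAC shown here.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per the JWT compact format."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_software_statement(claims: dict, key: bytes) -> str:
    """Build a compact HS256 JWT carrying the robot's client metadata."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_software_statement(token: str, key: bytes) -> dict:
    """Recompute the signature and reject tampered statements."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature: statement not trusted")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# RFC 7591 client-metadata claims describing the robot (values are invented)
claims = {
    "software_id": "robot-ingest-worker",
    "software_version": "1.4.2",
    "client_name": "Warehouse Ingest Robot",
}
key = b"shared-secret-held-by-the-registrar"  # hypothetical shared key
token = sign_software_statement(claims, key)
print(verify_software_statement(token, key)["software_id"])
```

Because the signature covers the header and payload, a robot that alters its own metadata—or an attacker replaying a modified statement—fails verification rather than silently gaining trust.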

We need to recognize that identity theft happens to these robots, just as it happens to humans, and we have to change the way security works across the board, Squire said. How do we do that? Squire offered the following recommendations that all companies should consider to protect data from threatening robots and other bad actors:

Assume compromise will happen. It’s important to authenticate in both directions, for humans and robots alike.
Treat the PII you hold like toxic waste. You need to know where it is stored, who has access and have a game plan ready to go into effect immediately if it is leaked.
Throw away your root key as soon as you set up your cloud environment. “That key is the key to your kingdom, and if an evil robot gets ahold of the root key you are in big trouble,” said Squire. “With that root key, a robot can create administrative users and delete them. A robot can delete you and you can’t go back in to fix them.”
If you don’t need to collect data, don’t collect it. There is so much information out there, but much of it is superfluous. With GDPR and other regulations, why take the risk of a data breach of information you never needed?
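Squire’s last point—collect only what you need—can be enforced mechanically at ingestion time rather than left to policy. The sketch below (field names are hypothetical) drops everything a robot reports that isn’t on an explicit allowlist, so superfluous or personal data never reaches storage in the first place:

```python
# Allowlist of fields the pipeline actually needs; anything else a robot
# sends (names, emails, free-text notes) is dropped before storage.
ALLOWED_FIELDS = {"sensor_id", "timestamp", "temperature_c"}

def minimize(record: dict) -> dict:
    """Keep only allowlisted fields, so unneeded PII never lands on disk."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "sensor_id": "unit-07",
    "timestamp": "2018-06-26T10:00:00Z",
    "temperature_c": 21.5,
    "operator_email": "jane@example.com",  # never needed downstream
}
print(minimize(raw))
```

An allowlist is deliberately chosen over a blocklist here: a new, unanticipated field is excluded by default, which matches the GDPR-era posture of not holding data you cannot justify.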

These are steps that your company can take this week to begin securing your data from threatening robots.

Over the Next Six Months

Squire recommended organizations get involved with developing regulations and standards. These aren’t created in a vacuum; the community is asked for its input. You can have a say on standards that better regulate robot identity procedures. “If you help set up these standards, you can make them work for everyone,” said Squire.

Also, set up a bug bounty team. Work with the hackers who find ways to take over your robots. It means you’ll need responsible disclosure for vulnerabilities, you’ll need to patch them and you’ll have to go public and credit the hacker. Most hackers want the recognition, not the bounty money.

Because this will be the most time-consuming and perhaps most expensive segment of your game plan, transition your organization to microservices over the course of the year. “If your robots are small and separate, they can’t do much damage if your system is compromised,” said Squire.

Robots are a valuable data computational tool, but if someone hijacks their identity, you are set up for a data breach nightmare. Protecting and recognizing your robot’s identity is as important as protecting the identities of your human employees.

Sue Poremba

Sue Poremba is a freelance writer based in central Pennsylvania. She's been writing about cybersecurity and technology trends since 2008.