Visions of the future tend to include technology for good and for evil. The most frightening scenario is when the former turns into the latter, seemingly without prompting by humans: robots gone wrong, Skynet, you name it. More likely, though, is that good technology will be abused by humans for malicious purposes. In cybersecurity, this started with the urge to cheat at blackjack, but has now moved beyond financial theft and fraud into propaganda wars (aka “fake news”).
How will this go further? I believe the abuse will continue, becoming in some cases more subtle. We are getting pretty good as an industry at detecting external sources of disinformation, data exfiltration, and breaches of confidentiality. But the next step will be a class of attacks that I’ll call Denial of Trust.
Denial of Trust attacks will focus on further degrading the trust among systems, users, providers, and customers that is critically important to security. There are a few ways these attacks could be carried out.
- Accusing the Breach. We have gotten so used to breach announcements that we don’t question them anymore; they’re almost an everyday occurrence. Within the security industry, we’ve made it worse with unhelpful pronouncements such as, “It’s not a matter of ‘if,’ but ‘when,’” creating the belief that breaches are not only inevitable, but that they’re happening right now and nobody knows about them. This “assume breach” mindset lends itself nicely to attackers who can damage a target’s reputation simply by accusing it of having been breached. The denials won’t be believed, and nobody will stand up in defense and say definitively, “No, there hasn’t been a breach.” This is another front in the information wars that has yet to materialize fully, and it’s cheaper than a DDoS attack.
- Integrity attacks. There have been some of these, but again, not nearly as many as there will be in the future. Subtle changes introduced into business data over time will take a long time to discover, and the victim will not know how far back to search the data streams to find when the interference started. In the fog of AI and machine learning, it will be even more difficult to distinguish mistake from malice when there are fewer humans to perform sanity checks (lacking what Nick Selby refers to as the “Raised Eyebrows Department”). Inducing an enterprise to lose trust in itself is bad enough; causing its business partners and customers to lose trust could be devastating.
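One way to shorten that “how far back do we search?” window is to keep periodic cryptographic digests of business data in independent, write-once storage that the attacker can’t reach. The sketch below is illustrative, not from the original post: it assumes snapshot digests exist and that once tampering begins, every later snapshot is affected (so the mismatches form a suffix and a binary search can locate the earliest one). The function names are hypothetical.

```python
import hashlib
import json

def snapshot_digest(records):
    """Compute a deterministic digest of a dataset snapshot.

    JSON with sorted keys gives a canonical byte representation,
    so the same logical data always yields the same hash.
    """
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def first_tampered_snapshot(stored_digests, trusted_digests):
    """Binary-search the snapshot history for the earliest digest
    that no longer matches the independently stored trusted copy.

    Assumes mismatches form a contiguous suffix: once the data was
    altered, all subsequent snapshots differ from the trusted record.
    Returns the index of the earliest mismatch, or None if all match.
    """
    lo, hi, first_bad = 0, len(stored_digests) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if stored_digests[mid] == trusted_digests[mid]:
            lo = mid + 1          # this snapshot is clean; look later
        else:
            first_bad = mid       # mismatch here; look for an earlier one
            hi = mid - 1
    return first_bad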
- Double your fun: why not combine Integrity Attacks with Accusing the Breach? Announcing that there is “undetected” data corruption in an enterprise could be a serious matter, particularly in areas outside of finance that aren’t as closely monitored by third parties. Since you can’t prove a negative, victims will be on the defensive for an inordinate amount of time, trying to counter an induced loss of trust and recover a reputation brought down solely by false accusations.
- Finally, there’s an area I call Malicious Design. UX and UI researchers know there are subtle ways to influence users and guide them toward or away from certain choices. Usability research is important, since you want customers to feel confident, informed, and in control of the application. Bad UI design (or, as we referred to it in the analyst business, an “engineering-grade UI”) can usually be attributed to incompetence, or to a lack of empathy and diversity of thinking. Back doors or other additions or changes to an application can be detected by their use. But when an interface is subtly designed for malicious purposes (for example, pressuring the user into hasty decisions that will affect security later by introducing a session timeout warning, or by making some elements more visible than others), that isn’t something you can filter or screen for with your usual detection tools.
Yes, there will still be flaming drones, rebellious refrigerators, and cyber-kinetic attacks. Those will be the more obvious ones, though. Keep your eye on the cyber-threat ball, because it may well curve in an unexpected direction. It’s the quiet ones you have to watch out for.
This is a Security Bloggers Network syndicated blog post authored by Wendy Nather. Read the original post at: RSA Conference Blog