For most people outside of the IT and cybersecurity industries, the words “hacker” and “hacking” have ghastly connotations. Many envision a criminal bereft of any morals, whose only purpose is to pilfer from, spy on or defame his victims.
Much of that interpretation is wrong. For one, not all hackers are criminals, nor are all hackers male. Fear and misunderstanding around hacking have created barriers that can be difficult to overcome. Those barriers become problematic when they are used to create laws that govern user behaviors, especially when it comes to offensive security.
Offensive security is a bit of a loaded term. On one hand, it involves hacking back, and Justin Elze, adversarial emulation and threat research (AETR) team lead at TrustedSec, said this is not really the future for anyone, particularly in enterprise security. Certainly, there are gray areas where government agencies work collaboratively with the private sector to make good use of hacking back. For example, let’s say there’s a botnet attack: The FBI might contract with a company such as Microsoft to take over the botnet and shut it down.
Beyond those extreme circumstances, though, hacking back can lead to potentially disastrous, unintended consequences. Where to draw the line is subjective, which Dr. Ben Buchanan relates in great detail in his book “The Cybersecurity Dilemma: Hacking, Trust, and Fear Between Nations.”
Even government agencies don’t see eye to eye on the limits of offensive hacking, as is evidenced in the question of whether the United States should be considering offensive responses in cyberspace to Russia’s continued information warfare.
“If you are attacking back who you think attacked you, it could be misdirected because it’s so easy to pretend to be somebody else. You could be targeting the wrong organization,” said Elze. Given the resources needed to do the reconnaissance required to hack back, it seems more prudent for enterprises to focus on the other aspect of offensive security—breaking things.
Breaking with Good Intentions
The approach of breaking and testing products you currently own and control is not new, but thanks to bug bounty platforms, it is becoming more widely accepted across the industry. Still, professors of cybersecurity programs at the college level are tentative about teaching offensive hacking techniques. No one wants the responsibility of having taught young people how to use hacking tools, only to have them go and break the law.
Instead, many are teaching how hackers think and work, which involves tinkering with products. But often, when ethical hackers break a product, they become the target of the company’s legal team. In some cases, they are breaking the law.
Legislation in Georgia has brought ethical hacking back into the limelight—which is a good thing as long as legislators are open-minded about technology. As it stands, the bill will make an ethical hacker’s life rather difficult, which does little to help the industry move forward.
Some researchers have even been arrested for reporting vulnerabilities. In other cases, the company’s legal team threatens the researcher. “A security researcher doesn’t want to be in that situation,” said Elze. “If there were some legal framework, something defined for responsible disclosure without repercussions, that would be a step in the right direction.”
Hackers are going to tinker with products. The good ones are going to report it because they want to help. Allowing for responsible vulnerability disclosure without consequence is a win-win for organizations and hackers.
The vendor gets free research, even though that research might breach its acceptable use agreement. Those agreements should also be modified to allow those with good intentions to check systems for flaws. Shifting the point of view to consider intent can allow more protections for researchers.