After the horrific shooting in San Bernardino, California, federal law enforcement officers seized the now-dead suspect’s iPhone and sought to examine it. However, the phone was “locked” using proprietary hardware and software from Apple. The government sought a court order (under the All Writs Act, an 18th-century statute) compelling Apple to develop and implement a process to break its own security, and to provide the FBI with the unlocked and unencrypted contents of the iPhone.
After much legal wrangling, the FBI backed down. A recent report in the Washington Post indicates that the reason the FBI backed down is that they were able to turn to a “white hat” hacking company in Australia, Azimuth, to “jailbreak,” or unlock, the phone for them. Cool, cool. In fact, for the most part, that’s what is supposed to happen. Companies attempt to design and implement secure software, hardware, networks and applications, and governments (oh yeah, and hackers, too) attempt to find and exploit weaknesses in them. They put it on the bill, I tear up the bill. It’s very convenient.
It is certainly a more desirable outcome than requiring companies to deliberately crack or, even worse, weaken their security so that a government agency can bypass that security, or compelling the manufacturer or software developer to spend considerable development time and effort to undo its own security.
And that’s the problem with good security – when it works, it’s good. So, was it legal for Azimuth to jailbreak Apple’s devices, and then sell the jailbreak to a government agency? Magic 8 ball says, “Situation hazy; ask again later.” There are several statutes involved here. First and foremost is the Computer Fraud and Abuse Act (CFAA). The statute has many parts, but it makes it a federal crime to exceed authorization to access a computer and obtain information. Generally, to access a computer means to use it; to obtain information was supposed to mean to steal data, but it could also mean just to learn something. And, while a modern cell phone is certainly a “computer,” it is not clear that phone software, apart from the phone (or running on a virtual machine), is a “computer.”
You see, you don’t actually own your phone. Well, you kinda own part of it, but the software that makes it work is licensed to you by Apple and others subject to the software license agreement (SLA). Violate the SLA, and you are using (accessing) your own phone “in excess of authorization.”
Maybe. In fact, the U.S. Supreme Court is currently considering a case, Van Buren v. United States, that will help clarify whether accessing a database you are allowed to access, but for a purpose for which you are not authorized, constitutes a violation of the statute. When Azimuth “cracked” the iPhone, it probably violated something in the license agreement. I have no earthly clue what, because, frankly, the license agreement is unreadable. In fact, that’s kind of the point.
So, violating a license agreement is kinda like cutting the tags off your mattress, right? No big deal? Well, as the Post article points out, some of the founders of Azimuth formed another company, Corellium, which took Apple’s operating system and mounted it onto a computer to create a “virtual” iPhone, one which could more easily be hacked and brute-forced. Cool, cool. But, of course, to do that, you have to copy all of the Apple code to the virtual machine. Apple sued Corellium for copyright infringement, lost, and may appeal.
When you engage in security research – and particularly when you develop exploits – and do anything other than tell the developer about them under an approved “bug bounty” program, you run the risk of facing a lawsuit or criminal prosecution. The risk. Not the certainty. In fact, even if you discover a vulnerability and attempt to tell the developer about it, you might be prosecuted – especially if you want to be compensated for your time and efforts. Developers want to know about vulnerabilities and exploits so they can fix or defeat them, but they don’t want to be told about them, and they don’t want others to know about them.
In many cases, individuals have been prosecuted (and a few convicted) for embarrassing software developers and demonstrating vulnerabilities. Just ask Georgia’s Scott Moulton, or Bret McDanel, or Stefan Puffer (use the Google machine). You have meddled with the primal forces of nature, and the powers that be won’t have it! Other forms of security research pose their own legal threats. If your research is used for illegal purposes and you either know it or are “willfully blind” to it (once the missiles go up, who cares where they come down), you could be prosecuted for conspiracy, aiding and abetting, criminal facilitation or aggravated mopery. You could be prosecuted for “transmitting information” which results in damage to a computer. You could be prosecuted for trafficking in counterfeit access devices (means of accessing a computer without authorization). You could be prosecuted for criminal copyright infringement, for bypassing a technological measure designed to prevent access to copyrighted materials, or for a host of other things.
While the hacking statute notes that it does not “prohibit any lawfully authorized investigative, protective, or intelligence activity of a law enforcement [or intelligence] agency,” it is not clear that this immunity protects civilian companies that work for those agencies. And it’s not clear that everything those companies do is a “lawfully authorized” activity. And that’s just if you are working for the U.S. government. If you are working for other governments, who knows? And even if you are working for the U.S. government, that does not provide you immunity from prosecution by other countries. You weren’t really expecting to take that post-COVID trip to Guatemala, were you?
Best advice for white hat hackers – check with your lawyer. Every white hat hacker has a lawyer, right?