Guarding Against Backdoors and Malicious Hardware

In a post-Supermicro-scoop world, it’s important for security teams to review the basics on detecting and guarding against hardware backdoors.

Malicious software is relatively easy to find, but what if your actual device is the enemy?

Last month, Bloomberg Businessweek broke a story on Chinese nation-state actors secretly implanting spy chips in targeted motherboards manufactured by mega-supplier Supermicro, compromising large enterprises in both the public sector and the private sector. This story came on the heels of multiple revelations earlier this year by security researchers backed by the Department of Homeland Security that the firmware of millions of Chinese-manufactured smartphones was compromised.

There is much skepticism over the Bloomberg story because of vehement denials by the organizations implicated and other factors. If nothing else, though, it serves as a good wake-up call to IT security for guarding against hardware-embedded backdoors. For years, after all, it has been anticipated that China would try—or has already tried—embedding malicious backdoors directly into hardware. In 2012, researchers discovered a serious embedded backdoor in a Chinese-manufactured FPGA chipset used by military and aerospace organizations in the West. In this instance, for what it’s worth, the cybersecurati generally agreed that this backdoor was inadvertent, not malicious. However, even inadvertent backdoors can be converted to malicious ones if discovered by the wrong person.

To Catch a Spy Chip

The recent case of the Supermicro spy chips as described by Bloomberg appears to be entirely intentional and malicious. Reportedly, the chips were discovered only by dumb luck: an Amazon due-diligence team directly compared the motherboards of a company Amazon was acquiring against the original designs of those motherboards.

If you’re not sitting down to carefully compare your physical motherboards to their original designs (and let’s face it, you’re not), there are three typical methods to detect backdoors in a chipset—none of which are perfect.

The first is reverse engineering of the chip. This can be accomplished to some degree of satisfaction when the backdoor or other security vulnerability is unintentional or simply poorly hidden. This was the case five years ago, when security researcher Craig Heffner used an interactive disassembler to find a major manufacturer-inserted backdoor in D-Link routers—a backdoor that could allow an attacker to take over affected devices across an entire network.
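If you want to check whether a given device still answers to the specific backdoor Heffner described, a rough probe is straightforward. The sketch below is a minimal Python example: the target address and the assumption that the admin page sits at the web root are placeholders, while the User-Agent string is the one widely reported at the time. Only run something like this against hardware you are authorized to test.

```python
# Minimal sketch: probe a device for the D-Link backdoor Heffner described.
# The User-Agent string is the one widely reported in 2013; the target IP and
# admin path are placeholders -- adjust for your own lab device.
import requests

TARGET = "http://192.168.0.1/"                       # assumed router address (placeholder)
BACKDOOR_UA = "xmlset_roodkcableoj28840ybtide"       # widely reported backdoor string

def probe(url: str) -> None:
    # A vulnerable firmware build reportedly serves authenticated admin pages
    # when this User-Agent is presented, without asking for credentials.
    normal = requests.get(url, timeout=5)
    spoofed = requests.get(url, timeout=5, headers={"User-Agent": BACKDOOR_UA})
    if spoofed.status_code == 200 and spoofed.text != normal.text:
        print("Device responds differently to the backdoor User-Agent -- investigate.")
    else:
        print("No obvious difference observed (which is not proof the device is clean).")

if __name__ == "__main__":
    probe(TARGET)
```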

This backdoor appeared to have been a debugging mechanism that was accidentally (if not recklessly) left in by the manufacturer—hardly the province of a malicious nation-state actor—which is why it was so easy to find. Unveiling outright malicious chips planted by sophisticated supply-chain infiltrators may require complete physical reverse engineering. This method is extremely thorough, but extremely costly in terms of both specialty resources (including lab equipment and dangerous chemicals) and time. Consequently, full reverse engineering is generally reserved for unique scenarios and/or for IT security groups with extra resources to burn. Frankly, it might be easier to do what Amazon did and visually compare boards against their original designs, but that might not be a feasible option for enterprise customers at the end of the supply chain.

The second is the application of test inputs: the tester feeds the chip known inputs and compares the outputs against the expected responses. This is probably the easiest and most accessible way to test for hardware compromises, but it won't work for many types of vulnerabilities, which are often designed to account for and fudge their way through such rudimentary testing—especially when we are talking about nation-state actors. Additionally, the more complex the circuitry involved, the more impractical this methodology becomes; there is simply too much I/O to test.
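For illustration, here is a minimal sketch of that input/output approach, assuming a hypothetical `query_device` harness that talks to the chip and a `golden_model` reference describing the expected behavior. Randomly sampled test vectors will catch gross functional tampering, but, as noted above, a backdoor that only wakes up on one magic input among billions will sail through.

```python
# Minimal sketch of input/output conformance testing: drive a device with test
# vectors and compare its responses against a trusted "golden" reference.
# `query_device` and `golden_model` are hypothetical stand-ins for whatever
# harness actually talks to your hardware and reference implementation.
import random

def golden_model(x: int) -> int:
    # Placeholder for the expected behavior (e.g., a reference simulation).
    return (x * 3) & 0xFFFF

def query_device(x: int) -> int:
    # Placeholder for reading the real chip's response over your test harness.
    return (x * 3) & 0xFFFF

def conformance_test(n_vectors: int = 10_000) -> list[int]:
    mismatches = []
    for _ in range(n_vectors):
        vec = random.getrandbits(16)
        if query_device(vec) != golden_model(vec):
            mismatches.append(vec)
    return mismatches

if __name__ == "__main__":
    bad = conformance_test()
    print(f"{len(bad)} mismatching vectors" if bad else "All sampled vectors matched.")
```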

The third involves looking for cryptological hints by passively measuring circuit parameters such as power consumption, electromagnetic emissions, data remanence, computational timing or even the sound produced by the device during computation. This type of testing requires sensitive measuring equipment, particularly because malicious actors may take countermeasures to deliberately hide a backdoor from exactly this kind of analysis. Consequently, many of the factors that make the prior methods impractical come into play here as well.
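As a toy illustration of the side-channel idea, the sketch below compares averaged power traces from a suspect board against traces from a known-good reference board and flags time samples whose difference exceeds a simple statistical threshold. The trace capture, alignment and threshold are all assumptions; real-world side-channel analysis demands far more careful measurement and statistics.

```python
# Minimal sketch: flag time samples where a suspect board's averaged power
# trace deviates from a reference board's beyond a Welch-style z-score threshold.
import numpy as np

def flag_anomalies(reference: np.ndarray, suspect: np.ndarray, threshold: float = 5.0) -> np.ndarray:
    # reference, suspect: arrays of shape (n_traces, n_samples), already time-aligned.
    n_r, n_s = len(reference), len(suspect)
    mean_diff = suspect.mean(axis=0) - reference.mean(axis=0)
    sem = np.sqrt(reference.var(axis=0) / n_r + suspect.var(axis=0) / n_s) + 1e-12
    z = np.abs(mean_diff) / sem                  # per-sample deviation score
    return np.where(z > threshold)[0]            # indices of suspicious time samples

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(0.0, 1.0, size=(200, 1000))   # stand-in for captured power traces
    sus = rng.normal(0.0, 1.0, size=(200, 1000))
    sus[:, 400:410] += 1.0                          # simulated extra power draw
    print("Suspicious sample indices:", flag_anomalies(ref, sus))
```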

Analyze Network and OS Activity

And so, this third method often devolves into a generic fourth: simply staying on the lookout for network-activity blips. But, as any IT veteran can tell you, unusual network activity can mean just about anything, and therefore is difficult to diagnose.

While the Bloomberg report is vague on the specifics of the schematics, it has been theorized that the microchip was placed on or adjacent to the baseboard management controller (BMC), creating a backdoor that would allow malicious code to be injected into the host operating-system kernel.

To this end, the right firewall rules and other network-activity monitors theoretically could prevent and detect this kind of malicious behavior. Ditto for analytics on operating-system activity—although this calls for greater sophistication.
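As one concrete (and hypothetical) example of such a rule, if your BMC/IPMI interfaces live on a dedicated management subnet, any flow from that subnet to a destination outside a short allowlist is worth flagging. The sketch below assumes flow records exported from a firewall or flow collector as a CSV with `src` and `dst` columns; the subnets and file name are placeholders.

```python
# Minimal sketch: scan exported flow records for BMC-subnet traffic to
# destinations outside an allowlist. Subnets, file name and column names are
# assumptions to adapt to your own environment.
import csv
import ipaddress

BMC_SUBNET = ipaddress.ip_network("10.0.100.0/24")        # assumed management subnet
ALLOWED_DESTS = [ipaddress.ip_network("10.0.0.0/16")]     # internal nets the BMC may reach

def suspicious_flows(path: str) -> list[tuple[str, str]]:
    hits = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):                     # expects 'src' and 'dst' columns
            src = ipaddress.ip_address(row["src"])
            dst = ipaddress.ip_address(row["dst"])
            if src in BMC_SUBNET and not any(dst in net for net in ALLOWED_DESTS):
                hits.append((row["src"], row["dst"]))
    return hits

if __name__ == "__main__":
    for src, dst in suspicious_flows("flows.csv"):
        print(f"BMC host {src} reached unexpected destination {dst}")
```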

Moreover, automation can’t necessarily do everything. Experts accordingly emphasize the need for periodic human review of logs; log data review was critical in exposing a major exploit of government-mandated “lawful intercept” backdoors in Greece more than a decade ago.
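Even a modest script can make that human review more productive, for example by surfacing unexpected silences in a log, since suppressed or missing logging is a classic sign that something is covering its tracks. The sketch below assumes ISO-style timestamps at the start of each line and treats anything over an hour of quiet as worth a look; both are placeholders to adapt to your environment.

```python
# Minimal sketch of one human-assisted log review: flag unexpected gaps in a
# timestamped log file. Timestamp format and the one-hour threshold are assumptions.
from datetime import datetime, timedelta

TS_FORMAT = "%Y-%m-%dT%H:%M:%S"          # assumed ISO-like timestamp prefix
MAX_GAP = timedelta(hours=1)             # assumed "normal" maximum quiet period

def find_gaps(path: str) -> list[tuple[datetime, datetime]]:
    gaps, previous = [], None
    with open(path) as fh:
        for line in fh:
            try:
                current = datetime.strptime(line[:19], TS_FORMAT)
            except ValueError:
                continue                  # skip lines without a leading timestamp
            if previous and current - previous > MAX_GAP:
                gaps.append((previous, current))
            previous = current
    return gaps

if __name__ == "__main__":
    for start, end in find_gaps("syslog.txt"):
        print(f"Log silence from {start} to {end} -- worth a human look")
```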

Ultimately, nothing is foolproof when it comes to preventing, finding and guarding against hardware backdoors. They are not as common as software-based attacks, but they do happen—and they are more difficult to detect than malicious software is. As nation-states ramp up their cyberwarfare efforts (regardless of the veracity of Bloomberg’s controversial story), these defensive efforts will only grow more difficult. IT security departments must, perforce, step up their game in detecting malicious hardware.

Joe Stanganelli

Joe Stanganelli, Managing Director at US research and consulting firm Blackwood King, has several years of experience as a technology writer and content creator. Having written and published hundreds of bylined tech articles over the years, Joe has successfully predicted such developments as the respective releases of the iPhone 4S and the iPhone 5C, the rise of IoT (and IoT botnets), and the fall of Google+. Joe has also served as Principal of Beacon Hill Law, a Boston-based law firm, for nearly a decade. When not working, he enjoys writing songs, playing bridge and spending time with his zero cats.