Bad vs. Less Bad Security Reporting: CoreML vs. Ships

Posted under: General

As I was flying home from a meeting today, I read two security stories that highlighted the differences between bad and less bad ways to approach reporting on security issues.

Before I go into them, here is how I evaluate articles related to either stunt hacking or super-popular technology:

  • Is there a real vulnerability?
  • Is it exploitable, and to what degree?
  • What are the actual, known, demonstrable consequences of exploitation?
  • Would other controls or the real-world ecosystem limit either exploitation or impact?
  • Who is writing the article or giving the presentation, who are the sources, and why are they talking about it?
  • How did the vendor/target/whoever respond to the situation and how is this reflected in the article?

Those are actually the same criteria I apply to original research reports and conference presentations. Now on to the articles:

First, a contact at Apple pointed me to this article by Lily Hay Newman at Wired on “privacy risks” with CoreML. (Let’s be honest; I have a real (known) sore spot about these kinds of articles, so the pointer wasn’t accidental.) I’ll save you some time and sum it up:

  • CoreML enables machine learning in apps.
  • These apps can have access to your photos (with permission).
  • Machine learning is hard, so bad actors can sneak in code to do things like find nudes or identify which products appear in the background of photos.
  • This is against the App Store guidelines, but no one really knows whether Apple would catch them.
  • There’s one small quote at the end from an actual security researcher admitting that said app could just upload every photo to the cloud if it has this permission anyway.

Here is how I’ve been summarizing these kinds of pieces since basically the start of Securosis:

  • There is a new technology getting some decent attention.
  • Hypothetically speaking, maybe someone can do bad stuff with it.
  • Let’s put “iPhone” or “critical infrastructure” in the headline so we get a lot of clicks. (This list is growing, though; I’d add cars, airplanes, home automation, electronic toys, and robots/drones.)
  • Let’s barely mention that multiple other vendors or product categories have the same capability and often worse security controls. Because, iPhones.

I want to contrast the Wired piece with a different piece at BleepingComputer on a backdoor in a satellite Internet system heavily used in shipping. The reason this article is a good contrast is that it starts with a similar premise – a researcher finding an issue and taking it to the press (in this case, clearly to get some media coverage). I’m not usually convinced this basis for articles is a good thing, since a lot of companies push their researchers toward “big” findings like this to get attention. However, some are legitimately important issues that need coverage, which vendors or whoever would otherwise try to cover up. In this case:

  • Most ships use a popular satellite Internet system.
  • There is a backdoor (literally named backdoor) in the system, plus another vulnerability.
  • The system is end of life, still in wide use, and will not be patched.
  • The system is for Internet traffic only, not ship control, and the networks are separated.
  • Exploiting this is hard but possible.
  • Although you can’t get into control systems, it could be used for tracking/economic malfeasance.
  • It is at least partially patched, and the vendor warned everyone.

The key differences:

  • This was a real exploitable vulnerability, not a hypothetical.
  • The article clearly defined the scope of potential exploitation.
  • The piece was quickly updated with a statement from the vendor indicating the issue may not even be as bad as the security vendor reported – or may not be an issue at all anymore (though the update should be marked at the top, since it undermines much of the rest of the piece).

Now, is this article great? No – the headline and section titles are more hyperbolic than the actual text (editors often add these after the writer submits the article). I also think vendor statements and other refining context belong at the top of the piece. According to Inmarsat’s statement (issued after release), the exploit requires physical access, and remote exploitation is blocked by shoreside firewalls. The positives of the article are that it mostly balanced the risk, highlighted a really stupid mistake (the backdoor was insanely easy to exploit), and was based… on reality.

You want to see a similar situation that involved a real exploit, real risks, and a horrible vendor response, and that resulted in widespread action? Check out this article on a pacemaker recall due to exploitable vulnerabilities. It even highlights issues with how it was handled by both the researchers and the vendors.

– Rich

*** This is a Security Bloggers Network syndicated blog from Securosis Blog authored by (Securosis).