Apple is Bad at Software, says Google

Google’s Project Zero is back, with some worrying criticisms of Apple’s software engineering chops. Google’s Threat Analysis Group (TAG) discovered a long-running “watering hole” campaign that was thoroughly rooting iPhones. The conclusions will surprise you:

Apparently, Apple is bad at writing and testing code. Many of the exploited vulnerabilities were due to basic coding errors.


And most of the time that Apple fixed the bugs, it published the source and sample exploits first. There was typically a two-month lag before the patch actually shipped!

So who says iOS is safer than Android? In today’s SB Blogwatch, we conceive new preconceptions.

Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: 419lulz.

TAG—You’re It

What’s the craic, Zack? Mister Whittaker reports—“Malicious websites were used to secretly hack into iPhones for years”:

 Security researchers at Google say they’ve found a number of malicious websites. … When visited, [the sites] could quietly hack into a victim’s iPhone by exploiting a set of previously undisclosed software flaws.

Google’s Project Zero said … the websites were visited thousands of times per week by unsuspecting victims, in what they described as an “indiscriminate” attack [and] the websites had been hacking iPhones over a “period of at least two years.” … The five separate attack chains allowed an attacker to gain “root” access to the device — the highest level of access and privilege.

The vulnerabilities affect iOS 10 through to the current iOS 12. … A spokesperson for Apple declined to comment.

Project Whatnow? Chris Merriman jokes, “Well this is awkward”:

 Researchers from Google’s notorious Project Zero division, home to white hat hackers who have named and shamed a number of its rivals in the past. … This is the second set of flaws that Project Zero has found in iOS this month.

The attack is already in the wild, though it is not known how many handsets have fallen prey, nor who is behind it – something we’ll perhaps never know. … The vast majority of vulnerabilities that Project Zero found (and there are 12 of the rotters) were in Apple’s Safari browser, by far the most popular choice amongst iOS users.

Apple has declined to comment. If it did, it’d probably say “Aaaaaaaagh F**********************.”

Hey, Lily Hay Newman and Andy Greenberg berg: [You’re fired—Ed.]

 Hacking the iPhone has long been considered a rarified endeavor. … But a discovery by a group of Google researchers has turned that notion on its head.

The rare and intricate chains of code exploited a total of 14 security flaws, targeting everything from the browser’s “sandbox” isolation mechanism to the … kernel, ultimately gaining complete control over the phone. … The attack is notable not just for its breadth, but the depth of information it could glean from a victim iPhone [including] live location data … photos, contacts, … passwords … communications sent through encrypted messaging services, like WhatsApp, iMessage, or Signal [and] access tokens that can be used to log into services.

Its sophistication and focus on espionage suggest state-sponsored hackers. … The campaign bears many of the hallmarks of a domestic surveillance operation.

The mass undetected hacking of thousands of iPhones should be a wake-up call to the security industry—and particularly anyone who has dismissed iOS hacking as an outlier.

Who discovered it? Ian Beer doesn’t drink the Kool-Aid—“A very deep dive into iOS Exploit chains”:

 Earlier this year Google’s Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks.

[I’ll] discuss some insights we can gain into Apple’s software development lifecycle. The root causes I highlight here are not novel and are often overlooked: We’ll see cases of code which seems to have never worked, code that likely skipped QA or likely had little testing or review before being shipped to users.

It is often possible to extract details of a vulnerability from the public source code repository before the fix has been shipped.

The fix only shipped … roughly one and a half months after details about the vulnerability were public. … The fix was then shipped to users … over two months later. … The bug was … shipped to users … over three months later. [It] presumably shipped [over two months later]. It seems likely that the fix shipped [over a month later]. The bug was … shipped [over a month later]. Fixed … on Oct 17th 2018. The fix then shipped to users [over a month later]. The bug seems to have been … shipped [over two months later].
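Beer's quoted point about the "patch gap" can be sketched with a toy git repository. Everything below is hypothetical (the file, the bug, and the tag names): once a security fix lands in a public source tree, diffing the tagged releases reveals the vulnerability long before any binary update reaches users.

```shell
# Simulate a public source repo whose security fix is visible before
# the patched build ships. All names and code here are illustrative.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m 'init'
# The "shipped" release contains an unchecked copy (an illustrative bug):
printf 'memcpy(dst, src, attacker_len); /* no bounds check */\n' > parse.c
git add parse.c
git -c user.email=demo@example.com -c user.name=demo commit -q -m 'parser'
git tag shipped
# The fix is committed and tagged publicly, but users have no update yet:
printf 'if (attacker_len <= sizeof dst) memcpy(dst, src, attacker_len);\n' > parse.c
git add parse.c
git -c user.email=demo@example.com -c user.name=demo commit -q -m 'add bounds check'
git tag fixed
# Anyone watching the repository can diff the tags and see exactly what
# was wrong, which is the window Beer measures in months:
git diff shipped fixed -- parse.c
```

In practice, attackers watch real open-source components for exactly this pattern, which is why the months-long gaps Beer lists between public fix and shipped patch matter.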

In other words: Apple FAIL? Jason Koebler repeats himself:

 This is crazy crazy crazy crazy crazy. Upends everything I thought I knew about iPhone security.

your iPhone is safe from hackers as long as it’s an iPod, you keep it offline, you don’t use iMessages, you don’t download any apps, and you turn it off.

Could be a smart move. fortran77 is old:

 If you are under constant attack from well funded enemies who want to destroy you, attacking iOS … is a good way of getting intelligence. The enemy may be lulled into a false sense of security on iOS.

And there are other ways in which it should make you (ahem) “Think different.” Joseph Cox—@josephfcox—shifts all the paradigms:

 This genuinely changes the threat assessment of iPhones for some people. … The idea that an actor would deploy it so indiscriminately and for such a prolonged period across versions is wild.

Silver lining (??) is that the implant didn’t have persistence; it would be removed after rebooting the iPhone. [But] since the implant can also steal the victim’s keychain with their passwords, hackers could maintain access to information elsewhere. Also transferred the stolen data unencrypted, so imagine other agencies have this data now too.

We kinda always assumed that, oh, an iOS exploit chain costs millions, it’ll be deployed in a targeted fashion for that reason. That is now clearly not true, and the opposite is almost obvious now: If you’re paying millions, you may want to get your money’s worth.

But Patrick Howell O’Neill—@HowellONeill—offers this apologia:

 Despite some dumb troll-y tweets I’ve seen pass by, this does not mean iPhones are insecure. No expert is saying that.

This means there is an extraordinarily resourced threat out there. Nothing’s changed about iPhones being some of the most secure devices you can buy.

What we don’t know: What websites were exploited and who did the websites serve? That may say a lot about the group being targeted which can point toward who is behind the operation.

Come on down. The pryce is right:

 [It’s] fascinating. It would be very interesting to know what the character and subject matter of the infecting sites were. [And] what is particularly interesting … from a geopolitical perspective, is the level of restraint.

The malicious actors … leveraged zero-days for iOS for years and yet do not seem to have overextended themselves or risk exposure by overly widening their intended targets. … They clearly could have chosen to gain a massive infection rate by combining this with hacking a well-known popular site, or even pulling more visits from (say) social media. … Instead the malicious actor chose to limit their intended recipients to run the exploits for a smaller set of targets for much longer while remaining undetected.

[It] hints at a state-actor with specific intent.

Meanwhile, Arrigo Triulzi—@cynicalsecurity—speaks the Queen’s English:

 All I am going to say … is, “Bloody Hell!” In the most profound British understatement tone I can muster.

And Finally:

Uncommon scambaiting

You have been reading SB Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites… so you don’t have to. Hate mail may be directed to @RiCHi or [email protected]. Ask your doctor before reading. Your mileage may vary. E&OE.

Image source: Blair Stirrett (cc:by)


