Is AI as Dangerous as Nuclear Tech?

Saying nuclear power is the safest form of energy seems wildly inaccurate and misleading, yet I often see people make this claim.

It’s always based on wild assumptions about quality control, which form a circular argument (a tautology). If absolutely everything in nuclear operations is done perfectly, then sure, nuclear is safest. Except that premise is so unrealistic as to be fantasy talk; the many nuclear accidents are the obvious proof and counterpoint.

Whether you go high or low on the nuclear disaster casualty count (the high estimates run well over 1000X the low ones), the point is that these counts are extremely messy and imprecise. You can’t claim safety on the grounds of absolutely precise methods and then generate wildly varying estimates of harm.

And harms are common: nuclear industry projections anticipated one core damage event or fewer, yet in reality there have been at least eleven. Three Mile Island and Chernobyl were both a function of human error, and the risk models failed spectacularly to account for human error. That’s not even to speak of the massive cleanup costs for nuclear harms that weren’t accidental at all.

Perhaps it helps to consider that nearly as many Americans died from the radiation and fallout of the Manhattan Project as were killed by the bombs it dropped. That’s not a story often told, but it helps put to rest these odd notions that nuclear is safest — notions that persist only because people aren’t being very scientific about risk, casualties and total harms.

So is AI as dangerous as that?

*** This is a Security Bloggers Network syndicated blog from flyingpenguin authored by Davi Ottenheimer. Read the original post at: