Researchers from MIT’s Media Lab want to create an interdisciplinary field that addresses the unintended consequences of AI. They “theorize” that there can be unintended consequences of AI applications without human oversight. They are sadly behind the curve. This became obvious through two completely unrelated issues: child pornography and Pokémon Go.
I had assumed that the fact that I play Pokémon Go was an unrelated topic. I meet a lot of security professionals who also play the game but are too embarrassed to admit it. As it turns out, the potential embarrassment may come with security-related benefits.
Recently, YouTube banned Nicolas “Nick” Oyzon, who runs the popular YouTube channel Trainer Tips. Oyzon is a professional Pokémon Go player who travels the world recording himself playing the game. With approximately 850,000 subscribers to his YouTube channel, he earns enough through ads and merchandise sales to play the game as a profession. Given his visibility among Pokémon Go players, Niantic, the creator of Pokémon Go, offsets some of his travel costs to attend official events.
Child pornography and Pokémon Go appear to be completely unrelated. However, one day Oyzon woke up to find his YouTube account taken down for child pornography. He was horrified on many levels. From a practical perspective, his entire livelihood was lost.
So how did this come about? It was the result of a YouTube AI program that scans videos for potential child pornography. Apparently, the “intelligence” involved was looking for the initials “CP,” which child pornographers use to tag videos. In the Pokémon world, CP stands for Combat Power, so it is natural that some of Oyzon’s videos contained “CP” in the title or description. The same fate hit Brandon Martyn, another popular Pokémon Go YouTuber, who goes by Mystic7.
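The failure mode is easy to reproduce. The sketch below is purely illustrative (YouTube has not published how its classifier actually works), but it shows how a bare keyword match on “CP” flags an innocent Pokémon Go title, while even a trivial context check clears it.

```python
# Illustrative sketch only: YouTube's real classifier is not public.
# This toy filter mimics the failure mode described above, flagging any
# title that contains the standalone token "CP" with no sense of context.

import re

def naive_flag(title: str) -> bool:
    """Flag a title if it contains the standalone token 'CP'."""
    return re.search(r"\bCP\b", title, re.IGNORECASE) is not None

def context_aware_flag(title: str) -> bool:
    """Same check, but suppress the flag when obvious gaming context is present."""
    if not naive_flag(title):
        return False
    gaming_terms = {"pokemon", "pokémon", "combat power"}
    lowered = title.lower()
    return not any(term in lowered for term in gaming_terms)

title = "How to get stronger Pokemon with higher IVs/CP in Pokemon Go"
print(naive_flag(title))          # True: the naive filter flags an innocent video
print(context_aware_flag(title))  # False: a trivial context check clears it
```

Even this crude whitelist is not a real solution, of course; the point is that a single contextual rule, let alone a human glance, is enough to separate a children’s game from criminal content.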
Oyzon received a message telling him that YouTube had flagged one of his videos as a potential concern, though he could not initially read it, because Google had also locked the Gmail account associated with his YouTube account. The message said that only one video was temporarily taken down; in reality, his entire account was locked. The video in question was titled “How to get stronger Pokémon with higher IVs/CP in Pokémon Go.”
Luckily for Oyzon, the issue was quickly resolved. His prominence gave him some ability to get a faster response, and Niantic, which is itself a spinoff from Google, YouTube’s parent company, helped him reach out to YouTube. It is safe to assume that few people have this clout.
While not a life-and-death situation, this does prove MIT’s contention that AI without oversight can do significant damage to individuals. In this case, it would take a person only a second to realize that CP in the context of Pokémon Go is not a reference to child pornography.
Oyzon is not alone in making a living on the Internet, and he is not the only person who could be hurt by poor implementations of AI. In his video on the issue, Oyzon acknowledged the importance of the effort to stop child pornography, and to his credit he was supportive of YouTube’s efforts despite the damage he could have suffered. There does, however, have to be a balance.
While it is not practical for a human to review every video on YouTube, if YouTube had simply locked or even deleted the video in question, there would have been minimal fallout. The fact that YouTube’s message said it was confirmed the video contained child pornography is a flat-out lie. As MIT describes, there was no oversight, or whatever oversight existed was poor.
AI can immensely help security and law enforcement. However, complete reliance on AI, without human oversight, can cause great damage to individuals. This is especially true on sites like YouTube, where monetization is one of the platform’s major growth drivers. The incident demonstrates exactly the concern MIT raises about the proliferation of AI without human oversight. Making the situation worse is YouTube’s notorious lack of human customer support. Oyzon and Martyn were incredibly lucky to have Niantic’s help.
Efforts to flag illicit online behavior are needed. However, when a child’s game is flagged as pornography, you don’t need MIT to tell you that your use of AI has serious issues.
*** This is a Security Bloggers Network syndicated blog from RSAConference Blogs RSS Feed authored by Ira Winkler. Read the original post at: http://www.rsaconference.com/blogs/when-ai-classifies-pokmon-go-as-child-pornography