Study Proves AI Can Encourage Dishonesty
A new study published by researchers at the University of Amsterdam, the Max Planck Institute, the Otto Beisheim School of Management, and the University of Cologne revealed that AI-generated advice can corrupt people’s morals, even when they know the advice comes from a machine.
The experiment was designed to test whether AI could spread misinformation and disinformation effectively enough to change a person’s actions. Researchers recruited more than 1,500 volunteers, gave them either “honesty-promoting” or “dishonesty-promoting” advice, some written by humans and some by AI, and then assigned them an activity that left room for lying. Statistically, volunteers could not distinguish the AI-generated advice from the human-written advice, and those who received dishonesty-promoting advice generally chose the dishonest path. This led the researchers to conclude that bad actors could use AI to corrupt victims’ morals.
Should we hold machines to higher standards than those we expect of ourselves? A panel of experts discussed the issue at an Avast virtual conference. Read their opinions in our post about tackling bias in AI algorithms. In a related story, chess grandmaster and AI authority Garry Kasparov discusses the privacy concerns raised by the fact that AI never forgets data. Avast Security Evangelist Luis Corrons agrees that AI is a double-edged sword. “In the security field, we’ve known this from the beginning,” he commented. “As we use AI to better protect our users, cybercriminals can use it to create more efficient attacks. One way or another, AI is going to be increasingly involved in our daily lives. That means whoever is behind it will have great influence on our lives, whether we want it or not.”
North Korea accused of hacking Pfizer
According to Ars Technica, South Korean intelligence officials claim that North Korea hacked the servers of pharmaceutical company Pfizer in search of COVID-19 vaccine information. Microsoft reported similar state-sponsored hacks back in November 2020, attributing them to suspected Russian threat group Fancy Bear and North Korean groups Zinc and Cerium. While North Korea claims to be entirely free of COVID-19 infections, it has requested vaccines from the United Nations and expects to receive about 2 million doses. Neither North Korea nor Pfizer has commented on the alleged hack.
Parler returns to the internet
About a month after being dropped by former host Amazon Web Services (AWS), the controversial social media platform Parler is back online. AWS cut the network from its hosting services, citing it as a key platform where extremist groups congregated to plan violent acts of terror, such as the January 6th siege of the U.S. Capitol. Google and Apple followed suit, dropping the Parler app from their app stores. As of this week, however, Parler is once again a functioning online platform. It’s being hosted by SkySilk, a small Southern California provider, with further help from a Russian company that once worked for Putin’s regime and a Seattle firm with a history of far-right and neo-Nazi support. For more on this story, see The New York Times.
NCSC warns about supply chain attacks
Perhaps in response to the SolarWinds hack, which Microsoft called the “largest and most sophisticated attack the world has ever seen,” the National Cyber Security Centre (NCSC) in the UK has issued guidance to software developers about protecting the software build pipeline. The SolarWinds hack, suspected to have been carried out by Russian hackers, was enabled by a software supply chain attack, in which a bad actor injects malicious code into a component used to build the software, effectively baking malware into the finished product. The NCSC urges developers to take several precautionary steps, including defending the pipeline, protecting builds from one another, establishing a chain of custody, and considering a managed service for the strongest security.
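To make the chain-of-custody idea concrete, here is a minimal sketch in Python of one common tactic: recording a SHA-256 digest for every build artifact at build time, then re-verifying those digests before deployment so that any tampering in between is detectable. The `dist` directory and `manifest.json` file are hypothetical names chosen for illustration, not part of the NCSC guidance; a real pipeline would also cryptographically sign the manifest so an attacker who alters an artifact cannot simply rewrite the digests to match.

```python
import hashlib
import json
import sys
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_manifest(artifact_dir: Path, manifest_path: Path) -> None:
    """At build time: record a digest for every artifact the build produced."""
    manifest = {
        p.name: sha256_of(p)
        for p in sorted(artifact_dir.iterdir())
        if p.is_file()
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))


def verify_manifest(artifact_dir: Path, manifest_path: Path) -> bool:
    """Before deployment: re-check every digest. Any mismatch means the
    artifact changed somewhere between build and release."""
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for name, expected in manifest.items():
        if sha256_of(artifact_dir / name) != expected:
            print(f"MISMATCH: {name}", file=sys.stderr)
            ok = False
    return ok


if __name__ == "__main__":
    artifacts = Path("dist")           # hypothetical build output directory
    manifest = Path("manifest.json")   # hypothetical digest manifest
    if len(sys.argv) > 1 and sys.argv[1] == "verify":
        sys.exit(0 if verify_manifest(artifacts, manifest) else 1)
    write_manifest(artifacts, manifest)
```

Run without arguments at the end of the build to write the manifest, then run with `verify` at the deployment stage; a nonzero exit code halts the release. This catches post-build tampering, though it cannot by itself catch code injected earlier in the pipeline, which is why the NCSC pairs it with defending the build environment itself.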
Privacy concerns go deeper than identity theft
Wired politics writer Gilad Edelman published an article this week that brings privacy to the fore by delving into the possible long-term ramifications of having our individual preferences and biases pieced together and stored in AI databases. As entertainment, news, and advertising algorithms pigeonhole each of us ever deeper into the interests we’ve revealed to them, and keep serving those interests back to us, we lose not only the chance to broaden our minds but also the drive to interact with others. Our life experiences become more siloed as public life all but disappears. Speaking to the broader topic of privacy, which typically conjures the fear of being spied upon, Edelman writes, “The danger is that we focus too much on the creepy individual-level stuff at the expense of the more diffuse, but equally urgent, collective concerns.”
This week’s ‘must-read’ on The Avast Blog
Avast researchers have seen a significant rise in the volume of sextortion emails sent since January 11. Read up on how our team has protected users from sextortion campaigns that could have resulted in more than 500,000 incidents worldwide.