Puffer coat Pope. Musk on a date with GM CEO. Fake AI ‘news’ images are fooling social media users
(CNN) — Pope Francis wearing a massive, white puffer coat. Elon Musk walking hand-in-hand with rival GM CEO Mary Barra. Former President Donald Trump being detained by police in dramatic fashion.
None of these things actually happened, but AI-generated images depicting them did go viral online over the past week.
The images ranged from obviously fake to, in some cases, compellingly real, and they fooled some social media users. Model and TV personality Chrissy Teigen, for example, tweeted that she thought the pope’s puffer coat was real, saying, "didn’t give it a second thought. no way am I surviving the future of technology." The images also sparked a slew of headlines as news organizations rushed to debunk them, especially those depicting Trump, who was ultimately indicted by a Manhattan grand jury on Thursday but has not been arrested.
The situation demonstrates an emerging online reality: a new crop of buzzy artificial intelligence tools has made it cheaper and easier than ever to create realistic images, as well as audio and video. And these images are likely to pop up with increasing frequency on social media.
While these AI tools may enable new means of expressing creativity, the spread of computer-generated media also threatens to further pollute the information ecosystem. That risks compounding the challenge for users, news organizations and social media platforms of vetting what’s real, after years of grappling with online misinformation featuring far less sophisticated visuals. There are also concerns that AI-generated images could be used for harassment, or to drive already divided internet users further apart.
"I worry that it will sort of get to a point where there will be so much fake, highly realistic content online that most people will just go with their tribal instincts as a guide to what they think is real, more than actually informed opinions based on verified evidence," said Henry Ajder, a synthethic media expert who works as an advisor to companies and government agencies, including Meta Reality Labs’ European Advisory Council.
Compared to the AI-generated text that has also recently proliferated thanks to tools like ChatGPT, images can be especially powerful in provoking emotion in the people who view them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit industry group. That emotional pull can make it harder for people to slow down and evaluate whether what they’re looking at is real or fake.
What’s more, coordinated bad actors could eventually attempt to create fake content in bulk — or suggest that real content is computer-generated — in order to confuse internet users and provoke certain behaviors.
"The paranoia of an impending Trump … potential arrest created a really useful case study in understanding what the potential implications are, and I think we’re very lucky that things did not go south," said Ben Decker, CEO of threat intelligence group Memetica. "Because if more people had had that idea en masse, in a coordinated fashion, I think there’s a universe where we could start to see the online to offline effects."
A quickly evolving tool
Computer-generated imagery has improved rapidly in recent years, from the Photoshopped shark swimming down a flooded highway that resurfaces during natural disasters to the websites that, four years ago, began churning out mostly unconvincing fake photos of non-existent people.
Many of the recent viral AI-generated images were created by a tool called Midjourney, a platform less than a year old that allows users to create images from short text prompts. On its website, Midjourney describes itself as "a small self-funded team" with just 11 full-time staff members.
A cursory glance at a Facebook page popular among Midjourney users reveals AI-generated images of a seemingly inebriated Pope Francis, elderly versions of Elvis and Kurt Cobain, Musk in a robotic Tesla bodysuit and many creepy animal creations. And that’s just from the past few days.
The latest version of Midjourney is only available to a select number of paid users, Midjourney CEO David Holz told CNN in an email Friday. Midjourney this week paused access to the free trial of its earlier versions due to "extraordinary demand and trial abuse," according to a Discord post from Holz, but he told CNN it was unrelated to the viral images. The creator of the Trump arrest images also claimed he was banned from the site.
The rules page on the company’s Discord site asks users: "Don’t use our tools to make images that could inflame, upset, or cause drama. That includes gore and adult content."
"Moderation is hard and we’ll be shipping improved systems soon," Holz told CNN. "We’re taking lots of feedback and ideas from experts and the community and are trying to be really thoughtful."
In most cases, the creators of the recent viral images don’t appear to have been acting malevolently. The Trump arrest images were created by the founder of the online investigative journalism outlet Bellingcat, who clearly labeled them as his fabrications, even if other social media users weren’t as discerning.
Developing safety practices
Platforms, AI technology companies and industry groups are working to improve transparency around when a piece of content is generated by a computer.
Platforms including Meta’s Facebook and Instagram, Twitter and YouTube have policies restricting or prohibiting the sharing of manipulated media that could mislead users. But as use of AI image-generation technology grows, even the detection systems behind such policies could undermine user trust. If, for example, a fake image accidentally slipped through a platform’s detection system, "it could give people false confidence," Ajder said. "They’ll say, ‘there’s a detection system that says it’s real, so it must be real.'"
Work is also underway on technical solutions that would, for example, watermark an AI-generated image or include a transparent label in an image’s metadata, so anyone viewing it across the internet would know it was created by a computer. The Partnership on AI has developed a set of standard responsible practices for synthetic media with partners including ChatGPT-creator OpenAI, TikTok, Adobe, Bumble and the BBC; the framework includes recommendations on how to disclose that an image was AI-generated and how companies can share data about such images.
"The idea is that these institutions are all committed to disclosure, consent and transparency," Leibowicz said.
A group of tech leaders, including Musk and Apple co-founder Steve Wozniak, this week wrote an open letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing "profound risks to society and humanity." Still, it’s not clear whether any labs will take such a step. And as the technology rapidly improves and becomes accessible beyond a relatively small group of corporations committed to responsible practices, lawmakers may need to get involved, Ajder said.
"This new age of AI can’t be held in the hands of a few massive companies getting rich off of these tools, we need to democratize this technology," he said. "At the same time, there are also very real and legitimate concerns of having a radical open approach where you just open source a tool or have very minimal restrictions on its use is going to lead to a massive scaling of harm … and I think legislation will probably play a role in reigning in some of the more radically open models."
The-CNN-Wire
™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
By: Pristine Villarreal
April 3, 2023