By MATT BROWN and DAVID KLEPPER (Associated Press)
WASHINGTON (AP) — When you first see the images being shared online of former President Donald Trump surrounded by smiling groups of Black people, it may seem normal. However, a closer look reveals something different.
Unusual lighting and overly perfect details indicate that these images were all produced using artificial intelligence. The photos, which have not been linked to the Trump campaign, emerged as Trump attempts to gain support from Black voters, who statistics show remain loyal to President Joe Biden.
The fake images, which were brought to light in a recent BBC investigation, provide more evidence to support warnings that the use of AI-generated imagery will only increase as the November general election approaches. Experts said they highlight the danger that any group — Latinos, women, older male voters — could be targeted with lifelike images meant to deceive and disorient and demonstrate the need for regulation around the technology.
In a report released this week, researchers at the nonprofit Center for Countering Digital Hate used several popular AI programs to demonstrate how simple it is to create realistic deepfakes that can fool voters. The researchers managed to produce images of Trump meeting with Russian operatives, Biden stuffing a ballot box, and armed militia members at polling places, even though many of these AI programs claim to have rules prohibiting this type of content.
The center examined some of the recent deepfakes of Trump and Black voters and concluded that at least one was originally created as satire but is now being shared by Trump supporters as evidence of his support among Black voters.
Social media platforms and AI companies need to do more to safeguard users from the harmful effects of AI, according to Imran Ahmed, the center’s CEO and founder.
“If a picture is worth a thousand words, then these perilously vulnerable image generators, along with the inadequate content moderation efforts of mainstream social media, represent as potent a tool for malicious actors to mislead voters as we’ve ever seen,” Ahmed said. “This is a wake-up call for AI companies, social media platforms, and lawmakers – take action now or jeopardize American democracy.”
The images raised concerns on both the right and left that they could mislead people regarding the former president’s support among African Americans. Some in Trump’s circle have expressed frustration at the spread of the fake images, believing that the fabricated scenes undermine Republican outreach to Black voters.
“If you see a photo of Trump with Black folks and you don’t see it posted on an official campaign or surrogate page, it didn’t happen,” said Diante Johnson, president of the Black Conservative Federation. “It’s nonsensical to think that the Trump campaign would have to use AI to show his Black support.”
Experts anticipate further attempts to use AI-generated deepfakes to target specific voter groups in crucial swing states, such as Latinos, women, Asian Americans and older conservatives, or any other demographic that a campaign aims to attract, deceive or intimidate. With many countries holding elections this year, deepfakes are a worldwide problem.
In January, voters in New Hampshire received a robocall that mimicked Biden's voice and falsely told them that if they voted in the state's primary they could not vote in the general election. A political consultant later admitted creating the robocall, which may be the first known attempt to use AI to interfere in a U.S. election.
This content can have a harmful effect even if it's not believed, based on a February study by researchers at Stanford University looking at the potential impacts of AI on Black communities. When people realize they can’t trust images they see online, they may start to distrust real sources of information.
“As AI-generated content becomes more common and hard to tell apart from human-generated content, people may become more doubtful and distrustful of the information they receive,” the researchers wrote.
Even if it doesn't fool large numbers of voters, AI-generated content about voting, candidates and elections can make it harder for anyone to distinguish fact from fiction. That erosion of trust in real sources of information harms faith in democracy and deepens political division.
While false claims about candidates and elections are nothing new, AI makes it faster, cheaper and easier than ever to create realistic images, video and audio. When released onto social media platforms like TikTok, Facebook or X, AI deepfakes can reach millions before technology companies, government officials or legitimate news outlets are even aware of their existence.
“AI simply sped up and pushed ahead on misinformation,” said Joe Paul, a business executive and advocate who has worked to increase digital access among communities of color. Paul noted that Black communities often have “this history of mistrust” with major institutions, including in politics and media, which makes those communities more skeptical both of public narratives about them and of fact-checking meant to inform them.
Digital literacy and critical thinking skills are one defense against AI-generated misinformation, Paul said. “The goal is to empower folks to critically evaluate the information that they encounter online. The ability to think critically is a lost art among all communities, not just Black communities.”