By WYATTE GRANTHAM-PHILIPS (AP Business Writer)
NEW YORK (AP) — No, Katy Perry and Rihanna did not attend the Met Gala this year. But AI-generated images still tricked some fans into believing the stars showed up at the major fashion event.
Deepfake images portraying a few famous people at the Metropolitan Museum of Art’s yearly fundraiser rapidly spread online Monday and early Tuesday.
Some observant social media users noticed discrepancies, and fact-checking features such as X's Community Notes flagged the images as likely made with artificial intelligence. For instance, one hint that a viral picture of Perry in a flower-covered gown was fake is that the carpeting on the stairs matched that from the 2018 event, not this year's green-tinged fabric lined with live foliage.
However, some people were deceived — including Perry’s own mother. Shortly after at least two AI-generated images of the singer started circulating online, Perry shared them on her Instagram, along with a screenshot of a text that seemed to be from her mom complimenting her on what she thought was a real Met Gala appearance.
“lol mom the AI got to you too, BEWARE!” Perry responded in the exchange.
Representatives for Perry did not immediately respond to The Associated Press’ request for further comment and information on why Perry wasn’t at the Monday night event. But in a caption on her Instagram post, Perry wrote, “couldn’t make it to the MET, had to work.” The post also included a muted video of her singing.
Meanwhile, a fake image of Rihanna in a stunning white gown embroidered with flowers, birds and branches also made the rounds online. The multihyphenate was originally confirmed as a guest for this year's Met Gala, but Vogue representatives said she would not be attending before the carpet was shuttered Monday night.
People magazine reported that Rihanna had the flu, but representatives did not immediately confirm the reason for her absence. Rihanna’s reps also did not immediately respond to requests for comment in response to the AI-generated image of the star.
While the source or sources of these images are hard to pin down, the realistic-looking Met Gala backdrop seen in many suggests that whatever AI tool was used to create them was likely trained on images of past events.
The Met Gala’s official photographer, Getty Images, declined comment Tuesday.
Last year, Getty sued a leading AI image generator, London-based Stability AI, claiming that it had copied more than 12 million photographs from Getty's stock photography collection without permission. Getty has since launched its own AI image generator trained on its works, but blocks attempts to generate what it describes as "problematic content."
This is far from the first time generative AI, a branch of artificial intelligence that can produce new content, has been used to make phony material. Image, video and audio deepfakes of well-known figures, from Pope Francis to Taylor Swift, have drawn widespread attention online in the past.
Experts say each example highlights growing concerns over misuse of the technology, particularly the spread of false information and the potential for scams, identity theft, propaganda and even election manipulation.
Cayce Myers, a professor and director of graduate studies at Virginia Tech's School of Communication, said that people once trusted what they saw, but that is no longer always the case. He pointed to Monday's AI-generated Perry image as a sign of how sophisticated the technology has become.
While creating AI-generated images of celebrities in imaginary luxury gowns may seem relatively harmless, experts note that this kind of technology has a well-documented history of more serious or harmful uses.
Earlier in the year, sexually explicit and abusive fake images of Swift began circulating online, leading X, formerly Twitter, to temporarily block some searches. Victims of nonconsensual deepfakes extend beyond celebrities, and advocates stress particular concern for victims who have few protections. Research shows that explicit AI-generated material overwhelmingly harms women and children, including disturbing cases of AI-generated nudes circulating through high schools.
Experts also continue to point to potential geopolitical consequences that deceptive, AI-generated material could have, especially in an election year for several countries around the world.
David Broniatowski, an associate professor at George Washington University and lead principal investigator of the Institute for Trustworthy AI in Law & Society at the school, emphasized that the implications go beyond the safety of the individual, extending to the safety of the nation and the entire society.
Harnessing generative AI's capabilities while building a framework that safeguards consumers is a significant challenge, especially as the technology's commercialization continues to grow rapidly. Experts highlight the need for corporate accountability, universal industry standards and effective government regulation.
Tech companies largely have the power to govern AI and its risks themselves, while governments around the world work to catch up. Still, there has been notable progress over the last year. In December, the European Union reached an agreement on the world's first comprehensive AI rules, but the act won't take effect until two years after final approval.
_____________
AP Reporters Matt O’