Photographer Annie Leibovitz says she isn't worried about the threat artificial intelligence poses to photography.
Numerous artists and some tech industry insiders have been alarmed by the rapid spread of AI tools that can create realistic images from text prompts, potentially infringing on artists' copyrighted work and eliminating the need for human photographers.
Leading AI software companies, including Midjourney and Stability AI, have already faced lawsuits from a group of visual artists who allege that the companies unlawfully used their art to train their AI systems. The artists argue that users can generate art with the software that is "indistinguishable" from their original works.
Leibovitz, however, told Agence France-Presse that she’s unconcerned by the potential risks posed by the technology.
“That doesn’t worry me at all,” the out photographer said in an interview timed to her induction into the French Academy of Fine Arts this week.
In fact, Leibovitz appears eager to embrace AI as a tool in photography. "With each technological progress, there are hesitations and concerns," she said. "You just have to take the plunge and learn how to use it."
"Photography itself is not really real," she added. "I like to use Photoshop. I use all the tools available."
Meanwhile, critics argue that AI can be misused to create convincingly realistic images and videos of celebrities and politicians saying and doing things they never actually said or did. Experts, lawmakers, and public figures have warned about the danger AI-generated "deepfakes" pose in spreading misinformation, as well as the technology's capacity to produce fake explicit images and videos of celebrities and even children.
In 2019, congressional lawmakers introduced the "DEEP FAKES Accountability Act," which would require creators to digitally watermark deepfake images. Another bill, the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, introduced in January, would allow victims to sue people who create deepfake images of them without their consent.
During congressional testimony last year, Sam Altman, the gay CEO of OpenAI, expressed concern about the technology’s ability “to manipulate, to persuade, to provide sort of one-on-one interactive disinformation.” Altman said he supported establishing a government agency that could help establish safety standards and audits to prevent AI from violating copyright laws, instructing people on how to violate laws, illegally gathering user data, and promoting false advertising.
He would not, however, commit to retooling OpenAI to avoid using artists' copyrighted works, voices, or likenesses without first obtaining their consent.