(QUEEN CITY NEWS) — Artificial intelligence has generated viral deepfake images, such as Pope Francis wearing a stylish puffer jacket or Tom Hanks appearing in an ad for a dental plan. Ahead of the 2024 election, the technology is now under the microscope when it comes to political ads.
“Leading into 2024, it seems that the concern is how candidates will be using this in the presidential race,” said The Hill technology policy reporter Rebecca Klar.
Generative AI is nothing new, but with the introduction of systems like ChatGPT and newer image and video generators, the technology can now create seemingly authentic images and videos depicting real people.
The concern ahead of next year’s election is the spread of misinformation. Senate Majority Leader Chuck Schumer’s AI Forum brought together tech companies like Google and Facebook-parent Meta to discuss ways of combating deceptive ads.
“AI will be a dramatic force multiplier for the spread of disinformation,” said Senate Majority Leader Chuck Schumer in a November 9th speech on the Senate floor. “We also agree that government can’t act alone to create guardrails to protect our elections.”
That’s why Google and Meta are announcing new disclosure requirements for political ads using the technology.
“The concern is also how candidates will be using it in smaller races,” Klar said. “Even state and local races, where it might have a bigger impact before a disclosure is added or any type of post is taken down.”
Google has already implemented its disclosure requirements; Meta’s program launches at the beginning of next year.
There are already many unanswered questions: What mechanisms do tech giants have to spot generative AI? Will the public trust the companies to monitor ads? And how will the disclosure of generative AI content be enforced?
“If ads don’t follow the policies, they won’t accept the ad,” Klar said, “and if there are repeated violations, an account may be removed or blocked from adding new ads.”