While the Federal Election Commission hasn’t set rules on the use of AI in political campaign ads, it voted in August to seek public comment on whether to update its misinformation policy to cover deceptive AI ads.
The Google policy change also comes as Congress works on comprehensive legislation to set guardrails on AI and prepares to meet next week with leaders in the generative AI space, including Sundar Pichai, CEO of Google, which owns AI subsidiary DeepMind.
The specifics: Google’s latest rule update, which also applies to YouTube videos, requires all verified advertisers to prominently disclose when their ads contain “synthetic content that inauthentically depicts real or realistic-looking people or events.” The company mandates that the disclosure be “clear and conspicuous” on the video, image or audio content. Such disclosure language could be “this video content was synthetically generated” or “this audio was computer generated,” the company said.
A disclosure won’t be required when AI tools are used for editing techniques, like resizing or cropping, or for background edits that don’t create realistic depictions of actual events.
Political ads that lack the required disclosures will be blocked from running, or removed later if they evade initial detection, a Google spokesperson said. Advertisers can appeal or resubmit their ads with disclosures.
Elections worldwide: Google’s update also applies to its existing election ads rules in regions outside the U.S., including Europe, India and Brazil, all of which hold elections in 2024. It will also apply to advertisements that use “deepfakes,” videos or images synthetically created to mislead, which are already banned under the company’s existing misrepresentation policy.
Facebook’s ad policies currently don’t require disclosure of synthetic or AI-generated content. The company does have a policy banning manipulated media in videos outside of advertising, and it bans the use of deepfakes.