The Protect Elections from Deceptive AI Act addresses the threats posed to our elections by the use of AI to generate materially deceptive content.
Washington, D.C. – U.S. Senators Susan Collins (R-ME), Amy Klobuchar (D-MN), Josh Hawley (R-MO), Chris Coons (D-DE), and Michael Bennet (D-CO) introduced the Protect Elections from Deceptive AI Act, bipartisan legislation to ban the use of artificial intelligence (AI) to generate materially deceptive content falsely depicting federal candidates in political ads to influence federal elections.
“In a rapidly evolving digital landscape, we must ensure that voters are not manipulated by purposely misleading AI-generated content,” said Senator Collins. “This bill is a necessary step to strengthen the integrity of our elections while also protecting First Amendment rights.”
The bill would amend the Federal Election Campaign Act of 1971 (FECA) to prohibit the distribution of materially deceptive AI-generated audio, images, or video relating to federal candidates in political ads or certain issue ads in order to influence a federal election or fundraise. The bill allows federal candidates targeted by this materially deceptive content to have that content taken down and enables them to seek damages in federal court. The ban extends to any person, political committee, or other entity that distributes materially deceptive content intended to influence an election or raise money fraudulently. Consistent with the First Amendment, the bill includes exceptions for parody, satire, and the use of AI-generated content in news broadcasts.
Last year, after Senators Collins and Klobuchar sent a letter calling on the Election Assistance Commission (EAC) to take action to address AI-generated disinformation in elections, members of the EAC voted unanimously to allow election officials to use federal election funds to counter AI-generated disinformation in our elections.
###