Congress In Tough Spot After Robocall Ban

The Federal Communications Commission (FCC) recently voted unanimously to classify AI-generated voices as “artificial” under the Telephone Consumer Protection Act, effectively banning their use in robocalls. The decision came in response to an incident in which robocalls using an AI-generated voice impersonating President Biden reached New Hampshire voters ahead of the state’s primary.

Julia Stoyanovich, an associate professor at New York University’s Tandon School of Engineering, emphasizes the need for a holistic approach to regulating AI-generated media. She suggests that the focus should extend beyond voice content and encompass all forms of digitally altered media.

The Telephone Consumer Protection Act prohibits using artificial or prerecorded voice messages in telemarketing calls. However, the FCC’s ability to enforce this regulation has been limited, allowing illegal robocalls to persist.

FCC Chair Jessica Rosenworcel acknowledges the confusion caused by AI-generated voice cloning and imagery, which can trick consumers into believing fraudulent schemes are legitimate. She says that recognizing this emerging technology as illegal under existing law gives regulators a clearer path to protect consumers from scams.

Nevertheless, advocacy groups like Public Citizen argue that the FCC’s efforts are insufficient to safeguard citizens and elections. While the ban on AI-generated robocalls is a positive step, AI-generated images and videos remain unregulated in political campaigns.

With the 2024 election approaching, experts and advocates are urging the Federal Election Commission (FEC) to fill the regulatory gaps left by the FCC’s robocall ban. Attention is now turning to regulating AI’s use in political advertising, a critical issue in an election year.

FEC Commissioner Sean J. Cooksey has said the agency is diligently reviewing thousands of public comments on its pending rulemaking, and he hopes the matter will be resolved by early summer. However, Nick Penniman, founder and CEO of the nonpartisan political reform group Issue One, believes the FCC’s rule does not go far enough. Penniman has called on Congress to prohibit the use of deceptive AI to disrupt elections and on the FEC to clarify its rules so that such practices are banned in campaign communications. He argues that the unregulated use of AI poses an existential threat to democracy and the integrity of elections.

Enforcing the new ban on AI-generated robocalls may prove challenging for the FCC. Jessica Furst Johnson, an election lawyer, has highlighted the difficulty of identifying AI-generated content and the effect that difficulty could have on the rule’s effectiveness. Furst Johnson points out that the rule relies on reports from robocall recipients, which may produce complaints driven by personal bias rather than by genuine use of AI.