Federal Communications Commission FCC 24-17

STATEMENT OF COMMISSIONER GEOFFREY STARKS

Re: Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts, CG Docket No. 23-362, Declaratory Ruling (February 2, 2024)

We all know unwanted robocalls are a scourge on our society. But I am particularly troubled by recent harmful and deceptive uses of voice cloning in robocalls. Real-world examples are no longer theoretical. Bad actors are using voice cloning – a generative AI technology that uses a recording of a human voice to generate speech sounding like that voice – to threaten election integrity, harm public safety, and prey on the most vulnerable members of our society.

In January, potential primary voters in New Hampshire received a call, purportedly from President Biden, telling them to stay home and “save your vote” by skipping the state’s primary. Steve Holland, “Fake ‘Biden’ robocall tells New Hampshire Democrats to stay home,” Reuters (Jan. 22, 2024), https://www.reuters.com/world/us/fake-biden-robo-call-tells-new-hampshire-voters-stay-home-2024-01-22/. The voice on the call sounded like the President’s, but of course it wasn’t. Those were voice cloning calls. The use of generative AI has brought a fresh threat to the campaign season, lending heightened believability to fake robocalls and the voter suppression schemes behind them. Christine Chung, “They Used Robocalls to Suppress Black Votes. Now They Have to Register Voters.” The New York Times (Dec. 1, 2022), https://www.nytimes.com/2022/12/01/us/politics/wohl-burkman-voter-suppression-ohio.html; In the Matter of John M. Burkman, Jacob Alexander Wohl, J.M. Burkman & Associates, Forfeiture Order, File No. EB-TCD-21-00032652 et al. (2023) (FCC’s assessment of a $5,134,000 forfeiture against the perpetrators of a 2020 robocall voter suppression scheme for violations of the TCPA).

Another example: parents have been scared half to death hearing their child’s voice on the other end of the line, saying they’ve been kidnapped or need money to get out of trouble. Faith Karimi, “‘Mom, these bad men have me’: She believes scammers cloned her daughter’s voice in a fake kidnapping,” CNN (Apr. 29, 2023), https://www.cnn.com/2023/04/29/us/ai-scam-calls-kidnapping-cec/index.html; Pranshu Verma, “They thought loved ones were calling for help. It was an AI scam.” The Washington Post (Mar. 5, 2023), https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/. In actuality, their children are safe and unaware of the chaos.

Finally, fraudsters continue to target the elderly through “grandparent” scams, but voice cloning adds a frightening new dimension to these attacks on those who may be least able to detect them. Federal Communications Commission, “‘Grandparent’ Scams Get More Sophisticated” (Feb. 1, 2024), https://www.fcc.gov/grandparent-scams-get-more-sophisticated. These are real attacks causing harm and confusion, and given their effectiveness, we can assuredly expect more to come.

The whole of government is rightly focused on whether and how to regulate AI. President Biden issued a landmark executive order aimed at seizing the promise and managing the risks of AI. Senator Schumer has convened nine bipartisan AI Insight Forums, bringing together leading minds to advise Congress broadly on AI’s impact from the workforce to national security.
Legislators have introduced bills addressing the use of AI in dozens of contexts, including election integrity. In fact, a bipartisan group of senators has introduced legislation that would ban the use of AI to generate content falsely depicting federal candidates in order to influence an election, and comprehensive robocall legislation announced just days ago would double the statutory penalty for calls that use AI to impersonate an individual or entity with the intent to defraud.

This work is critical. In the interim, agencies that have existing authority to regulate AI should work hard to address harmful uses of the technology. And they are. The Federal Trade Commission just finished accepting submissions for its Voice Cloning Challenge – an effort to encourage products, policies, and procedures aimed at protecting consumers from AI-enabled voice cloning harms. The Federal Election Commission has sought comment on prohibiting the deceptive use of AI in campaign advertisements.

I am proud that the FCC is also stepping in to play its own unique role. We said it in our November Notice of Inquiry, and today’s Declaratory Ruling makes it clear: the use of voice cloning in telephone calls and texts falls within the FCC’s statutory authority under the TCPA. The Act prohibits calls using “artificial or prerecorded voice[s]” without consent. What is voice cloning, if not the use of an artificial voice? By issuing this item, we’re responding to 26 bipartisan state attorneys general, who last month emphasized to the FCC that “any type of AI technology that generates a human voice should be considered an ‘artificial voice’ for purposes of the TCPA.” Reply Comments of 26 State Attorneys General, CG Docket No. 23-362, at 1 (Jan. 16, 2024).

The FCC’s collaborative work with state and local law enforcement is key to investigating and stopping robocall scams. We must do all we can to aid these partners, so where we can give our state attorney general partners the certainty of authority to go after these threats, we absolutely should. I know we will work hand in glove to enforce the law against bad actors using voice cloning calls to cause harm – and as we’ve seen, that harm can range from financial fraud to threats to the sanctity of our democratic process. Too much is at stake to stand still.

I want to thank the Chairwoman for her leadership in our robocall efforts and her prompt action on today’s Declaratory Ruling. My thanks also to the Commission staff who worked on this item. It has my full support.