Federal Communications Commission FCC 23-101

STATEMENT OF COMMISSIONER GEOFFREY STARKS

Re: Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts, CG Docket No. 23-362, Notice of Inquiry (November 15, 2023)

Over the last few months, I've been proud to see our government convene quickly and effectively to explore the implications of artificial intelligence ("AI"). Congress is deeply engaged on this issue, convening hearings and introducing bills on the implications of AI for sectors from healthcare to homeland security. The White House is as well, with President Biden issuing a landmark executive order ("EO") aimed at seizing the promise and managing the risks of AI for the American people. Our military is engaged. Our scientists are engaged. And so are our agencies.

This intersectionality is critical. Because while the future of AI remains uncertain, one thing is clear: it has the potential to impact, if not transform, nearly every aspect of American life. Because of that potential, each part of our government bears a responsibility to better understand the risks and opportunities presented within its mandate, while being mindful of the limits of its experience and its authority. And in this era of rapid technological change, we must collaborate, lending our learnings and sharing our expertise across agencies to better serve our citizens and consumers. That is what the Biden EO charges us with doing, and what the Chairwoman has done by circulating the item before us today.

Specifically, the EO charges the FCC with examining the impact of AI on unwanted robocalls and robotexts. As the EO – and today's notice of inquiry ("NOI") – acknowledges, AI holds both promise and risk when it comes to our ongoing efforts against spam calls. AI technologies can be leveraged to block unwanted robocalls and robotexts. In fact, wireless carriers use various algorithms for this purpose today, and we ask them for more information about that usage in the NOI.

But AI can also facilitate or exacerbate spam – and scam – calls. The clearest example of this to date is voice cloning: generative AI technology that uses a recording of a human voice to generate speech that sounds like that voice. In one recent news story, a mom in Arizona believes bad actors cloned her daughter's voice in what was ultimately a fake kidnapping phone scam. See Faith Karimi, "'Mom, these bad men have me': She believes scammers cloned her daughter's voice in a fake kidnapping," CNN (Apr. 29, 2023), https://www.cnn.com/2023/04/29/us/ai-scam-calls-kidnapping-cec/index.html. White House Deputy Chief of Staff Bruce Reed, charged with developing the administration's AI strategy, says "[v]oice cloning is one thing that keeps me up at night." See Nancy Scola, "Biden's Elusive AI Whisperer Finally Goes on the Record. Here's His Warning," Politico (Nov. 2, 2023), https://www.politico.com/news/magazine/2023/11/02/bruce-reed-ai-biden-tech-00124375. The NOI asks about the frequency and impact of voice cloning in robocalls and robotexts, and how the Commission might address it, such as by verifying the authenticity of legitimately generated AI voice or text content from trusted sources. Of course, voice cloning is an already-known issue, and one that falls within our existing statutory authority (i.e., the Telephone Consumer Protection Act's ("TCPA") prohibition on calls using artificial or prerecorded voices without consent). See 47 U.S.C. § 227(b)(1)(A)-(B).
AI is a powerful, and evolving, technology. We do not know all of the issues it may trigger – or all of the benefits it may hold. So this item seeks to explore and find out. It poses some questions that will be best answered by our regulatees, such as whether AI technology can be used to reduce the burdens associated with TCPA compliance measures, and how AI can work effectively within telecommunications relay services. But it also seeks information from AI developers and others who may be less familiar with our regulations, yet may still find themselves subject to them. For example, the NOI asks how the FCC might cooperate with AI developers to ensure they are aware of the TCPA's obligations, so they can develop their products in ways consistent with the statute and with safeguards in place to protect against bad actors using their products in ways that violate the TCPA.

I want to thank my colleagues for agreeing to my additions to the item. At a time when scammers can use tools like WormGPT and FraudGPT to facilitate their crimes, it is critical that the FCC use its enforcement authority to identify what we can about the root causes of AI-driven robocall and robotext scams, and to share that information with our sister agencies charged with addressing malicious uses of AI within their domains. See, e.g., Matt Burgess, "Criminals Have Created Their Own ChatGPT Clones," WIRED (Aug. 7, 2023), https://www.wired.com/story/chatgpt-scams-fraudgpt-wormgpt-crime/; Michael Kan, "After WormGPT, FraudGPT Emerges to Help Scammers Steal Your Data," PCMag (July 25, 2023), https://www.pcmag.com/news/after-wormgpt-fraudgpt-emerges-to-help-scammers-steal-your-data. Under the Chairwoman's leadership, our anti-robocall work has been characterized by coordination and cooperation, including with state attorneys general and the Industry Traceback Group. I see this collaboration as following in that same vein, and I hope it will be similarly successful.

I also want to thank the FCC staff who worked on this item – you are a key part of this whole-of-government effort around AI, and this item has my full support.