We released it here, but ICYMI, Neil has a great FAQ for policymakers who want to do something about AI. In our analysis of state bills, the definitions section causes more trouble than anything else. That’s why Neil lays out a great alternative for how policymakers might define the type of AI they’re seeking to regulate. We plan to update the FAQ as more questions and issues roll in!
I wrote a post here too. ICYMI, it was about Meta’s and Google’s recent policy proposals for keeping kids and teens safe online. Most of the post is about Meta’s pitch to Congress that app stores should be responsible for verifying user age. Unfortunately, as attractive as the idea may seem, it has many problems. And while it’s not a good idea at the federal level, it’s an even worse one if state policymakers pick it up.
Media quotes, op-eds, and citations:
Referring to Utah’s latest social media bills, I told The Salt Lake Tribune's Bryan Schott: “If policymakers want to design consumer products and applications, they should quit their jobs and become full-time app developers. They want to regulate the nitty-gritty of what makes these applications usable. These new proposals go so far that they would make these platforms very unpleasant to use.”
Flipping the prevailing narrative on its head, Chris Koopman argued in USA Today that the real solution to AI-generated misinformation in the 2024 election is to speed up AI’s rollout: "The most powerful defense against electorally weaponized artificial intelligence is the defensive application of artificial intelligence."
R Street's Marc Hyden cited our Tech Poll in his testimony opposing Georgia's SB 351: "In the end, Americans are incredibly uncomfortable with these measures. In a recent poll, the CGO asked Americans if they are ‘comfortable sharing a government identification document like a driver’s license with social media companies in order to verify age.’ The survey found that 2 out of 3 Americans are not comfortable sharing their identification document with social media companies, and 1 in 10 were unsure." Thanks, Marc!
I had an op-ed in the Deseret News discussing another proposal in Utah that would require mobile devices to filter content and verify user age. "Utah’s policymakers should proudly be leaders on innovative policies. Yet when it comes to policies like those in SB104, innovation might not be the right approach. This may only put Utah’s citizens into a deeper and more expensive legal hole. Utah’s policymakers should just stop digging."
Taylor Lorenz quoted me in her Washington Post piece about how the Senate Judiciary Committee hearing on online safety for children missed the mark. “These are real stories with real tragedy and sadness,” said Taylor Barkley, director of technology and innovation policy at the Center for Growth and Opportunity, a policy research center at Utah State University. “That makes it all the more important to find solutions that work, and a lot of major leading proposals are not going to end up helping kids and teens in the long run. It’s going to end up restricting their access to information, educational content, connection, community.”
Neil's latest Substack was quoted at length in POLITICO's Digital Future Daily, "This raises obvious questions about rules, policy and privacy. In a Substack post today, Neil Chilson, senior research fellow at the Center for Growth and Opportunity, argues that we’ve already been partway down this road before when Washington thought about how to regulate the Internet of Things. That vision of a future filled with networked devices raised fresh privacy and security concerns and got quite a bit of attention in Washington several years ago."
Jessica Melugin, writing for The Dispatch, cited Aubrey and Jessica's Now+Next post on panic over videogames in the 1990s, "Studies about youth mental health and social media suffer from the same flaws as now-debunked studies attempting to link video games and real-world violence from decades ago." Thanks, Jessica!
ITIF cited Scott and Matt's work on Keeping Kids Safe Online, "As a recent paper by the Center for Growth and Opportunity highlights, the tech sector—especially small businesses and start-ups—would benefit from NIST’s guidance to help them identify the potential risks of new products and features. As the paper notes, NIST already has experience developing these risk frameworks in other fields such as privacy and artificial intelligence." Thanks, Juan!