Welcome to the week! I have an announcement before we move on to the good stuff. I will be leaving the Center for Growth and Opportunity and moving on to new projects in the New Year. Sadly, this means that Aubs-ervations will be coming to an end on Now + Next. HOWEVER, I plan to continue publishing Aubs-ervations in the New Year on my personal Substack, Tech Policy Tidbits! If you enjoy subjecting yourself to terrible puns and helpful updates on tech policy, give it a follow!
On to the news! The FCC continues its effort to safeguard the internet from…?1 Tech Policy Press has a tracker for the AI Insight Forums that is a fantastic resource for anyone following AI policy. Pew has released another round of its “Teens, Social Media and Technology” survey. Google has lost to Epic in one of the multiple lawsuits it is currently facing. Though people remain engaged with the app itself, they seem to be losing interest in a TikTok ban.2 Finally, Apple tells the government to come back with a warrant.
Public Comments
FCC Proposed Rule: Safeguarding and Securing the Open Internet
In the News
A Little Bit of Everything
The State of State Technology Policy: 2023 Report | Scott Babwah Brennen and Matt Perault, Center on Technology Policy at the University of North Carolina at Chapel Hill
Artificial Intelligence
US Senate AI ‘Insight Forum’ Tracker | Gabby Miller, Tech Policy Press
European AI Regulations: Real Risk Reduction or Regulatory Theater? | Bronwyn Howell, AEI
Chatbot Hype or Harm? Teens Push to Broaden A.I. Literacy | Natasha Singer, New York Times
Children’s Online Safety
NetChoice Sues Utah to Keep Kids Safe Online and Protect Constitutional Rights | NetChoice Press Release
Teens, Social Media and Technology 2023 | Monica Anderson, Michelle Faverio, and Jeffrey Gottfried, Pew Research Center
Sunak Considers Crackdown on Young Teens’ Social Media Use | Thomas Seal, Kitty Donaldson, and Jillian Deutsch, Bloomberg
New Jersey Is The Latest To Push A Harmful Moral Panic ‘Think Of The Kids’ Social Media Bill | Mike Masnick, Techdirt
Antitrust and the Market
Apple, Google Get Billions From Their App Stores. That’s Now Under Threat. | Aaron Tilley and Meghan Bobrowsky, The Wall Street Journal
Google’s Antitrust Loss to Epic Could Preview Its Legal Fate in 2024 | Nico Grant, New York Times
Content Moderation
TikTok Ban Has Lost Support, Even Among Republicans: Pew Survey | Drew Harwell, Washington Post
Illicit Content on Elon Musk’s X Draws E.U. Investigation | Adam Satariano, New York Times
Privacy
Apple Now Requires a Judge's Consent to Hand Over Push Notification Data | Raphael Satter, Reuters
Research
A Model of Behavioral Manipulation
We build a model of online behavioral manipulation driven by AI advances. A platform dynamically offers one of n products to a user who slowly learns product quality. User learning depends on a product’s “glossiness,” which captures attributes that make products appear more attractive than they are. AI tools enable platforms to learn glossiness and engage in behavioral manipulation. We establish that AI benefits consumers when glossiness is short-lived. In contrast, when glossiness is long-lived, users suffer because of behavioral manipulation. Finally, as the number of products increases, the platform can intensify behavioral manipulation by presenting more low-quality, glossy products.
Can Socially-Minded Governance Control the AGI Beast?
This paper robustly concludes that it cannot. A model is constructed under idealised conditions that presume the risks associated with artificial general intelligence (AGI) are real, that safe AGI products are possible, and that there exist socially-minded funders who are interested in funding safe AGI even if this does not maximise profits. It is demonstrated that a socially-minded entity formed by such funders would not be able to minimise harm from AGI that might be created by unrestricted products released by for-profit firms. The reason is that a socially-minded entity has neither the incentive nor ability to minimise the use of unrestricted AGI products in ex post competition with for-profit firms and cannot preempt the AGI developed by for-profit firms ex ante.
The rise of artificial intelligence (AI) and of cross-border restrictions on data flows has created a host of new questions and related policy dilemmas. This paper addresses two questions: How is digital service trade shaped by (1) AI algorithms and (2) by the interplay between AI algorithms and cross-border restrictions on data flows? Answers lie in the palm of your hand: From London to Lagos, mobile app users trigger international transactions when they open AI-powered foreign apps. We have 2015-2020 usage data for the most popular 35,575 mobile apps and, to quantify the AI deployed in each of these apps, we use a large language model (LLM) to link each app to each of the app developer's AI patents. (This linkage of specific products to specific patents is a methodological innovation.) Armed with data on app usage by country, with AI deployed in each app, and with an instrument for AI (a Heckscher-Ohlin cost-shifter), we answer our two questions. (1) On average, AI causally raises an app's number of foreign users by 2.67 log points or by more than 10-fold. (2) The impact of AI on foreign users is halved if the foreign users are in a country with strong restrictions on cross-border data flows. These countries are usually autocracies. We also provide a new way of measuring AI knowledge spillovers across firms and find large spillovers. Finally, our work suggests numerous ways in which LLMs such as ChatGPT can be used in other applications.
Unclear, but I’m sure they’ll figure it out eventually.
Interest in a TikTok ban tends to wax and wane in my experience, but perhaps the Montana ban being blocked has done it in for good this time.