Welcome to the week! It’s time to break out the hot chocolate here in Northern Virginia. Last night’s snow managed to last just long enough for me to see patches of it when I woke up. It melted pretty soon after, but I’m counting it. On to the news!
Last week a number of organizations (including the CGO) filed amicus briefs in the related cases of NetChoice v. Paxton and Moody v. NetChoice. I’m particularly fascinated by the brief filed by the Reddit moderators of r/law and r/SCOTUS. Give it a read when you get the chance!¹ After an extensive battle royale of lawmaking, the EU has agreed on the new AI Act. Social media CEOs will be testifying once again in the new year. FISA discussions are heating up as its expiration draws near. Finally, Meta is suing the FTC on the grounds that its in-house trials are unconstitutional.
In the News
Content Moderation and Free Speech Online
Amicus Briefs: NetChoice v. Paxton and Moody v. NetChoice
If you want a quick refresher before diving into the amicus briefs, I recommend this CRS report.
Texas and Florida Social Media Laws Violate the First Amendment | Thomas A. Berry, Cato Institute
Could Social Media Regulation Stifle Our Future? | Vance Ginn and Taylor Barkley, ArcaMax
Artificial Intelligence
E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence | Adam Satariano, New York Times
The OpenAI Board Member Who Clashed With Sam Altman Shares Her Side | Meghan Bobrowsky and Deepa Seetharaman, The Wall Street Journal
Does Section 230 Cover Generative AI? | Jennifer Huddleston, Cato Institute
Is AI Policy Compromise Possible? A Look at the Thune-Klobuchar AI Bill | Adam Thierer, R Street Institute
How Do We Prepare the Government for AI? | Matt Mittelsteadt, Mercatus Center
Children’s Safety Online
The CEOs of Meta, X, TikTok, Snap, and Discord Will Testify Before the US Senate on Child Safety | Emma Roth, The Verge
Of Meta and Minors, Filters and Filings: An Uncertain Path Forward | Clay Calvert, AEI
Surveillance
FISA Reform: Dueling Proposals, Ticking Clock | Patrick G. Eddington, Cato Institute
Re-authorizing FISA: Options for Reform | Joshua Levine and John Belton, American Action Forum
Antitrust
Meta Sues FTC on Privacy Move in Challenge to In-House Court | Sabrina Willmer and Leah Nylen, Bloomberg
Gus Hurwitz on Meta's Constitutional Challenge of the FTC | Gus Hurwitz, ICLE
Calling the FTC’s Actions Unconstitutional | Montather Rassoul, The Fifth Skill
Where Are the New FTC Rules? | Alden Abbott, Truth on the Market
Research
Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet) | Eunice Yiu, Eliza Kosoy, and Alison Gopnik, Perspectives on Psychological Science
Much discussion about large language models and language-and-vision models has focused on whether these models are intelligent agents. We present an alternative perspective. First, we argue that these artificial intelligence (AI) models are cultural technologies that enhance cultural transmission and are efficient and powerful imitation engines. Second, we explore what AI models can tell us about imitation and innovation by testing whether they can be used to discover new tools and novel causal structures and contrasting their responses with those of human children. Our work serves as a first step in determining which particular representations and competences, as well as which kinds of knowledge or skills, can be derived from particular learning techniques and data. In particular, we explore which kinds of cognitive capacities can be enabled by statistical analysis of large-scale linguistic data. Critically, our findings suggest that machines may need more than large-scale language and image data to allow the kinds of innovation that a small child can produce.
¹ After you read ours first, of course.
This just in… our new White Paper: https://www.stop-child-predators.org/_files/ugd/a9bd5d_97cdcc739f4a453482426033297ff94a.pdf