Protecting Kids Requires Smart, National AI Policy—Not Fifty Shades of Red Tape
Why a Smart Federal Pause on AI Regulation Beats a 50-State Free-for-All
Jonathan Haidt recently argued that a moratorium on state AI regulation is dangerous to kids, invoking the specter of AI-powered deepfakes, predatory AI companions, and manipulative social media algorithms. His concern for child safety is sincere and widely shared. But the idea that a state-by-state patchwork of rules will better protect our children is fundamentally flawed.
First, let's clarify what the proposed moratorium does—and, crucially, what it does not do. It pauses only new state laws specifically targeting AI, but it explicitly preserves existing, powerful protections. States remain fully able to enforce current laws on privacy, consumer protection, civil rights, product liability, and anti-fraud measures. Those laws protect kids now and will continue doing so without interruption.
Critics fear the moratorium means no regulation at all. But the reality is precisely the opposite: it means a smarter, clearer approach. Right now, states are rushing forward with nearly a thousand separate AI proposals. This chaotic sprint threatens to produce inconsistent, contradictory standards that create loopholes, confuse enforcement, and weaken protection for kids. History has shown repeatedly that inconsistent laws are harder to enforce effectively, leaving kids more vulnerable—not less.
Moreover, complex state-by-state AI regulations would heavily favor Big Tech, not curb it. Large companies can afford sprawling compliance teams; small, innovative companies cannot. The unintended consequence? Fewer innovations designed to protect kids, fewer educational tools, and fewer alternatives to the current platforms critics rightly worry about. In contrast, a single coherent national approach provides clear standards that innovators can quickly meet, facilitating rapid deployment of safer, child-focused technologies.
This is not theoretical. Previous federal moratoria, such as the Internet Tax Freedom Act, directly boosted innovations that benefit kids, leading to wider access to educational resources and safer digital environments. A coherent federal approach similarly ensures the fastest possible response to new threats like deepfakes or manipulative AI tools.
Protecting our children is not a partisan issue. It demands practical solutions, not symbolic gestures. We need smart policy that works—policy that doesn't simply signal our good intentions, but actually makes our kids safer. The proposed AI moratorium offers exactly that: a pragmatic pause to craft a single, strong federal framework that provides clarity, ensures accountability, encourages innovation, and, most importantly, protects children nationwide.
Let's not gamble our kids’ safety on a patchwork of red tape. Instead, let’s get AI policy right from the start.