State Policymakers Should Pause Before Adopting Proposals from Meta and Google
Six considerations for the app store age verification model
Social media use and its effects on teen mental health are a growing concern. In response, a number of states, including Texas and Utah, have sought to restrict teen access, while lawmakers in other states, like California and Arkansas, are defending their own social media bills in the courts. With so much in motion, there is little clarity about what will and won’t work in practice.
That’s where Meta, the parent company of Facebook and Instagram, and Google, the parent company of YouTube, have joined the fray with legislative proposals.
In a November 2023 blog post, Meta announced that it is backing federal legislation that “requires app stores to get parents’ approval whenever their teens under 16 download apps.” Meta’s CEO Mark Zuckerberg endorsed the idea during member questioning at this week’s Senate hearing. The idea seems simple, but it is more complex than it appears, shifts responsibility away from platforms like Meta, and leads with an unviable solution rather than an effective one.
There is no doubt that Meta should be part of this broader conversation. According to the latest data from Pew, teens use Meta’s Instagram heavily (though Facebook far less). The decisions Meta makes have ramifications for tens of millions of users and set an example for the industry. Now that the company is making serious legislative proposals, those proposals are worth analyzing.
Requiring “app stores to get parents’ approval whenever their teens under 16 download apps” is both an attractive and seemingly simple solution. The other leading idea to date—adopted in Arkansas and Utah—is to require age verification for all users at the platform level. This method requires the covered sites to collect and validate the age and location for each and every site visitor to determine which users are minors. It’s a lot of information for every website to process and potentially store.
App store age verification, by contrast, tries to take advantage of app stores’ central role. Google Play and the Apple App Store are the obvious central players, but there are hundreds of other app stores, including Meta’s own. Rather than every website verifying a user’s age, the idea is to complete just one verification with the app store acting as a gatekeeper.
However, that’s where the benefits of the proposal end.
First, an app store requirement would only affect mobile users and PC users who use an app store to access social media apps. Users of any age could access these sites at any time via a web browser on a mobile device or a PC. Should Meta’s proposal become law, teens would likely flock to web browsers on the same mobile device or on a nearby PC, mooting most of the proposal’s benefits.
Second, this mandate would affect all app store users. A common misunderstanding about age verification is that it would only apply to the targeted population—in this case, users under 16. However, to comply with such proposals, app stores would need to know the age of every user in order to determine who is under 16. This means anyone in the US would need to submit some form of identification that the app store could use to determine their age, a proposal that “[t]he majority of Americans remain uncomfortable with” according to a September 2023 CGO poll.
Third, this approach would disproportionately affect marginalized populations. Should Meta’s proposal require the most accurate form of age verification, a government ID, these populations would bear the heaviest burden. (Although IDs are far from perfect, other forms of age assurance are even less accurate.) According to a recent study at the University of Maryland, nearly 29 million Americans are without an ID; if it required an ID, Meta’s proposal would thus exclude roughly 1 in 10 Americans from using app stores. More problematic still, according to the same report, “Members of underrepresented racial and ethnic groups were less likely to have a current driver’s license or other government-issued photo ID.” Lower-income communities and adults with a high school degree or less were likewise among the least likely to have a government-issued ID. A government ID requirement would hit these marginalized and vulnerable populations hardest.
Fourth, although this appears to be a simple solution, the technical mechanics remain complicated and pose severe privacy risks. There are hundreds of app stores of varying kinds and levels of security. While Apple might preserve privacy well, another covered app store might not. Either way, any personal identification that is uploaded is at risk of a data breach. The level of risk would vary with the type of verification a new law requires, such as a government ID or biometric information, but there is risk whenever data is transferred online. And many questions remain unanswered. At what point would a user be required to prove their age? At the point of sale? What if that sale is online? When they first start their device? What about devices shared among family members? What kind of age verification would the law require? Biometric data like a face or palm scan? A government ID? Where does that data go, who collects it, and when?
Fifth, this proposed law is unnecessary because parents and caregivers who want this functionality can already enable these settings themselves. Google Play offers a range of parental controls, including the ability to approve all app downloads. The Apple App Store has a similar feature with Ask to Buy. Indeed, Meta has its own parental approval process for apps that parents and caregivers can utilize. Instead of lobbying the federal government to force all users to get permission before downloading apps, Meta and other big tech companies should focus on informing parents and caregivers how they can use these existing tools. This approach best recognizes that not every child, and certainly not every 16-year-old, is the same. Their families and caregivers know them best and can work with them to provide the tools that will most effectively keep them safe.
Finally, this proposal rests squarely on the belief that age verification is the best approach to managing online risks for teens. That belief assumes age verification can be done effectively and accurately while respecting the civil and legal rights of all users. The reality is the opposite. As prior court decisions, recent court opinions, and technical analysis have shown, age verification is not the best route to managing teen online safety. Meta could instead have used this opportunity to study and experiment with solutions that empower teens and caregivers. Its proposal only reinforces the notion that age verification is the best route forward when it is not. What is needed to fix the problems teens face on social media cannot be neatly packaged in a blog post. It will require changes in cultural, individual, and business practices, most of which cannot be regulated in ways that promote innovation and human freedom. That complex solution set is something a company as innovative and well-resourced as Meta is well positioned to propose.
It’s worth noting that Meta is not entirely alone, although it has offered the most concrete legislative proposal among the large social media companies. In October 2023, a month before Meta’s announcement, Google published a blog post outlining its “Legislative Framework to Protect Children and Teens Online.” As with Meta, it makes sense for Google to be part of the conversation. According to the latest data from Pew, YouTube, which Google owns, is used by nearly every American teen.
Google’s proposal includes 11 ideas that policymakers “around the world” can use. These recommendations mostly encourage regulators and policymakers to legally formalize many of the services and practices Google already implements on YouTube and its other products. Although softer in tone than Meta’s proposal, it is still troubling to see a major company propose problematic ideas such as banning targeted advertising to minors and recommending limited age verification on the one hand, while on the other advising lawmakers to craft policies appropriate for younger and older teens (an impossible task to comply with legally if platforms do not know their users’ ages).
So far Meta has aimed its recommendations at federal policymakers and has not endorsed the idea in the states. That is the correct move, even if the proposal has problems. State lawmakers should resist taking Meta’s, and even Google’s, ideas and introducing them in their legislatures. Enough proposals are already being adjudicated in the courts, including those from California, Arkansas, and Utah, that state lawmakers should refrain from introducing yet another legally dubious measure before those cases are resolved. Indeed, key components of the proposals from Meta and Google will be affected by these court decisions. Policymakers, especially those in the states, should let those issues play out before adding more expense for taxpayers, confusion for innovators, and delays for concerned parents.
Despite these critiques, Meta’s and Google’s products and services entertain, connect, and educate over a billion people around the world. Although these companies have made many errors, they have also done much good. That is why their proposals are worth critiquing. Meta and Google should redirect their efforts toward approaches that actually empower parents, teens, and caregivers rather than ones that would restrict access for all users and are unlikely to help the teens who need help most.