Part 2: Evidence in the Violent Video Game Debate
How a widely accepted theory was rejected by the courts for lack of evidence
Don’t miss Part 1: Is the Debate Around Social Media Another Tech Panic?
As seen in Scalia’s critique, one of the main flaws in the research used by the state of California in Brown v. Entertainment Merchants Association was its lack of causal evidence. Most of the research on the relationship between violent video games and aggression relied upon correlational evidence. Correlational studies measure two variables and their relationship to one another. Establishing a correlation between two variables often serves as a starting point for research, but it does not prove causation. Consider the case of ice cream sales and violent crime: as one rises, so does the other. The two are correlated, but no one seriously proposes banning ice cream. That is because, upon deeper analysis, there is a third variable that drives an increase in both: higher temperatures.
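To make the confound concrete, here is a minimal simulation, with entirely made-up numbers rather than real data, in which temperature drives both ice cream sales and crime while neither has any effect on the other. The two outcomes still end up clearly correlated, and the correlation disappears once temperature is controlled for:

```python
# A minimal sketch (hypothetical numbers) of how a confound produces a correlation.
# Temperature drives both ice cream sales and violent crime in this toy model;
# neither variable has any direct effect on the other.
import numpy as np

rng = np.random.default_rng(0)

temperature = rng.normal(70, 15, size=1_000)                    # daily high, °F
ice_cream_sales = 2.0 * temperature + rng.normal(0, 20, 1_000)
violent_crime = 0.5 * temperature + rng.normal(0, 10, 1_000)

# The two outcomes are clearly correlated even though neither causes the other.
r = np.corrcoef(ice_cream_sales, violent_crime)[0, 1]
print(f"correlation between ice cream sales and crime: {r:.2f}")

# Controlling for temperature (partial correlation via residuals) removes it.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial = np.corrcoef(residuals(ice_cream_sales, temperature),
                        residuals(violent_crime, temperature))[0, 1]
print(f"correlation after controlling for temperature: {r_partial:.2f}")
```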
Even if we eliminate the possibility of a third variable, it can be difficult to determine which variable drives the other. Perhaps playing violent video games makes children more aggressive, or perhaps aggressive children are more attracted to violent video games. This is why it is so important for researchers to produce causal evidence as well as correlational evidence. It is also why California’s case failed.
Scalia’s critique also notes that the research suffers from several methodological flaws. While these are not detailed in the opinion of the Court, other researchers at the time went into depth on the subject. One methodological flaw that they pointed to is the use of poor proxies for aggression:
[G]iving participants words with blank spaces and evaluating whether they make “aggressive” or “nonaggressive” words with the letters they fill in (i.e., “explo_e” could be completed as “explore” or “explode”), as Anderson did in his experiment, JA 496, has no known validity for measuring aggressive behavior (or even aggressive thinking).
Since these proxies do not accurately measure violence or aggression towards others, they are not representative of real-world aggression. Furthermore, the lack of consistent definitions and study designs made it difficult to compare results directly, which limited the insights researchers could draw from comparing studies.
Other studies on violent video games relied heavily upon self-reported data to measure time spent playing video games and whether those games were violent. The use of self-reported data calls the reliability of the findings into question. Researchers have since noted that self-reported media use does not accurately reflect actual time spent, because respondents commonly over- or under-report their use.
In addition, the questions asked by researchers can be interpreted differently by the respondents and often have unclear scales. For example, in a study attempting to gauge the impact that violent video games may have on a child's empathy, researchers provided the following scales for children to rate their responses:
(1=No, 2=Maybe, 3=Probably, 4=Yes) for how much they agree with a given statement;
(0=Never, 1=Sometimes, 2=A lot) for how often they have encountered a real-life violent event; and
(0=Not at all upsetting, 1=Somewhat upsetting, 2=Very upsetting) for how much that event impacted them.
Each of these leaves room for interpretation. What constitutes a “maybe” versus a “probably”? How often is “a lot”? Where is the line between “somewhat upsetting” and “very upsetting”? Even more questions arise when you consider whether children in a school setting feel comfortable answering these questions honestly.
With these methodological flaws in mind, researchers at the time cautioned that the meta-analyses on violent video games and aggression that the State relied upon should be taken with a grain of salt. After all, when it comes to meta-analyses it is important to remember that “the end product will never be better than the individual studies that make up the meta-analysis.”
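A toy example, again with made-up numbers and not a model of any actual meta-analysis, helps illustrate the point: if every study in a pool shares the same bias, say from a poor proxy measure, then averaging the studies together simply reproduces that bias.

```python
# A toy sketch (hypothetical numbers): pooling studies does not fix a bias
# that every study shares. Here each "study" overstates the true effect by
# the same amount, and the meta-analytic average inherits that bias.
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.0          # assume no real effect
shared_bias = 0.15         # e.g., a poor proxy that inflates every estimate
study_estimates = true_effect + shared_bias + rng.normal(0, 0.05, size=30)

pooled = study_estimates.mean()
print(f"pooled estimate: {pooled:.2f} (true effect: {true_effect})")
```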
Finally, even if we assume the studies were methodologically flawless, they found only small effect sizes. In other words, even if violent video games did increase aggression, the effect would be quite small, hardly enough to drive violent crime. With an effect size this small, it is also possible that the observed correlation was the result of statistical noise rather than any real-world connection.
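As a rough illustration, and not a re-analysis of any actual study, simulating many small hypothetical “studies” in which game exposure and aggression are truly unrelated shows how often a small correlation appears from sampling noise alone:

```python
# A toy simulation: with ~100 participants per study and no true effect,
# correlations of |r| >= 0.10 still show up in roughly a third of studies.
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_participants = 1_000, 100   # hypothetical numbers

observed_r = []
for _ in range(n_studies):
    game_exposure = rng.normal(size=n_participants)
    aggression = rng.normal(size=n_participants)   # truly unrelated to exposure
    observed_r.append(np.corrcoef(game_exposure, aggression)[0, 1])

observed_r = np.abs(observed_r)
print(f"share of null studies showing |r| >= 0.10: {np.mean(observed_r >= 0.10):.0%}")
```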
Individually, each of these issues is problematic. Together, they undermine the case entirely. As a result, it is unsurprising that the Supreme Court ruled against the ban on violent video games, stating that “the State’s evidence is not compelling.”
While the courts rejected the evidence, it is important to note how widely accepted the theory was among the public, policymakers, and certain institutional bodies. This was not a case of fringe researchers pushing a theory; it was an extensively studied issue. Violent video games were the subject of congressional hearings and scrutiny from the US Surgeon General and the American Psychological Association (APA), despite strenuous pushback from other respected researchers at the time. This just goes to show that what we later come to view as a tech panic can feel very real in the moment.