Part 3: Flawed Evidence in the Social Media Debate
Correlational studies alone can't dictate policy impacting fundamental freedoms
In our last post, we examined the reasons why the Supreme Court found the State’s evidence lacking in Brown v. Entertainment Merchants Association. This week, we will examine the evidence in the social media debate, keeping in mind the qualities the Court found problematic in establishing a connection between video games and violence.
As seen with the studies concerning violent video games, the body of research concerning social media and its impact on the mental health of minors is largely correlational. As established in Brown, this is not enough to prove to the courts that social media is the cause of the increased rates of depression. Researchers themselves often warn against using correlational studies to draw causal connections. Correlational studies are a good starting point for researchers, but they alone are not enough to support a policy that would infringe upon fundamental freedoms such as speech.
Further, the majority of highly cited correlational research tends to rely heavily upon self-reported data to measure social media use. This reporting method has been shown to be systematically biased and unreliable. These findings raise concerns about the validity of research that relies solely upon self-reported data. In other words, in addition to failing to establish a causal relationship, studies relying upon self-reported data may not even be reliable evidence of a correlation.
A more accurate way to collect screen time data would be to use data logs: researchers could gain objective data through screen time apps or by referencing server data. This method is limited, however, as it is difficult to implement in large-scale studies. In such cases, it may be more appropriate to use time use diaries (TUDs), in which respondents track their social media use throughout a given period in small increments of time.
Beyond the measurement of time usage, the use of self-report data to measure well-being can be problematic. The questions are often subjective and may reflect the respondents’ current mood more than their overall state of mental well-being. Again, this raises the question of whether the studies are using an appropriate proxy and whether the results can be applied to the real world.
Across studies, researchers often use varying definitions for and measurements of mental health. Depending on the study, researchers may be attempting to measure depression, anxiety, or even well-being generally. Even when limited to one category, such as depression, what qualifies according to a study can vary dramatically. In one case, symptoms that would typically qualify as a minor depressive episode were categorized as “severe depression.” Others used more accepted depression scales such as the Center for Epidemiological Studies Depression Scale for Children (CES-DC). Again, due to varying definitions and measurements, researchers are limited in the insights that they can draw when comparing the literature as a whole.
Even if we assume methodologically flawless research, the effect sizes found by researchers have been small. This suggests that, even if social media is contributing to a rise in mental health issues among children, it would be responsible for only a small portion of that rise. Effect sizes this small could even be “methodological noise” rather than a true effect.
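To put “small” in perspective, a standard statistical rule of thumb is that a correlation coefficient r accounts for r² of the variance in the outcome. The sketch below uses a hypothetical value of r = 0.1 — a magnitude often described as small in this literature, not a figure drawn from any particular study:

```python
# Illustrative arithmetic only: a correlation r explains r^2 of the
# variance in the outcome variable. The r values below are hypothetical.
def variance_explained(r: float) -> float:
    """Share of outcome variance accounted for by a correlation of r."""
    return r ** 2

small_r = 0.1      # a "small" correlation, as a stand-in
moderate_r = 0.5   # a "moderate" correlation, for contrast

print(f"r = {small_r} explains {variance_explained(small_r):.0%} of variance")
print(f"r = {moderate_r} explains {variance_explained(moderate_r):.0%} of variance")
```

In other words, a small correlation of this kind would leave roughly 99% of the variation in mental health outcomes unexplained — which is why critics argue such findings cannot carry the weight of policy conclusions on their own.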
There are researchers who hold that the evidence does establish a causal connection between social media and poor mental health among adolescents. Jonathan Haidt, a social psychologist and respected voice in his field, has pointed to both longitudinal and experimental studies that he argues help establish causation. He has compiled a number of these studies in a Google Doc to promote transparency. Aaron Brown, a professor of statistics at NYU, recently reviewed this document. He noted that of the twenty-nine longitudinal studies listed as supporting a connection between social media and poor mental health, only three avoided major errors. Unfortunately, even those three “relied on self-reports of social media usage and indirect measures of depression.”
Brown noted that the experimental studies cited by Haidt in the document faced similar issues. In the case of No More FOMO: Limiting Social Media Decreases Loneliness and Depression he noted poor design and evidence of coding errors. Another, Taking a One-Week Break from Social Media Improves Well-Being, Depression, and Anxiety: A Randomized Controlled Trial, asked but never confirmed that its subjects stopped using social media. The remaining studies on the list did not directly study social media or depression.
Christopher Ferguson, a professor of psychology at Stetson University, has noted a larger problem with experiments in this field. Studies concerning social media and mental health tend to share a basic design: researchers have one group of participants limit social media use for an assigned period of time while a control group does not, then compare the change in depression, anxiety, and so on between the two groups. With this type of design, participants are often able to guess the researchers’ hypothesis; as a result, they may subconsciously change their behavior to fit it. This heightens the chance that researchers will find false positives.
There is, however, one major difference between the case of violent video games and that of social media. While communities fretted over the rise in violence and crime that violent video games could bring by turning children into degenerate criminals, violent crime among youth actually decreased. In the case of the social media hypothesis, rates of suicide and poor mental health have been rising. The growing problem with teen mental health needs to be researched and addressed. But in order to address the problem, we must understand what is actually causing it rather than falling into a tech panic. If we focus our efforts in the wrong area, that is time lost that could be spent finding a true solution.