Understanding “Science” as Not Simple, Not Settled: Meta-Analysis Edition

A powerful but often harmful relationship exists among research/science, mainstream media, and public policy.

One current example of that dynamic is the “science of reading” (SOR) movement that is driving reading legislation and policy in more than 30 states (see HERE and HERE).

Maren Aukerman, who has posted two of three planned posts on media coverage of SOR (HERE and HERE), identifies in that second post a key failure of media: Error of Insufficient Understanding 3: Spurious Claims that One Approach is Settled Science.

For example, Aukerman details with citations to high-quality research/science: “In short, there is insufficient evidence to conclude that any single approach, including the particular systematic phonics approach often elided with ‘the science of reading,’ is most effective.” And therefore, Aukerman recommends: “Be skeptical of ‘science of reading’ news that touts ‘settled science,’ especially if such claims are used to silence disagreement.”

What makes media a dangerous mechanism for translating research/science into policy is that journalists routinely oversimplify and misrepresent research/science as “settled” when, in reality, most research/science is an ongoing conversation, with data offering varying degrees of certainty about whatever questions the research/science explores.

In education, research/science seeks to identify which instruction best supports student learning—such as in the reading debate.

The other problem with media serving as a mechanism between research/science and policy is that journalists are often trapped in presentism and either perpetuate or are victims of fadism.

Despite no settled research/science supporting media coverage of the current reading “crisis,” the initial “science of reading” narrative created by Emily Hanford has now become the standard media narrative, repeated without any effort to check its validity (again, I highly recommend Aukerman’s first post).

Regretfully, education (and students, teachers, parents, and society) is regularly the victim of fadism at the expense of research/science. The list of recent edu-fads that were promoted uncritically by media only to gradually lose momentum because, frankly, they simply never were valid policies is quite long: charter schools (notably no-excuses models), value-added methods for evaluating/paying teachers, school choice, Common Core, etc.

Two fads that represent well how the misuse of “science” helps this failed cycle in education are “grit” and growth mindset. Both gained their introduction to mainstream education because media portrayed the concepts as research/science-based (even justified, as “grit” was, by the MacArthur “Genius” grant).

While schools fell all over themselves, uncritically, to embrace and implement “grit” and growth mindset, the research community gradually revealed that both concepts have some important research and ideological problems. Scholars have produced research/science that complicates claims about “grit” and growth mindset, and many critical scholars continue to call for interrogating the racist/classist groundings of both concepts.

Growth mindset has been in the news again (and discussed on social media) because two recent meta-analyses reach different conclusions; see this Twitter thread for details:

Tipton and co-authors, in fact, have published an analysis and commentary on this problem: Why Meta-Analyses of Growth Mindset and Other Interventions Should Follow Best Practices for Examining Heterogeneity.

The issue raised about meta-analyses parallels exactly the problem with media coverage of research/science: methodologies fail when they oversimplify. See this Tweet, for example, about meta-analyses:

Especially in education, when individual student needs greatly impact what is “best” for teaching and learning in any given moment, Tipton’s final Tweet cannot be over-emphasized:
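Tipton’s heterogeneity point can be sketched with a toy calculation (all numbers below are invented for illustration, not drawn from any actual study): a single weighted-average effect size can mask contexts in which an intervention helps, does nothing, or even harms—exactly the kind of variation that matters in a classroom.

```python
# Hypothetical illustration (invented numbers): a single pooled effect size
# can hide real heterogeneity across study contexts.

# Imagined effect sizes (standardized mean differences) and sample sizes
# for one reading intervention tried in different contexts.
studies = [
    {"context": "small-group tutoring", "d": 0.45, "n": 120},
    {"context": "whole-class",          "d": 0.05, "n": 300},
    {"context": "English learners",     "d": -0.20, "n": 80},
]

# Fixed-effect-style pooling: a sample-size-weighted average.
total_n = sum(s["n"] for s in studies)
pooled_d = sum(s["d"] * s["n"] for s in studies) / total_n
print(f"Pooled effect: {pooled_d:.2f}")  # → Pooled effect: 0.11

# The single "average" looks mildly positive, yet the per-context
# effects range from harmful to helpful:
for s in studies:
    print(f"{s['context']}: d = {s['d']:+.2f}")
```

A headline reporting only the pooled 0.11 would tell a teacher nothing about whether the intervention fits their students—which is why best-practice meta-analysis examines heterogeneity rather than averaging it away.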

The use of “science” in research is necessarily limiting (see HERE) when that “science” is restricted to experimental/quasi-experimental designs seeking proof of cause (does instructional approach X cause students to learn better than instructional approach Y?).

While causal conclusions and research methods that rely on representative populations and controls are the Gold Standard for high-quality research/science, this type of “science” is often less valuable for the practical day-to-day messiness of teaching and learning.

Educators are better served when research/science is used to inform practice, not to mandate one-size-fits-all practice (see HERE).

The media and journalists more often than not turn research/science into oversimplified truisms that then are used as baseball bats to beat policy advocates into submission. The conversation and nuance are sacrificed along with effective policy.

The public and policymakers are thus left with a challenge: finding ways to be critical and careful when either the media or researchers present research/science.

As Aukerman warns, if journalists or researchers start down the “simple, settled” path, then they are likely not credible (or they have an agenda) because the real story is far more complicated.

See Also

The misdirection of public policy: comparing and combining standardised effect sizes, Adrian Simpson