Category Archives: education reform

Republish: Phonics isn’t working – for children’s reading to improve, they need to learn to love stories, The Conversation

Willem Hollmann, Lancaster University; Cathie Wallace, UCL, and Gee Macrory, Manchester Metropolitan University

Government data has shown that in 2022-23, 30% of five-year-olds in England were not meeting the expected standard for literacy at the end of their reception year at school. Literacy was the area of learning in which the lowest proportion of children reached the target level.

Now, recent research from think tank Pro Bono Economics has found that this lack of early reading skills could result in an £830 million cost to the economy for each year group over their lifetimes.

A 2023 report from the National Literacy Trust found that less than half of children aged eight to 18 say they enjoy reading. Enjoyment is at its lowest level since 2005. Part of learning to read should be learning to love books – and enjoyment in reading is linked to higher achievement. If children don’t like reading, how we teach it to them isn’t working.

Our view, as academic linguists, is that part of the reason why so many children do not experience joy in reading is the excessive focus on synthetic phonics in early education.

Synthetic phonics teaches reading by guiding children to decode words by linking letters (graphemes) to their corresponding sounds (phonemes). For instance, children are taught that the letter “g” corresponds to the initial sound in “get”.

Synthetic phonics is often referred to in everyday language simply as “phonics”. That is useful shorthand but technically speaking “phonics” is a broader term, which refers to all methods of teaching reading that emphasise relations between letters and sounds. Phonics, in this broader sense, also includes analytic phonics, for example. But in analytic phonics whole words are analysed, with the pronunciation of individual letters and groups of letters deduced from that – not the other way around.

Synthetic phonics has always played a role in teaching children how to read, alongside other methods. However, following recommendations by former headteacher and Ofsted Chief Inspector Sir Jim Rose in 2006, it rapidly became the main approach in England, more so than in other Anglophone nations.

The government has pointed to England’s high ranking in the comparative Progress in International Reading Literacy Study (PIRLS) as evidence that phonics is working. Unfortunately, other research does not support this narrative around synthetic phonics and literacy.

Another international comparison of student achievement, PISA (Programme for International Student Assessment), looks at 15-year-olds. Here, UK students’ performance in reading was at its highest in 2000, before the heavy emphasis on phonics. Children in the Republic of Ireland and Canada, where synthetic phonics isn’t as central, outperform their British peers in reading.

And in general, England’s PIRLS scores – as well as other data – show that achievement in reading has stayed fairly stable since 2001, rather than showing the improvement that might be expected if phonics was indeed so effective.

Processing language

In synthetic phonics, children do not focus on texts or even paragraphs or sentences. Instead, they process language word by word, letter by letter. An extreme but real example of this is when they are asked to read word lists that even include nonsense words, such as “stroft” or “quoop”. The goal here isn’t to expand vocabulary but to practice blending letter sounds, turning each word into a challenging task.

Children are also given “decodable books”, intended to help them practice a few specific sounds. A genuine example of a story designed to make children practice just eight phonemes starts as follows: “Tim taps it. Sam sits in. Tim nips in. Sam tips it.” Many of these artificial sentences sound unlike anything children would ever hear or read in a real-life context.

To be fair, the images in this decodable book make it clear that Tim taps the door of a house, that Sam sits inside that house, and so on. But it’s difficult to imagine that children’s attention will be captured by these stories – it certainly wasn’t in the case of one of us, Willem’s, own children.

This is not a good start if we wish to encourage kids to read for pleasure, as the National Curriculum rightly suggests we should.

Educational researchers have argued that the government’s focus on synthetic phonics is not warranted by the research literature. And the relation between sounds and spelling in English is devilishly difficult compared to many other languages, such as Spanish or Polish. For instance, “g” sounds very different in “gel” than it does in “get”. This makes exceedingly high reliance on synthetic phonics a poor decision to begin with.

Broader comprehension

There are alternatives to England’s focus on synthetic phonics. In the Republic of Ireland and Canada, for instance, phonics is integrated into an approach that emphasises reading whole texts and includes strategies other than just synthetic phonics. Children are taught to consider the wider context to look for meaning and identify words.

Take the sentence “Sam sits in his house”. A child may not have learnt the sound corresponding to “ou” and not been taught that an “e” at the end of a word isn’t always pronounced. But if they have genuinely understood the preceding sentences in the story, they have a good chance of figuring out that the word is “house” knowing that Tim has just knocked on a front door and that Sam must sit inside something.

And we know from a study that has examined the findings of many research papers that a phonics-led approach is less effective than one that focuses on comprehension more broadly, by getting children to engage with the text and images in different ways.

We believe the government’s plan for literacy isn’t working. Focusing on stories that children like to read would be a better place to start.

Willem Hollmann, Professor of Linguistics, Lancaster University; Cathie Wallace, Emeritus Professor, Institute of Education, UCL, and Gee Macrory, Visiting Scholar in Education, Manchester Metropolitan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Another Cautionary Tale of Education Reform: “Improving teaching quality to compensate for socio-economic disadvantages: A study of research dissemination across secondary schools in England”

Linked in her article for The Conversation is Sally Riordan’s “Improving teaching quality to compensate for socio-economic disadvantages: A study of research dissemination across secondary schools in England.”

This analysis is another powerful cautionary tale about education reform, notably the “science of reading” (SOR) movement sweeping across the US, mostly unchecked.

As I do a close reading of Riordan’s study, you should also note that the foundational failure of the SOR movement driving new and reformed reading legislation in states is that the main claims of the movement are dramatically oversimplified or misleading. I strongly recommend reviewing how these SOR claims are contradicted by a full examination of the research and science currently available on reading acquisition and teaching: Fact-checking the Science of Reading, Rob Tierney and P David Pearson.

This close reading is intended to inform directly how and why SOR-based reading legislation is not only misguided but likely causing harm, notably as Riordan addresses, to the most vulnerable populations of students that education reform is often targeting.

First, here is an overview of Riordan’s study:


Similar to public, political, and educator beliefs in the US, “QFT [quality first teaching] is a commonly held belief amongst school staff” in the UK, Riordan found. In other words, despite evidence that student achievement is overwhelmingly linked to out-of-school factors, teacher quality and instructional practices are often the primary if not exclusive levers of education reform designed to close so-called achievement gaps due to economic inequities.

This belief, however, comes with many problems:


Riordan’s analysis is incredibly important in terms of how the SOR movement and its overly simplistic messaging (see Tierney and Pearson) have been translated into reductive legislation, the adoption of scripted curriculum, and bans or mandates on practices that are not, in fact, supported by science or research.

Riordan identifies bureaucracy and simplistic messaging as the sources of implementation failure:


Nonetheless, “[t]his explicit demand [belief in QFT] is an example of the growing pressure on education practitioners to ensure their practices are supported by evidence (of many kinds),” Riordan explains, adding, “School staff believe that high-quality teaching reduces SED attainment gaps and that their belief is backed by research evidence.”

The research/science-to-instruction dynamic is often characterized by narrow citations or cherry-picking evidence: “Because school leaders cited the same references to research evidence to justify very different policies and practices, I conducted a review of the literature that led to these citations.”

One key problem is that while the evidence base may be narrow and “[a]lthough there is agreement that high-quality teaching is important to tackle SED, principles of QFT are nevertheless being implemented in a myriad of ways across secondary schools in England.”

In the US, many scholars have noted that the SOR movement uses “science” rhetoric but depends on anecdotes for evidence; and, in the UK:

Although many school staff (and particularly school leaders) are aware of the EEF resources and believe that there is evidence supporting principles of QFT, no interviewee described this evidence in any further detail. When asked why QFT works, staff reasoned intuitively. The line of reasoning that can be reconstructed from their replies is independent of the research evidence.

…This intuitive argument, reasoned by school staff, is limited but I do not challenge its validity. The main point here is that this line of reasoning does not reflect the research evidence (which is described in detail below ‘The weakness of the evidence for QFT’). It is not the strength of the evidence base that has convinced school leaders to implement QFT practices. This highlights the importance of the psychological aspects of bringing research evidence to bear on practice. It also raises the possibility that a message was disseminated that was already widely believed. I turn to this bureaucratic concern next.

Improving teaching quality to compensate for socio-economic disadvantages: A study of research dissemination across secondary schools in England

That intuitive urge, again, however, is linked to limited evidence: “Just five studies are being relied upon to disseminate the message that high-quality teaching is the most effective way to reduce SED attainment gaps.”

What may also be driving a misguided reform paradigm is convenience, or a lack of political imagination:


Evidence- or science-based reform, then, tends to be reduced to a “sham” (consider the misleading “miracle” rhetoric around Mississippi, also addressed in Tierney and Pearson):


The unintended consequence is a “misdirection of energy and time of school staff” driven by “pressure to conform to the policies promoted.”

Key to recognize is that Riordan identifies QFT reforms as not only failing to close gaps but also causing harm: some “attempts to improve the quality of teaching are contributing to a large attainment gap,” including: “It is by turning to a more refined measure of SED that we find evidence that the school’s innovations in teaching and learning over the last five years have benefitted its most affluent students most of all.”

Riordan’s conclusion is important and damning:

It has reviewed the wider picture in which school leaders are choosing to implement (or at least justifying the implementation of) particular practices based on a generic message instead of the specific research supporting those practices. The problem here is that the mechanisms operating to connect research with practice are too crude to acknowledge the richness and messiness of social science research. The message, ‘high-quality teaching is the most effective way to support students facing SED’, is too simple to be meaningful. 

Improving teaching quality to compensate for socio-economic disadvantages: A study of research dissemination across secondary schools in England

For the US, education reform broadly and the SOR movement can also be described as grounded in messages that are “too simple to be meaningful” and thus too simple to be effective and even likely to be harmful.


Republish: Schools are using research to try to improve children’s learning – but it’s not working (The Conversation)

[Note: Follow links to research cited and note the recommended links after the republished article.]


Sally Riordan, UCL

Senior Research Fellow in the Centre for Teachers and Teaching Research, UCL

2 April 2024


Evidence is obviously a good thing. We take it for granted that evidence from research can help solve the post-lockdown crises in education – from how to keep teachers in the profession to how to improve behaviour in schools, get children back into school and protect the mental health of a generation.

But my research and that of others shows that incorporating strategies that have evidence backing them into teaching doesn’t always yield the results we want.

The Department for Education encourages school leadership teams to cite evidence from research studies when deciding how to spend school funding. Teachers are more frequently required to conduct their own research as part of their professional training than they were a decade ago. Independent consultancies have sprung up to support schools to bring evidence-based methods into their teaching.

This push for evidence to back up teaching methods has become particularly strong in the past ten years. The movement has been driven by the Education Endowment Foundation (EEF), a charity set up in 2011 with funding from the Conservative-Liberal Democrat coalition government to provide schools with information about which teaching methods and other approaches to education actually work.

The EEF funds randomised controlled trials – large-scale studies in which students are randomly assigned to an educational initiative or not, and comparisons are then made to see which students perform better. For instance, several of these studies have been carried out in which some children received one-on-one reading sessions with a trained classroom assistant, and their reading progress was compared to children who had not. The cost of one of these trials was around £500,000 over the course of a year.

Trials such as this in education were lobbied for by Ben Goldacre, a doctor and data scientist who wrote a report in 2013 on behalf of the Department for Education. Goldacre suggested that education should follow the lead of medicine in the use of evidence.

Using evidence

In 2023, however, researchers at the University of Warwick pointed out something that should have been obvious for some time but has been very much overlooked – that following the evidence is not resulting in the progress we might expect.

Reading is the most heavily supported area of the EEF’s research, accounting for more than 40% of projects. Most schools have implemented reading programmes with significant amounts of evidence behind them. But, despite this, reading abilities have not changed much in the UK for decades.

This flatlining of test scores is a global phenomenon. If reading programmes worked as the evidence says they do, reading abilities should be better.

And the evidence is coming back with unexpected results. A series of randomised controlled trials, including one looking at how to improve literacy through evidence, have suggested that schools that use methods based on research are not performing better than schools that do not.

In fact, research by a team at Sheffield Hallam University has demonstrated that, on average, these kinds of education initiatives have very little to no impact.

My work has shown that when the findings of different research studies are brought together and synthesised, teachers may end up implementing these findings in contradictory ways. Research messages are frequently too vague to be effective because the skills and expertise of teaching are difficult to transfer.

It is also becoming apparent that the gains in education are usually very small, perhaps because learning is the sum total of trillions of interactions. It is possible that the research trials we really need in education would be so vast that they are currently too impractical to do.

It seems that evidence is much harder to tame and to apply sensibly in education than elsewhere. In my view, it was inevitable and necessary that educators had to follow medicine in our search for answers. But we now need to think harder about the peculiarities of how evidence works in education.

Right now, we don’t have enough evidence to be confident that evidence should always be our first port of call.

Sally Riordan, Senior Research Fellow in the Centre for Teachers and Teaching Research, UCL

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Recommended

Close Reading: Evidence, schmevidence: the abuse of the word “evidence” in policy discourse about education, Gary Thomas

Recommended: Fact-checking the Science of Reading, Rob Tierney and P David Pearson

Big Lies of Education: International Test Rankings and Economic Competitiveness

“Human development is an important component of determining a nation’s productivity and is measured in employee skills identified by employers as critical for success in the modern global economy,” claims Thomas A. Hemphill, adding:

The United States is obviously not getting a sufficient return on investment in elementary and secondary education, as it has mediocre scores in mathematics literacy and declining scores for science literacy for 15-year-old students surveyed in 2022. The only significant improvement for 15-year-olds is in reading, where the United States finally entered the top 10 in 2022.

Commentary: We must improve students’ math, science skills to boost US competitiveness

Hemphill reaches an unsurprising conclusion:

If these educational trends continue, the United States will not have an adequate indigenous workforce of scientists, engineers and technologists equipped to maintain scientific and technological leadership and instead will become perpetually reliant on scientifically and technologically skilled immigrants. We must demand that elementary and secondary education systems reorient efforts to significantly improve mathematical and scientific teaching expectations in the classroom.

Commentary: We must improve students’ math, science skills to boost US competitiveness

However, for decades, evidence has shown that there is no causal link between international rankings of student test scores and national economic competitiveness.

This Big Lie is purely rhetorical and relies on throwing statistical comparisons at the public while drawing hasty and unsupported causal claims from those numbers.

If you really care about the claim, see Test Scores and Economic Growth by Gerald Bracey.

Bracey offers this from researchers on the relationship between international education rankings and economic competitiveness:

Such countries [highest achieving] “do not experience substantially greater economic growth than countries that are merely average in terms of achievement.”

The researchers then lay out an interpretation of their findings that differs from the causal interpretation one usually hears:

“We venture, here, the interpretation that much of the achievement ‘effect’ is not really causal in character. It may be, rather, that nation-states with strong prodevelopment policies, and with regimes powerful enough to enforce these, produce both more economic growth and more disciplined student-achievement levels in fields (e.g., science and mathematics) perceived to be especially development related. This idea would explain the status of the Asian Tigers whose regimes have been much focused on producing both economic growth and achievement-oriented students in math and science.”

Test Scores and Economic Growth

Bracey quotes further from that research:

“From our study, the main conclusion is that the relationship between achievement in science and mathematics in schoolchildren and national economic growth is both time and case sensitive. Moreover, the relationship largely reflects the gap between the bottom third of the nations and the rest; the middle of the pack does not much differ from the rest. . . . Much of the obsession with the achievement ‘horse race’ proceeds as if beating the Asian Tigers in mathematics and science education is necessary for the economic well-being of other developed countries. Our analysis offers little support for this obsession. . . .

“Achievement indicators do not capture the extent to which schooling promotes initiative, creativity, entrepreneurship, and other strengths not sufficiently curricularized to warrant cross-national data collection and analysis. Unfortunately, the policy discourse that often follows from international achievement races involves exaggerated causal claims frequently stressing educational ‘silver bullets’ for economic woes. Our analyses do not offer definitive answers, but they raise important questions about the validity of these claims. In an era that celebrates evidence-based policy formation, it behooves us to carefully weigh the evidence, rather than use it simply as a rhetorical weapon.”

Test Scores and Economic Growth

A key point to note here is Bracey is writing in 2007, and the OpEd above is March 2024. The Big Lie about international education rankings and economic competitiveness is both a lie and a lie that will not die.

I strongly recommend Tom Loveless exposing a similar problem with misrepresenting and overstating the consequences of NAEP data: Literacy and NAEP Proficient.

Bracey offers a brief but better way to understand test data and economic competitiveness: “education is critical, but among the developed nations differences in test scores are trivial.”

Instead of another Big Lie, the US would be better served if we tried new and evidence-based (not ideological) ways to reform our schools and our social/economic structures.


Education Journalism and Education Reform as Industry

[Header Photo by Patrick Hendry on Unsplash]

Regardless of your level of optimism, there simply is no other conclusion to draw from over forty years of educational crisis and reform: education reform has almost nothing to do with improving education for students; it has become an industry.

And one of the most powerful engines driving the crisis/reform industry in education is education journalism.

Education journalists write only two kinds of stories about education: education as failure (Crisis!) and education miracles.

Interestingly, both are provably false narratives.

Yet, these stories endure because “[v]iewing education as being in more or less permanent crisis” (Edling, 2015) creates market and political profit for media, private education corporations, and politicians.

As a simple fact of logic, consider that for at least the last century, on every standardized test in every content area, students in poverty as a group have scored significantly lower than more affluent students.

Over that century, the teachers, instructional practices, programs, and curriculums/standards have changed dozens of times and have never been coherent at any one point across the US.

The only constant, of course, has been the economic inequity of the populations of students being taught and tested.

Yet, what have the media and political leaders focused on almost exclusively in the forty-plus years of intense accountability reform (over what can be called at least half a dozen cycles in those decades)?

Instruction, curriculum/standards, and programs.

Why?

Teacher training churn, standardized test churn, and program churn are industries, and there is profit in constant churn.

Never has it been more clear that education reform is industry: “The administrations in charge,” writes Gilles Deleuze in Postscript on the Societies of Control, “never cease announcing supposedly necessary reforms: to reform schools, to reform industries, hospitals, the armed forces, prisons” (p. 4).

And never has it been more clear that education journalism is deeply invested in that churn, in manufacturing perpetual crisis:

Just within the past twenty years, Gates+ money has incubated several new education-only media outlets, such as Chalkbeat, EdReports, EdSurge, Education Next, Ed Post, FutureEd, and The 74. Gates+ money has also substantially boosted the efforts of preexisting education-only media organizations, such as EdSource, Education Week, the Education Writers Association, the Thomas B. Fordham Institute, and the Hechinger Report. All told, this accounts for almost all large-audience, US, K–12-education-only print media outlets, other than those tied to the traditional public education establishment.

Have the Gates Foundation and Its Allies Purchased US Education Journalism?

I suspect no one embodies education reform as industry as well as Bill Gates, education reform hobbyist.

And, for example, nothing better exemplifies the true commitment of education journalism than the Education Writers Association and the current darling of education misinformation, Emily Hanford:

Manufacture a crisis with a melodramatic story, and then steel the troops for the inevitable outcome so that everyone can circle back to yet another crisis and more reform.

There is the sound of profit in the background as journalists as “watchdogs” announce more and more failure and crisis among schools, teachers, and students.

The real literacy crisis is that too many people cannot read the writing on the wall. Or more likely, too many people are blinded by the profit of education reform as industry to even see the writing on the wall.


Thought Experiment 1

If you wonder if or how money matters, consider that the Department of Education Reform at the University of Arkansas is funded by Walton money. The Waltons are significant school choice and charter school proponents. The “research” coming out of the DER is overwhelmingly positive about school choice and charter schools.

Coincidence?

Thought Experiment 2

England passed reading reform mandating systematic phonics for all students in 2006 (although their media and political leaders persist in crying “reading crisis”).

And thus: Ruth Miskin Literacy makes nearly £10 million profit in four years – taking cash pile for its sole shareholder to approaching £15 million


The Reading League: Science or Grift?

Along with Decoding Dyslexia, The Reading League is likely the largest advocacy group for the “science of reading” (SOR). As I have detailed, however, there is a serious problem with the “science” in their advocacy.

Let me remind you of the standards for “science” that The Reading League has proposed:

Now, consider the following:

The problem lies in promoting decodable texts under the guise of SOR when here is the evidence on decodable texts [1]:

The Science of Reading: A Literature Review

The Reading League demonstrates that, for the most part, the SOR movement is less about science and more of a grift.

SOR advocates practice “science for thee, but not for me.”

This is yet another education reform fad that is certain to do more harm than good—except for the grifters.


[1] The Case Against Decodable Texts, Jeff McQuillan, Language and Language Teaching, Issue No. 21, January 2022 

See Also

Where’s Evidence from The Reading League’s Corporate Sponsors?

Here are the sponsors that promoted The Reading League’s recent conference. Some simply sell decoding books. If you are aware of peer-reviewed studies that I may have overlooked for any of these programs or assessments, please share.

Jobs’ Reading Scam


Where’s the Science?

For those of us of a certain age, well before the era of trending on social media, a simple ad for Wendy’s prompted the catchphrase “Where’s the beef?”

The ad made Clara Peller a star in her 80s, and it certainly helped create a national distinction among fast-food hamburger restaurants in the US.

On a much more serious note, we now find ourselves at a moment in reading reform in the US—when media stories have compelled public beliefs and prompted political legislation—that we must begin to ask, “Where’s the science?”

As early as 2020, literacy scholars identified the bait-and-switch approach being used in the “science of reading” (SOR) movement—demanding science while relying on anecdotes:

Hoffman, J.V., Hikida, M., & Sailors, M. (2020). Contesting science that silences: Amplifying equity, agency, and design research in literacy teacher preparation. Reading Research Quarterly, 55(S1), S255–S266. https://doi.org/10.1002/rrq.353

Here are two recent posts on Twitter/X that provide an entry point into that bait-and-switch coming true:

Gilson asks a key and foundational question about the basis of the SOR movement—the unsupported claims of a reading crisis caused by balanced literacy and a few identified reading programs (primarily by Lucy Calkins and Fountas and Pinnell).

To be blunt, there is no scientific research showing a causal relationship between any reading theory or specific programs and a reading crisis. Notably, there simply isn’t any evidence that reading achievement is coherent enough or that reading programs are consistently used across the entire nation in ways that even make that claim possibly true.

And then, more insidious perhaps, SOR advocates not only bait-and-switch science for anecdotes; their claims of “science” or “research” are often linked to journalism, not cited at all, built on cherry-picked evidence, or, as Flowers calls out, based on misrepresentations of evidence.

The resulting legislation, then, is forcing successful schools to change programs and practices by sheer fiat, such as in Connecticut, or imposing bans and mandates that are wildly arbitrary.

Note the practices from a literature review of the science of reading below; please note that “not scientific” can mean either that scientific research has shown the practice to be ineffective or that no scientific research yet exists:

Not only must we ask “Where’s the science?” We must also ask why three-cueing is being banned in the same states mandating O-G phonics (multisensory approaches), decodable texts, and LETRS training, when all of these are technically not scientific.

The answer, of course, is that the SOR movement is mostly rhetorical, ideological, and commercial.

Bans and mandates are about serving a narrow set of reading ideologies and lining the pockets of certain education markets.

Teachers, parents, and even students are starting to acknowledge that the SOR tsunami is causing great harm to teaching and learning reading.

This is late, but we simply all must start demanding that SOR advocates practice what they preach. When they make their condescending claims about teachers of reading, teacher educators, student reading achievement, and reading programs, we absolutely must ask, Where’s the science?


“Science of” Movement Repeating Mistakes of Education Reform Cycles

Many years ago, I was unusually excited to hear the keynote speaker at the annual SCCTE conference on Kiawah Island, SC—Harvey Smokey Daniels.

For many years in my methods courses for secondary ELA certification candidates and practicing teachers, I used Best Practice by Steven Zemelman, Daniels, and Arthur Hyde.

Daniels surprised the attendees by noting that he was moving away from the term “best practice” because it had become ubiquitous and thus meaningless. He warned that many, if not most, books being published with “best practice” in the title were anything except best practice.

The term had moved from careful scholarship (the book Daniels co-authored is a wonderful and cautious attempt to translate a wide body of research into classroom practice among the major disciplines) to branding.

And thus, as Daniels lamented, “best practice” was lost in the abyss that is educational marketing.

Much more quickly and recently, Common Core experienced a meteoric rise and a sudden crash and burn. In the meantime, classroom materials were quickly labeled "Common Core" even as the movement was hastily erased, before some states had even implemented the standards or the national tests (my home state of SC did exactly that, a knee-jerk Republican maneuver to reject what legislators believed was an Obama initiative).

Spurred in early 2018 by the rise of the "science of reading," the "science of" movement appears to be in full swing with the addition of the "science of learning," the "science of writing," and the "science of math"—mostly following frantic claims of crisis based on test scores (usually NAEP data).

You likely won't have to wait long, because the soar/collapse cycle is already in front of us. Alabama was first celebrated, like Mississippi, as one of the "soaring" Deep South states adopting the "science of reading," and then came this: Alabama reading scores drop in latest state test results. How many students can read?

While both media assessments lack credibility, the rhetoric itself heralds yet another education reform movement destined for the garbage bin. We seem unable to learn that the crisis/miracle reform cycle never works, because the problems are always misrepresented and the solutions are always mandates that will fail.

Let me note here that what made the original best practice approach a wonderful methods text is that the instructional practices were recommended as “increase” or “decrease”—not mandate or ban:

This helps show how the "science of reading" movement—grounded in false media stories and political mandates—is repeating the mistakes of the dozens of reform movements that preceded this "science of" nonsense.

The essential mistakes are framing “science of” as a mandate/ban or science/not science dichotomy.

Legislation across most states is now banning specific reading practices and programs while mandating other practices and programs.

While legislation should never ban or mandate specific practices in education, “science of reading” (SOR) legislation also fails by cherry-picking what counts as science/not science.

To be blunt, SOR legislation is driven by ideology and marketing, not science; the mandate/ban line is drawn by cherry-picking.

For example, many states are simultaneously banning three-cueing as not scientific while mandating or funding decodable texts and multisensory approaches such as O-G phonics.

The Science of Reading: A Literature Review (prepared for Connecticut), however, shows that all of those practices lack scientific evidence:

Unlike the careful work done on best practice by Daniels and others, the “science of” movement suffers from ham-fisted mandates and essential failures to understand what “science” means for classroom practice.

The Reading League, for example, limits what counts as "scientific" to experimental/quasi-experimental research published in peer-reviewed journals. Not only is this a ridiculously narrow definition of evidence and research, it also poses several problems.

First, as the literature review above notes, the science/not science dichotomy lumps together practices and programs that scientific research has shown to be ineffective and those that do not yet have any, or enough, scientific evidence (such as the program LETRS).

Next, and more importantly, many people fundamentally misunderstand what the science/not science distinction means for classroom practice.

If we use medicine as an analogy, once a medication is found to be effective, that means that medicine X under Y conditions will produce Z outcomes for most people (a generalization).

What is often ignored is that there are at least two outlier groups in that claim; one group will not experience the positive outcome, and one group can experience negative outcomes.

As a teen, I fell into that latter group with both Tylenol (a reaction that can be life threatening) and penicillin.

If we insist on using the science/not science distinction, then, for classroom practices we must not translate that into mandate/ban.

The “science of” movement could be effective if we did two things: (a) expand the use of research to include more than narrowly “scientific” evidence, and (b) replace mandate/ban with implement with confidence/implement with caution.

Let me end by briefly considering what implement with confidence/implement with caution should accomplish.

If we use research/evidence/science to drive implement with confidence, that means those practices and programs can be used to plan broadly (year-long and unit plans prepared before teaching and before having evidence from students to guide instruction).

Those practices and programs, like medication, can be trusted to work for most students under defined conditions—recognizing that there will be outliers and conditions can change, thus changing outcomes.

Practices and programs implemented with caution augment those initial plans and can serve the outliers, as well as the moments when conditions change.

Here is where the "science of" movement is failing most significantly: this process must honor the autonomy of the teacher to serve the individual needs of students.

As the swing in rhetoric about Alabama reveals (see also the realities about Florida and Mississippi), the “science of” movement is doomed to fail, doomed by repeating the mistakes of reform cycles we have blindly followed for over four decades.


Close Reading: Evidence, schmevidence: the abuse of the word “evidence” in policy discourse about education, Gary Thomas

[Header Photo by thom masat on Unsplash]

Before the close reading below, let me offer several examples for context concerning how media have weaponized “science” resulting in misguided and even harmful reading legislation.

First, here is an example of a journalist posting an article by a journalist praising a journalist. What is missing? Actual research, evidence, or science.

Gottlieb’s article, oddly, repeats three times at the end that he is a journalist, but in the piece, he seems most concerned about advocating for Hanford:

As brilliantly illuminated by education journalist Emily Hanford’s articles over the past several years, and her 2023 “Sold a Story” podcast, the education establishment in this country — which includes textbook and curriculum publishers, schools of education and school districts — has been guilty of educational malpractice for decades, using now-discredited Whole Language methods for teaching reading.

Too little progress in teaching Colorado kids to read

See this for a critical unpacking of Hanford’s false claims repeated by Gottlieb: How Media Misinformation Became “Holy Text”: The Anatomy of the SOR Movement.

Gottlieb refers to a report and data, but offers no links to any science or research to support any of his claims, again primarily supported by Hanford’s “brilliant” podcast.

Next, Hanford’s There Is a Right Way to Teach Reading, and Mississippi Knows It demonstrates again the lack of science or research and the self-referential nature of media’s false claims about reading and the “science of reading.”

Note that the subhead ("The state's reliance on cognitive science explains why"), written by editors, not the journalist, is directly contradicted by Hanford herself, even as the article implies the opposite of what she acknowledges:

What’s up in Mississippi? There’s no way to know for sure what causes increases in test scores, but Mississippi has been doing something notable: making sure all of its teachers understand the science of reading.

There Is a Right Way to Teach Reading, and Mississippi Knows It

When Hanford makes huge claims about teachers being unprepared to teach reading ("But a lot of teachers don't know this science"), the link provided circles back to her own journalism, not research, not science.

The consequence of this media cycle of using "science" to give stories credibility while omitting the actual science is reading policy grounded in misinformation yet given the veneer of "science":

Legislation that would require Michigan schools to use a reading curriculum and interventions for students with dyslexia that are backed by science has taken a different shape to satisfy school administrators who questioned the timeline in the bills.

Michigan eyes reforms to teach those with dyslexia. Critics say more is needed

And with the rise in reading legislation labeled as “scientific,” the education marketplace has eagerly jumped on board (“story,” “data,” “science”):

And thus, let’s do a close reading:

Gary Thomas (2023) Evidence, schmevidence: the abuse of the word "evidence" in policy discourse about education, Educational Review, 75:7, 1297-1312, DOI: 10.1080/00131911.2022.2028735 (https://tandfonline.com/doi/full/10.1080/00131911.2022.2028735)

Thomas explains the essay purpose as follows:

I focus in this essay on the way that policymakers in education may promote policy through the use of words and terms used by academics and by the public about education topics – words and terms such as “evidence”, “what works”, “evidence-based policy” and “gold standard”. In particular, I examine ways in which vernacular and specialist meanings of “evidence” and “evidence-based” may become hybridised; ways in which technical terms may be appropriated by politicians and their advisers for public consumption, and, in the process, become degraded and corrupted in the service of their own policy agendas.

One issue with the use of "evidence" (and synonyms) is that policymakers are apt to resort to "'cherry-picking, obfuscation or manipulation.'"

Terms such as “evidence” (and “science”) are designed to create “the ‘almost magical power’ that certain words acquire to ‘… make people see and believe.'”

Thomas’s analysis found:

In not one of the 100 uses was “evidence” used prefatory to an actual itemisation of data in support of a proposition, and in all cases in the non-specific category, “evidence” was used with verbs – e.g. “there is evidence”, “England possesses evidence” – which simultaneously conferred authority via the supposed status of “evidence” at the same time as acting as a proxy for detailed enumeration of specific data. The authority of the non-specific “evidence” was amplified with many qualifications of the word, which, without detail of the data for which “evidence” was a proxy, appeared merely to add rhetorical weight rather than empirical support. These qualifiers included words/terms such as incriminating, overwhelming, strong, weak, little, hard, fresh, preliminary, sufficient, inadmissible, no, verifiable, hearsay, prima facie, disturbing, concrete.

As Thomas walks the reader through a few examples, he highlights: “’Evidence’ is here prefaced with ‘scientific’, seemingly to elevate its status in the absence of specificity – a strategy frequently employed in general discourse, as the analysis of the corpora revealed.”

“Evidence” (like “science” and “research”) is commonly used in place of citing actual evidence throughout media and political discourse. [As my examples above show, US media often link to other media when terms such as “science” and “research” are used.]

“Evidence” is weaponized, then, as Thomas explains:

All the examples given here reveal the fashioning of semiotics, the creation of meaning, and the dissemination of messages to non-specialist audiences in an outlet that, while widely read, offers no obvious route for scholarly interrogation or critique – at least, within a timeframe that might allow meaningful challenge. The putative “evidenced reality” proves on examination not to exist and the attempt is – in the world of retail politics – to craft an illusion of “evidence” in support of particular political agendas, employing devices such as the “negative other-representation” to attempt to augment the writer’s position.

And thus:

“Evidence”, in the pieces examined here, is used often with only a superficial allusion to any kind of research, and the research “evidence”, where any is cited, is often highly selectively sampled, with unconcealed deprecation of alternative interpretations.

Thomas then addresses the need for scholars to correct the misleading stories of media and political leaders instead of jumping on the bandwagon of reform for financial gain or prestige:

Academics must take a share of responsibility in the way that this process proceeds unimpeded. Such is the pressure inside universities for staff to be winning research grants and earning research income that there is inevitably willing involvement in contract research involving the kind of steering groups I have just mentioned.

Yet, Thomas ends by acknowledging that the weaponizing of “evidence” (and “science” along with other synonyms) immediately frames anyone challenging the stories negatively [1]:

In realising this, astute politicians can kill two birds with one stone. The knack is to enlist conspicuously with “science”, ostensibly adhering firmly to principles of reason and empiricism, while simultaneously projecting silliness, unreason and disengagement from research findings onto one’s interlocutor – as did Gibb in the phrase cited in illustrative case study 2: “The evidence is clear – however much it may shock the pre-conceived expectations of some education experts”, or as did Cummings in declaring that the “education world” handles scientific developments “badly”. Utter the phrase “the evidence is clear” and one straightaway affiliates oneself with reason, wisdom and unequivocal allegiance to empirical inquiry. One’s interlocutors, by contrast, are immediately forced onto the back foot, compelled to defend themselves against charges of not engaging with evidence – of subjectivity, sloppiness, credulity and narrow-mindedness borne of ideology.

Therefore, what Thomas concludes about "evidence," we here in the US must also accept about "science" in media rhetoric and political policy:

On the basis of the analysis here, “evidence-based” is next to meaningless, given that the evidence in question is habitually unspecified and given that any evidence that is actually specified is carefully selected and/or offered as if it were superior to other evidence which suggests conclusions at variance to those being proffered. Protean and manoeuvrable, terms such as “evidence-based” are powerful rhetorically. They drop easily into conversation, speeches and documents to add weight to an assertion. Filling any gap, taking any shape, as instruments of retail politics they serve politicians’ purpose perfectly, but in any discourse with pretensions to scholarly independence and disinterestedness, their mutability ought to be troubling. Our responsibility as an academy is surely consistently to question these terms, to call for specification of evidence, to be ready to provide alternative evidence, to engage energetically with a broad range of media and social media (i.e. not just peer review and academic publications) and to question the validity of concepts such as “impact”.


[1] Compare this framing with how the Education Writers Association and Hanford frame the role of journalists and the expectation that implementing the “science of reading” may fail: