The LTT is different from the regularly reported NAEP testing, as explained here:
As I will highlight below, it is important to emphasize that LTT is age-based and NAEP is grade-based.
LTT assesses reading for 13-year-old students, and by 2023, these students had spent their school careers solidly within the “science of reading” (SOR) legislation era, which can be traced to 2013 (30+ states enacting SOR legislation, growing to almost every state within the last couple of years [1]).
Because the LTT is age-based (and thus not distorted by grade retention), the trends tell a much different story than the popular but misleading SOR movement does.
Consider the following [2]:
Here is the different story:
There is no reading crisis.
Test-based gains in grades 3 and 4 are likely mirages, grounded in harmful policies and practices such as grade retention.
Age 13 students were improving at every percentile when media and politicians began crying “crisis,” but they have declined in the SOR era, with the lowest-performing students declining the most.
Reading for fun and by choice have declined significantly in the SOR era (a serious concern since reading by choice is strongly supported by research as key to literacy growth).
Here are suggested readings reinforced by the LTT data:
The US has been sold a story about reading that is false, but it drives media clicks, sells reading programs and materials, and serves the rhetorical needs of political leaders.
Students, on the other hand, pay the price for false stories.
[1] Documenting SOR/grade-three-intensive reading legislation, connected to FL as early as 2002, but commonly associated with 2013 as rise of SOR-labeled legislation (notably in MS):
Cummings, A. (2021). Making early literacy policy work in Kentucky: Three considerations for policymakers on the “Read to Succeed” act. Boulder, CO: National Education Policy Center. Retrieved May 18, 2022, from https://nepc.colorado.edu/publication/literacy
Cummings, A., Strunk, K.O., & De Voto, C. (2021). “A lot of states were doing it”: The development of Michigan’s Read by Grade Three law. Journal of Educational Change. Retrieved April 28, 2022, from https://link.springer.com/article/10.1007/s10833-021-09438-y
Collet, V.S., Penaflorida, J., French, S., Allred, J., Greiner, A., & Chen, J. (2021). Red flags, red herrings, and common ground: An expert study in response to state reading policy. Educational Considerations, 47(1). Retrieved July 26, 2022, from https://doi.org/10.4148/0146-9282.2241
Reinking, D., Hruby, G.G., & Risko, V.J. (2023). Legislating phonics: Settled science or political polemic? Teachers College Record. https://doi.org/10.1177/01614681231155688
Thomas, P.L. (2022). The Science of Reading movement: The never-ending debate and the need for a different approach to reading instruction. Boulder, CO: National Education Policy Center. http://nepc.colorado.edu/publication/science-of-reading
[2] Despite claims of a “miracle,” MS grade 8 NAEP reading remains at the bottom after a decade of SOR legislation:
Yesterday, I spent an hour on the phone with the producer of a national news series.
I realized afterward that much of the conversation reminded me of dozens of similar conversations with journalists throughout my 40-year career as an educator because I had to carefully and repeatedly clarify what standardized tests do and mean.
Annually for more than the first half of my career, I had to watch as the US slipped into Education Crisis mode when SAT scores were released.
Throughout the past five decades, I have been strongly anti-testing and anti-grades, but most of my public and scholarly work challenging testing addressed the many problems with the SAT—and notably how the media, public, and politicians misunderstand and misuse SAT data.
Over many years of critically analyzing SAT data as well as the media/public/political responses to the college entrance exam, many key lessons emerged that include the following:
Lesson: The population being tested impacts the data drawn from a test. The SAT originally served the needs of elite students, often those seeking Ivy League educations. Over the twentieth century, however, many more students began taking the SAT for a variety of reasons (scholarships and athletics, for example). The shift in the tested population from an elite subset (the upper end of the normal curve) to a more statistically “normal” population necessarily drove the average down (a statistical fact that has nothing to do with school or student quality). While statistically valid, dropping SAT scores driven by population shifts created media problems (see below); therefore, the College Board recentered the scoring of the SAT.
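This statistical point can be sketched with a quick simulation (the score distribution, cutoff, and sample sizes here are entirely hypothetical, chosen only for illustration, not actual SAT data): restricting test-takers to the top of a normal curve mechanically inflates the average, and widening participation pulls it back down.

```python
import random
import statistics

random.seed(0)

# Hypothetical score distribution: mean 500, SD 100 (illustrative only)
population = [random.gauss(500, 100) for _ in range(100_000)]

# Era 1: only an elite subset (roughly the top 20%) takes the test
cutoff = sorted(population)[int(0.8 * len(population))]
elite_takers = [s for s in population if s >= cutoff]

# Era 2: nearly everyone takes the test
all_takers = population

print(round(statistics.mean(elite_takers)))  # well above 500
print(round(statistics.mean(all_takers)))    # near 500
```

The average “drops” between eras even though no individual student scored lower; the change is entirely a population effect.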
Lesson: Ranking by test data must account for population differences among the students tested. Media reporting of average SAT scores for the nation and by state created a misleading narrative about school quality. Part of that messaging was grounded in the College Board reporting average SAT scores ranked by state, and then media treating those averages as a valid assessment of state educational quality. The College Board eventually issued a caution: “Educators, the media and others should…not rank or rate teachers, educational institutions, districts or states solely on the basis of aggregate scores derived from tests that are intended primarily as a measure of individual students.” However, the media continued to rank states by average SAT scores. SAT data has always been strongly correlated with parental income, parental level of education, and student characteristics such as gender and race. But a significant driver of average SAT scores is also the rate of participation among states. See, for example, a comparison I did among SC, NC, and MS (the latter having a higher poverty rate but a higher average SAT because of a much lower participation rate composed mostly of elite students):
Lesson: Conclusions drawn from test data must acknowledge the purpose of the test being used (see Gerald Bracey). The SAT has one very narrow purpose—predicting first-year college grades; and the SAT has primarily one use—a data point for college admission based on its sole purpose. However, historically, media/public/political responses to the SAT have used the data to evaluate state educational quality and the longitudinal progress of US students in general. In short, SAT data has been routinely misused because most people misunderstand its purpose.
Recently, the significance of the SAT has declined, with students taking the ACT at higher rates and more colleges going test-optional, but the nation has shifted to panicking over NAEP data instead.
The problem now is that media/public/political responses to NAEP mimic the exact mistakes during the hyper-focus on the SAT.
NAEP, like the SAT, then, needs a moment of reckoning also.
Instead of helping public and political messaging about education and education reform, NAEP has perpetuated the very worst stories about educational crisis. That is in part because there is no standard for “proficiency” and because NAEP was designed to provide a check against state assessments that could set cut scores and levels of achievement as they wanted:
Since states have different content standards and use different tests and different methods for setting cut scores, obviously the meaning of proficient varies among the states. Under NCLB, states are free to set their own standards for proficiency, which is one reason why AYP school failure rates vary so widely across the states. It’s a lot harder for students to achieve proficiency in a state that has set that standard at a high level than it is in a state that has set it lower. Indeed, even if students in two schools in two different states have exactly the same achievement, one school could find itself on a failed-AYP list simply because it is located in the state whose standard for proficient is higher than the other state’s….
Under NCLB all states must administer NAEP every other year in reading and mathematics in grades 4 and 8, starting in 2003. The idea is to use NAEP as a “check” on states’ assessment results under NCLB or as a benchmark for judging states’ definitions of proficient. If, for example, a state reports a very high percentage of proficient students on its state math test but its performance on math NAEP reveals a low percentage of proficient students, the inference would be that this state has set a relatively easy standard for math proficiency and is trying to “game” NCLB.
In other words, NAEP was designed as a federal oversight of state assessments and not an evaluation tool to standardize “proficient” or to support education reform, instruction, or learning.
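The cut-score point in the quoted passage can be illustrated with a toy simulation (the score scale, cut scores, and sample size are hypothetical, chosen only for illustration): two states with identical student achievement report very different “percent proficient” simply because their cut scores differ.

```python
import random

random.seed(1)

# Identical achievement in two hypothetical states (illustrative scale)
scores = [random.gauss(250, 35) for _ in range(10_000)]

# Only the definition of "proficient" differs between the states
cut_state_a = 240  # lower cut score
cut_state_b = 275  # higher cut score

pct_a = 100 * sum(s >= cut_state_a for s in scores) / len(scores)
pct_b = 100 * sum(s >= cut_state_b for s in scores) / len(scores)

print(f"State A proficient: {pct_a:.0f}%")  # roughly 60%
print(f"State B proficient: {pct_b:.0f}%")  # roughly 24%
```

Same students, same scores: one state looks successful and the other looks like a failure, purely as an artifact of where the “proficient” line was drawn.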
As a result, NAEP, as the SAT/ACT have done for years, feeds a constant education crisis cycle that also fuels concurrent cycles of education reform and education legislation that have become increasingly authoritarian (mandating specific practices and programs as well as banning practices and programs).
With the lessons from the SAT above, then, NAEP reform should include the following:
Ending state rankings and comparisons based on NAEP average scores.
Changing the tested population to age level instead of grade level (addressing the impact of grade retention, which is a form of states’ “gaming the system” that NAEP sought to correct). NAEP testing should include children in an annual band of birth months/years regardless of grade level.
Providing better explanations and guidance for reporting and understanding NAEP scores in the context of longitudinal data.
Developing a collaborative relationship between federal and state education departments and among state education departments.
While I remain a strong skeptic of the value of standardized testing, and I recognize that we over-test students in the US, I urge NAEP reform and that we have a NAEP reckoning for the sake of students, teachers, and public education.
Similar to ExcelinEd, Ohio Excels has entered the grade retention advocacy movement as part of the larger disaster-reform reading policy movement occurring in the US for about a decade.
There is a pattern emerging in grade retention advocacy that contrasts with decades of research showing that grade retention, on balance, disproportionately impacts marginalized populations of students without improving academic achievement while correlating strongly with students dropping out of high school. [1]
The key aspects of the new advocacy reports include the following:
Funding and support by conservative think tanks.
An emphasis on early test score increases (grades 3 and 4) and claims of no negative impacts on students.
One problem is that these grade retention reports are often promoted in the media in incomplete and misleading ways, fitting into a similar pattern of education journalism.
The omissions, what is not reported, are the most important aspects of this advocacy, however.
Just as ExcelinEd uses one or two reports to endorse grade retention (again, see here for why that is misleading), this report connected to OSU has some key elements and one fatal flaw.
First, as is true about almost all grade retention, the reality of retention in OH is that it disproportionately impacts vulnerable populations of students:
The retained students were between 2.7% to 4.0% of all students subject to the retention policy. Numerically the largest group were retained in 2017 (4,590) and the smallest in 2016 (2,892). Overall, some 55% of retained students were male (versus 50% of not retained students), and 91% were economically disadvantaged (versus 50% of not retained students). Of the 20,870 retained some 17% had a disability (versus 10% of not retained students). In terms of race and ethnic characteristics, the largest fraction (48%) of students retained were African American (versus 14.3% of not retained students), 34% were White, Non-Hispanic (versus 72% of not retained students), 11% were Hispanic (versus 6% of not retained students), and 7% were Multiracial or Other Races (versus 5% of not retained students).
This report then concludes that retained students show positive academic growth in math and reading. However, as with other recent grade retention advocacy reports, these positive academic gains remain linked to grade-level performance, not age-level performance.
In short, retained students are always performing academically at an older age than non-retained students (note that this report carefully compares retained to non-retained students without controlling for age).
This is a key problem since even one month of age difference correlates strongly with phonics checks (and early literacy assessments tend to focus heavily on decoding and not comprehension):
Therefore, none of the recent grade retention advocacy reports show a causal relationship between retention and academic achievement. In fact, there is no evidence that the retained students’ gains are not simply a function of being a year older.
These advocacy reports depend on the public confusing correlation and causation, and the media fail to make that scientific distinction.
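The age confound can be made concrete with a toy simulation (the growth rate, ages, and sample sizes are entirely hypothetical and are not drawn from any of the reports discussed here): if reading scores grow with age and retention itself adds nothing, retained students still look better on a same-grade test simply because they sit it a year older.

```python
import random
import statistics

random.seed(2)

# Toy assumption: reading score grows ~1.5 points per month of age;
# retention has ZERO causal effect in this model.
def score(age_months):
    return 1.5 * age_months + random.gauss(0, 10)

# Promoted students tested in a grade at ~115 months old;
# retained students repeat the grade and sit the same test ~12 months older.
promoted = [score(115) for _ in range(5_000)]
retained = [score(115 + 12) for _ in range(5_000)]

gap = statistics.mean(retained) - statistics.mean(promoted)
print(round(gap, 1))  # an apparent "gain" of roughly 18 points from age alone
```

In this model a within-age comparison (testing both groups at the same age) would show no difference at all, which is exactly why comparing retained to promoted students without controlling for age proves nothing about retention.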
Decades of research as well have shown great emotional harm in grade retention; the grade retention advocacy reports simply ignore the personal and emotional consequences of grade retention by hyper-focusing on narrow measures of academic gain.
Grade retention is a punitive policy that disproportionately impacts Black and brown children, poor children, special needs children, and multi-lingual learners. [2]
Endorsing grade retention is ideological, neither scientific nor ethical.
The rise in grade retention advocacy reports is failing by omission, and children are suffering the consequences of reading legislation used for political gain.
From CNN: How long you breastfeed may impact your child’s test scores later, study shows.
This sounds really compelling; it fits into a cultural narrative that breastfeeding is superior to using baby formula.
It remains compelling until about ten paragraphs in, when the article concedes:
“Though the results are certainly interesting, you have to bear in mind the limitations that inevitably arise in research using observational data from major cohort studies,” McConway added….
The fact that the study was observational means it followed people’s behavior rather than randomly assigning the behavior in question, McConway noted.
Consequently, the results only show a correlation between breastfeeding and test scores — not causation.
“It’s not possible to be certain about what’s causing what,” he said.
Few people will read that far, and even most who do will likely take away a careless claim that the research doesn’t justify.
Therefore, this article should never have been written—similar to many articles about educational research.
One enduring example of media repeating a misunderstanding of educational research is the word gap myth. Media repeat the claim that the number of words in children’s vocabularies is connected to economic status (again, this sounds right to most people).
Yet, the Hart and Risley study this myth is based on has been debunked often, and the word gap myth itself is based on flawed logic about literacy [1].
Media has been shown, in fact, to cover education quite badly, typically overemphasizing think tank research over university-based research (the former far less credible than the latter) and featuring the voices of non-educators (reformers and innovators) over educators:
Malin, J. R., & Lubienski, C. (2015). Educational Expertise, Advocacy, and Media Influence. Education Policy Analysis Archives, 23, 6. https://doi.org/10.14507/epaa.v23.1706
Currently, and ironically, the misinformation campaign related to education is the “science of reading” (SOR) movement, which repeatedly misrepresents NAEP data, makes claims that have no scientific evidence (relying on anecdote [2]), and repeatedly relies on think tank “reports” (NCTQ, for example) that are also not scientific [3].
A subset of the SOR movement is also grade retention. High-profile coverage of Mississippi has made the exact breastfeeding mistake from above: “’It’s not possible to be certain about what’s causing what,’ he said.”
Recently in the NYT, a think-tank funded report on MS grade retention is cited; however, the report itself notes that outcomes cannot be linked to grade retention itself [3].
In short, the report proves nothing about retention—just as the breastfeeding study proves nothing about its impact on test scores.
The breastfeeding story, the word gap myth, and the SOR story are all compelling because they sound true, but they are all false narratives that fail educational research—and public education.
[2] Hoffman, J.V., Hikida, M., & Sailors, M. (2020). Contesting science that silences: Amplifying equity, agency, and design research in literacy teacher preparation. Reading Research Quarterly, 55(S1), S255–S266. Retrieved July 26, 2022, from https://doi.org/10.1002/rrq.353
Thomas, P.L. (2022). The Science of Reading movement: The never-ending debate and the need for a different approach to reading instruction. Boulder, CO: National Education Policy Center. http://nepc.colorado.edu/publication/science-of-reading
In 2023, Republicans have continued to manufacture educational crises in order to reform education, where “reform” is a veneer for dismantling education.
The twin conservative attacks on schools include the anti-CRT/curriculum gag order movement and the “science of reading” (SOR) movement—both depending on false claims of educational failures by teachers and public schools.
What flies under the radar is that anti-CRT and reading legislation are being promoted by conservative organizations and ideologies in the form of “model legislation” and fact sheets that are devoid of facts.
In the context of the crisis/miracle narratives about education in the media, among the public, and by politicians, disaster reform has evolved into its own powerful and harmful machine.
The disaster education reform organization is Orwellian in its claims but insidious in its carefully packaged information and templates for policy. The key point here is that the SOR movement as a media and parent advocacy event has now fully been folded into the existing Republican education reform machine that is more about dismantling education than supporting student learning or teacher quality.
In short, the materials about reading presented by ExcelinEd are false but very well designed and compelling to the general public and politician looking for ready-made legislation and effective talking points.
As the NCLB/NRP era showed us with Reading First, however, the entire Bush family is driven by market interests, not a pursuit of democratic education for all.
The short version of concern here is that nearly all of the information above is misinformation; however, as the SOR movement has shown, most people remain easily targeted by claims of a reading crisis and a set of simplistic blame and solutions.
As I have shown, there simply is no reading crisis in the US, but there is a very long history of political negligence in terms of providing marginalized students and their teachers with the learning and teaching environments as well as social conditions that would support earlier and more developed reading in our students.
Two aspects of the materials above deserve highlighting (again).
First, the Republican commitment to SOR is grounded in doubling-down on punitive policy, grade retention.
The two states identified over and over in the materials above are Florida and Mississippi; however, those states are examples of mirages, not miracles.
ExcelinEd cites only work by Winters [1] to “prove” the effectiveness of grade retention. This strategy is cherry-picking “research” by a conservative “scholar” who (unsurprisingly) only finds positive results for the conservative reform of the day—school choice, charter schools, VAM evaluations of teachers, and now grade retention.
The research on grade retention is complicated, but retention is politically attractive since grade retention (the likely source of the “success” in FL and MS) can raise reading scores in grades 3 or 4, even though those “gains” disappear by middle school.
Grade retention distorts the population of students being tested by removing the lowest-scoring students and reintroducing older students to grade-level testing. As I have noted before, student achievement can vary significantly by just a month of age difference:
A review of the Florida Model, which depends on grade retention, concluded that research does not show whether any short-term gains come from retention or from additional services. Further, a comprehensive study still notes that grade retention is harmful, especially to marginalized populations of students:
The negative effect of retention was strongest for African American and Hispanic girls. Even though grade retention in the elementary grades does not harm students in terms of their academic achievement or educational motivation at the transition to high school, retention increases the odds that a student will drop out of school before obtaining a high school diploma.
Hughes, J. N., West, S. G., Kim, H., & Bauer, S. S. (2018). Effect of early grade retention on school completion: A prospective study. Journal of Educational Psychology, 110(7), 974–991. https://doi.org/10.1037/edu0000243
A second problematic aspect is the hyper-focus on three-cueing, which fits into the Rufo “caricature” approach to attacking CRT.
Republicans have latched onto the SOR misinformation campaign that perpetuates a cartoon version of three-cueing and fabricates a crisis around claiming that teachers are telling students to guess words instead of using phonics/decoding strategies.
Three-cueing, in fact, is a research-based approach better referred to as “multiple cueing”:
Compton-Lilly, C.F., Mitra, A., Guay, M., & Spence, L.K. (2020). A confluence of complexity: Intersections among reading theory, neuroscience, and observations of young readers. Reading Research Quarterly, 55(S1), S185-S195. https://doi.org/10.1002/rrq.348
ExcelinEd’s prepackaged misinformation campaign and templates for legislation are yet more proof that the SOR movement is another nail in the coffin of public education, an anti-teacher and anti-public school movement that depends on crisis rhetoric and fulfills the goals of disaster reform driven by Republicans and conservatives who serve the needs of the educational marketplace—not students or teachers.
First, this is a working paper supported by the Mississippi Department of Education, and the acknowledgements add: “This project was made possible by a grant from ExcelinEd.”
Here are some key additional caveats beyond how biased this report likely is in terms of meeting the ideological aims of ExcelinEd:
The policy brief concedes: “That said, though the results are distinctly positive for the policy treatment overall, the analysis cannot entirely disentangle the extent to which the observed benefits in ELA are due to the additional year of instruction or to other specific features of the approach Mississippi took to providing literacy-focused supports and interventions to students.”
In the full working paper, section “2.1 Within-Age vs Within-Grade Comparisons” details a common failure of analyzing grade retention: “Comparing the later outcomes of students retained at a point in time to students in their cohort who were promoted is complicated by the fact that the two groups are enrolled in different grade levels during later years.” The findings of this working paper must be tempered by this fact of the study: “Unfortunately, within-age comparisons of student test scores are not possible in Mississippi because scores on the state’s standardized tests are comparable within grades over time but not across grades.” In other words, as noted above, higher test scores may be the result of students simply being older in a tested grade level, and not because grade retention or any of the services/instructional practices were effective. Again, these “gains” are likely mirages.
Westall, Utter, and Strunk find much more problematic outcomes with retention, findings that fit within decades of research:
Early literacy skills are critical to the educational outcomes of young students. Accordingly, 19 states have early literacy policies that require grade retention for underperforming readers at the end of third grade. However, there is mixed evidence about retention’s effectiveness and concerns that retention may disproportionately impact traditionally disadvantaged student groups. Using regressions and a regression discontinuity design, we examine retention outcomes under Michigan’s early literacy law, the Read by Grade Three Law. We find that Black and economically disadvantaged students are more frequently eligible for retention and retained than their peers. While controlling for students’ test performance, particularly their math scores, eliminates this disparity for Black students, it persists for economically disadvantaged students. We show that differences in average math performance, exemption characteristics, district characteristics, and eligibility-induced student mobility across districts do not explain the disparities in the implementation of retention by economic disadvantage status.
In the US, the crisis/miracle obsession with reading mostly focuses on NAEP scores. For the UK, the same crisis/miracle rhetoric around reading is grounded in PIRLS.
The media and political stories around the current reading crisis cycle have interesting and overlapping dynamics in these two English-dominant countries, specifically a hyper-focus on phonics.
Now I will share data on NAEP and PIRLS showing that media and political responses to test scores are fodder for predetermined messaging, not real reflections of student achievement or educational quality.
A key point is that the media coverage above represents a bait-and-switch approach to analyzing test scores. The claims in both the US and the UK focus on rank among states/countries, not on trends within states/countries.
Do any of these state trend lines from FL, MS, AL, or LA appear to be “soar[ing]” data?
The fair description of the “miracle” states identified by AP is that test scores are mostly flat, and AL, for example, appears to have peaked more than a decade ago and is trending down.
The foundational “miracle” state, MS, has had two significant increases, one before its SOR commitment and one after; but there remains no research explaining why those increases occurred:
Scroll up and notice that in the UK, PIRLS scores have tracked flat and slightly down as well.
The problematic element in all of this is that many journalists and politicians have used flat NAEP scores to shout “crisis” and “miracle,” while in the UK, the current flat and slightly declining scores are reason to shout “Success!” (although research on the phonics-centered reform in England since 2006 shows it has not delivered as promised [1]).
Many problems exist with relying on standardized tests scores to evaluate and reform education. Standardized testing remains heavily race, gender, and class biased.
But the greatest issue with test data is that inexpert and ideologically motivated journalists and politicians persistently conform the data to their desired stories—sometimes crisis, sometimes miracle.
Once again, the stories being sold—don’t buy them.
[1] Wyse, D., & Bradbury, A. (2022). Reading wars or reading reconciliation? A critical examination of robust research evidence, curriculum policy and teachers’ practices for teaching phonics and reading. Review of Education, 10(1), e3314. https://doi.org/10.1002/rev3.3314
UPDATE
Mainstream media continues to push a false story about MS as a model for the nation. Note that MS, TN, AL, and LA demonstrate that political manipulation of early test data is a mirage, not a miracle.
All four states remain at the bottom of NAEP reading scores for both proficient and basic a full decade into the era of SOR reading legislation:
To Curriculum Coordinators in South Carolina School Districts:
I was a professor at USC-Columbia for 18 years before I retired in 2017; I was a professor in other states for 12 years before that. My area of expertise is reading assessment and instruction. In the last couple of years, I have heard from several SC educators about proposed changes to literacy practices in SC schools. These changes were recently detailed in Senate Bill 418 which has now been held over until the next legislative session. It is my understanding that the bill was held over because a number of individuals and organizations disagreed with parts of what was proposed.
As a professor, I had the opportunity to work with legislators and came to understand that because no one legislator can have a broad and deep knowledge on all topics, they regularly end up having to vote on legislation which is outside their area of expertise. When I became aware of Senate Bill 418, I wrote to members of the House and Senate Education and Public Works Committees providing them with some information about reading process, assessment, and instruction. I also suggested changes to the wording of the bill so that it could reflect current knowledge in the field. If you wish to read that letter, it is attached.
Here though is the basic information:
There is a science of reading. By this I mean that there have been thousands of studies published about reading process, assessment, and instruction. This body of research is quite wide and includes research on many different aspects of reading. Indeed, the International Reading Association recently devoted two entire issues of Reading Research Quarterly to this topic.
While there are differences of opinion on some particulars, the research conducted by reading researchers and which appears in peer-reviewed literacy journals has found that many factors contribute to reading success including:
(1) Knowledgeable teachers who know how to assess the strengths and needs of their students and then provide instruction – whole group, small group, and one-on-one – based on what they know about their students. Because children vary, they do not use a one-size-fits all approach.
(2) Children who understand that reading is supposed to make sense. The alternative is for education to produce students who can read every word fluently but who cannot retell the story or answer questions about what they read. (Teachers sometimes refer to this as students “who can read but not understand what they read.”)
(3) Children who believe they are capable of making sense of print and so willingly spend time reading. This is often referred to as agency and Dr. Peter Johnston has a very helpful chapter on that in his book Choice Words. Like all of us, children do not choose to do things at which they believe they will fail.
(4) Children who have access to books with which they can be successful both at home and in school. Just as we do not expect athletes to improve without appropriate equipment (like soccer balls for soccer players), we cannot expect children to grow as readers if they do not have books to read.
(5) Children who have time to read both at home and in school. Research has shown conclusively that there is a link between volume of reading and reading achievement.
(6) Children who have a variety of skills and strategies to problem-solve meaning. Those skills and strategies include knowing about written language. For the youngest children, this includes understanding that books in English are read left to right and top to bottom. Children also need to understand that words can be segmented and blended and that there are some reliable sound/symbol relationships. Some consonants, for example, can be counted on to make just one sound, while seven of them (e.g., the letter C) make two sounds. Similarly, there are combinations of letters – /an/ for example – which almost always “say” the same thing as in man, tan, and fan. As children progress, they recognize that /an/ appears in words such as manufacturing, slant, and fantastic. Children also learn about grammar and punctuation and story structure and genre. This list goes on and on.
Sometimes, journalists and salespersons assert that there is one correct sequence of skill and strategy instruction. Those individuals also argue that phonics instruction should precede the opportunity to read. In addition, they claim that a one-size-fits-all approach is best. This approach is often referred to as the Science of Reading (SOR).
It is important to note that the SOR is not the same as the science of reading discussed earlier in this letter.
What reading research (the science of reading) has shown is that there are no differences in outcomes among the various approaches to teaching phonics and that a one-size-fits-all approach is not effective. Knowledgeable teachers know best about what instruction is needed at what time for their students. In addition, as authors Reinking, Hruby and Risko (2023) explain in their research article, phonics instruction has been shown to be “more effective when embedded in a more comprehensive program of literacy instruction that accommodates students’ individual needs and multiple approaches to teaching phonics—a view supported by substantial research.”
There simply is no research support for SOR or for a product, called LETRS, often associated with it. There have not been controlled studies in which the progress of students in classrooms taught by SOR teachers were compared to the progress of students taught by teachers whose practices were consistent with research on best practices. And there is absolutely no research which shows that LETRS is an effective instructional approach. (See HERE).
SOR advocates also suggest that phonemic awareness (PA) be taught orally while the National Reading Panel found that PA is best taught using letters, as a part of phonics instruction.
In the midst of what are often media-created reading wars, it is particularly important that decision-makers rely on the wide body of research on reading (the science of reading) and not on an approach with the misleading title, SOR.
It is also very important not to be misled by unsubstantiated claims. Reading research published in peer-refereed journals and teacher expertise should guide decisions about reading process, reading assessment, and reading instruction. Our focus as educators should be on assuring that all students have knowledgeable teachers, access to books, time to read, and instruction based on the strengths and needs of the children in our care.
Please contact me if you would like further information. Meanwhile, a consistently reliable resource about best practices in reading is the federal What Works Clearinghouse.
Thank you for your attention to this.
Diane Stephens
Distinguished Professor Emerita
John E. Swearingen, Sr. Professor Emerita in Education
University of South Carolina