[NOTE: A PDF of this post as a presentation can be accessed HERE. See also a slightly revised presentation HERE. Please do not edit and please acknowledge this is my work if you use for instructional or public purposes.]
The answer is the anatomy of how media misinformation in 2018, wrapped in sensationalistic anecdotes, has been replicated uncritically by dozens and dozens of journalists, resulting in that misinformation becoming “holy text,” or in other words, sacrosanct Truth.
Here, I offer the template that “Hard Words” created, and unlike journalists, I include links to research showing why the claims throughout that piece (and its cousin, “Sold a Story”) are false and why the reporting behind them is shoddy journalism.
I.
The article begins with the Big Lie, one of the three biggest lies (along with misreading the NRP report and citing NCTQ reports) in the “science of reading” (SOR) movement:
This is a fundamental misunderstanding of NAEP data. NAEP “proficient” is well above grade level, but “basic” is approximately what most states consider “grade level,” and thus, if anything, about 60-65% of students for several decades have been at or above grade level. That isn’t sensational enough for reporters, however.
The Evidence:
From NAEP:
NAEP student achievement levels are performance standards that describe what students should know and be able to do. Results are reported as percentages of students performing at or above three NAEP achievement levels (NAEP Basic, NAEP Proficient, and NAEP Advanced). Students performing at or above the NAEP Proficient level on NAEP assessments demonstrate solid academic performance and competency over challenging subject matter. It should be noted that the NAEP Proficient achievement level does not represent grade level proficiency as determined by other assessment standards (e.g., state or district assessments). See short descriptions of NAEP achievement levels for each assessment subject.
As I have pointed out about NCTQ (see more below on NCTQ and LETRS/Moats), much of the SOR advocacy has a market interest behind it and the SOR movement is grounded in the myth of the bad teacher, attacking classroom teachers and teacher educators:
Here and throughout mainstream media, including “Sold a Story,” the SOR movement relies on anecdotes, regardless of how well those stories reflect accurate claims:
The Evidence:
Claims of miracles in Pennsylvania (similar to those made about Mississippi) fall apart once the full picture is examined. Inflated gains at early grades routinely disappear in later grades; these score increases are mirages, not miracles, and ironically, the NRP report itself showed that reality, a fact SOR advocates ignore; see again: Cryonics Phonics: Inequality’s Little Helper, Gerald Coles.
V.
A persistent set of lies in the SOR media campaign concerns misrepresenting “guessing” and three cueing:
Compton-Lilly, C.F., Mitra, A., Guay, M., & Spence, L.K. (2020). A confluence of complexity: Intersections among reading theory, neuroscience, and observations of young readers. Reading Research Quarterly, 55(S1), S185-S195. https://doi.org/10.1002/rrq.348 [access HERE]
The SOR misinformation campaign relies on false claims about and definitions of balanced literacy (and whole language, see below):
The Evidence:
Spiegel, D. (1998). Silver bullets, babies, and bath water: Literature response groups in a balanced literacy program. The Reading Teacher, 52(2), 114-124. www.jstor.org/stable/20202025
VII.
The misrepresentation of whole language also has a marketing element; Moats markets SOR-branded materials and thus has a financial interest in discrediting BL and WL:
Semingson, P. & Kerns, W. (2021). Where is the evidence? Looking back to Jeanne Chall and enduring debates about the science of reading. Reading Research Quarterly, 56(S1), S157-S169. https://doi.org/10.1002/rrq.405.
VIII.
Misrepresenting WL/BL is solidly linked to a complete misreading of the NRP reports (another Big Lie):
Shanahan, T. (2005). The National Reading Panel report: Practical advice for teachers. Learning Point Associates. Retrieved June 7, 2022, from https://files.eric.ed.gov/fulltext/ED489535.pdf
Shanahan, T. (2003, April). Research-based reading instruction: Myths about the National Reading Panel report. The Reading Teacher, 56(7), 646-655.
Bowers, J.S. (2020). Reconsidering the evidence that systematic phonics is more effective than alternative methods of reading instruction. Educational Psychology Review, 32(2020), 681-705. Retrieved July 26, 2022, from https://link.springer.com/article/10.1007%2Fs10648-019-09515-y
Collet, V.S., Penaflorida, J., French, S., Allred, J., Greiner, A., & Chen, J. (2021). Red flags, red herrings, and common ground: An expert study in response to state reading policy. Educational Considerations, 47(1). Retrieved July 26, 2022, from https://doi.org/10.4148/0146-9282.2241
Garan, E.M. (2001, March). Beyond smoke and mirrors: A critique of the National Reading Panel report on phonics. Phi Delta Kappan, 82(7), 500-506. Retrieved July 26, 2022, from https://doi.org/10.1177/003172170108200705
Seidenberg, M.S., Cooper Borkenhagen, M., & Kearns, D.M. (2020). Lost in translation? Challenges in connecting reading science and educational practice. Reading Research Quarterly, 55(S1), S119–S130. Retrieved July 26, 2022, from https://doi.org/10.1002/rrq.341
Yatvin, J. (2002). Babes in the woods: The wanderings of the National Reading Panel. The Phi Delta Kappan, 83(5), 364-369.
As many scholars have noted, the SOR movement, including “Sold a Story,” is driven by sensationalistic anecdotes and stories:
The Evidence:
Hoffman, J.V., Hikida, M., & Sailors, M. (2020). Contesting science that silences: Amplifying equity, agency, and design research in literacy teacher preparation. Reading Research Quarterly, 55(S1), S255–S266. https://doi.org/10.1002/rrq.353
X.
SOR advocacy regularly demands only a narrow use of “scientific” in reading instruction while also endorsing practices and programs not supported by that same rigor, such as LETRS:
The Evidence:
Hoffman, J.V., Hikida, M., & Sailors, M. (2020). Contesting science that silences: Amplifying equity, agency, and design research in literacy teacher preparation. Reading Research Quarterly, 55(S1), S255–S266. https://doi.org/10.1002/rrq.353
A third Big Lie is using unscientific and discredited reports from the conservative think tank NCTQ to claim that teacher educators are incompetent and/or willfully misleading teacher candidates.
Fuller, E. J. (2014). Shaky methods, shaky motives: A critique of the National Council of Teacher Quality’s review of teacher preparation programs. Journal of Teacher Education, 65(1), 63-77. https://journals.sagepub.com/doi/10.1177/0022487113503872
Cochran-Smith, M., Stern, R., Sánchez, J.G., Miller, A., Keefe, E.S., Fernández, M.B., Chang, W., Carney, M.C., Burton, S., & Baker, M. (2016). Holding teacher preparation accountable: A review of claims and evidence. Boulder, CO: National Education Policy Center. http://nepc.colorado.edu/publication/teacher-prep
One of the most damning aspects of the SOR movement has been the embracing of and rise in grade retention policies; grade retention is not supported by research, and it both creates false test score gains and harms children:
The SOR movement grossly overstates brain science as well as the essential nature of science:
The Evidence:
Seidenberg, M.S., Cooper Borkenhagen, M., & Kearns, D.M. (2020). Lost in translation? Challenges in connecting reading science and educational practice. Reading Research Quarterly, 55(S1), S119-S130. https://doi.org/10.1002/rrq.341
Yaden, D.B., Reinking, D., & Smagorinsky, P. (2021). The trouble with binaries: A perspective on the science of reading. Reading Research Quarterly, 56(S1), S119-S129. https://doi.org/10.1002/rrq.402
XV.
The SOR movement has hyper-focused on dyslexia, but again, mostly offering misinformation:
The Evidence:
Johnston, P., & Scanlon, D. (2021). An examination of dyslexia research and instruction with policy implications. Literacy Research: Theory, Method, and Practice, 70(1), 107-128. https://doi.org/10.1177/23813377211024625
Stevens, E. A., Austin, C., Moore, C., Scammacca, N., Boucher, A. N., & Vaughn, S. (2021). Current state of the evidence: Examining the effects of Orton-Gillingham reading interventions for students with or at risk for word-level reading disabilities. Exceptional Children, 87(4), 397–417. https://doi.org/10.1177/0014402921993406
Hall, C., et al. (2022, September 13). Forty years of reading intervention research for elementary students with or at risk for dyslexia: A systematic review and meta-analysis. Reading Research Quarterly. https://doi.org/10.1002/rrq.477
Odegard, T. N., Farris, E. A., Middleton, A. E., Oslund, E., & Rimrodt-Frierson, S. (2020). Characteristics of students identified with dyslexia within the context of state legislation. Journal of Learning Disabilities, 53(5), 366–379. https://doi.org/10.1177/0022219420914551
“Sold a Story” became a “holy text” because dozens of journalists and politicians repeated the misinformation and lies begun in “Hard Words,” identified above.
This is not good journalism, but it does prove that sensationalistic stories will ultimately trump evidence, even the “science” SOR advocates are so apt to reference.
Fuller, E. J. (2014). Shaky methods, shaky motives: A critique of the National Council of Teacher Quality’s review of teacher preparation programs. Journal of Teacher Education, 65(1), 63-77. https://journals.sagepub.com/doi/10.1177/0022487113503872
Cochran-Smith, M., Stern, R., Sánchez, J.G., Miller, A., Keefe, E.S., Fernández, M.B., Chang, W., Carney, M.C., Burton, S., & Baker, M. (2016). Holding teacher preparation accountable: A review of claims and evidence. Boulder, CO: National Education Policy Center. http://nepc.colorado.edu/publication/teacher-prep
Let me start with a caveat: Don’t debate “science of reading” (SoR) advocates on social media.
Ok, so I suspect some of you will enter the fray, and I must caution that you are not going to change the minds of SoR advocates; therefore, if you enter into a social media debate, you must keep your focus on informing others who may read that debate, others who genuinely want a discussion and are looking to be better informed (SoR advocates are not open to debate and do not want an honest discussion).
First, expect to be attacked and swarmed.
Next, keep focused on the claims made by SoR advocates, and you can anticipate those pretty easily (see below). An important way to hold SoR advocates accountable is to point out the contradictions between calling for a narrow view of “science” and then referring to reports that are released with no peer review (not scientific), such as reports released by NCTQ, and also misrepresenting challenged reports, such as the reports from the National Reading Panel (NRP) under George W. Bush.
Finally, I recommend making evidence-based challenges to the two broad claims of SoR advocacy—that the “science of reading” is simple and settled.
Your best approach is to counter with “not simple, not settled.”
Here, then, let me offer the main claims you will likely confront and resources for responding (also see resources linked after the post).
SoR Claim: Dyslexia is under-diagnosed and students with dyslexia need intensive systematic phonics (likely Orton-Gillingham–based approaches).
Counter: Research does not support one way to address or diagnose dyslexia, there isn’t a strong consensus on what constitutes dyslexia (no unifying definition), and research does not support O-G phonics for all dyslexia issues.
As yet, there is no certifiably best method for teaching children who experience reading difficulty (Mathes et al., 2005). For instance, research does not support the common belief that Orton-Gillingham–based approaches are necessary for students classified as dyslexic (Ritchey & Goeke, 2007; Turner, 2008; Vaughn & Linan-Thompson, 2003). Reviews of research focusing solely on decoding interventions have shown either small to moderate or variable effects that rarely persist over time, and little to no effects on more global reading skills. Rather, students classified as dyslexic have varying strengths and challenges, and teaching them is too complex a task for a scripted, one-size-fits-all program (Coyne et al., 2013; Phillips & Smith, 1997; Simmons, 2015). Optimal instruction calls for teachers’ professional expertise and responsiveness, and for the freedom to act on the basis of that professionalism.
Currently, there is a well-organized and active contingent of concerned parents and educators (and others) who argue that dyslexia is a frequent cause of reading difficulties, affecting approximately 20 percent of the population, and that there is a widely-accepted treatment for such difficulties: an instructional approach relying almost exclusively on intensive phonics instruction. Proponents argue that it is based on “settled science” which they refer to as “the science of reading” (SOR). The approach is based on a narrow view of science, and a restricted range of research, focused on word learning and, more recently, neurobiology, but paying little attention to aspects of literacy like comprehension and writing, or dimensions of classroom learning and teacher preparation. Because the dyslexia and instructional arguments are inextricably linked, in this report, we explore both while adopting a more comprehensive perspective on relevant theory and research.
Johnston, P., & Scanlon, D. (2021). An examination of dyslexia research and instruction with policy implications. Literacy Research: Theory, Method, and Practice, 70(1), 107–128. https://doi.org/10.1177/23813377211024625
Johnston and Scanlon answer 12 questions and then offer these important policy implications (quoted below):
There is no consistent and widely accepted basis – biological, cognitive, behavioral, or academic – for determining whether an individual experiencing difficulty with developing word reading skill should be classified as dyslexic. (Questions 1 and 10).
Although there are likely heritable and biological dimensions to reading and language difficulties, there is no way to translate them into implications for instructional practice. (Questions 2 and 11).
Good first instruction and early intervention for children with a slow start in the word reading aspect of literacy, reduces the likelihood they will encounter serious difficulty. Thus, early screening with assessments that can inform instruction, is important. Screening for dyslexia, particularly with instructionally irrelevant assessments offers no additional advantage. (Questions 5 and 6).
Research supports instruction that purposely develops children’s ability to analyze speech sounds (phonological/phonemic awareness), and to relate those sounds to patterns of print (phonics and orthographics), in combination with instruction to develop comprehension, vocabulary, fluency, and a strong positive and agentive relationship with literacy. (Questions 7 and 12).
Evidence does not justify the use of a heavy and near-exclusive focus on phonics instruction, either in regular classrooms, or for children experiencing difficulty learning to read (including those classified as dyslexic). (Questions 7, 8 and 12).
Legislation (and district policies) aligned with the SOR perspectives on dyslexia will necessarily require tradeoffs in the allocation of resources for teacher development and among children having literacy learning difficulties. These tradeoffs have the potential to privilege students experiencing some types of literacy learning difficulties while limiting instructional resources for and attention available to students whose literacy difficulties are not due (exclusively) to word reading difficulties. (Question 12).
SoR Claim: SoR advocates rely on a narrow definition of “science,” emphasizing cognitive science and brain research over a century’s worth of broader literacy research.
Counter: Insist on a complex and full understanding of the term “science,” and recognize that evidence on teaching reading must include more than cognitive science and brain research.
Hoffman, J.V., Hikida, M., & Sailors, M. (2020). Contesting Science That Silences: Amplifying Equity, Agency, and Design Research in Literacy Teacher Preparation. Reading Research Quarterly, 55(S1), S255-S266. https://doi.org/10.1002/rrq.353
Abstract:
In this article, we argue that the “science of reading” (SOR) construct is being used to shape the future of literacy teacher preparation and silence the voices and work of literacy teacher education researchers to the detriment of quality science, quality teaching, and quality teacher preparation. First, we briefly inspect the SOR movement in terms of its historical roots in experimental psychology. Next, we examine the claims being made by SOR advocates regarding the absence of attention to the SOR literature in teacher preparation programs, and the related claims for the negative consequences that occur when these so-called underprepared teachers enter the workforce. Then, we present literature reviews, drawn from a large and dynamic database of research on literacy teacher preparation (over 600 empirical studies that were published between 1999 and 2018); the studies in the database have been excluded from the SOR. Finally, we conclude with a discussion of equity, agency, and design as a pathway forward in improving literacy teacher preparation. (p. S255)
Hoffman, J.V., Hikida, M., & Sailors, M. (2020). Contesting Science That Silences: Amplifying Equity, Agency, and Design Research in Literacy Teacher Preparation. Reading Research Quarterly, 55(S1), S255-S266. https://doi.org/10.1002/rrq.353
Note also about the lack of science behind LETRS:
A growing number of U.S. states have funded and encourage and/or require teachers to attend professional development using Moats’s commercial LETRS program, including Alabama, Arkansas, Kansas, Mississippi, Missouri, Oklahoma, Rhode Island, and Texas. This is despite the fact that an Institute of Education Sciences study of the LETRS intervention found almost no effects on teachers or student achievement (Garet et al., 2008). (p. S259)
Hoffman, J.V., Hikida, M., & Sailors, M. (2020). Contesting Science That Silences: Amplifying Equity, Agency, and Design Research in Literacy Teacher Preparation. Reading Research Quarterly, 55(S1), S255-S266. https://doi.org/10.1002/rrq.353
See also:
Specifically, we address limitations of the science of reading as characterized by a narrow theoretical lens, an abstracted empiricism, and uncritical inductive generalizations derived from brain-imaging and eye movement data sources….
Unfortunately, we believe that in many cases, the cloak of science has been employed to elevate the stature of SOR work and to promote the certainty and force of its advocates’ preferred explanations for what reading is and how it should be taught (e.g., Gentry & Ouellette, 2019; Schwartz & Sparks, 2019). What we suggested in this article is that the SOR, when so used in the reading wars, is not science at all in its fullest sense. It neglects an entire domain that influences and shapes human experience. It does so with an unmitigated confidence that evidence from one side of a binary can establish a final truth and that such a truth creates a single prescription for all instruction. Taking that stance, however, is outside the pale of science and dismisses work that has both merit on its own terms and a critical role in advancing the aims motivating reading research and instruction.
Yaden, D.B., Reinking, D., & Smagorinsky, P. (2021). The Trouble With Binaries: A Perspective on the Science of Reading. Read Res Q, 56(S1), S119– S129. https://doi.org/10.1002/rrq.402
When it comes to reading instruction, an “all or nothing” approach is actually unscientific.
Every January, my social media feeds fill with ads, free trials, and coupons from the diet and wellness industry, promising to help me with my (presumed) resolutions to be better, faster, leaner, and healthier. Every diet program claims some type of relationship to science.
The same is true with reading instruction. Most programs or approaches claim to be based on “science.” But consider the many possible meanings of this claim. Some approaches to reading instruction are developed as part of rigorous, peer-reviewed research and are continuously evaluated and refined. Others are designed by practitioners who draw on experience, and whose insights are validated by inquiry after development. Many are based on well-known principles from research or assumptions about learning in general, but haven’t themselves been tested. Some “research-based” instructional tools and practices have been shared, explained, interpreted, misinterpreted, and re-shared so many times that they bear little resemblance to the research on which they were based (Gabriel, 2020). Others rack up positive evidence no matter how many times they’re studied. Then there are practices that have no evidence behind them but are thought to be scientific—because they’ve always been assumed to be true.
SoR Claim: SoR advocates attack misrepresentations of balanced literacy and whole language. Neither WL nor BL can credibly be called “failures” in any way distinct from other philosophies or practices in literacy, and claiming that WL or BL does not include the teaching of phonics is false (see Krashen farther below on types of phonics).
Counter: Detail strong historical context and accurate definitions of BL and WL; also note that programs labeled as “BL” may not be BL, and may be implemented poorly.
In this historical analysis, we examine the context of debates over the role of phonics in literacy and current debates about the science of reading, with a focus on the work and impact of the late literacy scholar Jeanne Chall. We open by briefly tracing the roots of the enduring debates from the 19th and 20th centuries, focusing on beginning reading, decoding, and phonics. Next, we explore insights drawn from the whole language movement as understood by Kenneth Goodman and Yetta Goodman, as well as a synthesis of key ideas from Chall’s critique of the whole language approach. We then analyze the shifts across the three editions of Chall’s Learning to Read: The Great Debate and summarize major ideas from her body of work, such as the stage model of reading development. We suggest that reading instruction should be informed by a broader historical lens in looking at the “science of reading” debates and should draw on a developmental stage model to teaching reading, such as the six-stage model provided by Chall. We describe implications for educators, textbook publishers, researchers, and policymakers that address the current reading debates and provide considerations of what Chall might say about learning to read in a digital era given the pressures on teacher educators and teachers to align their practice with what is deemed to be the science of reading.
Semingson, P., & Kerns, W. (2021). Where Is the Evidence? Looking Back to Jeanne Chall and Enduring Debates About the Science of Reading. Read Res Q, 56(S1), S157– S169. https://doi.org/10.1002/rrq.405
SoR Claim: Reading programs, such as those by Lucy Calkins and by Fountas and Pinnell, have failed students because they rely on balanced literacy. (SoR advocates tend to rely on reviews by EdReports, which has been challenged for biased analyses skewed by the interests of publishers.)
Counter: The problem is strict and misguided dependence on any reading program. After NCLB and Reading First required schools to adopt “scientifically-based reading programs,” evidence shows that scripted, phonics-intensive programs such as Open Court “failed.”
See:
This means teachers did actually implement the program as it was intended, so we can’t blame the results on teachers not doing what they were supposed to do. The randomized design helps ensure (but not guarantee, of course) that the results are due to the treatment and not some other factor. Random assignment is sometimes called the “gold standard” in research design….
This is the key finding: no “main” effects means that the overall impact of the program on reading scores during the first year of the study was zero, nada. By year two of the program, it was slightly negative. Oops.
SoR Claim: SoR advocates support the “simple” view of reading as “settled science.”
Counter: “[T]he simple view of reading does not comprehensively explain all skills that influence reading comprehension, nor does it inform what comprehension instruction requires” (see Filderman et al., 2022).
The simple view of reading is commonly presented to educators in professional development about the science of reading. The simple view is a useful tool for conveying the undeniable importance—in fact, the necessity—of both decoding and linguistic comprehension for reading. Research in the 35 years since the theory was proposed has revealed additional understandings about reading. In this article, we synthesize research documenting three of these advances: (1) Reading difficulties have a number of causes, not all of which fall under decoding and/or listening comprehension as posited in the simple view; (2) rather than influencing reading solely independently, as conceived in the simple view, decoding and listening comprehension (or in terms more commonly used in reference to the simple view today, word recognition and language comprehension) overlap in important ways; and (3) there are many contributors to reading not named in the simple view, such as active, self-regulatory processes, that play a substantial role in reading. We point to research showing that instruction aligned with these advances can improve students’ reading. We present a theory, which we call the active view of reading, that is an expansion of the simple view and can be used to convey these important advances to current and future educators. We discuss the need to lift up updated theories and models to guide practitioners’ work in supporting students’ reading development in classrooms and interventions.
Duke, N.K., & Cartwright, K.B. (2021). The Science of Reading Progresses: Communicating Advances Beyond the Simple View of Reading. Read Res Q, 56(S1), S25– S44. https://doi.org/10.1002/rrq.411
Theoretical models, such as the simple view of reading (Gough & Tunmer, 1986), the direct and inferential mediation (DIME) model (Cromley et al., 2010; Cromley & Azevedo, 2007), and the cognitive model (McKenna & Stahl, 2009) inform the constructs and skills that contribute to reading comprehension. The simple view of reading (Gough & Tunmer, 1986) describes reading comprehension as the product of decoding and language comprehension. The simple view of reading is often used to underscore the critical importance of decoding on reading comprehension; however, evidence suggests that the relative importance of decoding and language comprehension changes based on students’ level of reading development and text complexity (Lonigan et al., 2018). Cross-sectional and longitudinal studies demonstrate that decoding has the largest influence on reading comprehension for novice readers, whereas language comprehension becomes increasingly important as students’ decoding skills develop and text becomes more complex (e.g., Catts et al., 2005; Gough et al., 1996; Hoover & Gough, 1990; Proctor et al., 2005; Tilstra et al., 2009). However, the simple view of reading does not comprehensively explain all skills that influence reading comprehension, nor does it inform what comprehension instruction requires.
Filderman, M. J., Austin, C. R., Boucher, A. N., O’Donnell, K., & Swanson, E. A. (2022). A Meta-Analysis of the Effects of Reading Comprehension Interventions on the Reading Comprehension Outcomes of Struggling Readers in Third Through 12th Grades. Exceptional Children, 88(2), 163–184. https://doi.org/10.1177/00144029211050860
Reading a philosophical investigation, Andrew Davis
SoR Claim: SoR advocates argue SoR-based reading policies will accomplish what no other programs or standards have (consider NCLB and Common Core, both of which claimed to be “scientific”). [SoR advocates will reference Mississippi and the 2019 NAEP scores as “proof” of this.]
Counter: State legislation and policy are often deeply flawed and prone to failure. No research has been conducted on Mississippi’s 2019 NAEP reading scores, but the likely cause of the score bump is grade retention:
In many U.S. states, legislation seeks to define effective instruction for beginning readers, creating an urgent need to turn to scholars who are knowledgeable about ongoing reading research. This mixed-methods study considers the extent to which recognized literacy experts agreed with recommendations about instruction that were included on a state’s reading initiative website. Our purpose was to guide implementation and inform policy-makers. In alignment with the initiative, experts agreed reading aloud, comprehension, vocabulary, fluency, phonological awareness, and phonics all deserve a place in early literacy instruction. Additionally, they agreed some components not included on the website warranted attention, such as motivation, oral language, reading volume, writing, and needs-based instruction. Further, experts cautioned against extremes in describing aspects of early reading instruction. Findings suggest that experts’ knowledge of the vast body of ongoing research about reading can be a helpful guide to policy formation and implementation.
Collet, V.S., Penaflorida, J., French, S., Allred, J., Greiner, A., & Chen, J. (2021). Red flags, red herrings, and common ground: An expert study in response to state reading policy. Educational Considerations, 47(1). https://doi.org/10.4148/0146-9282.2241
Recommended:
Cummings, A. (2021). Making early literacy policy work in Kentucky: Three considerations for policymakers on the “Read to Succeed” act. Boulder, CO: National Education Policy Center. https://nepc.colorado.edu/publication/literacy
To make his case, Reeves — much like the Mississippi Department of Education itself — is chronically selective in his statistics, telling only part of the story and leaving out facts that would show that many of these gains are either illusory or only seem to be impressive because the state started so far behind most of the rest of the nation.
SoR Claim: All students should receive intensive systematic phonics instruction.
Counter: Research does not support intensive systematic phonics for all students. Research does support basic phonics (see Krashen below) and a balanced approach to literacy instruction (see Wyse & Bradbury).
Intensive Systematic Phonics
[abstract] The aims of this paper are: (a) to provide a new critical examination of research evidence relevant to effective teaching of phonics and reading in the context of national curricula internationally; (b) to report new empirical findings relating to phonics teaching in England; and (c) examine some implications for policy and practice. The paper reports new empirical findings from two sources: (1) a systematic qualitative meta-synthesis of 55 experimental trials that included longitudinal designs; (2) a survey of 2205 teachers. The paper concludes that phonics and reading teaching in primary schools in England has changed significantly for the first time in modern history, and that compared to other English dominant regions England represents an outlier. The most robust research evidence, from randomised control trials with longitudinal designs, shows that the approach to phonics and reading teaching in England is not sufficiently underpinned by research evidence. It is recommended that national curriculum policy is changed and that the locus of political control over curriculum, pedagogy and assessment should be re-evaluated.
[from the full report] Our findings from analysis of tertiary reviews, systematic reviews and from the SQMS do not support a synthetic phonics orientation to the teaching of reading: they suggest that a balanced instruction approach is most likely to be successful.
Wyse, D., & Bradbury, A. (2022). Reading wars or reading reconciliation? A critical examination of robust research evidence, curriculum policy and teachers’ practices for teaching phonics and reading. Review of Education, 10, e3314. https://doi.org/10.1002/rev3.3314
It will help to distinguish three different views of phonics: (1) intensive, or heavy phonics, (2) basic, or light phonics, and (3) zero phonics. Basic phonics appears to have some use, but there are good reasons why intensive phonics is not the way to improve reading.
Intensive Phonics. This position claims that we learn to read by first learning the rules of phonics, and that we read by sounding out what is on the page, either out-loud or to ourselves (decoding to sound). It also asserts that all rules of phonics must be deliberately taught and consciously learned.
Basic Phonics. According to Basic Phonics, we learn to read by actually reading, by understanding what is on the page. Most of our knowledge of phonics is subconsciously acquired from reading (Smith, 2004: 152).
Conscious knowledge of some basic rules, however, can help children learn to read by making texts more comprehensible. Smith (2004) explains how this can happen (p. 152): The child is reading the sentence ‘The man was riding on the h____’ and cannot read the final word. Given the context and recognition of h, the child can make a good guess as to what the final word is: the reader will know that the word is not donkey and mule. This won’t work every time (some readers might think the missing word was ‘Harley’), but some knowledge of phonics can restrict the possibilities of what the unknown words are.
Basic Phonics is the position of the authors of Becoming a Nation of Readers, a book widely considered to provide strong support for phonics instruction: ‘…phonics instruction should aim to teach only the most important and regular of letter-to-sound relationships … once the basic relationships have been taught, the best way to get children to refine and extend their knowledge of letter-sound correspondences is through repeated opportunities to read. If this position is correct, then much phonics instruction is overly subtle and probably unproductive’ (Anderson et al., 1985: 38).
Zero Phonics. This view claims that direct teaching is not necessary or even helpful. I am unaware of any professional who holds this position.
There is a widespread consensus in the research community that reading instruction in English should first focus on teaching letter (grapheme) to sound (phoneme) correspondences rather than adopt meaning-based reading approaches such as whole language instruction. That is, initial reading instruction should emphasize systematic phonics. In this systematic review, I show that this conclusion is not justified based on (a) an exhaustive review of 12 meta-analyses that have assessed the efficacy of systematic phonics and (b) summarizing the outcomes of teaching systematic phonics in all state schools in England since 2007. The failure to obtain evidence in support of systematic phonics should not be taken as an argument in support of whole language and related methods, but rather, it highlights the need to explore alternative approaches to reading instruction.
A focus on synthetic phonics comes at a high cost. Not only in terms of the money it costs to purchase these huge, labor-intensive packages that take many hours of time for struggling readers and their teachers to complete and then test, but also in terms of being relevant to contemporary lifeworlds in which meaning-making and comprehension are critical to successfully navigating everyday life in diverse contexts. They are reductionist, simplistic, and do not provide emerging readers with the functional strategies to make meaning from multimodal texts. It elevates one aspect of our language acquisition above all others when in contemporary times we need to be able to interconnect the meaning forms (text, image, space, object, sound, and speech) and not consider them as separate entities.
A final point: While SoR advocates will rarely acknowledge the harmful consequences of their advocacy in terms of state policies being adopted that research refutes, anyone venturing into social media debates about SoR should emphasize that SoR is often linked with grade-retention legislation, even though grade retention has been discredited by decades of research.
See:
Short-term gains produced by test-based retention policies fade over time with students again falling behind but with a larger likelihood of dropping out of school. These unintended consequences are most prevalent among ethnic minority and impoverished students.
You can count on two things when the National Council on Teacher Quality (NCTQ) releases one of their “reports.”
First, media will fall all over themselves to report NCTQ’s “findings” and “conclusions” without any critical review of whether the “findings” or “conclusions” are credible (or peer-reviewed, which they aren’t).
Second, NCTQ’s “methods,” “findings,” and “conclusions” are incomplete, pre-determined (NCTQ has a predictable “conclusion” that teacher education/certification is “bad”), and increasingly cloaked in an insincere context of diversity and equity (now teacher education/certification are not just “bad” but especially “bad” for minority candidates).
So the newest NCTQ report has been immediately and uncritically amplified by Education Week (which loves to take stands for “scientific” evidence while also reporting on “findings” and “reports” that cannot pass the lowest levels of expectations for scientific research).
There is great irony in this report and EdWeek’s coverage that includes two gems:
“The data was effectively useless,” said Kate Walsh, the president of NCTQ….
Said Walsh: “We do think that states ought to be asking some hard questions of institutions that have really low first-time pass rates. … We shouldn’t be afraid of this data. This data can help programs get better.”
Walsh, in the first comment, is referring to data on passing rates on standardized testing, used for teacher licensure, but the irony is that she would be more accurate if she were referring to the NCTQ “report” itself.
The “report” admits that a number of states refused to cooperate (NCTQ has a long history of lies and manipulation to acquire “data,” and many institutions and organizations have wisely stopped complying since the outcomes of NCTQ’s “reports” are predictable); therefore, this NCTQ “report” is similar to all their other “reports” in terms of incomplete data and slipshod methodology (a review of another NCTQ “report” by a colleague and me, for example, noted that NCTQ’s methodology wouldn’t be accepted in an undergraduate course, much less as credible scholarship to drive policy).
NCTQ and EdWeek, however, are typically not challenged since their claims and coverage fit a misleading narrative that the public and political leaders believe (again ironically in the absence of the data that Walsh claims “[w]e shouldn’t be afraid of”)—everything about U.S. public education, from teacher education to teacher quality, is total garbage.
NCTQ is a hack, agenda-driven think-tank, and EdWeek has eroded its journalistic credibility by embracing NCTQ’s “reports” when it serves their need for online traffic (see EdWeek’s obsession with the misleading “science of reading” movement where EdWeek shouts “science!” and cites NCTQ reports that fail the minimum requirements of scientific methodology).
This “report” on standardized testing in the teacher licensure process shouldn’t be viewed as in any way valuable for drawing conclusions about teacher education (teacher ed is a real problem that I have criticized extensively, but NCTQ hasn’t a clue what those problems are, and frankly, they don’t care) or for making policy.
However, what is interesting to notice is that NCTQ has chosen to use a shoddy analysis of previously hidden data on standardized testing to (once again) damn teacher education and traditional certification (both of which actually do deserve criticism and re-evaluation) even though there is another position one could take when analyzing (more rigorously and using a more robust methodology and the peer-review process) this data.
What if the problem with passing rates is not the quality of teacher education, but the inherent inequity built into standardized testing throughout the entire system of formal education?
Across the educational landscape—from NAEP to state-based accountability testing to the SAT and ACT to teacher licensure exams—standardized testing remains deeply inequitable, mostly correlated with socio-economic status, race, and gender in ways that perpetuate inequity.
Recall that in the very recent past, NCTQ was fully on board with the value-added method (VAM) for determining teacher quality, and that movement eventually fell apart under its own weight since narrow forms of measurement such as standardized tests are actually a lousy way to understand teaching and learning.
If we take Walsh seriously about data (and we shouldn’t), here is a simple principle of gathering and understanding data—one data point (a standardized test score) will never be as powerful or valuable (valid/reliable) as multiple data points:
“Multiple data sources give us the best understanding of something,” said Petchauer, who was not involved in NCTQ’s report. “I get worried when a single high-stakes standardized test can trump other indicators of what a teacher knows and is able to do.”
For one example, the Holy Grail of data credibility for the SAT has always been to be as predictive as GPA (GPA is the result of dozens of data points over years, and thus, a far more robust data set than one test score). GPA is more predictive.
Teacher education, like all education, remains inadequate, especially for marginalized populations, but one of the key elements in that claim is the overuse of standardized testing.
If NCTQ and EdWeek were interested in challenging the use of high-stakes testing, then there may be some value in NCTQ’s most recent “report” (although the data is incomplete and the analysis is shoddy).
NCTQ’s “report” makes a big deal out of the licensure pass rates being hidden until their “report,” but once again, NCTQ’s agenda and total lack of scientific credibility as research make this unveiling even worse than the data being hidden.
Ultimately, NCTQ’s misinformation campaign could be averted if and when media choose to practice what they preach. EdWeek is obsessed with teachers using the “science of reading” but their journalists routinely publish articles citing “reports” that never reach the level of “scientific.”
Whether you are a journalist or a researcher/scholar, you really are no better than the data, evidence, or sources you choose to stand with.
The “science of reading” movement often claims that a systematic intensive phonics-first approach to teaching reading is endorsed by science that is settled, that the National Reading Panel (NRP) is a key element of that settled science, and that teacher education is mostly absent of that “science of reading” (a message that has been central to NCTQ for many years).
These claims, however, misrepresent what evidence actually shows. Here, then, are some evidence-based fact-checks of phonics, NRP, and NCTQ.
There is a widespread consensus in the research community that reading instruction in English should first focus on teaching letter (grapheme) to sound (phoneme) correspondences rather than adopt meaning-based reading approaches such as whole language instruction. That is, initial reading instruction should emphasize systematic phonics. In this systematic review, I show that this conclusion is not justified based on (a) an exhaustive review of 12 meta-analyses that have assessed the efficacy of systematic phonics and (b) summarizing the outcomes of teaching systematic phonics in all state schools in England since 2007. The failure to obtain evidence in support of systematic phonics should not be taken as an argument in support of whole language and related methods, but rather, it highlights the need to explore alternative approaches to reading instruction.
[For context, note some of the problems remaining in how whole language is addressed in this post.]
An Ever-So-Brief Summary with Book Recommendations
1. Phonemic Awareness. According to the studies cited in the NRP report, this is best taught to very young children (K–1) using letters, and when letters are used, PA instruction is considered to be phonics. Therefore, it is not necessary to have a separate instructional time for PA. Rather, children should have opportunities to learn about how language is made up of parts (e.g., onsets and rimes, or word families) as part of phonics instruction. An effective way to do this in the classroom? Provide time for students to write using invented spelling (pp. 2-1 through 2-86). (See Strickland, 1998, for further information about invented spelling.)
2. Phonics. According to the studies cited in the NRP report, there is no evidence that phonics instruction helps in kindergarten or in grades 2 to 9. It does help first graders learn the alphabetic principle—that there is a relationship between letters and sounds. No one method is better than any other. For example, for at-risk first graders, a modified whole language approach and one-on-one Reading Recovery–like instruction both helped children with comprehension (pp. 2-89 through 2-176). This phonics instruction should be conducted in the context of whole, meaningful text. (See Moustafa, 1997, for information on embedded, whole-part-whole instruction.)
4. Fluency. According to the authors of the Fluency report, the practice of round robin (at any age) does not help children and can indeed hurt them. However, according to the studies cited in the Fluency report, repeated oral reading (K–12) helps with comprehension because reading fluidly instead of word by word helps readers better understand the text. Ways to help with this? Try such things as readers theater (pp. 3-1 through 3-43). (See Opitz and Rasinski’s Good-bye Round Robin: 25 Effective Oral Reading Strategies [1998] for additional instructional suggestions.)
4. Vocabulary (grades 3 to 8). One method is not better than another. Children learn most of their vocabulary incidentally (pp. 4-15 through 4-35). (For further information about vocabulary learning, see Nagy, 1988.)
5. Comprehension (grades 3 to 6). Children need to learn that print makes sense and to develop a variety of strategies for making sense of print (pp. 4-39 through 4-168). (For further information on teaching for comprehension, see the references listed in Chapter 8: Beers, 2002; Sibberson & Szymusiak, 2003; Taberski, 2000; Tovani, 2000; see also Harvey & Goudvis, 2000.)
Across all of these recommendations? According to the studies cited in the NRP report, if we want children to learn something, we need to teach them that something. Want great readers? Then teach children what great readers do.
NCTQ (Teacher Education)
NOTE: This is more complicated, but first I am posting an older (2006) and a newer (2020) report from NCTQ, both making essentially the same claim that teacher education fails to teach the “science of reading.” Then, I include a link to several reviews that show that NCTQ’s “reports” are methodologically flawed and essentially propaganda, not “science.”
His work and career have shifted since then, but I have remained in contact through his public writing. Coinciding with a mostly fruitless Twitter debate about how the media continues to misrepresent the challenges and realities of teaching reading, then, I was strongly drawn to DeWitt’s 3 Reasons I Do Not Engage In Twitter Debates.
Much of his examination of the paradox that is social media is extremely compelling to me; his three reasons, in fact, resonate powerfully: They’re rarely about common understanding, they make you look really crazy to onlookers, and he’s not good at them.
When I find myself (foolishly) crossing DeWitt’s pointed line, I try to justify the effort with this (mostly idealistic and probably misguided) rationale: Making a nuanced and detailed case, even through the limitations of Twitter, will likely not persuade the Twitter thread members, but can provide a platform for learning to those observing the discussion.
However, I find DeWitt’s conclusions hold fast, and thus, offering here the details and the nuance has a better, although also limited, potential for changing the dialogue and reaching more understanding.
Instead of providing yet another discrediting of yet another media misrepresentation of the “science of reading” (see some of that work listed below), I want to offer here a checklist for those who want to navigate the media coverage in an informed and critical way.
Mainstream media education journalism is routinely bad because of some broad problems inherent in journalism: journalists tend to be generalists, yet media outlets assume a journalist can and should cover specialized fields; journalism remains bound to “both sides” coverage of topics that misrepresents the actual balance of evidence in those specialized fields; and, as I outline below, mainstream media tend to be trapped in a sort of presentism that lacks historical context.
Below, with additional sources to support and illuminate the problems, is a checklist for navigating mainstream media’s coverage of the “science of reading”:
Mainstream media’s errors in covering the science of reading include the following:
[ ] Misrepresenting balanced literacy (BL) and whole language (WL) to discredit them. To evaluate media coverage of reading instruction, know that reading ideologies such as balanced literacy and whole language exist in very complex realities. First, as links below detail, even when teachers or schools claim to be implementing BL or WL, there is ample evidence that traditional and more isolated practices are actually in place. Second, and extremely important to the current and historical versions of the reading wars, both BL and WL recognize and endorse a significant place for phonics instruction in early literacy; as Stephen Krashen explains pointedly: “Zero Phonics. This view claims that direct teaching is not necessary or even helpful. I am unaware of any professional who holds this position.”
[ ] Misrepresenting the complex role of phonics in reading in order to advocate for phonics programs. Related to the first point above, phonics advocacy tends to suggest falsely that some literacy experts support no phonics instruction and that all children must receive systematic intensive phonics instruction; these extreme polarities distort, ironically, what the broad and complex research base does show about how children learn to read as well as the role of phonics in that process.
[ ] Lacking historical context about the recurring “reading wars” and the false narratives of failing to teach children to read. The media, the public, and political leaders have chosen a crisis narrative for teaching reading throughout the twentieth and into the twenty-first century. That framing as crisis has mostly obscured both the problems that do stunt effective reading instruction and the complex nature of teaching reading as well as the current research base on teaching and literacy development.
[ ] Overemphasizing/misrepresenting the value of the National Reading Panel (NRP) report, ignoring that it is a narrow and politically skewed report. A central component of No Child Left Behind was the NRP; however, as a key member of the panel has detailed, that report was neither a comprehensive and valid overview of the then-current state of research on teaching reading nor a foundational tool for guiding reading practices or policy. Yet, media coverage routinely references the NRP as gold-standard research and laments its lack of impact (although the NRP report did spawn a disturbing scandal concerning federal funding and textbook adoptions).
[ ] Citing bogus reports from discredited think tanks such as NCTQ. Well over a decade ago, Gerald Bracey warned about the growing influence of agenda-driven think tanks aggressively promoting reports before they are peer reviewed; since the mainstream media and most journalists are under-funded and overworked, press-release journalism has become more and more common, especially regarding education and often in terms of how so-called research is framed for the public. With the recent focus on the “science of reading,” the scapegoat of the day is teacher education; the narrative goes that teachers today do not know the science of reading because teacher education programs do not teach the science of reading. Often as proof, the mainstream media resort to anecdote (they talk to a teacher or two who claims not to have been taught the science of reading) and to citing bogus reports masquerading as research—notably the work of NCTQ, a think tank that has aggressively and falsely attacked teacher education in report after report, using slipshod methods and devious processes to gather the data they claim to analyze.
[ ] Scapegoating teacher education while ignoring the two greatest influences on reading: poverty and the reading programs adopted to comply with standards and high-stakes testing. There is ample room to criticize teacher education, particularly focusing on the problems with credentialing and the flaws inherent in the accreditation process, but the current media urge to blame teacher education for either how reading is taught or the errors in how reading is taught distracts from some hard facts about measurable reading achievement: first, standardized tests of all kinds are more strongly correlated with socio-economic and out-of-school factors than with teacher, teaching, or school quality; and second, the blame-teacher-education narrative glosses over the fact that almost all reading instruction in U.S. public schools is mandated by standards, high-stakes testing, and adopted reading programs regardless of what teachers learned in their certification programs.
[ ] Conflating needs of students with special needs and needs of general population of students. The genesis of the most recent version of the reading wars that focuses on the “science of reading” appears to be grounded in a growing advocacy for children either not diagnosed or misdiagnosed for issues related to dyslexia. Parents of those children have been very politically active, and while their concerns for children with special needs are valid, the media and politicians have overreacted to that narrow issue and over-generalized the needs of those students to all students. This advocacy has also run roughshod over the actual and more nuanced research base on dyslexia itself. In short, parents advocating for their children should be honored and heard, but parents should not be driving reading instruction or reading policy.
[ ] Emphasizing voices of cognitive scientists over literacy professionals. Two common patterns in media coverage of education and specifically reading are that journalists perpetuate both a gender and a discipline bias in whose voices are highlighted; notably, mostly men who are cognitive scientists are used to drive the agenda while women who are literacy practitioners and scholars are either ignored, marginalized as “critics,” or scapegoated as misguided advocates of BL or WL.
[ ] Trusting silver-bullet, one-size-fits-all claims about teaching and learning. Fundamentally, the historical and current flaw in the reading wars, even one framed as the “science of reading,” is that phonics advocacy reaches for “all students must have systematic intensive phonics programs,” buoyed recently by “but intensive phonics programs won’t hurt any students.” However, all teaching and learning proves to be far more complex than these claims. If we return to BL as a reading philosophy, we can emphasize that each child (not all children) should receive the type and amount of direct phonics instruction they need to begin and then grow as readers; that type and amount is difficult to prescribe, and often children are mis-served when systematic phonics programs are adopted because fidelity to the program typically trumps the actual goal of reading instruction, eager and autonomous readers. When a child is mandated to complete a phonics program, regardless of that child’s needs, that time would have been much better spent with the child reading by choice; therefore, systematic phonics programs do in fact harm students when they are implemented as “all students must.”
[ ] Feeding a false narrative blaming teachers and teacher educators, both of whom are deprofessionalized and powerless within accountability structures. There are some dirty little secrets about education that discredit much of how media cover teaching and learning: as noted above, measurable teacher impact on student learning is quite small; teachers are mostly complying with mandates, not making instructional or assessment decisions; and teacher educators have very little impact on how teachers teach once they are in the classroom and required to conform to the mandates linked to standards and high-stakes testing.
Don’t believe it because NCTQ based its claims on one weak study about what every teacher should know and then conducted a review of textbooks and syllabi that wouldn’t pass muster in an undergraduate research course.
Next, despite genuinely good intentions, Kecio Greenho, regional executive director of Reading Partners Charleston, claims in an Op-Ed for The Post and Courier (Charleston, SC) that South Carolina’s Read to Succeed, which includes a provision for third-grade retention based on high-stakes test scores, “is a strong piece of legislation that gives support to struggling readers by identifying them as early as possible.”
Don’t believe it because Read to Succeed is a copycat of similar policies across the U.S. that remain trapped in high-stakes testing and grade retention, even though decades of research have shown retention to be very harmful to children.
When you are confronted with claims about education, too often both the source and the claim are without merit; but be aware that even those with good intentions can make false claims as well.
As part of an ongoing series of reports by the National Council on Teacher Quality (NCTQ), Learning About Learning: What Every New Teacher Needs to Know makes broad claims about teacher education based on a limited analysis of textbooks and syllabi. The report argues that teacher education materials, specifically educational psychology and methods textbooks, are a waste of funds and do not adequately focus on what the report identifies as six essential strategies. These inadequacies, the report contends, result in ill-prepared teacher candidates lacking in “research-proven instructional strategies” (p. vi). The report offers recommendations for textbook publishers, teacher education programs, and state departments of education. However, it is not grounded in a comprehensive examination of the literature on teaching methods, and it fails to validate the evaluative criteria it employs in selecting programs, textbooks, and syllabi. The single source it relies on to justify its “six essential strategies” provides limited support for NCTQ’s claims. This primary source concludes, with only one exception, that the evidence supporting each of the six strategies is only moderate or weak. Limiting the analysis to one source that provides only tepid support renders the report of little value for improving teacher preparation, selecting textbooks, or guiding educational policy.
In the spirit of good journalism, let me start with full disclosure.
I am on the Editorial Board of NEPC (you’ll see why this matters in a few paragraphs), and that means I occasionally provide blind peer review of research reviews conducted by scholars for NEPC. That entails my receiving a couple very small stipends, but I have never been directly or indirectly asked to hold any position except to base my reviews on the weight of the available evidence.
Further, since this appears important, I am not now and have never been a member of any teacher or professor union. Recently, I spoke at a local union-based conference, but I charged no fee (my travel from SC to TN was covered).
Finally, I have been confronting the repeatedly poor journalism covering education and education reform for several years, notably see my recent piece, Education Journalism Deserves an F: A Reader.
My key points about the failures of journalism covering education include (i) journalists assuming objective poses that are in fact biased, (ii) the lack of expertise among journalists about the history and research base in education, and (iii) the larger tradition in journalism of dispassionately (again, a pose, not a reality) presenting “both sides” of every issue regardless of the credibility of those sides or of whether the issue is really binary (and virtually no issue is binary).
So I remain deeply disappointed when major outlets, here Education Week, and experienced journalists, specifically Stephen Sawchuk, contribute to the worst of education reform by remaining trapped in the worst aspects of covering education.
That framing pits NEPC against the Thomas B. Fordham Institute—although a number of others with stakes in the debate are listed. What is notable here is how Sawchuk chooses to characterize each; for example:
Still other commenters drew on a brief prepared by the National Education Policy Center, a left-leaning think tank at the University of Colorado at Boulder that is partly funded by teachers’ unions and generally opposes market-based education policies….
Thomas B. Fordham Institute, which generally backs stronger accountability mechanisms in education….
Only a handful of commenters were outright supportive of the rules. At press time, a coalition of groups were preparing to submit a comment backing the proposal. The coalition’s members included: Democrats for Education Reform, a political action committee; Teach Plus, a nonprofit organization that supports teacher-leadership efforts; the National Council on Teacher Quality, an advocacy group; and the alternative-certification programs Teach For America and TNTP, formerly known as The New Teacher Project.
Yet Fordham merely “backs stronger accountability,” and not a single group in the third list receives a “nudge,” despite, for example, NCTQ entirely lacking credibility.
Also, NEPC gets a hyperlink, but none of the others do? And where is the link to the actual report from NEPC? Is there any credible evidence that the report on the USDOE’s proposal is biased or flawed?
Since traditional faux-fair-and-balanced journalism continues to mislead, and since we are unlikely to see a critical free press any time soon, let me, a mere blogger with 31 years of teaching experience (18 in a rural public SC high school, and the remainder in teacher education) and about twenty years of educational scholarship, offer some critical clarifications.
On December 3, 2014, the U.S. Department of Education released a draft of proposed new Teacher Preparation Regulations under Title II of the Higher Education Act with a call for public comments within 60 days. The proposal enumerates federally mandated but state-enforced regulations of all teacher preparation programs. Specifically, it requires states to assess and rate every teacher preparation program every year with four Performance Assessment Levels (exceptional, effective, at-risk, and low-performing), and states must provide technical assistance to “low-performing” programs. “Low-performing” institutions and programs that do not show improvement may lose state approval, state funding, and federal student financial aid. This review considers the evidentiary support for the proposed regulations and identifies seven concerns: (1) an underestimation of what could be a quite high and unnecessary cost and burden; (2) an unfounded attribution of educational inequities to individual teachers rather than to root systemic causes; (3) an improperly narrow definition of teacher classroom readiness; (4) a reliance on scientifically discredited processes of test-based accountability and value-added measures for data analysis; (5) inaccurate causal explanations that will put into place a disincentive for teachers to work in high-needs schools; (6) a restriction on the accessibility of federal student financial aid and thus a limiting of pathways into the teaching profession; and (7) an unwarranted, narrow, and harmful view of the very purposes of education.
If there is anything “left-leaning” about this review, or any evidence that union money has skewed it, I strongly urge Sawchuk or anyone else to provide that evidence instead of innuendo masked as balanced journalism.
And let’s unpack “left-leaning” by looking at NEPC’s mission:
The mission of the National Education Policy Center is to produce and disseminate high-quality, peer-reviewed research to inform education policy discussions. We are guided by the belief that the democratic governance of public education is strengthened when policies are based on sound evidence.
A revision appears to be in order, so let me help there also:
Still other commenters drew on a brief prepared by the National Education Policy Center, a left-leaning think tank committed to democratic and evidence-based policy at the University of Colorado at Boulder that is partly funded by teachers’ unions and generally opposes market-based education policies not supported by the current research base….
Since NEPC is balanced against Fordham, it seems important to note that NEPC has three times awarded Fordham its Bunkum Award (2010, 2008, 2006) for shoddy and biased reports; thus, another revision:
Thomas B. Fordham Institute, a free-market think tank which generally backs stronger accountability mechanisms in education regardless of evidence to the contrary.
I added the hyperlink to the Fordham mission statement, which also uses coded language (“options for families,” “efficient,” “innovation,” “entrepreneurship”) to mask its unwavering support not for “stronger” accountability but for market-based policy.
What does all this teach us, then?
All people and organizations—including Education Week, NEPC, and Fordham—are biased. To pretend some are and some aren’t is naive at best and dishonest at worst.
NEPC, I believe, freely admits there is a bias in which reports are selected for review (just as EdWeek chooses which issues to cover and where to place and how to emphasize those pieces), but the reviews implement the most widely accepted practice for transparency and accuracy: blind peer review. Further, the reviews are freely available online for anyone to examine carefully and critically.
The real story that the mainstream media refuse to cover is that the USDOE (and so-called reformers such as TFA, NCTQ, DFER, and TNTP) lacks the experience and expertise to form education policy, while the actual researchers and practitioners in the field of education remain marginalized.
The greatest failure of the mainstream media is the inability of journalists to recognize, and then address, that their “reformers v. anti-reformers” narrative is a straw man; the real battle is between those seeking reform built on the research base (researchers and educators who are consistently marginalized and demonized) and the rich and powerful, who lack credibility yet remain committed to accountability, standards, and high-stakes testing as a mask for market ideologies, despite three decades of research showing that approach has not worked.
And since I opened with transparency, let me end with a clear statement: as a teacher educator, I am on record that teacher education desperately needs reforming, as do public education broadly, professional education organizations, and teacher unions. Thus, I recommend the following: