Category Archives: Education

Poem: knowledge

[Header Photo by Gabriella Clare Marino on Unsplash]

i didn’t know
what i was doing

it was like
falling through the ice

when you didn’t know
you were walking on a frozen lake

and the last thing
you heard before sinking

slipped out of your mind
because of the shock

frigid water pulling you
into darkness and forgetting

you should have known
(you whisper over my shoulder)

and i suddenly realize
the stiffness of my fingers

even as i am awash
with the urge to touch your skin

i turn to reach for you
silhouetted by a setting sun

i should have known
what i was doing

—P.L. Thomas

A Man’s World (pt. 3): Gaiman Edition

[Header Photo by Museums of History New South Wales on Unsplash]

I am currently reading Haruki Murakami’s newest novel, The City and Its Uncertain Walls. In some ways, the story is not typical of his other novels (I have read all of his work and co-edited a volume on him).

However, this novel maintains a recurring aspect of his works—men who have lost or been left by women (directly expressed in his short story collection Men Without Women).

Reading this novel comes after I recently submitted a chapter on Murakami expanded from a blog post about his 2017 story collection; in that chapter, I address concerns about whether Murakami’s fiction slips too often into sexism and objectifying women.

While the questions about how Murakami deals with women in his fiction create tension in me as a reader and scholar, I am more disturbed by, and struggle much more with, the men writers and creators who persist in proving that they mistreat, abuse, and assault women in their (sometimes mostly) secret lives.

My reading and fandom life is littered with men writers I once admired but whose work I now find hard to appreciate because of their failings as men, as humans—Woody Allen, J.D. Salinger, e.e. cummings, Cormac McCarthy, and Neil Gaiman (see several posts below addressing these men).

The debate about where the line is between a person’s creative work and their personal life has a long history—and people disagree about whether that work can still be respected while acknowledging, or even rejecting, the personal flaws (and much worse).

For example, Ryunosuke Hashimoto frets about Murakami: “The negative image that has been associated with Murakami is so frequently spotted on social media as a consequence of the new generational standard that one wrong cancels out all of the good that is contained in a work.”

The recent revelations about McCarthy and Gaiman seem to rise far above “one wrong” into predatory patterns and abhorrent abuse.

Concurrent with reading the seemingly late mainstream coverage of Gaiman in Vulture, I have been watching the series House for the first time (while my partner is re-watching one of her favorite series).

House is challenging us in similar ways, considering how much the problems with the episodes weigh against the compelling aspects of the show.

To me, House tries to be topical but can fall cartoonishly flat, such as Spin (S2E6) about a professional cyclist. The cycling and discussions around cheating (EPO and blood doping) are wildly bad, especially the scene of actual bicycle racing.

But we also had just watched Skin Deep (S2E13) a day before the Gaiman article dropped in Vulture.

Skin Deep, for me, has many of the flaws found in the Spin episode, likely from trying too hard to address then-current controversies.

The episode covers a number of controversial topics—sexualizing and objectifying young women (the main character is a 15-year-old supermodel), sexual abuse (the father admits to sex with his daughter), and then the disturbing big reveal (the young woman is discovered to be intersex with cancerous testes).

Dr. House’s behavior is glib, offensive, and disturbing, including misinformation and not-so-subtle bigotry.

Re-watching Friends, Seinfeld, and The Office has left us cringing as well.

So from what to do about Gaiman’s work to navigating Murakami and series such as House, I remain troubled about where the line is between creative works and the flawed, even despicable, men behind them.

I also must stress that we are in a political moment where the consequences for being a sexual predator or committing sexual assault are being lessened, even erased. The rights of women are being eroded; yes, it is more and more a man’s world, a world hostile and calloused to the lives of girls and women.

The Gaiman moment is an(other) opportunity to say there is a line, it has been crossed, and there must be consequences.

There are thousands of wonderful creative works by people who do not have these transgressions, these failures to respect the humanity of others, hanging over them and their works.

I’ll keep watching House, and I am pretty comfortable with how I understand and appreciate Murakami (and I could be wrong). But Gaiman deserves consequences of a magnitude from which he will not recover as an artist—and others will (maybe) learn as well.


See Also

“He knows, or thinks he knows”: It’s Still a Man’s (Hostile) World

True Detective: It’s Still a Man’s (Hostile) World, pt. 2

Flawed Men Artists and Their Crumbling Art

The Woody Allen Problem Is Our Problem

Recommended: Larcenet’s Graphic Adaptation of McCarthy’s The Road

Writing Process: Scholarly/Academic Writing Edition

[Header Photo by 🇸🇮 Janko Ferlič on Unsplash]

Like many academics working in higher education, I spent several days over my holiday break preparing my courses for spring (two first-year writing seminars and one upper-level writing/research course) and then an intense three days writing and submitting a scholarly chapter on growth mindset and grit for an upcoming book.

I am fortunate, I think, because my teaching life and my writing life continually inform each other. Especially when teaching my writing-intensive courses, I teach as a writer and scholar, forefronting my writing/scholarship in my teaching.

My chapter on mindset and grit gave me a perfect opportunity to think deeply about and prepare new materials for my courses this spring (access those artifacts in this folder: Scholarly Essay Process).

As a writing teacher, I have been struggling throughout my 40-plus-year career with the negative impact of templates and scripts on students developing the skills and knowledge they need to be autonomous and compelling writers.

I have rejected, for example, the five-paragraph essay model, and I have challenged the mechanical implementation of the writing process as a sequential series of steps.

The problem is that this crusade against templates and scripts is not as simple (or effective) as I initially believed many decades ago.

Another problem with rejecting templates and scripts is that a significant amount of scholarly and academic writing is bound by scripts, word-count limits, formatting requirements, and citation/style guidelines.

My evolved and more nuanced position on templates and scripts in writing instruction and assignments acknowledges that beginning writers need opportunities to read widely in order to develop their own “scripts” for a wide variety of writing types. Of course they also need structure, but starting with the rigid template does far more harm than good for emerging writers.

Then, as students-as-writers move into high school and college, they need more experiences with authentic templates and guidelines found in much of academic and scholarly writing.

The editors of the chapter I just completed, for example, sent writers a content outline for chapters to follow as well as a word count limit and citation/style sheet requirements (APA).

When I write reviews for a think tank, I receive the same structures and very rigid expectations for staying within those limits (including their own in-house style sheet).

The irony, then, is that this spring, my first-year writing seminar will focus on challenging scripts and “rules” for essays and writing while my upper-level writing/research class will be writing a strictly scripted major scholarly essay (I assure upper-level students that this experience will prepare them for graduate school, and I recently received an email from a former student in this course telling me “thank you” for just that).

While I feel like my teaching of writing has better bridged the gap between helping students acquire the broad concepts of effective and compelling writing (versus imposing on them artificial templates and “rules”), I continue to struggle with fostering in students the sort of writing process that would better serve them.

Similar to my stance on the essay form, I teach that the writing process is not sequential or a rigid template, but a set of concepts that most writing addresses to help produce the final written product needed for the purposes of that project. In other words, these broad concepts are fairly stable, but the so-called “steps” may differ, and the time spent on each “step” likely will vary for different writing purposes.

For scholarly writing, the writing process includes much more than composing sentences and paragraphs. Here, then, is a brief overview of my recent process this week of writing and submitting my book chapter on mindset and grit.

Let me start with a caveat that I think should be shared with students.

When most scholars start a writing project, we are dealing with content that we have expertise in; my project on mindset and grit has years of blogging and gathering research behind the brief process I followed over three days.

My first steps included revisiting the chapter guidelines sent by the editors, confirming formatting, citation, word count, etc.

Then, as I stress to students, I prepared my Word document, conforming to APA guidelines and inserting the subheads, etc., along with the chapter template required by the editors. One concern I have with students is they tend to address formatting last, and I urge them to address this tediousness first. (See my submitted and not yet edited copy here.)

Next, I put the required page break at the end of the document and prepared my working references list. To create the list, I reviewed my many blogs on the topics, searched through my library databases, reached out to other scholars for recommendations, and carefully culled sources from the references of the sources I had gathered (working from the most recent publications).

Let me stress here that, as I worked, these “steps” became more and more recursive: while working in one “step,” I would invariably return to and revise other “steps” (I caught simple formatting edits and typos, for example, in many of the “steps” detailed here, even though that is considered the editing “step”).

One goal as I worked was to create a compelling opening that included a thesis paragraph clearly aligned with the subheads and organization of the chapter. I drafted that opening on the first day and then I carefully edited and formatted all of my references, checking APA and loosely thinking about removing or adding needed sources. See the opening here:

Literacy educator and scholar Lou LaBrant (1947) asserted almost eight decades ago: “A brief consideration will indicate reasons for the considerable gap between the research currently available and the utilization of that research in school programs and methods” (p. 87). While valid in the mid-2020s, a slightly more nuanced argument needs to be proposed: Scientific research on teaching and learning is often lost in translation once it is packaged by the education marketplace and reduced to legislation and policy. In other words, what is popular, packaged, and mandated in education is too often an oversimplified and even misguided version of scientific findings, nothing more than a fad. An even more complicating problem, as well, is that classroom practices likely should be guided by more than experimental and quasi-experimental research (Wormeli, n.d., The problem).

Over the past decade-plus, two examples of research lost in translation include growth mindset and grit. Carol Dweck (2008), often publishing with others (Dweck & Yeager, 2019), examines the role of mindset in academic success. Grit is grounded in the research and advocacy of Angela Duckworth (2018); however, a great deal of the popularization of grit occurred through the journalism and advocacy of Paul Tough (2013), who promoted “no excuses” charter school practices, specifically the Knowledge Is Power Program (KIPP) charter chain (Abrams, 2020). While growth mindset and grit are distinct concepts and educational movements, they tend to share similar spaces and problems in practice.

This chapter explores the central claims of growth mindset and grit before considering the validity of those claims in the context of the following critical questions: How are growth mindset and grit grounded while also perpetuating bootstrapping, rugged individualism, and meritocracy myths? What are the roles of deficit ideologies (word gap, victim blaming, racism, sexism, classism, etc.) in popular advocacy for growth mindset and grit? As well, the research and popular claims about growth mindset and grit are interrogated at three levels: (1) research validity and robustness, (2) evidence-based or ideologically based, and (3) racism and classism.


The next morning I reviewed and organized all of my sources to comply with the structures required in the chapter. Here, I think, is where students are lost because of their previous experiences writing inauthentic research papers (in which many students gather the required number of sources and then simply walk the reader through their sources, writing about the sources and not their topic).

I created a table by my topics, mindset and grit, and then by the three major themes/patterns I planned to address; the key here, for students, is recognizing the need to focus on the patterns in their discussion and to cite multiple sources for those patterns.

I also created a listing of sources by my major topics, and then carefully reviewed them all to be sure I had classified them correctly and to identify the few I wanted to cite or quote more fully (I stress to students who have more experience with MLA and textual analysis that quoting is only one way to give evidence in scholarly writing and is often discouraged in many disciplines when not doing textual analysis).

Analyzing and organizing my evidence is designed to create writing that offers a valid and compelling generalization followed by a representative source or two to support the pattern; for example:

However, the current public discourse around mindset has made a significant turn to being critical and even skeptical (Study finds, 2018; Tait, 2020; Young, 2021a, 2021b). This shift is spurred by the growing research base that fails to replicate the primary claims of mindset advocacy or shows negative correlations or harm in implementing mindset intervention over other aspects of learning and achievement (Brez et al., 2020; Burgoyne et al., 2020; Burnette et al., 2018; Dixson et al., 2017; Ganimian, 2020; Li & Bates, 2019; Macnamara & Burgoyne, 2023; Sisk et al., 2018; Schmidt et al., 2017). Brez et al. (2020) conclude: “The pattern of findings is clear that the intervention had little impact on students’ academic success even among sub-samples of students who are traditionally assumed to benefit from this type of intervention (e.g., minority, low income, and first-generation students)” (p. 464). And Macnamara and Burgoyne (2023) make a more problematic assertion:

Taken together, our findings indicate that studies adhering to best practices are unlikely to demonstrate that growth mindset interventions benefit students’ academic achievement. Instead, significant meta-analytic results only occurred when quality control was lacking, and these results were no longer significant after adjusting for publication bias. This pattern suggests that apparent effects of growth mindset interventions on academic achievement are likely spurious and due to inadequate study design, flawed reporting, and bias. (p. 163)


Again, I need to emphasize that students must understand that several of these steps require and prompt continuous revision and editing. I returned to my title and the thesis paragraph for revision as I drafted the two major subhead sections and the subheadings under those. In other words, I was then in a constant state of seeking coherence in the chapter whereby all the parts match and create the whole (which then is reinforced by the final section/closing of the chapter).

For students, I will stress that I drafted an opening on the first day, drafted the first major subheading section the second day, and then drafted the second major section and closing the third day. But the writing parts were embedded in a great deal of reading, cataloguing, and organizing.

I also completed a full initial draft, but then let that sit for a while before doing a full re-read, revision, and editing session with the entire chapter in front of me; I did several re-read-revise-edits along the way as well.

For students, then, here is what they should see as elements of a writing process for academic/scholarly writing:

  • Identify writing assignment guidelines, formatting requirements, and citation style.
  • Prepare your Word document per those guidelines, creating your initial title, subheads, and any guiding bullet points or questions detailed in the assignment.
  • Create a working references list, addressing citation formatting before working further.
  • Create an initial compelling opening (multiple paragraphs) with a thesis paragraph that correlates with the title, the organization, and subheads of the essay.
  • Read, re-read, organize, and catalogue (patterns/themes) references based on the organization of the assignment; identify the representative anchor sources that will be used to elaborate on the patterns identified and cited with multiple valid sources. Be sure to carefully identify direct quotes and include citation, page or paragraph numbers, etc., when creating a matrix of patterns and analyses of the evidence.
  • Revise and edit throughout these steps, even significant revisions such as addressing the title, the thesis paragraph, or organization if the review of the evidence prompts those revisions.
  • Create a full first draft, and then let that sit. The final step should be a careful re-read to revise and edit before submitting.

The essay form and the writing process are important concepts for developing writers and students to understand, and that understanding must come from authentically engaging with both in supportive environments.

The challenge with teaching students to write generally and then as academics/scholars is that there are too many moving parts and simply no hard and fast “rules” to govern either the essays they write or the process they use to write them.

The Outlier Story: How Education Journalism (Almost) Always Gets It Wrong

[Header Photo by Will Myers on Unsplash]

The first two decades of my career as a literacy educator were spent as a high school English teacher in rural Upstate South Carolina, at the high school I had graduated from, in my home town.

This began in 1984 when SC had passed sweeping education legislation that would become the standard legislative approach across the US—accountability policy grounded in state standards, high-stakes testing (grades 3 and 8 with exit exams in high school starting in grade 10), and school report cards.

SC was an early and eager adopter of the “crisis” rhetoric fueled by A Nation at Risk report released under the Reagan administration.

That high school and town were populated mostly by working-class and poor people; the town and smaller towns served by the high school were dead or dying mill towns.

Schools had far more poverty than the data showed because rural Southerners often refused to accept free and reduced meals (the primary data point for measuring poverty in schools).

However, for many years the high school ranked number 1 in the entire state for student exit exam scores in math, reading, and writing. Because of our student demographics (and notably because these students had relatively low or typical scores in grade 8 testing), we were what many people would refer to as a “high flying” or “miracle” school.

In more accurate statistical terms, we were an “outlier” data point in the state.

I have now worked in SC education across five decades, and the overwhelming body of data related to student achievement in the state has matched what all data show across the US—measurable student learning is most strongly causally related to the socioeconomic status and educational levels of those students’ parents.

Further, the full story about how we achieved outlier status includes two aspects.

One is that from grade 8 to grade 10 testing, the population of students changed because of students dropping out of school (and these were among the lowest scoring students in grade 8). In fact, students were often encouraged to drop out and enroll in adult education (a two-fer win for the school because they would not be tested and enrolling in adult ed removed them from the drop-out data).

A second part of the story is that students scoring low in grade 8 were enrolled in two math and two ELA courses in grade 10. The “extra” courses were specifically designed as test-prep for state testing. We rigorously adopted a teach-to-the-test culture.

For the state writing exam, for example, we discovered that the minimum text a student could produce was an “essay” with a three-sentence introduction, a five-sentence body, and a three-sentence conclusion. Students in the “extra” ELA course wrote dozens of 3-5-3 essays in grade 10 with the teacher focusing on helping students avoid the “errors” that would flag the text as below standard.

Many of us found the 3-5-3 approach to writing became a huge problem when students were required to write in other courses; even as students “passed” the state writing exam, they were not performing well as writers in other courses, and even refusing at times to write more than 3-5-3 essays.

In the high-stakes accountability era, we did do a great deal of good: many students across the US passed all their courses but could not receive a diploma because of exit exams, yet most of our students graduated, and not because we did anything underhanded.

Yet, I must stress that how we accomplished our outlier status was likely not scalable, but more importantly, our approach should not be replicated by other schools.

Fast-forward 40 years, and education journalism has written hundreds and hundreds of stories not only pursuing “outlier” schools but also carelessly framing them as proof both of the ongoing (permanent) education crisis and of the claim that “status quo” education refuses to implement what we know “works.”

The newest iteration of this misleading story in education is the “science of” movement grounded in the “science of reading” story first popularized by Emily Hanford, who wrote about a “miracle” school in Pennsylvania. This compelling but false story has been parlayed into an even more successful podcast as well as spawning dozens of copy-cat articles by education journalists across the country.

Media, however, never covered Gerald Coles’s careful debunking of the “miracle” school Hanford featured. Similar to my story above about the beginning of my teaching career, the full story of that school was quite different than what was covered in the media.

And as 2024 drew to a close, education journalists simply had no other lens than this: Which School Districts Do the Best Job of Teaching Math?

To be blunt, education journalists are mistakenly compelled to focus on the “exceptional” districts (outliers) while ignoring the more compelling red line that, again, shows what, in fact, is normal and what can and should be addressed in terms of educational reform—the negative impact of poverty on educational attainment.

So here is a story you likely will not read: Education journalism is failing public education, and has been doing so for decades.

Education journalists are blindly committed to the “crisis” and “outlier” stories because they know people will read and listen to them.

The “outlier” story makes for a kind of “good” journalism, I suppose, but the problem is that these stories become popular beliefs and then actual legislation and policy.

The current “science of” movement is riding a high wave because of the “science of reading” tsunami. But like all the misguided reforms since the original false education story, A Nation at Risk, this too will crash and reveal itself as a great harm to students, teachers, and our public school system.

This is boring, I know, but most outlier stories are ultimately false or they simply are not replicable or scalable, as I explained in my opening story.

If we genuinely care about student learning, teaching, and the power of public education, we need education journalists more dedicated to the full story and not to the outliers that help drive their viewing numbers.


Recommended

Big Lies of Education: A Nation at Risk and Education “Crisis”

Big Lies of Education: Reading Proficiency and NAEP

Big Lies of Education: National Reading Panel (NRP)

Big Lies of Education: Poverty Is an Excuse

Big Lies of Education: International Test Rankings and Economic Competitiveness

Big Lies of Education: Grade Retention

Poem: the floor fell out from under us (redolent)

[Header Photo by Emma Frances Logan on Unsplash]

I wanted to get rid of everything redolent of the past

The City and Its Uncertain Walls, Haruki Murakami

The floors are fallin’ down from everybody I know

“Bloodbuzz Ohio,” The National


we stood
somewhere between

we don’t know
and we don’t care

then the floor fell out
from under us

and there was a band
they all looked 12 years old

someone said
they were cute

we said
they were awfully young

isn’t that how it goes
now nearly everyone

in the room
younger than you

way more talented
and cuter

but then we were
the floor not under us

it wasn’t
the falling

it was
the landing

when the floor fell out
from under us

—P.L. Thomas

Big Lies of Education: Grade Retention

Update [November 2025]

Early Grade Retention Harms Adult Earnings, Jiee Zhong [access PDF HERE]

See also: American Economic Journal: Applied Economics (Forthcoming)


The Big Lie of grade retention in the US is that it is often hidden within larger reading legislation and policy, notably since the 2010s:

The Effects of Early Literacy Policies on Student Achievement, John Westall and Amy Cummings

Westall and Cummings, in fact, have recently found:

  • Third grade retention (required by 22 states) significantly contributes to increases in early grade high-stakes assessment scores as part of comprehensive early literacy policy.
  • Retention does not appear to drive similar increases in low-stakes assessments.
  • No direct causal claim is made about the impact of retention since other policy and practices linked to retention may drive the increases.

However, their analysis concludes the following about grade retention as reading reform:

Similar to the results for states with comprehensive early literacy policies, states whose policies mandate third-grade retention see significant and persistent increases in high-stakes reading scores in all cohorts. The magnitude of these estimates is similar to that of the “any early literacy policy” estimates described in Section 4.1.1 above, suggesting that states with retention components essentially explain all the average effects of early literacy policies on high-stakes reading scores. By contrast, there is no consistent evidence that high-stakes reading scores increase in states without a retention component.

Therefore, one Big Lie about grade retention is that it allows misinformation and false advocacy for the recent “science of reading” reform across most states in the US.

To be blunt, grade retention is punitive, disproportionately impacts minoritized and marginalized students, and simply is not “reading” reform [1]:

Since grade retention in the early grades removes the lowest scoring students from populations being tested and reintroduces them biologically older when tested, the increased scores may well result from these population manipulations and not from more effective instruction or increased student learning.

Evidence from the UK, for example, suggests that skills-based reading tests (phonics checks) that count as “reading” assessment strongly correlate with biological age (again suggesting that test scores may be about age and not instruction or learning).

Another Big Lie about grade retention is that reading reform advocates fail to acknowledge decades of evidence that grade retention mostly drives students to drop out of school and produces numerous negative emotional consequences for those students retained.

Consequently, NCTE has a resolution rejecting test-based grade retention:

Resolved, that the National Council of Teachers of English strongly oppose legislation mandating that children, in any grade level, who do not meet criteria in reading be retained.

And be it further resolved that NCTE strongly oppose the use of high-stakes test performance in reading as the criterion for student retention.

Grade retention, then, is an effective Big Lie of Education because it allows misinformation based in test-score increases to promote policy and practices that fail to increase test scores in sustained ways (see the dramatic drop in “success” for “high-flying” states such as Mississippi and Florida, both of which tout strong grade 4 reading scores, inflated by grade retention, but do not sustain those mirage gains by grade 8).

Grade retention is a Big Lie of education reform that punishes minoritized and marginalized students, inflates test scores, and fuels politicized education reform.

In short, don’t buy it.


Note [Updated]

[1] Consider that states retaining thousands of students each year, such as Mississippi, have not seen those retention numbers drop, suggesting that the “science of reading” reforms are simply not working even as the retention continues to inflate scores.

The following data from Mississippi on reading proficiency and grade retention expose these claims as misleading or possibly false:

2014-2015 – 3,064 (grade 3) – 12,224 K-3 retained / 32.2% proficiency

2015-2016 – 2,307 (grade 3) – 11,310 K-3 retained / 32.3% proficiency

2016-2017 – 1,505 (grade 3) – 9,834 K-3 retained / 36.1% proficiency

2017-2018 – 1,285 (grade 3) – 8,902 K-3 retained / 44.7% proficiency

2018-2019 – 3,379 (grade 3) – 11,034 K-3 retained / 48.3% proficiency

2021-2022 – 2,958 (grade 3) – 10,388 K-3 retained / 46.4% proficiency

2022-2023 – 2,287 (grade 3) – 9,525 K-3 retained / 51.6% proficiency

2023-2024 – 2,033 (grade 3) – 9,121 K-3 retained / 57.7% proficiency

2024-2025 – 2,132 (grade 3) – 9,250 K-3 retained / 49.4% proficiency

Update [January 2026]

On education miracles in general (and those in Mississippi in particular), Howard Wainer, Irina Grabovsky and Daniel H. Robinson


Education Journalism Fails Education (Again): “News media often cater to panics”

[Header Photo by Markus Spiske on Unsplash]

“The available research does not ratify the case for school cellphone bans,” writes Chris Ferguson, professor of psychology at Stetson University, adding, “no matter what you may have heard or seen or been [told].”

What Ferguson then offers is incredibly important, but also, it exposes a serious lack of awareness by Kappan considering their coverage of education:

And the media treatment has played a part in amplifying what can only be described as a moral panic about phones in schools.
 
One recent New York Times article begins with the sentence, “Cellphones have become a school scourge.” 
 
Can we expect objective coverage to follow?
 
News media often cater to panics, neglecting inconvenient science and stoking unreasonable fears. And this is what I see happening with the issue of cellphones in schools.

First, Ferguson’s characterization of media coverage of education—“News media often cater to panics”—is not only accurate but also matches a warning many scholars and educators have been offering for decades, especially during five decades of high-stakes accountability education reform uncritically endorsed by media.

The only story education journalists seem to know how to write is shouting crisis and stoking panic.

Just a couple days ago in The Hechinger Report, this headline, “6 observations from a devastating international math test,” is followed by this lede: “An abysmal showing by U.S. students on a recent international math test flabbergasted typically restrained education researchers. ‘It looks like student achievement just fell off a cliff,’ said Dan Goldhaber, an economist at the American Institutes for Research.”

And for a century, in fact, education journalism has been persistently fostering a “moral panic” about reading proficiency by students.

Here is Nicholas Kristof in the New York Times: “One of the most bearish statistics for the future of the United States is this: Two-thirds of fourth graders in the United States are not proficient in reading.”

Kristof is but one among dozens in the media repeating what constitutes at best an inexcusable mischaracterization and at worst a lie about what exactly NAEP testing data show about reading achievement in the US.

Nearly every media story about reading in the US since Emily Hanford launched the popular mischaracterization/lie in 2018 (and then repackaged it as a podcast) has dutifully “amplif[ied] what can only be described as a moral panic” about reading achievement and instruction:

The stakes were high. Research shows that children who don’t learn to read by the end of third grade are likely to remain poor readers for the rest of their lives, and they’re likely to fall behind in other academic areas, too. People who struggle with reading are more likely to drop out of high school, to end up in the criminal justice system, and to live in poverty. But as a nation, we’ve come to accept a high percentage of kids not reading well. More than 60 percent of American fourth-graders are not proficient readers, according to the National Assessment of Educational Progress, and it’s been that way since testing began in the 1990s.

Ferguson’s warning about the misguided panic over cell phones in schools and the resulting rush to legislate based on that misguided panic is but a microcosm of the much larger and much more dangerous media misinformation about reading and the rise of “science of reading” (SOR) legislation.

We should heed Ferguson’s message not just about cell phones in schools but about the vast majority of media coverage of education and then how the public and political leaders overreact to the constant but baseless moral panics.

Yes, I am glad Kappan included Ferguson’s article, but I wish Kappan’s The Grade and all education journalists would pause, take a look in the mirror, and recognize that his concern about media coverage of cell phones easily applies to virtually every media story on education.

In fact, I encourage The Grade and other education journalists to implement Ferguson’s “Red Flags” when considering education research, specifically the SOR story being sold:

RED FLAG 1: Claims that all the evidence is on one side of a controversial issue….

RED FLAG 2: Reversed burden of proof. “Can you prove it’s not the smartphones?”…

RED FLAG 3: Failing to inform readers that effect sizes from studies are tiny, or near zero, only mentioning they are “statistically significant.”…

RED FLAG 4: Comparisons to other well-known causal effects.

As I and others have repeatedly shown, the SOR stories fail all of these Red Flags.

Let’s look at just one example of Red Flag 1. Hanford, quoting Louisa Moats (who has a market interest in selling SOR stories to promote her teacher training program, LETRS, which, ironically, fails the scientific evidence test itself), asserts SOR is “settled science”:

There is no debate at this point among scientists that reading is a skill that needs to be explicitly taught by showing children the ways that sounds and letters correspond.

“It’s so accepted in the scientific world that if you just write another paper about these fundamental facts and submit it to a journal they won’t accept it because it’s considered settled science,” Moats said.

And this refrain is at the center of SOR advocacy, media coverage, and the work of education journalists: “Hanford pushed reporters to understand the research on how students learn to read is settled.”

However, not only is there no scientific evidence of a reading crisis caused by balanced literacy and a few targeted reading programs, but the field of reading science is also both complex and contested—the dominant theory, the simple view of reading, is being revised by evidence supporting the active view of reading.

Ultimately, the moral panics around education have far more to do with media begging for readers/viewers, education vendors creating market churn for profit, and politicians grandstanding for votes.

In the wake of education journalists repeatedly choosing to “cater to panics,” students, teachers, and education all, once again, are the losers.

Course Grade Contracts: Assignments as Teaching and Learning, Not Assessment

[Header Photo by Diomari Madulara on Unsplash]

At the end of fall semester of year 41 as an educator, I can admit two things: (1) I may have learned more than my students (I taught two new courses and continued to experiment with course grade contracts), and (2) I am excited about spring courses where I can implement what I learned (both about grade contracts and about teaching students to write).

Having entered the classroom in 1984, I am now in my fifth decade as a teacher, much of that work dedicated to teaching writing to students but also to using writing assignments as teaching and learning, not assessment.

Gradually and then at some point in the 1990s, I successfully eliminated traditional tests and assignment grades in my high school English courses. As a note of clarification, although I do not use tests or grades, I have always been required to assign grading period and course grades.

Thus, I have been seeking ways to better navigate a test/grade culture of traditional schooling (one my students have been conditioned to trust and even embrace) while practicing my critical philosophy that rejects both.

A few semesters ago, as part of that journey, I returned to the course grade contract, something I had tried in some fashion during my high school teaching years.

The problem I continued to have was that students were mostly unable to set aside their test/grade mentality, and thus, the absence of tests and assignment grades often negatively impacted student engagement and learning.

Initially, I envisioned course grade contracts would improve student engagement and lower stress and anxiety, thus improving learning.

Some non-traditional practices worked. I have students prepare for and participate in a class discussion for their midterm, for example. No memorization, no “cover your work,” and no exam stress.

Students both embraced this collaborative approach and recognized it not as assessment but as a learning experience in itself.

However, particularly in courses that are not designated as writing courses (I do teach first-year writing and an upper-level writing/research course), students tend to struggle significantly with the course structure and with the use of major writing assignments as extended teaching and learning experiences (and not as a way to grade them).

The first iteration of the course grade contract, then, focused on requiring students to submit, conference, and revise essays; I structured A and B course grades around minimum standards for the B-range (submit an acceptable essay, conference after receiving my feedback, and submit one acceptable revision that addresses the feedback) and additional revisions after more feedback for the A-range.

Despite the course grade being explicitly linked to minimum expectations for the process, students continue to see my feedback as negative and harsh, but also remain trapped in the possibility of submitting a perfect essay and never having to complete revisions.

In short, they see the essay assignment as a form of assessment and cannot fully engage in the submitting/revising process as individualized teaching and learning experiences.

Oddly, students continue to email me apologizing for their first submissions because they see the revision-oriented feedback, again, as negative or harsh—evaluative—and not as a necessary part of the essay assignment as teaching and learning.

The semester ending now, in fact, proved to me that using the course grade contract to shift assignments from forms of assessment to teaching/learning experiences (like the midterm exam period as a class discussion) needed another round of revision by me.

The problems I am still encountering include students struggling in content-focused courses (where they expect traditional tests and do not expect to be challenged as thinkers and writers) because of the absence of tests/grades, as well as with a course structure that forefronts course content in the first half of the semester and mostly implements workshops in the second half.

Here, then, I want to share the new versions of those contracts to be implemented in spring. I have more explicitly included language about the purpose of the contract and added the final portfolio expectations in a format that also is more explicit about assignment expectations as well as about fulfilling the contracted grade.

Here is the revised course grade contract for my first-year writing course:

And here is the revised course grade contract for my upper-level writing/research course:

The problem will remain, however, that I teach students conditioned for more than 2/3 of their lives in a culture of tests and grades, a culture that has taught them that assignments are by the teacher for evaluation and not for the student as teaching and learning.

I am seeking ways to shift the culture of teaching and learning as well as my students’ expectations for what it means to be a student and a teacher.

These are big asks for those students, but I am convinced they can make those shifts and benefit greatly from doing so.

Misinformation Nation: Reading Edition Reader

[Header Photo by Jorge Franganillo on Unsplash]

“Misinformation has received much public and scholarly attention in recent years,” write Ecker et al. in Why Misinformation Must Not Be Ignored, adding, “The fundamental question of how big a concern misinformation should be, however, has become a hotly debated topic.”

They argue and conclude, as noted in the abstract:

Here, we rebut the two main claims, namely that misinformation is not of substantive concern (a) due to its low incidence and (b) because it has no causal influence on notable political or behavioral outcomes. Through a critical review of the current literature, we demonstrate that (a) the prevalence of misinformation is nonnegligible if reasonably inclusive definitions are applied and that (b) misinformation has causal impacts on important beliefs and behaviors. Both scholars and policymakers should therefore continue to take misinformation seriously.

While this examination of misinformation is broad, its focus and conclusions apply directly to the current “science of reading” (SOR) movement, which is grounded in misinformation yet has proved highly compelling to the public and has driven new and revised reading legislation across nearly every state in the US.

For example, a poll, Reading Education Messaging: Findings and Recommendations from an Online Poll of K-5 Parents in America, shows a disturbing pattern:

The misleading media claim about reading proficiency (because of the confusing categories in NAEP testing) actually changes parental opinion about reading achievement from positive to negative.

Although the SOR story about reading has become “holy text,” the foundational claims of a reading crisis and the causes of that supposed crisis are both false and mischaracterizations.

This influx of misinformation about reading proficiency and reading instruction has created a false story about reading teachers and teacher educators as “bad” teachers and imposed on students a one-size-fits-all and whitewashed set of reading reforms.

Further, this misinformation campaign about reading proficiency, reading instruction, and reading science is also a serious distraction from the real challenges facing learning and teaching reading.

The US is, as the authors propose, increasingly a misinformation nation, and that dynamic has reignited the corrosive “crisis” and reform cycles in US education, specifically in terms of reading and math.

The US is in a state of perpetual and manufactured crisis/reform in education that serves the interests of the media, political leaders, and the education marketplace, but harms teachers and students.

Here, then, is a reader that addresses that misinformation by offering a more nuanced and evidence-based examination of the outsized impact of out-of-school factors on student learning, the complicated facts about “reading proficiency” and NAEP testing, and the false stories driving the SOR movement:

Note

Ecker, U. K. H., Tay, L. Q., Roozenbeek, J., van der Linden, S., Cook, J., Oreskes, N., & Lewandowsky, S. (2024). Why misinformation must not be ignored. American Psychologist. Advance online publication. https://dx.doi.org/10.1037/amp0001448

Poem: return

[Header Photo by Tim D on Unsplash]

when you returned
did you recognize me?

i am the same me
72 hours later

i am a completely new me
63 years and counting

counting on you
counting on you and me

there are never enough fingers
you always gone far too long

when you return again
i will be the same me

a completely new me
always counting on you

minutes hours and days heavy
as the water at the bottom of a memory

—P.L. Thomas