Tag Archives: education

Media Manufactures Mississippi “Miracle” (Again) [Updated]

[Header Cropped from Photo by Miracle Seltzer on Unsplash]

I almost feel sorry for Louisiana. (See Update 2 below)

When the 2024 reading scores for NAEP were released, LA seemed poised to be the education “miracle” of the moment for the media and political leaders.

Since mainstream media seems to know only a few stories when covering education—outliers, crises, and miracles—the outlier gains by LA compared to the rest of the nation, reportedly still trapped in the post-Covid “learning loss,” were ripe for yet another round of manufacturing educational “miracles.”

However, the media is not ready to let go of the Mississippi “miracle” lie: There Really Was a ‘Mississippi Miracle’ in Reading. States Should Learn From It.

To maintain the MS “miracle” message, journalists must work incredibly hard to report selectively, and badly.

For example, Aldeman celebrates, again, MS as an outlier for the achievement of the bottom 10% of students (carelessly disregarding that outlier data is statistically meaningless when making broad general claims):

But one state is bucking this trend: Mississippi. Indeed, there’s been a fair amount of coverage of Mississippi’s reading progress in recent years, but its gains are so impressive that they merit another look.

Next, Aldeman highlights reading gains by Black students in MS, omitting a damning fact about the achievement of Black (and poor) students in MS (which mirrors the entire nation):

That’s right, MS has had the same racial and socio-economic achievement gaps since 1998, discrediting anything like a “miracle.”

But likely the most egregious misrepresentation of MS as a reading “miracle” is Aldeman’s “debunking” of claims that MS gains are primarily grounded in grade retention, not the “science of reading.”

Notably, Aldeman seems to think linking to the Fordham Institute constitutes credible evidence; it isn’t.

So let’s look at the full picture about grade retention and MS’s reading scores on NAEP.

First, the research on increased reading achievement has found that only states with retention have seen score increases. Westall and Cummings concluded in a report on reading policy: “[S]tates whose policies mandate third-grade retention see significant and persistent increases in high-stakes reading scores in all cohorts…. [T]here is no consistent evidence that high-stakes reading scores increase in states without a retention component [emphasis added].” [Note that Aldeman selectively refers to this study late in the article, but omits this conclusion.]

The positive impact of retention on test scores has not been debunked, but confirmed. What hasn’t been confirmed is that test score gains are actual achievement gains in reading acquisition.

Next, MS (like FL and SC, for example) has risen into the top 25% of states in grade 4 reading on NAEP but plummets into the bottom 25% of states by grade 8 (despite its reading reform having been implemented for over a decade), suggesting those grade 4 scores are a mirage and not a miracle:

And finally, MS has consistently retained about nine thousand students each year (mostly Black and poor students) for a decade; if the state were actually implementing something that works, the number of students being retained would decrease and (according to SOR claims that 95% of students can be proficient) disappear.

A final point is that media always omits the most important story, what research has shown for decades about student achievement:

Almost 63% of the variance in test performance was explained by social capital family income variables…. The influence of family social capital variables manifests itself in standardized test results. Policy makers and education leaders should rethink the current reliance on standardized test results as the deciding factor to make decisions about student achievement, teacher quality, school effectiveness, and school leader quality. In effect, policies that use standardized test results to evaluate, reward, and sanction students and school personnel are doing nothing more than rewarding schools that serve advantaged students and punishing schools that serve disadvantaged students.

High-poverty states and states with high percentages of so-called racial minorities are not, in fact, beating the odds—again, note that states have not closed the racial achievement gap or the socio-economic achievement gap.

Yes, too often our schools are failing our most vulnerable students. But the greater failures are the lack of political will to address the inequity in the lives of children and the lazy and misleading journalism of the mainstream media covering education.


Update 1

The Mississippi “miracle” propaganda is part of a conservative Trojan Horse education reform movement.

Note this commentary from the Walton-funded Department of Education Reform (University of Arkansas): Mississippi’s education miracle: A model for global literacy reform. The key reveal is near the end of the commentary:

Teaching at the right level and scripted lesson plans are among the most effective strategies to address the global learning crisis. After the World Bank reviewed over 150 education programs in 2020, nearly half showed no learning benefit.

The goal is de-professionalizing teachers and teaching, not improving student reading proficiency.

Update 2

The political, market, and media hype over both MS and LA is harmful because that misrepresentation and exaggeration drive the fruitless crisis/reform cycles in education and distract reform from the larger and more impactful causes of student achievement.

To better understand education reform, I recommend the recently released Opportunity to Learn Dashboard.

According to the press release from NEPC:

Funded and maintained by the National Center for Youth Law (NCYL) and The Schott Foundation for Public Education, the Opportunity to Learn Dashboard tracks 18 indicators across 16 states. The project seeks to provide information about factors impacting the degree to which children of different ethnicities and races are exposed to environments conducive to learning.

However, indicators directly related to schools explain only a minority of the variation in achievement-related outcomes. Therefore, the dashboard includes out-of-school factors such as access to health insurance and affordable housing, as well as within-school factors such as exposure to challenging curricula and special education spending.

For both MS and LA, we must acknowledge the significant and robust systemic (out-of-school) disadvantages minoritized and impoverished students continue to face in both states:

Note here my points raised about lingering opportunity/achievement gaps exposed by NAEP scores in both states:

To emphasize again, NAEP scores do not reveal education “miracles” in either MS or LA. In fact, NAEP scores continue to show that education reform as usual is a failure.


Recommended

Does the “Science of Reading” Fulfill Social Justice, Equity Goals in Education? (pt. 1)

America Dishonors MLK By Refusing to Act on Call for Direct Action (pt. 2)

Scripted Curriculum Fails Diversity, Students, and Teachers: SOR Corrupts Social Justice Goals (pt. 3)

If We Are Scripted, Are We Literate? (Presentation)

Misreading the Outlier Distraction: Illiteracy Edition Redux

[Header Photo by Adam Winger on Unsplash]

Arthur Young graduated from high school with honors. However, as an adult, he was illiterate.

Literacy expert Helen Lowe featured Young and concluded:

Arthur could not read, even at a primer level. He could not drive a car, because he could not pass the test for a driver’s license; he could not read the street signs or traffic directions. He was unable to order from the menu in a restaurant. He could not read letters from his family and he could not write to them. He could not read the mixing directions on a can of paint or the label on a shipment of sheet rock. He had been cheated.

This story may be shocking, but it also sounds disturbingly similar to a recent story on CNN:

This young woman, of course, has also been “cheated.”

But here is something important to acknowledge: The dramatic story of Young is from 1961 as part of a book on the illiteracy crisis in the US, Tomorrow’s Illiterates: The State of Reading Instruction Today.

These problematic stories, seven decades apart, are outlier narratives: both are inexcusable failures, but neither is evidence for any generalizations about education, teaching, or literacy.

Stated bluntly, outliers can never lead to any sort of generalizations.

One of the great failures of public discourse and policy around reading and literacy in the US has been perpetual crisis rhetoric used to drive ideological agendas about what counts as literacy and how best to teach children and young adults to read and write.

If you had a time machine, you could visit any year over the past century in the US and discover that “kids today” can’t and don’t read because the education system is failing them.

These histrionic stories are compelling because they often include real children and adults whose lives have been reduced because of their illiteracy or inadequate literacy.

Ideally, of course, no person in the richest and most powerful country in the world should ever be cheated like that.

But here is the paradox: These outlier stories are distractions from the reform and work needed to move toward all children and adults being literate.

Once again, reading test data for decades has shown exactly the same reality as all other forms of testing of student learning (math, science, civics, etc.): Over 60% of test scores are causally linked to factors beyond the walls of schools—access to healthcare, food security, housing security, access to books in homes and communities, and thousands of other factors impacting the lives and learning of children.

At best, teacher impact on measurable student literacy is only about 1-14%.
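To make the arithmetic behind “percent of variance explained” concrete, here is a minimal sketch with simulated data; the weights are assumptions chosen only to mirror the proportions cited above (roughly 60% for an out-of-school income factor, under 10% for a school factor), not estimates from any real dataset:

```python
import random

# Minimal sketch with simulated data (assumed weights, not real test results):
# what "percent of variance explained" means for the claims above.
random.seed(0)
n = 5000
income = [random.gauss(0, 1) for _ in range(n)]  # standardized family-income factor
school = [random.gauss(0, 1) for _ in range(n)]  # standardized school/teacher factor
noise = [random.gauss(0, 1) for _ in range(n)]   # everything else

# Weights are assumptions picked to echo the ~60% and ~1-14% figures above.
score = [2.0 * i + 0.7 * s + 1.4 * e for i, s, e in zip(income, school, noise)]

def r_squared(x, y):
    """Squared Pearson correlation: share of variance in y tracked by x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov ** 2 / (var_x * var_y)

print(f"Variance explained by the income factor: {r_squared(income, score):.0%}")  # ~60%
print(f"Variance explained by the school factor: {r_squared(school, score):.0%}")  # under 10%
```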

Yet, year after year, decade after decade, the US focuses on teacher quality, curriculum and standards, reading programs, and reading test scores without acknowledging or addressing the overwhelming impact of out-of-school factors on people acquiring the literacy they need and deserve to live their full humanity.

The two stories from above, seven decades apart, are likely far more complicated than any coverage could detail; they are both compelling and upsetting human stories that deserve our attention, in order to address their individual tragedies as well as to take greater care that others do not suffer the same fate.

However, misreading outlier distractions is not the way to honor that these people have been cheated.

Two things can be true at once: Outlier stories are heartbreaking and inexcusable; however, they prove nothing beyond the experiences they detail.

CNN uses outlier stories for traffic and profit.

Literacy ideologues use outlier stories to drive their agendas as well as to feed the education market.

We are all cheated, once again, when we play the outlier distraction game and refuse to acknowledge and address the crushing realities of inequity in the lives and learning of children.

Each child matters, and all children matter.

Yet, only the adults have the political and economic power to make that a reality.

Reading Deserves a New Story, Different Reform

[Header Photo by Gaelle Marcel on Unsplash]

You know the story: Students today can’t read.

And those who can, don’t read.

But there is more.

Children who can’t read have been cheated by their teachers, who fail to teach reading skills such as phonics.

And our national reading crisis is a threat to our very nation, especially our international economic competitiveness.

However, there are a few problems with this story.

If you were to find a time machine, you could travel to any year over the past century and hear the exact same story.

As well, this crisis rhetoric has been used historically and currently with math—and every other content area tested in the US.

Here is a story about reading you probably are not familiar with: There is no reading crisis, and there is no evidence that reading test scores are driven by reading instruction or programs.

Further, again, there is nothing unique or catastrophic about reading test scores or reading achievement by US students.

Historically and currently, reading test scores and achievement reflect a fact that has been replicated for decades:

Almost 63% of the variance in test performance was explained by social capital family income variables….The influence of family social capital variables manifests itself in standardized test results. Policy makers and education leaders should rethink the current reliance on standardized test results as the deciding factor to make decisions about student achievement, teacher quality, school effectiveness, and school leader quality. In effect, policies that use standardized test results to evaluate, reward, and sanction students and school personnel are doing nothing more than rewarding schools that serve advantaged students and punishing schools that serve disadvantaged students.

Now, consider a newer story: Post-Covid students are suffering a historic learning loss:

Reardon’s call for “long-term structural reform” must follow a new story about reading and a different approach to reading reform.

First, since the vast majority of causal factors reflected in reading standardized test scores are out-of-school conditions, the new reading story and different reform must address universal healthcare, food security and eliminating food deserts, home and housing stability, and stable, well-paying jobs for parents.

Another out-of-school reform needed for reading is guaranteeing students have access to books and texts in their homes, communities (public libraries), and then in their schools (school and classroom libraries).

A simple program that gives every child from birth to high school graduation 20 books a year (10 chosen by the child/parents and 10 common texts) would build a library and ensure access to texts, one of the strongest research-based elements of reading acquisition.

Without social reform, reading scores will likely remain flat and inadequate.

The most important aspect of a new story and different reading reform is confronting the traditional approaches to in-school reform common in the US since the 1980s. A different approach to reading reform must include the following:

  • De-couple reading reform and instruction from universal or prescribed reading programs and center teaching children to read (not implementing reading programs with fidelity). Admit there is no one way to teach all students to read, and provide the contexts that allow teachers to serve individual student needs.
  • Reform the national- and state-level testing of reading. The US needs a standard metric for “proficient” and “age level” (instead of “grade level”) shared on NAEP and state tests in grades 3 and 8; and that achievement level needs to be achievable and not “aspirational” (such as is the case with NAEP currently). National and state testing must be age-based and not grade-based to better provide stable data on achievement.
  • End grade retention based on standardized testing. Retention is punitive, and it harms children while also distorting test data.
  • Monitor and guarantee that vulnerable populations of students who are below “proficient” are provided experienced and certified teachers and are assigned to classes with low student/teacher ratios.
  • Address teaching and learning conditions of schools, including teacher pay and autonomy.
  • Honor and serve students with special needs and multi-lingual learners.

While we have no unique or catastrophic reading crisis in the US—and even the hand wringing over learning loss seems unfounded—we have allowed a century (or more) of political negligence that ignores the negative impact of children’s lives on their learning.

We have remained trapped in a manufactured story that reading is in crisis and that poverty is an excuse.

All the available evidence suggests otherwise.

Crisis, miracles, blame, and punishment have been at the center of the story everyone is familiar with. That story has never served the interests of students, teachers, or public education.

In an era of intense political hatred and fearmongering, this is a tenuous call, but if we really care about students learning to read, and if we truly believe literacy is the key to the economic and democratic survival of our country, reading deserves a new story, an accurate story, and a different approach to reform grounded in the evidence and not our cultural mythologies and conservative ideologies.

See Also

Big Lies of Education: Series

NAEP Serves Manufactured Education Crisis, Not Teaching and Learning

[Header Photo by Andy Feliciotti on Unsplash]

I teach an upper-level writing and research course for undergraduates as part of their general education requirements. The overarching project asks students to gather media coverage of an education topic in order to analyze the credibility of that coverage.

Since the course is undergraduate, I ask them to approach their analysis through critical discourse analysis, but I narrow that lens some for them. The process includes the following:

  • Identify the pattern of claims about the topics.
  • Evaluate the validity of the claims in the context of a literature review of the educational topic (limited to recently published, peer-reviewed journal articles).
  • Consider whose interests these claims serve (the CDA element).

I note that claims about education in the media tend to fall along a range from accurate to misleading to false; however, for this analysis, identifying whose interests the claims serve is the key aspect of the evaluation.

False and accurate claims are typically easy to manage for students, but the misleading claims can be complicated.

For example, in public discourse about police shooting victims, two accurate data points are often cited: The majority of people shot and killed by police in the US are white, and Black people are shot and killed at a higher rate than white people.

Failing to address both data points and to clarify why rates are more important than raw counts makes media coverage misleading; thus, selectively emphasizing true data is often a form of manipulation that serves a particular population or ideology.
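A hypothetical numeric sketch can show how both data points are true at once and why per-capita rates tell the fuller story; the group sizes and counts below are assumptions for illustration only, not actual statistics:

```python
# Hypothetical illustration only: assumed group sizes and shooting counts,
# not actual statistics.
populations = {"white": 200_000_000, "Black": 42_000_000}
shootings = {"white": 450, "Black": 230}

for group in populations:
    rate_per_million = shootings[group] / populations[group] * 1_000_000
    print(f"{group}: {shootings[group]} people shot, "
          f"{rate_per_million:.1f} per million residents")

# The larger group accounts for the majority of raw deaths (450 vs. 230),
# while the smaller group's per-capita rate is more than twice as high
# (about 5.5 vs. 2.3 per million); reporting only one number misleads.
```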

With another release of NAEP reading and math scores, we have an opportunity to address how media and political leaders tend to offer false and misleading claims based on NAEP scores, but also how NAEP itself serves to perpetuate the manufactured education crisis, which benefits the media (more clicks), political leaders, and the education marketplace.

Regardless of what national and state scores on NAEP are, the foundation of media and political claims is always “crisis.”

Ironically, perpetual “crisis” rhetoric and education reform since the early 1980s have had one clear outcome—maintaining the status quo of educational and socioeconomic inequity in the US.

To consider this, let’s focus on Massachusetts and Tennessee.

Other than top-scoring DoDEA schools, MA sits atop reading scores in the US in both grades 4 and 8:

As a relatively low-poverty state, MA should rank above states with higher student poverty. However, MA certainly serves students in pockets of poverty as well as other vulnerable populations of students who tend to have low standardized test scores.

Nonetheless, MA has joined the standard chorus in the US about reading. The Education Trust released a report in March 2024 providing “5 Things You Need to Know about the literacy crisis in Massachusetts.”

To be fair, MA is similar to most of the US, where standardized test scores have dropped post-Covid; those drops have coincided with MA’s Mass Literacy initiative, launched in 2018:

Perpetual reform and perpetual crisis in education, regretfully, seems only to fuel more reform and more crisis.

Note that MA also has something in common with almost all states regardless of whether states have high or low NAEP results. Achievement gaps by race and socioeconomic status have remained fixed for almost three decades:

While a top-scoring state like MA is shouting “crisis” primarily based on a sort of national psychosis about the “science of reading,” TN is trying to have it both ways with a reading crisis and a celebration of 2024 NAEP scores.

An October 2023 report from the TN Department of Education, “Tennessee’s Commitment to Early Literacy,” forefronts the “Literacy Crisis in Tennessee” based on (you guessed it) historically poor rankings in NAEP reading scores.

One important point here is that the media and political discourse tend to focus on “bad” statistics such as rankings and averages—which is how TN establishes its “crisis.”

Yet, while the 2024 NAEP data has spurred a great deal of misguided doom and gloom, TN is putting a positive spin on their results: Nation’s Report Card Shows Meaningful Academic Gains as a Result of Tennessee’s Commitment to Public Schools.

For political leaders, “we have a crisis” and “I have saved us from the crisis” are not a sequential series of events, however, but a permanent rotation.

So why this positive spin for TN?

While the national average on NAEP reading has dropped, TN experienced a slight uptick in 2024. Because most other states were dropping, TN has seen a rise in its ranking (a key example of why ranking is a “bad” statistic).
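A tiny sketch with made-up scores shows how a ranking can jump even when a state’s own score barely moves; the five “states” and their numbers below are assumptions for illustration, not actual NAEP results:

```python
# Hypothetical scores for five "states"; these numbers are assumptions for
# illustration, not actual NAEP results.
before = {"A": 225, "B": 222, "C": 220, "TN": 218, "D": 216}
after = {"A": 218, "B": 217, "C": 215, "TN": 219, "D": 212}  # TN up 1 point; others drop

def rank(scores, state):
    """1 = highest average score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(state) + 1

print("TN rank before:", rank(before, "TN"))  # 4th of 5
print("TN rank after:", rank(after, "TN"))    # 1st of 5, on a one-point gain
```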

Important again is that like MA and most states, TN scores for racial and socio-economic gaps remain fixed: “This performance gap was not significantly different from that in 1998.”

These responses to NAEP by MA and TN reveal a stark lesson: NAEP serves the interests of the media, politicians, and the education marketplace, but, at least since 1998, it hasn’t provided the data needed for any sort of genuine education reform or analysis.

Education is a political and market football, in fact.

Here are a few better takeaways from NAEP:

  • NAEP’s achievement levels are designed to be confusing and support the manufactured education crisis (see here).
  • Using NAEP to rank and sort is misleading and doesn’t support needed reform.
  • NAEP scores do offer some important facts related to achievement gaps and the pervasive influence of affluence and poverty on educational outcomes, but the media and political leaders choose to ignore those lessons.
  • Decades of NAEP reinforce this conclusion by Maroun and Tienken: “Policy makers and education leaders should rethink the current reliance on standardized test results as the deciding factor to make decisions about student achievement, teacher quality, school effectiveness, and school leader quality. In effect, policies that use standardized test results to evaluate, reward, and sanction students and school personnel are doing nothing more than rewarding schools that serve advantaged students and punishing schools that serve disadvantaged students.”
  • Media persist in focusing on only two stories about education, crisis and outliers, both of which serve the interests of everyone except students and teachers.

Like the students in my upper-level writing and research course, we would all benefit from evaluating the claims being made by media and political leaders in order to determine, first, if the claims are true (they are often misleading or false), and then to confront whose interests these claims serve.

Maybe this isn’t surprising given the current and historical political climate in the US, but almost never are the interests of students and teachers being served—especially when the interests of the most vulnerable students are the issue.

The “Science of Reading” Ushers in NAEP Reading Decline: Time for a New Story

With the release of the 2024 NAEP reading results, a disturbing new story is developing:

The media has long been obsessed with reading in the US, crying “crisis” every decade over the past century. The most recent media-based reading crisis has prompted more than a decade of aggressive new reading legislation: policy and programs identified as the “science of reading” (SOR).

The hand wringing over the 2024 NAEP reading results, however, seems to focus on learning loss and post-Covid consequences—not that reading achievement on NAEP was flat during the balanced literacy era and now has dropped steadily during the SOR era:

The senior cohort in the 2024 NAEP reading scores represents the SOR era, which began in 2012.

Suddenly, media appears to forget that the SOR movement was built on a series of baseless claims: the US has a reading crisis (despite NAEP scores being flat for decades) because teachers do not know how to teach reading and rely on failed reading programs and balanced literacy.

The foundational claim of the SOR movement has been firmly discredited: “[T]here is no indisputable evidence of a national crisis in reading, and even if there were a crisis, there is no evidence that the amount of phonics in classrooms is necessarily the cause or the solution.”

But a key element of the SOR story is often overlooked: “One of the excuses educators have long offered to explain America’s poor reading performance is poverty.”

In other words, the SOR story argues that the US has a reading crisis that is entirely the result of in-school policies and practices, and that SOR-based reading instruction guarantees 95%+ of students will achieve reading proficiency.

How, then, is the recent five-year decline in NAEP scores being blamed on an out-of-school factor, Covid learning loss, when the story being sold insists such blame is merely an excuse?

The problem here is that the entire SOR story is a series of misrepresentations and ideological claims not grounded (ironically) in research or evidence.

As I have noted, NAEP achievement levels are confusing since “proficient” is well above grade level and “basic” tends to correlate with most state metrics for “proficient” (see here for a full explanation and state/NAEP correlations).

However, journalists persist in misrepresenting NAEP scores in order to feed their manufactured “crisis” story: Two-Thirds of Kids Struggle to Read, and We Know How to Fix It.

With the release of 2024 NAEP scores on reading, we have an opportunity to embrace a different story, a credible story, by examining scores from Department of Defense (DoDEA) schools as well as Mississippi and Florida.

First, note that DoDEA schools again are the top scoring schools in grade 4 reading, but MS and FL rank in the top 25% of states despite challenging populations of students being served (both states appear to be outliers defying the odds):

Both MS and FL have been praised for their reading and education reform; however, two parts of this “miracle” story are often left out, parts that reveal a mirage, not a miracle.

First, MS and FL join many states that have enacted SOR reading reform over the past decade-plus, yet the research on that reading reform highlights something other than reading instruction or programs.

Westall and Cummings concluded in a report on reading policy: “[S]tates whose policies mandate third-grade retention see significant and persistent increases in high-stakes reading scores in all cohorts…. [T]here is no consistent evidence that high-stakes reading scores increase in states without a retention component [emphasis added].”

States implementing K-3 grade retention are gaming the system by pulling out the lowest performing students and then re-inserting them into the testing population when older.

In fact, MS has been retaining about 9000-10,000 K-3 students a year since 2014, and FL retains about 17,000 students annually. [1]

Beyond the impact of grade retention on test scores, we should also ask: If SOR “works,” why do states continue to retain about the same number of students per year?

But NAEP also tells a story about SOR that has been ignored for years. Both MS and FL rank in the top 25% of states in grade 4 but in the bottom 25% by grade 8, while DoDEA remains the top-scoring schools in both grades:

Grade retention creates a mirage of achievement in grade 4 that disappears by grade 8, further evidence that SOR is not working at either grade.

Reading achievement as measured on testing has never been about reading instruction, teacher quality, or reading programs.

DoDEA school reading achievement is a testament to what research has shown for decades about student achievement:

Almost 63% of the variance in test performance was explained by social capital family income variables….The influence of family social capital variables manifests itself in standardized test results. Policy makers and education leaders should rethink the current reliance on standardized test results as the deciding factor to make decisions about student achievement, teacher quality, school effectiveness, and school leader quality. In effect, policies that use standardized test results to evaluate, reward, and sanction students and school personnel are doing nothing more than rewarding schools that serve advantaged students and punishing schools that serve disadvantaged students.

DoDEA student populations are diverse, often coming from impoverished and working class backgrounds; these schools also serve vulnerable and challenging populations of students.

However, teacher pay is high, and those students have healthcare, food and housing security, and parents with stable work.

Almost all DoDEA students have the advantages mostly afforded to children living in privilege, so how and what they are taught can matter.

If reading and literacy matter—and they do—and education matters in the US, we are well past blaming teachers, declaring false “miracles,” and jumping on the reading program merry-go-round once again.

Students must have their lives addressed so that their education can serve them well.

NAEP scores tell us little about reading (or math) but confirm again and again that the US is a country determined to ignore the corrosive impact of inequity on the lives and education of children.


Update

Media has only one story—a false one—about outliers in NAEP scores. Compare the coverage of MS in 2019 with LA 2024:

[1] Note that in the early 2000s, FL was the “miracle” state and established the Florida Model that essentially became the MS “miracle.”

Next up is Louisiana, and most of the coverage is claiming LA’s success is because the state has copied MS.

And a part of the lineage is more grade retention. Here are the currently available data on LA grade retention:

Writing Process: Scholarly/Academic Writing Edition

[Header Photo by 🇸🇮 Janko Ferlič on Unsplash]

Like many academics working in higher education, I spent several days over my holiday break preparing my courses for spring (two first-year writing seminars and one upper-level writing/research course) and then an intense three days writing and submitting a scholarly chapter on growth mindset and grit for an upcoming book.

I am fortunate, I think, because my teaching life and my writing life continually inform each other. Especially when teaching my writing-intensive courses, I teach as a writer and scholar, fore-fronting my writing/scholarship in my teaching.

My chapter on mindset and grit gave me a perfect opportunity to think deeply about and prepare new materials for my courses this spring (access those artifacts in this folder: Scholarly Essay Process).

As a writing teacher, I have been struggling throughout my 40-plus-year career with the negative impact of templates and scripts for students developing the skills and knowledge they need to be autonomous and compelling writers.

I have rejected, for example, the five-paragraph essay model, and I have challenged the mechanical implementation of the writing process as a sequential series of steps.

The problem is that this crusade against templates and scripts is not as simple (or effective) as I initially believed many decades ago.

Another problem with rejecting templates and scripts is that a significant amount of scholarly and academic writing is bound by scripts, word-count limits, formatting requirements, and citation/style guidelines.

My evolved and more nuanced position on templates and scripts in writing instruction and assignments acknowledges that beginning writers need opportunities to read widely in order to develop their own “scripts” for a wide variety of writing types. Of course they also need structure, but starting with the rigid template does far more harm than good for emerging writers.

Then, as students-as-writers move into high school and college, they need more experiences with authentic templates and guidelines found in much of academic and scholarly writing.

The editors of the chapter I just completed, for example, sent writers a content outline for chapters to follow as well as a word-count limit and citation/style sheet requirements (APA).

When I write reviews for a think tank, I receive the same structures and very rigid expectations for staying within those limits (including their own in-house style sheet).

The irony, then, is that this spring, my first-year writing seminar will focus on challenging scripts and “rules” for essays and writing while my upper-level writing/research class will be writing a strictly scripted major scholarly essay (I assure upper-level students that this experience will prepare them for graduate school, and I recently received an email from a former student in this course telling me “thank you” for just that).

While I feel like my teaching of writing has better bridged the gap between helping students acquire the broad concepts of effective and compelling writing (versus imposing on them artificial templates and “rules”), I continue to struggle with fostering in students the sort of writing process that would better serve them.

Similar to my stance on the essay form, I teach that the writing process is not sequential or a rigid template, but a set of concepts that most writing addresses to help produce the final written product needed for the purposes of that project. In other words, these broad concepts are fairly stable, but the so-called “steps” may differ and the time spent on each “step” likely will vary for different writing purposes.

For scholarly writing, the writing process includes much more than composing sentences and paragraphs. Here, then, is a brief overview of the process I followed this week writing and submitting my book chapter on mindset and grit.

Let me start with a caveat that I think should be shared with students.

When most scholars start a writing project, we are dealing with content that we have expertise in; my project on mindset and grit has years of blogging and gathering research behind the brief process I followed over three days.

My first steps included revisiting the chapter guidelines sent by the editors, confirming formatting, citation, word count, etc.

Then, as I stress to students, I prepared my Word document, conforming to APA guidelines and inserting the subheads, etc., along with the chapter template required by the editors. One concern I have with students is they tend to address formatting last, and I urge them to address this tediousness first. (See my submitted and not yet edited copy here.)

Next, I put the required page break at the end of the document and prepared my working references list. To create the list, I reviewed my many blogs on the topics, searched through my library databases, reached out to other scholars for recommendations, and carefully culled sources from the references of the sources I had gathered (working from the most recent publications).

Let me stress here that these “steps” became more and more recursive: as I worked in one “step,” I would invariably return to and revise other “steps” (I caught simple formatting edits and typos, for example, in many of the “steps” detailed here, even though that is considered the editing “step”).

One goal as I worked was to create a compelling opening that included a thesis paragraph clearly aligned with the subheads and organization of the chapter. I drafted that opening on the first day and then I carefully edited and formatted all of my references, checking APA and loosely thinking about removing or adding needed sources. See the opening here:

Literacy educator and scholar Lou LaBrant (1947) asserted almost eight decades ago: “A brief consideration will indicate reasons for the considerable gap between the research currently available and the utilization of that research in school programs and methods” (p. 87). While valid in the mid-2020s, a slightly more nuanced argument needs to be proposed: Scientific research on teaching and learning is often lost in translation once it is packaged by the education marketplace and reduced to legislation and policy. In other words, what is popular, packaged, and mandated in education is too often an oversimplified and even misguided version of scientific findings, nothing more than a fad. An even more complicating problem, as well, is that classroom practices likely should be guided by more than experimental and quasi-experimental research (Wormeli, n.d., The problem).

Over the past decade-plus, two examples of research lost in translation include growth mindset and grit. Carol Dweck (2008), often publishing with others (Dweck & Yeager, 2019), examines the role of mindset in academic success. Grit is grounded in the research and advocacy of Angela Duckworth (2018); however, a great deal of the popularization of grit occurred through the journalism and advocacy of Paul Tough (2013), who promoted “no excuses” charter school practices, specifically the Knowledge Is Power Program (KIPP) charter chain (Abrams, 2020). While growth mindset and grit are distinct concepts and educational movements, they tend to share similar spaces and problems in practice.

This chapter explores the central claims of growth mindset and grit before considering the validity of those claims in the context of the following critical questions: How are growth mindset and grit grounded while also perpetuating bootstrapping, rugged individualism and meritocracy myths? What are the roles of deficit ideologies (word gap, victim blaming, racism, sexism, classism, etc.) in popular advocacy for growth mindset and grit? As well the research and popular claims about growth mindset and grit are interrogated at three levels: (1) research validity and robustness, (2) evidence-based or ideologically based, and (3) racism and classism.


The next morning I reviewed and organized all of my sources to comply with the structures required in the chapter. Here, I think, is where students are lost because of their previous experiences writing inauthentic research papers (in which many students gather the required number of sources and then simply walk the reader through their sources, writing about the sources and not their topic).

I created a table by my topics, mindset and grit, and then by the three major themes/patterns I planned to address; the key here, for students, is recognizing the need to focus on the patterns in their discussion and to cite multiple sources for those patterns.

I also created a listing of sources by my major topics, and then carefully reviewed them all to be sure I had classified them correctly and to identify the few I wanted to cite or quote more fully (I stress to students who have more experience with MLA and textual analysis that quoting is only one way to give evidence in scholarly writing and is often discouraged in many disciplines when not doing textual analysis).

Analyzing and organizing my evidence is designed to create writing that offers a compelling and valid generalization followed by a representative source or two to support the pattern; for example:

However, the current public discourse around mindset has made a significant turn to being critical and even skeptical (Study finds, 2018; Tait, 2020; Young, 2021a, 2021b). This shift is spurred by the growing research base that fails to replicate the primary claims of mindset advocacy or shows negative correlations or harm in implementing mindset intervention over other aspects of learning and achievement (Brez et al., 2020; Burgoyne et al., 2020; Burnette et al., 2018; Dixson et al., 2017; Ganimian, 2020; Li & Bates, 2019; Macnamara & Burgoyne, 2023; Sisk et al., 2018; Schmidt et al., 2017). Brez et al. (2020) conclude: “The pattern of findings is clear that the intervention had little impact on students’ academic success even among sub-samples of students who are traditionally assumed to benefit from this type of intervention (e.g., minority, low income, and first-generation students)” (p. 464). And Macnamara and Burgoyne (2023) make a more problematic assertion:

Taken together, our findings indicate that studies adhering to best practices are unlikely to demonstrate that growth mindset interventions benefit students’ academic achievement. Instead, significant meta-analytic results only occurred when quality control was lacking, and these results were no longer significant after adjusting for publication bias. This pattern suggests that apparent effects of growth mindset interventions on academic achievement are likely spurious and due to inadequate study design, flawed reporting, and bias. (p. 163)


Again, I need to emphasize that students must understand that several of these steps require and prompt continuous revision and editing. I returned to my title and the thesis paragraph for revision as I drafted the two major subhead sections and the subheadings under those. In other words, I was then in a constant state of seeking coherence in the chapter whereby all the parts match and create the whole (which then is reinforced by the final section/closing of the chapter).

For students, I will stress that I drafted an opening on the first day, drafted the first major subheading section the second day, and then drafted the second major section and closing the third day. But the writing parts were embedded in a great deal of reading, cataloguing, and organizing.

I also completed a full initial draft, but then let that sit for a while before doing a full re-read, revision, and editing session with the entire chapter in front of me; I did several re-read-revise-edits along the way as well.

For students, then, here is what they should see as elements of a writing process for academic/scholarly writing:

  • Identify writing assignment guidelines, formatting requirements, and citation style.
  • Prepare your Word document per those guidelines, creating your initial title, subheads, and any guiding bullet points or questions detailed in the assignment.
  • Create a working references list, addressing citation formatting before working further.
  • Create an initial compelling opening (multiple paragraphs) with a thesis paragraph that correlates with the title, the organization, and subheads of the essay.
  • Read, re-read, organize, and catalogue (patterns/themes) references based on the organization of the assignment; identify the representative anchor sources that will be used to elaborate on the patterns identified and cited with multiple valid sources. Be sure to carefully identify direct quotes and include citation, page or paragraph numbers, etc., when creating a matrix of patterns and analyses of the evidence.
  • Revise and edit throughout these steps, even significant revisions such as addressing the title, the thesis paragraph, or organization if the review of the evidence prompts those revisions.
  • Create a full first draft, and then let that sit. The final step should be a careful re-read to revise and edit before submitting.

The essay form and the writing process are important concepts for developing writers and students to understand, and that understanding must come from authentically engaging with both in supportive environments.

The challenge with teaching students to write generally and then as academics/scholars is that there are too many moving parts and simply no hard and fast “rules” to govern either the essays they write or the process they use to write them.

The Outlier Story: How Education Journalism (Almost) Always Gets It Wrong

[Header Photo by Will Myers on Unsplash]

The first two decades of my career as a literacy educator were spent as a high school English teacher in rural Upstate South Carolina, the high school I had graduated from and my home town.

This began in 1984, when SC passed sweeping education legislation that would become the standard legislative approach across the US—accountability policy grounded in state standards, high-stakes testing (grades 3 and 8 with exit exams in high school starting in grade 10), and school report cards.

SC was an early and eager adopter of the “crisis” rhetoric fueled by the A Nation at Risk report released under the Reagan administration.

That high school and town were populated mostly by working-class and poor people; the town and smaller towns served by the high school were dead or dying mill towns.

Schools had far more poverty than the data showed because rural Southerners often refused to accept free and reduced meals (the primary data point for measuring poverty in schools).

However, for many years the high school ranked number 1 in the entire state for student exit exam scores in math, reading, and writing. Because of our student demographics (and notably because these students had relatively low or typical scores in grade 8 testing), we were what many people would refer to as a “high flying” or “miracle” school.

In more accurate statistical terms, we were an “outlier” data point in the state.

I have been in SC education for going on five decades, and the overwhelming body of data related to student achievement in the state has matched what all data show across the US—measurable student learning is most strongly causally related to the socioeconomic status and educational levels of those students’ parents.

Further, the full story about how we achieved outlier status includes two aspects.

One is that from grade 8 to grade 10 testing, the population of students changed because of students dropping out of school (and these were among the lowest scoring students in grade 8). In fact, students were often encouraged to drop out and enroll in adult education (a two-fer win for the school because they would not be tested and enrolling in adult ed removed them from the drop-out data).

A second part of the story is that students scoring low in grade 8 were enrolled in two math and two ELA courses in grade 10. The “extra” courses were specifically designed as test-prep for state testing. We rigorously adopted a teach-to-the-test culture.

For the state writing exam, for example, we discovered that the minimum text a student could produce and still pass was an “essay” with a three-sentence introduction, a five-sentence body, and a three-sentence conclusion. Students in the “extra” ELA course wrote dozens of 3-5-3 essays in grade 10, with the teacher focusing on helping students avoid the “errors” that would flag the text as below standard.

Many of us found the 3-5-3 approach to writing became a huge problem when students were required to write in other courses; even as students “passed” the state writing exam, they were not performing well as writers in other courses, and even refusing at times to write more than 3-5-3 essays.

In the high-stakes accountability era, we did do a great deal of good. Many students across the US passed all their courses but could not receive a diploma because of exit exams; most of our students graduated, and not because we did anything underhanded.

Yet, I must stress that how we accomplished our outlier status was likely not scalable, but more importantly, our approach should not be replicated by other schools.

Fast-forward 40 years, and education journalism has written hundreds and hundreds of stories not only in pursuit of “outlier” schools, but carelessly framing them as both proof of the on-going (permanent) education crisis and that “status quo” education refuses to implement what we know “works.”

The newest iteration of this misleading story in education is the “science of” movement grounded in the “science of reading” story first popularized by Emily Hanford, who wrote about a “miracle” school in Pennsylvania. This compelling but false story has been parlayed into an even more successful podcast as well as spawning dozens of copy-cat articles by education journalists across the country.

Media, however, never covered Gerald Coles’s careful debunking of the “miracle” school Hanford featured. Similar to my story above about the beginning of my teaching career, the full story of that school was quite different than what was covered in the media.

And as 2024 drew to a close, education journalists simply had no other lens than this: Which School Districts Do the Best Job of Teaching Math?

To be blunt, education journalists are mistakenly compelled to focus on the “exceptional” districts (outliers) while ignoring the more compelling red line that, again, shows what, in fact, is normal and what can and should be addressed in terms of educational reform—the negative impact of poverty on educational attainment.

So here is a story you likely will not read: Education journalism is failing public education, and has been doing so for decades.

Education journalists are blindly committed to the “crisis” and “outlier” stories because they know people will read and listen to them.

The “outlier” story makes for a kind of “good” journalism, I suppose, but the problem is that these stories become popular beliefs and then actual legislation and policy.

The current “science of” movement is riding a high wave because of the “science of reading” tsunami. But like all the misguided reforms since the original false education story, A Nation at Risk, this too will crash and reveal itself as a great harm to students, teachers, and our public school system.

This is boring, I know, but most outlier stories are ultimately false or they simply are not replicable or scalable, as I explained in my opening story.

If we genuinely care about student learning, teaching, and the power of public education, we need education journalists more dedicated to the full story and not to the outliers that help drive their viewing numbers.


Recommended

Big Lies of Education: A Nation at Risk and Education “Crisis”

Big Lies of Education: Reading Proficiency and NAEP

Big Lies of Education: National Reading Panel (NRP)

Big Lies of Education: Poverty Is an Excuse

Big Lies of Education: International Test Rankings and Economic Competitiveness

Big Lies of Education: Grade Retention

Big Lies of Education: Grade Retention

Update [November 2025]

Early Grade Retention Harms Adult Earnings, Jiee Zhong [access PDF HERE]

See also: American Economic Journal: Applied Economics (Forthcoming)


The Big Lie of grade retention in the US is that it is often hidden within larger reading legislation and policy, notably since the 2010s:

The Effects of Early Literacy Policies on Student Achievement, John Westall and Amy Cummings

Westall and Cummings, in fact, have recently found:

  • Third grade retention (required by 22 states) significantly contributes to increases in early grade high-stakes assessment scores as part of comprehensive early literacy policy.
  • Retention does not appear to drive similar increases in low-stakes assessments.
  • No direct causal claim is made about the impact of retention since other policy and practices linked to retention may drive the increases.

However, their analysis concludes the following about grade retention as reading reform:

Similar to the results for states with comprehensive early literacy policies, states whose policies mandate third-grade retention see significant and persistent increases in high-stakes reading scores in all cohorts. The magnitude of these estimates is similar to that of the “any early literacy policy” estimates described in Section 4.1.1 above, suggesting that states with retention components essentially explain all the average effects of early literacy policies on high-stakes reading scores. By contrast, there is no consistent evidence that high-stakes reading scores increase in states without a retention component.

Therefore, one Big Lie about grade retention is that it allows misinformation and false advocacy for the recent “science of reading” reform across most states in the US.

To be blunt, grade retention is punitive, impacts disproportionately minoritized and marginalized students, and simply is not “reading” reform [1]:

Since grade retention in the early grades removes the lowest scoring students from the populations being tested and reintroduces them biologically older when tested, the increased scores are likely from these population manipulations and not from more effective instruction or increased student learning.
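A minimal simulation can isolate just this population effect; the score distribution and the 10% retention share below are assumptions for illustration, not actual Mississippi data:

```python
import random

# Minimal simulation with an assumed score distribution and an assumed 10%
# retention share; not actual Mississippi data.
random.seed(1)
cohort = [random.gauss(215, 30) for _ in range(1000)]  # simulated grade-3 scale scores

mean_all = sum(cohort) / len(cohort)

# "Retain" (remove from the tested population) the lowest-scoring 10%.
tested = sorted(cohort)[len(cohort) // 10:]
mean_tested = sum(tested) / len(tested)

print(f"Average with every student tested:     {mean_all:.1f}")
print(f"Average after removing the bottom 10%: {mean_tested:.1f}")
# The average rises by several points purely from who is tested,
# with zero change in any individual student's reading.
```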

Evidence from the UK, for example, suggests that skills-based reading testing (phonics checks) that count as “reading” assessment strongly correlate with biological age (again suggesting that test scores may be about age and not instruction or learning):

Another Big Lie about grade retention is that reading reform advocates fail to acknowledge decades of evidence that grade retention mostly drives students to drop out of school and produces numerous negative emotional consequences for the students retained.

Consequently, NCTE has a resolution rejecting test-based grade retention:

Resolved, that the National Council of Teachers of English strongly oppose legislation mandating that children, in any grade level, who do not meet criteria in reading be retained.

And be it further resolved that NCTE strongly oppose the use of high-stakes test performance in reading as the criterion for student retention.

Grade retention, then, is an effective Big Lie of Education because it allows misinformation based in test-score increases to promote policy and practices that fail to increase test scores in sustained ways (see the dramatic drop in “success” for “high-flying” states such as Mississippi and Florida, both of which tout strong grade 4 reading scores, inflated by grade retention, but do not sustain those mirage gains by grade 8).

Grade retention is a Big Lie of education reform that punishes minoritized and marginalized students, inflates test scores, and fuels politicized education reform.

In short, don’t buy it.



Note [Updated]

[1] Consider that states retaining thousands of students each year, such as Mississippi, have not seen those retention numbers drop, suggesting that the “science of reading” reforms are simply not working even as retention continues to inflate scores.

The following data from Mississippi on reading proficiency and grade retention expose these claims as misleading or possibly false:

2014-2015 – 3,064 (grade 3) – 12,224 K-3 retained / 32.2% proficiency

2015-2016 – 2,307 (grade 3) – 11,310 K-3 retained / 32.3% proficiency

2016-2017 – 1,505 (grade 3) – 9,834 K-3 retained / 36.1% proficiency

2017-2018 – 1,285 (grade 3) – 8,902 K-3 retained / 44.7% proficiency

2018-2019 – 3,379 (grade 3) – 11,034 K-3 retained / 48.3% proficiency

2021-2022 – 2,958 (grade 3) – 10,388 K-3 retained / 46.4% proficiency

2022-2023 – 2,287 (grade 3) – 9,525 K-3 retained / 51.6% proficiency

2023-2024 – 2,033 (grade 3) – 9,121 K-3 retained / 57.7% proficiency

2024-2025 – 2,132 (grade 3) – 9,250 K-3 retained / 49.4% proficiency

Update [January 2026]

On education miracles in general (and those in Mississippi in particular), Howard Wainer, Irina Grabovsky and Daniel H. Robinson


Education Journalism Fails Education (Again): “News media often cater to panics”

[Header Photo by Markus Spiske on Unsplash]

“The available research does not ratify the case for school cellphone bans,” writes Chris Ferguson, professor of psychology at Stetson University, adding, “no matter what you may have heard or seen or been [told].”

What Ferguson then offers is incredibly important, but it also exposes a serious lack of awareness by Kappan, considering its coverage of education:

And the media treatment has played a part in amplifying what can only be described as a moral panic about phones in schools.
 
One recent New York Times article begins with the sentence, “Cellphones have become a school scourge.” 
 
Can we expect objective coverage to follow?
 
News media often cater to panics, neglecting inconvenient science and stoking unreasonable fears. And this is what I see happening with the issue of cellphones in schools.

First, Ferguson's characterization of media coverage of education (“News media often cater to panics”) is not only accurate but also matches a warning many scholars and educators have been offering for decades, especially throughout five decades of high-stakes accountability education reform uncritically endorsed by the media.

The only story education journalists seem to know how to write is one that shouts crisis and stokes panic.

Just a couple of days ago in The Hechinger Report, this headline, “6 observations from a devastating international math test,” is followed by this lede: “An abysmal showing by U.S. students on a recent international math test flabbergasted typically restrained education researchers. ‘It looks like student achievement just fell off a cliff,’ said Dan Goldhaber, an economist at the American Institutes for Research.”

And for a century, in fact, education journalism has persistently fostered a “moral panic” about students' reading proficiency.

Here is Nicholas Kristof in the New York Times: “One of the most bearish statistics for the future of the United States is this: Two-thirds of fourth graders in the United States are not proficient in reading.”

Kristof is but one among dozens in the media repeating what constitutes at best an inexcusable mischaracterization and at worst a lie about what exactly NAEP testing data show about reading achievement in the US.

Nearly every media story about reading in the US since Emily Hanford launched the popular mischaracterization/lie in 2018 (and then repackaged it as a podcast) has dutifully “amplif[ied] what can only be described as a moral panic” about reading achievement and instruction:

The stakes were high. Research shows that children who don’t learn to read by the end of third grade are likely to remain poor readers for the rest of their lives, and they’re likely to fall behind in other academic areas, too. People who struggle with reading are more likely to drop out of high school, to end up in the criminal justice system, and to live in poverty. But as a nation, we’ve come to accept a high percentage of kids not reading well. More than 60 percent of American fourth-graders are not proficient readers, according to the National Assessment of Educational Progress, and it’s been that way since testing began in the 1990s.

Ferguson's warning about the misguided panic over cell phones in schools, and the resulting rush to legislate on the basis of that panic, is but a microcosm of the much larger and more dangerous media misinformation about reading and the rise of “science of reading” (SOR) legislation.

We should heed Ferguson's message not just about cell phones in schools but about the vast majority of media coverage of education, and about how the public and political leaders overreact to constant but baseless moral panics.

Yes, I am glad Kappan included Ferguson's article, but I wish Kappan's The Grade and all education journalists would pause, take a look in the mirror, and recognize that his concern about media coverage of cell phones easily applies to virtually every media story on education.

In fact, I encourage The Grade and other education journalists to implement Ferguson’s “Red Flags” when considering education research, specifically the SOR story being sold:

RED FLAG 1: Claims that all the evidence is on one side of a controversial issue….

RED FLAG 2: Reversed burden of proof. “Can you prove it’s not the smartphones?”…

RED FLAG 3: Failing to inform readers that effect sizes from studies are tiny, or near zero, only mentioning they are “statistically significant.”…

RED FLAG 4: Comparisons to other well-known causal effects.

As I and others have repeatedly shown, the SOR story fails all of these Red Flags.

Let's look at just one example of Red Flag 1. Hanford, quoting Louisa Moats (who has a market interest in selling SOR stories to promote her teacher training program, LETRS, which, ironically, fails the scientific evidence test itself), asserts SOR is “settled science”:

There is no debate at this point among scientists that reading is a skill that needs to be explicitly taught by showing children the ways that sounds and letters correspond.

“It’s so accepted in the scientific world that if you just write another paper about these fundamental facts and submit it to a journal they won’t accept it because it’s considered settled science,” Moats said.

And this refrain is at the center of SOR advocacy, media coverage, and the work of education journalists: “Hanford pushed reporters to understand the research on how students learn to read is settled.”

However, not only is there no scientific evidence of a reading crisis caused by balanced literacy or a few targeted reading programs, but the field of reading science is itself complex and contested, with the dominant theory, the simple view of reading, being revised by evidence supporting the active view of reading.
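
Ferguson's Red Flag 3 also deserves a brief, concrete illustration. The sketch below is my own, in Python, with invented numbers rather than figures from any actual SOR study; it only shows how a near-zero effect can still come out “statistically significant” once the sample is large enough:

```python
# Illustration of Red Flag 3: a tiny, near-zero effect can still be
# "statistically significant" with a large sample. Numbers are invented.
from math import sqrt
from statistics import NormalDist

def p_value_for_correlation(r: float, n: int) -> float:
    """Two-sided p-value for a Pearson correlation r with sample size n,
    using the normal approximation to the t distribution (fine for large n)."""
    t = r * sqrt((n - 2) / (1 - r * r))
    return 2 * (1 - NormalDist().cdf(abs(t)))

r, n = 0.03, 10_000  # a trivially small correlation and a very large sample
print(f"p-value: {p_value_for_correlation(r, n):.4f}")  # about 0.003, well below 0.05
print(f"variance explained (r^2): {r * r:.4f}")         # 0.0009, well under 1 percent
```

In other words, “statistically significant” says nothing about whether an effect is big enough to matter, which is exactly the omission Ferguson flags in media coverage.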

Ultimately, the moral panics around education have far more to do with media begging for readers/viewers, education vendors creating market churn for profit, and politicians grandstanding for votes.

In the wake of education journalists repeatedly choosing to “cater to panics,” students, teachers, and education all, once again, are the losers.

Course Grade Contracts: Assignments as Teaching and Learning, Not Assessment

[Header Photo by Diomari Madulara on Unsplash]

At the end of the fall semester of year 41 as an educator, I can admit two things: (1) I may have learned more than my students (I taught two new courses and continued to experiment with course grade contracts), and (2) I am excited about my spring courses, where I can implement what I learned (both about grade contracts and about teaching students to write).

Having entered the classroom in 1984, I am now in my fifth decade as a teacher, with much of that work dedicated to teaching students to write and to using writing assignments as teaching and learning, not assessment.

Gradually, and then fully at some point in the 1990s, I successfully eliminated traditional tests and assignment grades in my high school English courses. As a note of clarification: although I do not use tests or assignment grades, I have always been required to assign grading-period and course grades.

Thus, I have been seeking ways to better navigate a test/grade culture of traditional schooling (one my students have been conditioned to trust and even embrace) while practicing my critical philosophy that rejects both.

A few semesters ago, as part of that journey, I returned to the course grade contract, something I had tried in some fashion during my high school teaching years.

The problem I continued to have was that students were mostly unable to set aside their test/grade mentality, and thus, the absence of tests and assignment grades often negatively impacted student engagement and learning.

Initially, I envisioned course grade contracts would improve student engagement and lower stress and anxiety, thus improving learning.

Some non-traditional practices worked. I have students prepare for and participate in a class discussion for their midterm, for example. No memorization, no “cover your work,” and no exam stress.

Students both embraced this collaborative approach and recognized it not as assessment but as a learning experience in itself.

However, particularly in courses not designated as writing courses (I do teach first-year writing and an upper-level writing/research course), students tend to struggle significantly with the course structure and with the use of major writing assignments as extended teaching and learning experiences (and not as a way to grade them).

The first iteration of the course grade contract, then, focused on requiring students to submit, conference, and revise essays; I structured A and B course grades around minimum standards for the B-range (submit an acceptable essay, conference after receiving my feedback, and submit one acceptable revision that addresses the feedback) and additional revisions after more feedback for the A-range.

Despite the course grade being explicitly linked to minimum expectations for the process, students continue to see my feedback as negative and harsh, and they also remain fixated on the possibility of submitting a perfect essay and never having to complete revisions.

In short, they see the essay assignment as a form of assessment and cannot fully engage in the submitting/revising process as individualized teaching and learning experiences.

Oddly, students continue to email me apologizing for their first submissions because they see the revision-oriented feedback, again, as negative or harsh (evaluative) rather than as a necessary part of the essay assignment as teaching and learning.

The semester now ending, in fact, proved to me that using the course grade contract to shift assignments from forms of assessment to teaching/learning experiences (like using the midterm exam period for a class discussion) needed another round of revision on my part.

The problems I am still encountering include students struggling in content-focused courses (where they expect traditional tests and do not expect to be challenged as thinkers and writers), both because of the absence of tests/grades and because of a course structure that foregrounds course content in the first half of the semester and shifts mostly to workshop in the second half.

Here, then, I want to share the new versions of those contracts to be implemented in the spring. I have included more explicit language about the purpose of the contract and added final portfolio expectations in a format that is also more explicit about assignment expectations as well as about fulfilling the contracted grade.

Here is the revised course grade contract for my first-year writing course:

And here is the revised course grade contract for my upper-level writing/research course:

The problem will remain, however, that I teach students who have been conditioned for more than two-thirds of their lives in a culture of tests and grades, a culture that has taught them that assignments are by the teacher for evaluation and not for the student as teaching and learning.

I am seeking ways to shift the culture of teaching and learning as well as my students’ expectations for what it means to be a student and a teacher.

These are big asks for those students, but I am convinced they can make those shifts and benefit greatly from doing so.