Category Archives: Testing

Test Scores Reflect Media, Political Agendas, Not Student or Educational Achievement [UPDATED]

In the US, the crisis/miracle obsession with reading mostly focuses on NAEP scores. For the UK, the same crisis/miracle rhetoric around reading is grounded in PIRLS.

The media and political stories around the current reading crisis cycle have interesting and overlapping dynamics in these two English-dominant countries, specifically a hyper-focus on phonics.

Here are some recent media examples for context:

Let’s start with the “soar[ing]” NAEP reading scores in MS, LA, and AL as represented by AP:

‘Mississippi miracle’: Kids’ reading scores have soared in Deep South states

Now, let’s add the media response to PIRLS data in the UK:

Reading ability of children in England scores well in global survey

Now I will share data on NAEP and PIRLS showing that media and political responses to test scores are fodder for predetermined messaging, not real reflections of student achievement or educational quality.

A key point is that the media coverage above represents a bait-and-switch approach to analyzing test scores. The claims in both the US and UK focus on rank among states/countries rather than on trends within states/countries.

Do any of these state trend lines from FL, MS, AL, or LA appear to be “soar[ing]” data?

The fair description of the “miracle” states identified by AP is that test scores are mostly flat, and AL, for example, appears to have peaked more than a decade ago and is trending down.

The foundational “miracle” state, MS, has had two significant increases, one before their SOR commitment and one after; but there remains no research explaining why those increases occurred:

Scroll up and notice that in the UK, PIRLS scores have tracked flat and slightly down as well.

The problematic element in all of this is that many journalists and politicians have used flat NAEP scores to shout “crisis” and “miracle,” while in the UK, the current flat and slightly down scores are reason to shout “Success!” (although research on the phonics-centered reform in England since 2006 has not delivered as promised [1]).

Many problems exist with relying on standardized test scores to evaluate and reform education. Standardized testing remains heavily race, gender, and class biased.

But the greatest issue with test data is that inexpert and ideologically motivated journalists and politicians persistently conform the data to their desired stories—sometimes crisis, sometimes miracle.

Once again, the stories being sold—don’t buy them.


Recommended

Three Twitter threads on reading, language and a response to an article in the Sunday Times today by Nick Gibb, Michael Rosen

[1] Wyse, D., & Bradbury, A. (2022). Reading wars or reading reconciliation? A critical examination of robust research evidence, curriculum policy and teachers’ practices for teaching phonics and reading. Review of Education, 10(1), e3314. https://doi.org/10.1002/rev3.3314

UPDATE

Mainstream media continues to push a false story about MS as a model for the nation. Note that MS, TN, AL, and LA demonstrate that political manipulation of early test data is a mirage, not a miracle.

All four states remain at the bottom of NAEP reading scores for both proficient and basic a full decade into the era of SOR reading legislation:


Even More Problems with Grade-Level Proficiency

I have often explained the essential flaw with grade-level proficiency, notably the third-grade reading myth.

Grade level in reading is a calculation that serves textbook companies and testing, but fulfills almost no genuine purpose in the real world; it is a technocratic cog in the efficiency machine.

Now that we are squarely in the newest reading war, the “science of reading,” two other aspects of grade-level proficiency have been central to that movement—the hyper-focus on third-grade reading proficiency that includes high-stakes elements such as grade retention and the misinformation rhetoric that claims 65% of students are not reading at grade-level (the NAEP proficiency myth).

These alone are enough to set aside or at least be skeptical about rhetoric, practice, and policy grounded in grade-level proficiency, but there is even more to consider.

A Twitter thread examines grade-level achievement aggregated by month of birth:

The thread builds off a blog post: Age-Related Expectations? by James Pembroke.

The most fascinating aspect of this analysis thread is the series of charts provided:

As the analysis shows, student achievement is strongly correlated with birth month, which calls into question how well standardized testing serves high-stakes practices and how often standardized testing reflects something other than actual learning.

Being older in your assigned grade level is not an aspect of merit, yet being older in your assigned grade seems to carry measured achievement benefits that are essentially unfair to younger members of a grade.

Further, this sort of analysis contributes to concerns raised about grade retention, which necessarily removes the students most likely to score low on testing and reintroduces those students as older than their peers in the assigned grade, which would seem to ensure their test data corrupts both sets of measurements.

The data above are from the UK, but a similar analysis by month/year of birth applied to retained students and their younger peers would be a powerful contribution to understanding how grade retention likely inflates test data while continuing to be harmful to the students retained (and not actually raising achievement).
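
To make that concern concrete, here is a minimal, hypothetical simulation (in Python) of the mechanism described above; the cohort size, the size of the age-in-months advantage, and the retention rate are all invented assumptions for illustration, not the UK data or any state’s actual figures.

```python
# Hypothetical sketch only: invented numbers, not real NAEP/PIRLS or state data.
import numpy as np

rng = np.random.default_rng(0)

N = 10_000
AGE_EFFECT = 0.5      # assumed score points per extra month of age at test time
RETAIN_RATE = 0.10    # assumed share of lowest grade-3 scorers who are retained

ability = rng.normal(200, 15, N)        # stable individual differences
birth_spread = rng.uniform(0, 12, N)    # months of within-grade age spread

def grade4_score(extra_months=0.0, idx=slice(None)):
    """Observed grade-4 score: ability + age-in-months bump + test noise."""
    a = ability[idx]
    return a + AGE_EFFECT * (birth_spread[idx] + extra_months) + rng.normal(0, 5, a.size)

# Counterfactual: nobody is retained, everyone tests on time
print(f"Grade-4 mean, no retention:           {grade4_score().mean():6.1f}")

# With retention: grade-3 scores decide who repeats the year
grade3 = ability + AGE_EFFECT * birth_spread + rng.normal(0, 5, N)
retained = grade3 < np.quantile(grade3, RETAIN_RATE)

# Measurement 1: only promoted students sit the grade-4 test on time
print(f"Grade-4 mean, promoted students only: {grade4_score(idx=~retained).mean():6.1f}")

# Measurement 2: retained students sit it a year late, ~12 months older,
# even though (by assumption) repeating the grade added no real learning
on_time = grade4_score(idx=retained).mean()
late = grade4_score(extra_months=12.0, idx=retained).mean()
print(f"Retained students, on time vs. a year late: {on_time:6.1f} vs. {late:6.1f}")
```

Both distortions push reported averages up without any student actually reading better, which is precisely the corruption of “both sets of measurements” described above.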

There appear to be even more problems with grade-level proficiency than noted previously, and now, even more reason not to continue to use the rhetoric or the metric.

Lessons Never Learned: From VAM to SOR

The US is in its fifth decade of high-stakes accountability education reform.

A cycle of education crisis has repeated itself within those decades, exposing a very clear message: We are never satisfied with the quality of our public schools regardless of the standards, tests, or policies in place.

The sixteen years of the George W. Bush and Barack Obama administrations were a peak era of education reform, culminating with a shift from holding students (grade-level testing and exit exams) and schools (school report cards) accountable to holding teachers accountable (value-added methods [VAM] of evaluation).

The Obama years increased education reform based on choice and so-called innovation (charter schools) and doubled down on Michelle Rhee’s attack on “bad” teachers and Bill Gates’s jumbled reform-of-the-moment approaches (in part driven by stack ranking to eliminate the “bad” teachers and make room for paying great teachers extra to teach larger classes). [1]

Like Rhee and Gates, crony appointee Secretary of Education Arne “Game Changer” Duncan built a sort of celebrity status (including playing in the NBA All-Star celebrity games) on the backs of the myth of the bad teacher, charter schools, and arguing that education reform would transform society.

Nonetheless, by the 2010s, the US was right back in the cycle of shouting education crisis, pointing fingers at bad teachers, and calling for science-based reform, specifically the “science of reading” movement.

Reading legislation reform began around 2013 and then the media stoked the reading crisis fire starting in 2018. However, this new education crisis is now paralleled by the recent culture war fought in schools with curriculum gag orders and book bans stretching from K-12 into higher education.

Education crisis, teacher bashing, public school criticism, and school-based culture wars have a very long and tired history, but this version is certainly one of the most intense, likely because of the power of social media.

The SOR movement, however, exposes once again that narratives and myths have far more influence in the US than data and evidence.

Let’s look at a lesson we have failed to learn for nearly a century.

Secretary Duncan was noted (often with more than a dose of satire) for using “game changer” repeatedly in his talks and comments, but Duncan also perpetuated a myth that the teacher is the most important element in a child’s learning.

As a teacher for almost 40 years, I have to confirm that this sounds compelling and I certainly believe that teachers are incredibly important.

Yet decades of research reveal a counter-intuitive fact that is also complicated:

But in the big picture, roughly 60 percent of achievement outcomes is explained by student and family background characteristics (most are unobserved, but likely pertain to income/poverty). Observable and unobservable schooling factors explain roughly 20 percent, most of this (10-15 percent) being teacher effects. The rest of the variation (about 20 percent) is unexplained (error). In other words, though precise estimates vary, the preponderance of evidence shows that achievement differences between students are overwhelmingly attributable to factors outside of schools and classrooms (see Hanushek et al. 1998; Rockoff 2003; Goldhaber et al. 1999; Rowan et al. 2002; Nye et al. 2004).

Teachers Matter, But So Do Words

Measurable student achievement is by far more a reflection of out-of-school factors (OOS) such as poverty, parental education, etc., than of teacher quality, school quality, or even authentic achievement by students. Historically, for example, SAT data confirm this evidence:

Test-score disparities have grown significantly in the past 25 years.  Together, family income, education, and race now account for over 40% of the variance in SAT/ACT scores among UC applicants, up from 25% in 1994.  (By comparison, family background accounted for less than 10% of the variance in high school grades during this entire time) The growing effect of family background on SAT/ACT scores makes it difficult to rationalize treating scores purely as a measure of individual merit or ability, without regard to differences in socioeconomic circumstance.

Family Background Accounts for 40% of SAT/ACT Scores Among UC Applicants

Let’s come back to this, but I want to frame this body of scientific research (what SOR advocates demand) with the SOR movement claims [2] that teachers do not teach the SOR (because teacher educators failed to teach it) and that student reading achievement is directly linked to poor teacher knowledge and instruction (specifically the reliance on reading programs grounded in balanced literacy).

This media and politically driven SOR narrative is often grounded in a misrepresentation of test-based data, NAEP:

First, the SOR claims do not match grade 4 NAEP data: the claims that we have a reading crisis (NAEP scores immediately preceding the 2013 shift in reading legislation were improving), that SOR reading policies and practices are essential (NAEP data have been flat since 2013, with a Covid drop in recent scores), and that 65% of students aren’t proficient at reading.

On that last point, the misinformation and misunderstanding of NAEP are important to emphasize:

1.  Proficient on NAEP does not mean grade level performance.  It’s significantly above that.
2.  Using NAEP’s proficient level as a basis for education policy is a bad idea.

The NAEP proficiency myth

Now if we connect the SOR narrative with NAEP data and the research noted above about what standardized test scores are causally linked to, we are faced with a very jumbled and false story.

Teacher prep, instructional practices, and reading programs would all fit into that very small impact of teachers (10-15%), and there simply is no scientific research that shows a causal relationship between balanced literacy and low student reading proficiency. Added to the problem is that balanced literacy and the “simple view” of reading (SVR) have been central to how reading is taught for the exact same era (yet SOR only blames balanced literacy and aggressively embraces SVR as “settled science,” which it isn’t).

One of the worst aspects of the SOR movement has been policy shifts in states that allocate massive amounts of public funds to retraining teachers, usually linked to one professional development model, LETRS (which isn’t a scientifically proven model [3]).

Once again, we are mired in a myth of the bad teacher movement that perpetuates the compelling counter myth that the teacher is the most important element in a child’s education.

However, the VAM era flamed out, leaving in its ashes a lesson that we are determined to ignore:

VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs, or schools. Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality.

ASA Statement on Using Value-Added Models for Educational Assessment (2014)

Let me emphasize: “the majority of opportunities for quality improvement are found in the system-level conditions,” and not through blaming and retraining teachers.

The counterintuitive part in all this is that teachers are incredibly important at the practical level, but isolating teaching impact at the single-teacher or single-moment level through standardized testing proves nearly impossible.
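
Here is a rough sketch of what that counterintuitive fact can look like in practice; the variance shares below simply encode the rough 60/10-15/remainder split quoted earlier, and the class size, “background shift,” and teacher-effect values are invented assumptions for illustration, not estimates from any of the cited studies.

```python
# Illustrative only: invented variance shares and shifts, not research estimates.
import numpy as np

rng = np.random.default_rng(0)

CLASS_SIZE = 25
SD_BACKGROUND = np.sqrt(0.60)   # assumed ~60% of score variance: family/background
SD_TEACHER = np.sqrt(0.12)      # assumed ~10-15% of score variance: teacher effects
SD_ERROR = np.sqrt(0.28)        # remainder: unexplained noise

def class_scores(background_shift, teacher_effect, n=CLASS_SIZE):
    """Scores = out-of-school background + teacher effect + test noise."""
    background = rng.normal(background_shift, SD_BACKGROUND, n)
    return background + teacher_effect + rng.normal(0, SD_ERROR, n)

# Two classrooms taught with identical quality, but different student populations
identical_teaching = rng.normal(0, SD_TEACHER)

lower_poverty = class_scores(background_shift=+0.5, teacher_effect=identical_teaching)
higher_poverty = class_scores(background_shift=-0.5, teacher_effect=identical_teaching)

print(f"Class mean, lower-poverty school:  {lower_poverty.mean():+.2f}")
print(f"Class mean, higher-poverty school: {higher_poverty.mean():+.2f}")
# Any gap here is entirely out-of-school background plus noise; judging the two
# teachers by these averages attributes non-school differences to instruction.
```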

The VAM movement failed to transform teacher quality and student achievement because, as the evidence from that era proves, in-school-only education reform fails to address the much larger forces at the systemic level that impact measurable student achievement.

Spurred by the misguided rhetoric and policies under Obama, I began advocating for social context reform as an alternative to accountability reform.

The failure of accountability, the evidence proves, is that in-school only reform never achieves the promises of the reformers or the reforms.

Social context reform calls for proportionally appropriate and equity-based reforms that partner systemic reform (healthcare, well-paying work, access to quality and abundant food, housing, etc.) with a new approach to in-school reform that is driven by equity metrics (teacher assignment, elimination of tracking, eliminating punitive policies such as grade retention, fully funded meals for all students, class size reduction, etc.).

The SOR movement is repeating the same narrative and myth-based approach to blaming teachers and schools, demanding more (and earlier) from students, and once again neglecting to learn the lessons right in front of us because the data do not conform to our beliefs.

I have repeated this from Martin Luther King Jr. so often I worry that there is no space for most of the US to listen, but simply put: “We are likely to find that the problems of housing and education, instead of preceding the elimination of poverty, will themselves be affected if poverty is first abolished.”

While it is false or at least hyperbolic messaging to state that 65% of US students are not proficient readers, if we are genuinely concerned about the reading achievement of our students, we must first recognize that reading test scores are by far a greater reflection of societal failures—not school failures, not teacher failures, not teacher education failures.

And while we certainly need some significant reform in all those areas, we will never see the sort of outcomes we claim to want if we continue to ignore the central lesson of the VAM movement; again: “the majority of opportunities for quality improvement are found in the system-level conditions.”

The SOR movement is yet another harmful example of the failures of in-school only education reform that blames teachers and makes unrealistic and hurtful demands of children and students.

The science from the VAM era contradicts, again, the narratives and myths we seem fatally attracted to; if we care about students and reading, we’ll set aside false stories, learn our evidence-based lessons, and do something different.


[1] Bleiberg, J., Brunner, E., Harbatkin, E., Kraft, M. A., & Springer, M. G. Taking Teacher Evaluation to Scale: The Effect of State Reforms on Achievement and Attainment. NBER Working Paper 30995. http://www.nber.org/papers/w30995

ABSTRACT

Federal incentives and requirements under the Obama administration spurred states to adopt major reforms to their teacher evaluation systems. We examine the effects of these reforms on student achievement and attainment at a national scale by exploiting the staggered timing of implementation across states. We find precisely estimated null effects, on average, that rule out impacts as small as 0.015 standard deviation for achievement and 1 percentage point for high school graduation and college enrollment. We also find little evidence that the effect of teacher evaluation reforms varied by system design rigor, specific design features or student and district characteristics. We highlight five factors that may have undercut the efficacy of teacher evaluation reforms at scale: political opposition, the decentralized structure of U.S. public education, capacity constraints, limited generalizability, and the lack of increased teacher compensation to offset the non-pecuniary costs of lower job satisfaction and security.

[2] I recommend the following research-based analysis of the SOR movement claims:

The Science of Reading and the Media: Is Reporting Biased?, Maren Aukerman

The Science of Reading and the Media: Does the Media Draw on High-Quality Reading Research?, Maren Aukerman

The Science of Reading and the Media: How Do Current Reporting Patterns Cause Damage?, Maren Aukerman

[3] See:

Hoffman, J.V., Hikida, M., & Sailors, M. (2020). Contesting science that silences: Amplifying equity, agency, and design research in literacy teacher preparation. Reading Research Quarterly, 55(S1), S255–S266. Retrieved July 26, 2022, from https://doi.org/10.1002/rrq.353

Research Roundup: LETRS (PDF in link above also)

Recommended

Part of the problem in debates about schools and education is the relentless use of “teacher quality” as a proxy for understanding “teaching quality”. This focuses on the person rather than the practice.

This discourse sees teachers blamed for student performance on NAPLAN and PISA tests, rather than taking into account the systems and conditions in which they work.

While teaching quality might be the greatest in school factor affecting student outcomes, it’s hardly the greatest factor overall. As Education Minister Jason Clare said last month:

“I don’t want us to be a country where your chances in life depend on who your parents are or where you live or the colour of your skin.”

We know disadvantage plays a significant role in educational outcomes. University education departments are an easy target for both governments and media.

Blaming them means governments do not have to try and rectify the larger societal and systemic problems at play.

Our study found new teachers perform just as well in the classroom as their more experienced colleagues

The Rise and Fall of the Teacher Evaluation Reform Empire

Does Instruction Matter?

For me, the pandemic era (and semi-post-pandemic era) of teaching has included some of the longest periods in my 39-year career as an educator when I have not been teaching.

The first half of my career as a high school English teacher for 18 years included also teaching adjunct at local colleges during the academic year along with always teaching summer courses (even while in my doctoral program).

Currently in my twenty-first year as a college professor, in addition to my required teaching load, I have always taught overloads during the main academic year, our optional MayX session, and (again) summer courses.

Teaching has been a major part of who I am as a professional and person since my first day at Woodruff High (South Carolina) in August of 1984.

However, during pandemic teaching, I have experienced several different disruptions to that teaching routine—shifting to remote, courses being canceled or not making (especially in MayX and summer), and then coincidentally, my first ever sabbatical during this fall of 2022 (in year 21 at my university).

One aspect of sabbatical often includes the opportunity to reset yourself as a scholar and of course as a teacher. As I was preparing my Moodle courses for Spring 2023, I certainly felt an unusually heightened awareness around rethinking my courses—an introductory education course, a first-year writing seminar, and our department upper-level writing and research course.

Here is an important caveat: I always rethink my courses both during the course and before starting new courses. Yes, the extended time and space afforded by sabbatical makes that reflection deeper, I think, but rethinking what and how I teach is simply an integral part of what it means for me to be a teacher.

For two decades now, I have simultaneously been both a teacher and teacher educator; in that latter role, I have been dedicated to practicing what I preach to teacher candidates.

I am adamant that teacher practice must always reflect the philosophies and theories that the teacher espouses, but I am often dismayed that instructional practices in education courses contradict the lessons being taught on best practice in instruction.

Not the first day, but a moment from my teaching career at WHS.

In both my K-12 and higher education positions, for example, I have practiced de-grading and de-testing the classroom because I teach pre-service teachers about the inherent counter-educational problems with traditional grades and tests.

Now, here is the paradox: As both a teacher and teacher educator my answer to “Does instruction matter?” is complicated because I genuinely believe (1) teacher instructional practices are not reflected in measures of student achievement as strongly (or singularly) as people believe and therefore, (2) yes and no.

The two dominant education reform movements over the past five decades I have experienced are the accountability movement (standards and high-stakes testing) and the current “science of reading” movement.

The essential fatal flaw of both movements has been a hyper-focus on in-school education reform only, primarily addressing what is being taught (curriculum and standards) and how (instruction).

I was nudged once again to the question about instruction because of this Tweet:

I am deeply skeptical of “The research is clear: PBL works” because it is a clear example of hyper-focusing on instructional practices and, more importantly, it is easily misinterpreted by lay people (media, parents, and politicians) to mean that PBL is universally effective (which is not true of any instructional practice).

Project-based learning (PBL) is a perfect example of the problem with hyper-focusing on instruction; see for example Lou LaBrant confronting that in 1931:

The cause for my wrath is not new or single. It is of slow growth and has many characteristics. It is known to many as a variation of the project method; to me, as the soap performance. With the project, neatly defined by theorizing educators as “a purposeful activity carried to a successful conclusion,” I know better than to be at war. With what passes for purposeful activity and is unfortunately carried to a conclusion because it will kill time, I have much to complain. To be, for a moment, coherent: I am disturbed by the practice, much more common than our publications would indicate, of using the carving of little toy boats and castles, the dressing of quaint dolls, the pasting of advertising pictures, and the manipulation of clay and soap as the teaching of English literature. (p. 245)

LaBrant, L. (1931, March). Masquerading. The English Journal, 20(3), 244-246. http://www.jstor.org/stable/803664

LaBrant and I both are deeply influenced by John Dewey’s progressive philosophy of teaching (noted as the source for PBL), but we are also both concerned with how the complexities of progressivism are often reduced to simplistic templates and framed as silver-bullet solutions to enormous and complex problems.

As LaBrant notes, the problem with PBL is not the concept of teaching through projects (which I do endorse as one major instructional approach), but failing to align the project in authentic ways with instructional goals. You see, reading a text or writing an essay is itself a project that can be authentic and then can be very effective for instruction.

My classrooms are driven, for example, by two instructional approaches—class discussions and workshop formats.

However, I practice dozens of instructional approaches, many planned but also many spontaneously implemented when the class session warrants (see Dewey’s often ignored concept of “warranted assertion”).

This is why Deweyan progressivism is considered “scientific”—not because we must use settled science to mandate scripted instructional practices but because teaching is an ongoing experiment in terms of monitoring the evidence (student artifacts of learning) and implementing instruction that is warranted to address that situation and those students.

So this leads to a very odd conclusion about whether or not instruction matters.

There are unlikely to be any instructional practices that are universally “good” or universally “bad” (note that I, as a critical educator, have explained the value of direct instruction even as I ground my teaching in workshop formats).

The accountability era wandered through several different cycles of blame and proposed solutions, eventually putting all its marbles in teacher quality and practice (the value-added methods era under Obama). This eventually crashed and burned because, as I have noted here, the measurable impact of teaching practice on student achievement data is very small—only about 10-15%, with out-of-school factors contributing about 60-80+%.

The “science of reading” movement is making the exact same mistake—damning “balanced literacy” (BL) as an instructional failure by misrepresenting BL and demonizing “three cueing” (see the second consequence HERE, bias error 3 HERE, and error 2 HERE).

Here is a point of logic and history that explains why blaming poor reading achievement on BL and three cueing makes no sense: Over the past 80 years, reading achievement has never been deemed sufficient despite dozens of different dominant instructional practices (and we must also acknowledge that at no period in history, including today, has instructional practice been monolithic, nor can we assume teachers in their classrooms are practicing what is officially designated as their practice).

In short, no instructional practice is the cause of low student achievement and no instructional practice is a silver-bullet solution.

Therefore, does instruction matter? No, if that means hyper-focusing on singular instructional templates for blame or solutions.

But of course, yes, if we mean what Dewey and LaBrant argued—which is an ongoing and complicated matrix of practices that have cumulative impact over long periods of time and in chaotic and unpredictable ways.

From PBL to three cueing—no instructional practice is inherently right or wrong; the key is whether or not teachers base instructional practices on demonstrated student need and whether or not teachers have the background, resources, teaching and learning conditions, and autonomy to make the right instructional decisions.

Finally, hyper-focusing on instruction also contributes to the corrosive impact of marketing in education, an unproductive cycle of faddism and boondoggles.

In the end, we are trapped in a reform paradigm that is never going to work because hyper-focusing on instruction while ignoring larger and more impactful elements in the teaching/learning dynamic (out-of-school factors, teaching and learning conditions, etc.) creates a situation in which all instruction will appear to be failing.

Reforming, banning, and mandating instruction, then, is fool’s gold unless we first address societal/community and school inequities.

Introduction to Failure: Why Grades Inhibit Teaching and Learning

When Beckie Supiano, for The Chronicle, examined the debate surrounding a NYT article, At N.Y.U., Students Were Failing Organic Chemistry. Who Was to Blame?, this jumped out at me as I read:

Students struggle in introductory courses in many disciplines, but failure rates tend to be particularly high in STEM. Those introductory courses “have had the highest D-F-W rates on most campuses for several decades at least — in fact, most of them persist back into the ‘30s and ‘40s,” says Timothy McKay, associate dean for undergraduate education at the University of Michigan at Ann Arbor’s arts and sciences college. “To me, this is a sign that they’re unsuccessful courses.”

At N.Y.U., Students Were Failing Organic Chemistry. Who Was to Blame?

I have multiple connections to this controversy, including two decades of navigating college students who often find my courses “hard” and my feedback “harsh” as well as almost four decades of resisting a traditional education system that requires testing and grading.

For the record, students are not as happy with courses absent tests and grades (where grades are delayed until the final submission of grades required by the university) as you might imagine.

And despite how conservative politicians and pundits characterize higher education as filled with leftwing radicals, higher education in practice is extremely conservative and traditional—including a mostly uncritical use of so-called objective tests, grading students on bell curves, and not just tolerating but boasting about courses and professors with low grades and high failure rates.

Departments and professors who have students succeeding with higher grades are routinely shamed by department chairs, who have been shamed by administrators. We receive breakdowns of grade distributions by professors and departments and the unquestioned narrative is that high grades (“too many A’s”) are a sign of weak professors/departments and low grades are a sign of rigorous professors/departments.

And here is something I think almost no one will admit: Anyone can implement a course with multiple-choice tests designed to create a bell curve of grades that ensures some students fail each course session.

In fact, that is incredibly easy (I would say lazy and irresponsible), and teachers/professors who adopt that model of instruction will almost always be praised as a “hard” teacher and the course will be lauded as “rigorous.”

This is academic hazing—not teaching, and it inhibits both teaching and learning.

I want to extend McKay’s comment above: low grades and high failure rates in introductory (or any) courses are a sign of “unsuccessful courses” because of negligent teachers/professors who hide behind a traditional system of grading.

This debate about who is to blame for students failing a course is a needed discussion, but I fear it will not focus where it should—just what is the purpose of education?

The high-failure-rate introductory courses in colleges are intentionally designed to “weed out” weak students and recruit good students for departments and disciplines.

Again, academic hazing.

I started de-testing and de-grading as a high school English teacher because I found both tests and grades did not support my students’ learning and tests/grades contributed to a hostile relationship between students and teachers. As well, tests and grades are elements in a deficit approach to how we view students and learning.

However, since this debate is grounded in a college professor, I want to focus on how grading practices are particularly egregious in higher education.

As a junior in college just starting my courses in education (my major), I had my first experience with a very modest challenge to traditional grading. My advisor and professor, Tom Hawkins, noted in class one day that college students are a mostly elite subset of all high school students, and since a bell-shaped curve is relevant to representative samples, he anticipated students in his college courses to fall on the A-C range of grades, not A-F (unless of course a student simply did not do the work, etc.).

At that moment, I began to interrogate grades and concepts such as “objective” in multiple-choice and standardized testing.

I, like Dr. Hawkins, anticipate that my students will not only engage seriously in my courses but that they will likely produce A or B work if they trust and follow my guidance. This is reinforced by my teaching at an academically selective university.

Another element of this concern about college courses, professors, and grades must acknowledge that college students are adults.

The teaching/learning dynamic among adults must have consent, cooperation, and common goals.

This brings me back to the problem with antagonistic dynamics among students and teachers/professors.

Building a reputation as a professor or department whose courses are guaranteed to have students fail establishes antagonism and erodes teaching and learning. Period.

Whether intentional or not, The Chronicle’s headline is almost perfect: What Does It Mean When Students Can’t Pass Your Course?

The key here is “can’t” because there are many courses across the U.S.—disproportionately in the so-called hard sciences and hard-science adjacent disciplines—that predetermine how many students receive specific grades and monitor that grades fall in a proportional way across the entire spectrum of grades from A to F.

That sort of a-statistical nonsense is not just common, but almost entirely unchallenged even though it is being imposed on non-representative populations of students.

To be specific, in my first-year writing seminar with 12 students at an academically selective university, where several of the students were valedictorians or salutatorians (and almost all of them graduated in the top 10% of their classes), a final grade distribution of 1 A, 2 Bs, 6 Cs, 2 Ds, 1 F would be pure orchestrated nonsense, but would almost never be challenged.
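
To be clear about the arithmetic, here is a toy sketch of that forced-curve logic; the scores are hypothetical and the 1 A/2 B/6 C/2 D/1 F split comes from the example above, not from any department’s actual policy. When letters are assigned by rank to fit a preset distribution, someone earns an F no matter how strong every absolute score is.

```python
# Toy example: hypothetical scores, preset curve taken from the example above.
import numpy as np

rng = np.random.default_rng(0)

# A small seminar of uniformly strong students (scores out of 100)
scores = rng.uniform(88, 98, 12)

# Preset distribution: 1 A, 2 Bs, 6 Cs, 2 Ds, 1 F
curve = ["A"] + ["B"] * 2 + ["C"] * 6 + ["D"] * 2 + ["F"]

order = np.argsort(scores)[::-1]   # rank students, highest score first
for rank, student in enumerate(order):
    print(f"Student {student + 1:2d}: score {scores[student]:.1f} -> grade {curve[rank]}")
```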

When my classes routinely have all As and Bs (because they submit work, have conferences with me after receiving written feedback, and then are required and allowed to revise), however, I am repeatedly challenged for those grades—directly and indirectly—and framed as “easy” or that I “give” As and Bs.

The NYT story about Dr. Jones will be fodder for “kids today” laments and claims that higher education fails to hold students accountable. Some will likely drag out the tired “grade inflation” nonsense that has been voiced for 100 years (when, oh when, were grades not inflated?).

But the real story is that grades inhibit teaching and learning, but remain a central feature of traditional schooling—yet even more proof that higher education is mostly conservative, not the leftist indoctrination factory conservatives rail against.

On Positive and Negative Feedback to Student Writing

Several students in my literacy course in our MAT program chose to read Donna Alvermann’s Effective Literacy Instruction for Adolescents. While the initial discussion around Alvermann’s essay focused on those students struggling with the density of her academic writing, they emphasized the importance and power of her addressing student self-efficacy in fostering student literacy development:

Adolescents’ perceptions of how competent they are as readers and writers, generally speaking, will affect how motivated they are to learn in their subject area classes (e.g., the sciences, social studies, mathematics, and literature). Thus, if academic literacy instruction is to be effective, it must address issues of self-efficacy and engagement.

Effective Literacy Instruction for Adolescents

That discussion led to some very insightful comments about the importance of providing students feedback, as opposed to grades, on their writing as part of the drafting and workshop process (anchored in their reading Graham and Perin’s 2007 Writing Next analysis of research on teaching writing).

As a long-time advocate of feedback and someone who practices de-grading the classroom as well as delaying grades (assigning grades for courses but not on assignments), I strongly supported this discussion, and was impressed with the thoughtfulness of the students.

That discussion had a subtext also—a concern raised by several students about the need for teachers to provide students positive feedback (so students know what they are doing well), and not just negative feedback. (Some of that subtext, I am sure, was an unexpressed feeling among some of these graduate students that they received mostly or exclusively “negative” feedback from me on their first submitted essays.)

After several students worked through this argument for positive feedback, I asked them to step back even further to consider, or re-consider, what counts as “positive” or “negative” feedback.

In the sort of way Alanis Morissette perceives irony, I found on social media Your Essay Shows Promise But Suffers from Demonic Possession, posted at McSweeney’s Internet Tendency—a brilliant portrayal of the tensions created by teachers giving students feedback on their essays, which begins:

I appreciate the hard work that went into this essay. It has many merits, but it also has something profoundly and disturbingly wrong with it. In fact, I’m writing this feedback on my phone, cowering in the bathtub with my wife, after your essay terrorized and nearly destroyed us….

The essay was formatted correctly, and each sentence was more or less intelligible in itself. But altogether, the effect was—disorientation. Worse, actually. Pure senselessness. The Void.

Your Essay Shows Promise But Suffers from Demonic Possession

This satirical piece does exactly what my MAT students requested, blending positive (“many merits”) with negative (“something profoundly and disturbingly wrong with it”) feedback; and herein, I think, is the problem with the dichotomy itself.

Once, dramatically, while I was teaching high school, and often since coming to my current selective liberal arts university, I have encountered students who perceive all feedback as negative and reject having to revise their writing.

My argument to my MAT students was that actionable feedback on student writing is not inherently “negative” even though it does suggest something is “wrong” and needs “correcting” (perceptions grounded in students’ experiences in traditional classrooms that focus on the error hunt and punish students with grades).

However, I am well aware over almost four decades that part of my challenge as a writing teacher is how to help students see and respond to feedback as supportive and not an attack on their work or them as people (we had a great discussion about whether or not students can or should see their writing as inextricable from them as people).

In other words, affect matters.

Throughout the past 20 years teaching in higher education, I have been struggling against the perception by students that my written feedback is “mean,” “harsh,” “negative,” etc., while they simultaneously find my face-to-face feedback supportive and “good.”

I continue to seek ways to make feedback on student writing more effective as a key aspect of helping students grow as writers and thinkers as well as fostering their independence as writers and thinkers (learning to revise and edit their work on their own).

Students persist, however, in finding the feedback “negative,” and occasionally shutting down.

If there is a path to moving past the dichotomy of negative/positive feedback to student writing, I think it lies in the following concepts and practices:

  • Having explicit discussions with students about the inherent need for all writers to revise writing, ideally in the context of feedback from an expert and/or supportive writer/teacher. I often share with students samples of my own work submitted for publication with track changes and comments from editors.
  • Rejecting high-stakes environments in favor of low-stakes environments in the writing workshop format. This is grounded in my commitment to de-grading the classroom, which honors that writing is a process (see More Thoughts on Feedback, Grades, and Late Work).
  • Adopting strategies and rhetoric that reject deficit ideology and the error hunt (Connie Weaver). It is important for teachers and students to prefer “revising” and “editing” over “error,” “mistake,” and “correcting” as the language surrounding the writing process. The pursuit in writing must be grounded in the recognition that all writing can be better even when it is currently quite good (and especially if it is somewhat or deeply flawed).
  • Clarifying for students that challenging and critical feedback is intended as actionable by students as writers, and thus, inherently positive. One of the recurring tone issues I experience with students viewing my written feedback as negative is misreading questions; students often read questions as sarcastic or accusatory when I am asking in order to elicit a response (for example, when I write “Did you look at the sample?” how I move forward with helping a student depends on that answer). As my MAT students expressed in the context of Alvermann, students absolutely do need to see themselves as writers and do need to trust they will be successful, but they also must embrace the need to revise and the awareness that no one produces “perfect” writing in one (or even several) drafts.

Feedback and the dynamic between teachers and students (including trust) are the lifeblood of the writing process when students are young and developing. As I noted above, affect matters and the teacher/student relationship inevitably impacts how effective the teacher is.

As teachers providing feedback, we must be careful and purposeful in our feedback, focusing on actionable feedback and creating/maintaining a culture of support and encouragement.

To that end, I believe we cannot reduce feedback to a positive/negative dichotomy that serves only to reinforce the cultures and practices we need to reject, deficit ideologies and the error hunt.

In the McSweeney’s parody above, the writing teacher and their wife are ensnared in a demon-possessed student essay, but the more horrifying detail of this piece is the ending—the realization that teachers and students are actually trapped in an even greater hellscape:

“I did it,” she sobbed. “I killed it. I killed it.”

“You did it,” I said, climbing into the bathtub with her, holding my wife close. “It’s over. It’s all over now.”

Silence.

Then she said, “It’s not over.”

“What—”

“You still have to grade it.”

80%

Your Essay Shows Promise But Suffers from Demonic Possession

Yes, let’s work on feedback and the affect created around the writing process, but let’s not ignore that there are larger dynamics (grades and testing) at play that erode the teacher/student relationship as well as the effectiveness of teaching and the possibilities of learning.


See Also

Student Agency and Responsibilities when Learning to Write: More on the Failure of SETs

The Problem of Student Engagement in Writing Workshop

Teaching and Learning as Collaboration, not Antagonism

Chicken Little Journalism Fails Education (Again and Again): Up Next, the Science of Science?

Often education journalism is disturbing in its “deja vu all over again“: Why Other Countries Keep Outperforming Us in Education (and How to Catch Up).

Criticizing U.S. public education through international comparisons is a long-standing tradition in the U.S. media, reaching back at least into the mid-twentieth century.

This is one of many crisis approaches to covering education—Chicken Little journalism—that makes false and misleading claims about the quality of U.S. education (always framed as a failure) and that because of the low status of the U.S. in international comparisons of education, the country is doomed, economically and politically.

Oddly enough, as international rankings of education have fluctuated over 70-plus years, some countries have risen and fallen in economic and political status (even inversely proportional to their education ranking) while the U.S. has remained in most ways the most dominant country, or one of the most dominant—even as we perpetually wallow in educational mediocrity.

Yet, this isn’t even remotely surprising as Gerald Bracey (and many others) detailed repeatedly that international comparisons of educational quality are essentially hokum—the research is often flawed (apples to oranges comparisons) and the conclusions drawn are based on false assumptions (that education quality directly causes economic quality).

Media coverage, however, will not (cannot?) reach for a different playbook; U.S. public education is always in crisis and the sky is falling because schools (and teachers) are failing.

Next up? I am betting on the “science of science.”

Why? You guessed it: The Latest Science Scores Are Out. The News Isn’t Good for Schools. As Sarah D. Sparks reports:

Fewer than 1 in 4 high school seniors and a little more than a third of 4th and 8th graders performed proficiently in science in 2019, according to national test results out this week.

The results are the latest from the National Assessment of Educational Progress in science. Since the assessment, known as “the nation’s report card,” was last given in science in 2015, 4th graders’ performance has declined overall, while average scores have been flat for students in grades 8 and 12.

“The 4th grade scores were concerning,” said Peggy Carr, the associate commissioner of the National Center for Education Statistics, which administers NAEP. “Whether we’re looking at the average scores or the performance by percentiles, it is clear that many students were struggling with science.”

The Latest Science Scores Are Out. The News Isn’t Good for Schools

And it seems low test scores mean that schools once again are failing to teach those all-important standards:

Carr said the test generally aligned with the Next Generation Science Standards, on which 40 states and the District of Columbia have based their own science teaching standards. Georgia, Massachusetts, and New Hampshire are developing new science assessments under a federal pilot program.

But it is even worse than we thought: “These widening gaps between the highest- and lowest-performing students, particularly in grade 4, mirror similar trends seen in national and global reading, math, and social studies assessments.”

Yep, U.S. students suck across all the core disciplines compared to the rest of the world!

And what makes this really upsetting, it seems, is we know how to teach science (you know, the “science of science”) because there is research: Effective Science Learning Means Observing and Explaining. There’s a Curriculum for That. Not only is there research, but also there are other countries doing it better and there are, again, those standards:

Organizing instruction around phenomena is a key feature of many reforms aimed at meeting the Next Generation Science Standards, an ambitious set of standards adopted or adapted by 44 states in 2013. Phenomena are also an organizing feature of instructional reforms in countries outside the United States, like high-performing Finland. But what is phenomenon-based learning, and what evidence is there that it works?…

Our study found that students exposed to the phenomenon-based curriculum learned more based on a test aligned with the Next Generation standards than did students using the textbook. Importantly, the results were similar across students of different racial and ethnic backgrounds.

William R. Penuel

Up next, of course, is the media trying to understand why science scores are so abysmal (like reading and math), assigning blame (schools, teachers, teacher education), and proposing Education Reform. What should we expect?

Well, since fourth-grade scores are in the dumpster, we need high-stakes science testing of all third-grade students and to impose grade retention on all those students who do not show proficiency in that pivotal third-grade year.

We also should start universal screening of 4K students for basic science knowledge (or maybe use “science” to screen fetuses in utero).

Simultaneously, states must adopt legislation mandating that all science curricula are based on research, the “science of science.”

Of course, teachers need to be retrained in the “science of science” because, you know, all teacher education programs have failed to teach the “science of science” [insert NCTQ report not yet released].

And while we are at it, are we sure Next Generation Science Standards are cutting it? Maybe we need Post-Next Generation Science Standards just to be safe?

Finally, we must give all this a ride, wait 6-7 or even 10 years, and then start the whole process over again.

The magical thing about Chicken Little journalism is that since the sky never falls, we can always point to the heavens and shout, “The sky is falling!”

Grades Tarnish Teaching as well as Learning

Recently on social media, a professor asked if others used rubrics with graduate students. Since rejecting rubrics has been a central component of my career-long efforts to de-grade and de-test teaching and learning, I chimed in.

My posts in the comments explaining why I don’t use rubrics were significant outliers because the thread of comments was overwhelmingly endorsing rubrics, almost entirely in terms of making grading easier or more transparent as well as providing teachers/professors protection against (hypothetical) students challenging their grades.

One immediate response to my comments is also worth highlighting since a person who doesn’t know me made fairly nasty assumptions about me being like the professors they had in grad school, the “gotcha” professors who use grades to ambush and punish students.

While most of my public (see here and here, for example) and scholarly work rejecting the use of rubrics—especially when teaching writing—has focused on their negative impact, along with grades, on students and learning (see this example), the recent social media thread highlights that grades also tarnish teaching.

Early into my first 18 years as a high school English teacher, I stopped giving tests; a bit later in that position, I also stopped grading assignments (although I had to assign students quarter and course grades). Over my ongoing 19 years as a college professor, I have always delayed grades (feedback but no grades on assignments, though course grades are assigned) and never given traditional tests (midterms are often class discussions, projects, or reflections; and final exams are always portfolios of the work over the entire course).

My syllabi have no grade scales or policies, no weights for calculating grades, and no late policy even; I do have an explanation of my no grades/no tests approach to teaching, and I do share with students some broad patterns often correlated with course grades. [1]

While reading the thread on social media, I recognized a pattern of fear and a need among teachers/professors to justify grades but also to guard against a hypothetical complaining student.

This pattern struck me, as a non-grader, because over the 19 years I have been teaching in higher education full time, I have had zero official complaints from students about grades. And only one student has ever confronted me about a course grade, a student who failed their FYW seminar for not participating in the minimum requirements (the student submitted all four essays once at the end of the course without submitting them throughout the semester and fulfilling the drafting and conferencing requirements).

That student left our meeting with the understanding that they in fact earned the F by not meeting the minimum requirements and expectations listed on our syllabus, and never pursued any official complaint.

While I remain deeply concerned about the negative consequences of grades, tests, and prescriptive structures such as rubrics on students and learning, I am also convinced more than ever that grades, tests, and rubrics detract significantly from effective teaching and actually create the problems many teachers/professors seem inordinately worried will occur in the hypothetical.

Rubrics as a subset of the traditional grading culture are often justified in terms of transparency as well—a very compelling argument.

As I have examined before in terms of the backwards design movement associated with Wiggins and McTighe, I have taught for almost 40 years while the focus on teachers and students has shifted from learning objectives to student assessment, and I do recognize that the shift to backwards design was in part an acknowledgement that students deserve transparency in expectations and goals for learning and student behaviors (artifacts of learning such as essays, projects, or performances).

Grade policies, rubrics, and templates are one type of transparency, prescriptive and authoritarian, but they all prove to be teacher/authoritarian-centered and to be mechanisms that reduce student autonomy and engagement in their own learning. Codified transparency demands compliance over student agency.

Despite the assumptions of at least one person commenting on social media, I am not a “gotcha” professor, and I am transparent about learning goals and student behaviors. However, I see transparency as a conversation in a learning community and an evolving, not static, state of any course bound by the limits of the academic calendar. That transparency must support my authoritative role as a teacher (as opposed to authoritarian).

I have posted many times that my transparency is in the form of minimum requirements (see below) and providing for students a wealth of resources that include detailed models of their assignments with instructional comments and checklists for preparing and revising their work.

By not grading assignments, I provide students low-risk environments that remove the “gotcha” element entirely since students are required and allowed to revise their work as well as engage with me in an ongoing conversation (conferences, feedback provided on the assignments) that helps them construct their own learning (individualized rubrics, in other words).

And since course grades are linked to a final portfolio of their work, assigning a grade occurs after students have had the entire course to learn, and considering the amount of feedback and conferences students have experienced along with class sessions grounded in their artifacts of learning (I teach based on the strengths and needs their assignments reveal), neither students nor I are surprised by the final course grade assigned.

I must emphasize again that I have been de-grading and de-testing my teaching since 1984 (the first year) and that these practices have been implemented in a rural public high school as well as a selective university. I developed and practiced not grading assignments and not giving traditional tests while teaching public school in a right-to-work (non-union) state and during my non-tenure years as I began my career in higher education.

I fully acknowledge and have worked in the so-called “real world” of traditional schooling that requires grades. Therefore, I have conceded that at best I am delaying grades, but I must emphasize that I also significantly foreground student learning and my teaching while complying with assessment, evaluation, and grades last, as a mandate that must not impede student learning or my teaching.

Many justifications of rubrics place grades first, sacrificing learning and teaching.

Once we prioritize student learning/agency and teacher professionalism as well as teaching, structures such as rubrics can be recognized as traps that center the authority for a course in those structures (rubrics, templates, grading policies) instead of in the teacher/professor.

A syllabus is a legal contract, and once we codify how grades are determined, we as teachers/professors are bound to those codes regardless of how valid they prove to be for each student.

Well-designed rubrics must be highly prescriptive (see Popham, Chapter 7), and thus they do for students much of the work (the choices and experiments) that would better serve the students as learners; poorly designed rubrics (open-ended, vague, etc.) neither fulfill the goals of using a rubric nor satisfy the standard justifications for using rubrics.

In rejecting rubrics, I am not rejecting transparency or fairness.

I am advocating for teachers and professors to step outside those traps and to make commitments to transparency and fairness grounded in student learning and teaching, not assessment, evaluation, and grades.


Notes

[1] [First-year writing seminar example; details vary by course]

Student Participation in a Course without Grades or Tests

While you will receive a grade for this course per university policy, I do not grade individual assignments, and I do not administer traditional tests in any course I teach. We will comply with university expectations for midterm and final exams (see the assignments in the course overview), and I will submit either an S (satisfactory) or I (incomplete) for the midterm grade to designate whether or not you have fulfilled assignments as required through midterm.

Instead of traditional grades, I expect students to meet minimum requirements; in this course minimum requirements include completing all assignments (see the final portfolio sheet) fully and on time, and submitting, conferencing, and resubmitting all four required essays (a first full submission and a revision after receiving feedback and/or conferencing).

Assignments in my courses are not designed primarily for assessment (grading), but are designed as learning experiences. By completing and revising assignments, you are learning, and thus, you should expect to receive challenging feedback, and should also embrace the opportunity to revise work when allowed.

If you could complete an assignment perfectly on the first submission, there would be no reason for me to assign the work. All academic work can (and should) be improved through multiple efforts and feedback.

Since I require that all work be completed, and even though the expectation is that students meet due dates, I must accept late work if and when students are unable to turn in work when due (see More Thoughts on Feedback, Grades, and Late Work). However, students should strive to be punctual with work unless circumstances beyond their control interfere (note that there are reasonable excuses for work being late, and I appreciate honest and upfront communication when students are unable to meet deadlines, even if the reason isn’t urgent).

All four required essays must be revised at least once, but you are allowed and encouraged to revise as often as you wish to produce a high-quality essay.

At the end of the course, once you have been given ample opportunities to learn and can do so while taking risks and not worrying about your grade, I evaluate the entire portfolio of course work to assign a grade for the course.

Completing all work and submitting that work in the portfolio are mandatory (incomplete portfolios will be assigned an “F” for the course), and your course grade will be impacted by completing work fully and on time as well as by the quality of the assignments (notably the four required essays). Proper citation (APA), quality of references, diligence in revising, and the sophistication of the writing and thinking in your assignments ultimately inform that final grade.

I recommend you read some or all of the following to understand my approach to grades and tests:

Minus 5: How a Culture of Grades Degrades Learning

Delaying Grades, Increasing Feedback: Adventures from the Real-World Classroom

More Thoughts on Feedback, Grades, and Late Work

Grades Fail Student Engagement with Learning

Note:

When I think about final grades, here are some guiding principles:

  • A work: Participating by choice in multiple drafts and conferences beyond the minimum requirements; essay form and content that is nuanced, sophisticated, and well developed (typically more narrow than broad); a high level demonstrated for selecting and incorporating source material in a wide variety of citation formats; submitting work as assigned and meeting due dates (except for illness, etc.); attending and participating in class-based discussion, lessons, and workshops; completing assigned and choice reading of course texts and mentor texts in ways that contribute to class discussions and original writing.
  • B work: Submitting drafts and attending conferences as detailed by the minimum requirements; essay form and content that is solid and distinct from high school writing (typically more narrow than broad); a basic college level demonstrated for selecting and incorporating source material in a wide variety of citation formats; submitting work as assigned and meeting most due dates; attending and participating in class-based discussion, lessons, and workshops; completing assigned and choice reading of texts and mentor texts in ways that contribute to class discussions and original writing.

Confronting the Tension between Being a Student and a Writer

Sisyphus, oil on canvas by Titian, 1548–49; in the Prado Museum, Madrid. (Heritage Image Partnership Ltd./Alamy)

I worry about my students.

I worry, I think, well past the point of being too demanding, in the same way a parent can (will?) become overbearing.

Good intentions and so-called tough love are not valid justifications, I recognize, but there is a powerful paradox to being the sort of kind and attentive teacher I want to be and the inherent flaws in believing that learning comes directly from my purposeful teaching and high demands.

After 37 years of teaching—and primarily focusing throughout my career on teaching students to write—I have witnessed that one of the greatest tensions of formal education is the contradiction of being a student versus being a writer.

That recognition is grounded in my own experiences; I entered K-12 teaching, my doctoral program, and my current career in higher education all as a writer first.

My primary adult Self has always been Writer, but being a writer has remained secondary to my status as either student or teacher/professor-and-scholar.

The tension between being a student and a writer has been vividly displayed for me during my more recent decade-plus teaching first-year writing at the university level. To state it bluntly, many of the behaviors that are effective for being a good student are behaviors that must be set aside in order to be an effective and compelling writer.

I began addressing this tension early in my career as a high school English teacher by de-grading and de-testing my classes. The writing process, I found, had to be de-graded so that students could focus on substantive feedback and commit to drafting free of concern for losing credit.

But by the time students reach college, they have been trained in a graded system; that graded system implies that students enter each assignment with a given 100, and thus, students learn to avoid the risk of losing points (see my discussion of minus 5).

But equally harmful is that college students have also been fairly, even extremely, successful in a grading culture driven by rubrics, class rank, and extra credit, each of which shifts their focus to grades (and not the quality of their work) and centers most of the decision making in their teachers.

For example, I currently teach at a selective university. Most of my students have been A students in high school.

Yet, they seem paralyzed when confronted with decision making and genuinely terrified to attempt anything not prescribed for them.

In my first-year seminars now, students are revising their cited essays, and one student emailed to ask whether they needed to cite a YouTube video (of course they do) and how to do so.

This last question (although the first is really concerning) is where I often find myself answering: “Just Google, ‘How to cite a YouTube video in APA?’”

A reasonable person of moderate affluence in 2020 with access to the Internet (often on a smartphone) would search for anything they didn’t know using a browser. I am convinced that being a student tends to create helpless people out of very capable young adults.

And despite several direct lessons, and multiple comments and examples provided in materials and on submitted drafts, many of my students continue to submit revised drafts whose first few sentences, as in high school, overstate nothing; these examples come from revised essays submitted after I once again addressed overstating nothing in the opening sentences:

Some questions that have been floating around for a while are, is college worth it?

Day to day interactions between different people form the bonds for different relationships in our lives. People have acquaintances, friendships, romantic relationships, familial relationships, and more.

While I want to share below some of my strategies for confronting the tension between being a student and a writer, I must stress that my unconventional classroom creates an entirely new tension, because I must recognize that most of my students’ academic careers will remain in traditional classrooms tethered to traditional grading.

Therefore, I seek strategies that address simultaneously how students can present themselves as careful and diligent students as well as credible, engaging, and compelling writers.

Those strategies include the following:

  • Teaching students how to prepare and submit work (often with Word) that reflects them positively to anyone evaluating them. While I discuss with students that document formatting is trivial, a careless submission will likely shape negatively how any teacher/professor views them as students. I encourage them to learn how to format with Word (using page breaks and hanging indents, for example); to navigate track changes and comments (creating clean documents to resubmit); to set their text to a standard font and size (to avoid submitting work with multiple fonts or font sizes, which they often do), including how to paste text so that it matches the document settings; and to address the Spelling and Grammar function in Word so that they do not submit documents with the jagged underlining flagging issues they should have edited before submitting. Students also struggle with naming document files, attaching their work to emails, and emailing professors in ways that represent them well, so I am diligent about not accepting work until they meet those expectations. Important to note here is that in my class these experiences come with no loss in grades, but I stress to them that in other courses they likely could receive lower grades and probably will create a negative perception of themselves as students.
  • Instead of rubrics and writing prompts, we work from models of writing, and I provide for students checklists and examples designed so that they become the agents of their learning (and this is particularly frustrating since students still function with fear and thus avoid taking risks or making their own decisions). Drafting through all the stages of writing, then, becomes a space where students are decision makers like real-world writers, but I provide them a somewhat risk-free experience that is unlike being a student.
  • In some respects, students seeking to present themselves well and writers seeking ways to be credible and engaging have some overlap. Therefore, many of my key points of emphasis as a teacher of writing will, in fact, raise their status as students. Some of these include attending to appropriate diction (word choice) and tone that matches the level of the topic being addressed, focusing on effective and specific (vivid) openings and closings (key skills for writers, but students establish themselves when being graded with their first sentences and then leave the person evaluating them with an impression linked to their final sentences), and selecting high-quality sources (typically peer-reviewed journal articles) and then integrating sources in sophisticated ways when writing (avoiding the high school strategy of over-quoting and walking the reader through one source at a time [see the discussion of synthesis in the link above and here]).
  • Students also leave high school feeling the need to make grand claims, grounded in simplistic approaches to the thesis sentence and standard practices by teachers that require students to have their thesis approved before they can draft an essay (see this on discovery drafts). I encourage students to focus narrowly and specifically throughout their essays while leaning toward raising questions (a more valid pose for students) instead of grand claims.

While I struggle, as I admitted above, with my tendency to be too demanding (my tough-love streak), I also recognize that only about three months in my unusual teaching and learning environment faces a monumental hill to crest against more than a decade of experiences as students and student-writers.

More often than not, I do not crest, but descend a bit defeated like Sisyphus to roll that rock yet again.

The tension between being a student and a writer is not insurmountable, I hope, but it certainly must be confronted openly and directly in our classes, especially our writing-intensive classes.

In the world beyond formal schooling, many of the qualities of a good student will prove to be ineffective in the same way they are for young people learning to write well.

The best strategies for being an effective writing teacher include recognizing and helping our students navigate their roles as students—even as we seek to help them to move beyond those artificial restrictions.

No Need to Catch Up: Teaching without a Deficit Lens

Some jokes work only when spoken aloud, and possibly especially when spoken aloud in certain regions of the country, but this one came to mind recently in the context of the impact of Covid-19 on schooling: “This is the worst use of ‘catch up’ in education since the Reagan administration allowed the condiment to count as a vegetable in school lunches.”

Heinz tomato ketchup bottle (photo by Charisse Kenion on Unsplash)

As I noted in a Twitter thread, a common response to schools closing during the spring of 2020 because of the pandemic is an editorial (The Post and Courier, Charleston, SC) declaring, Use summer to figure out how to catch up SC students; they’ll need it.

“How do schools help students catch up after the Covid-19 closures?” is the wrong question, grounded in a deficit lens for teaching and learning also found in concepts such as remediation and grade-level reading.

Traditional formal schooling functions under several interrelated ideologies, some of which are contradictory (consider assumptions about the bell-shaped curve and IQ versus the standards movement that seeks to have all students achieve above a normal standard).

Deficit ideologies depend on norms, bureaucratized metrics, against which identified populations (in education, grade levels linked to biological age) can be measured; the result is a formula that labels students in relationship to the norm. Many students, therefore, are positioned as deficient, labeled with what they lack.

The hand wringing about students falling behind when schools moved to remote teaching and learning during the spring exposes this deficit lens, but that lens has been pervasive in U.S. education since at least the early twentieth century.

Consider the branding of federal education policy over the past couple of decades—No Child Left Behind (George W. Bush) and Race to the Top (Barack Obama)—the former posing an image of falling behind (and thus the need for some to catch up) and the latter framing education as a race with necessary winners and losers (who, of course, fall behind and need to catch up).

These deficit views of teaching and learning—and of teachers and students—are essential to the main structures of formal schooling, management and efficiency.

While it is a conservative mantra that all-things-government (such as public schools) are doomed to failure simply because they are government, the fundamental problem with public education is, in fact, bureaucracy (a weakness found in both publicly funded institutions and the free market [read Franz Kafka or Dilbert, and watch Office Space and The Office]).

Attempting to house and teach large numbers of students as efficiently as possible with constrained public funds is a guiding (if not the guiding) mechanism for how we teach students—students as widgets monitored by quality control.

My father, Keith, worked in quality control his entire career. But his work involved machined parts, not human beings.

The manufactured “catch up” dilemma is a subset of that widget/quality control paradigm that can create a perception of efficiency but is antithetical to the complexity of human behaviors such as teaching and learning.

We teachers are tasked daily with a given set of students, traditionally arranged by grade levels that loosely conform to biological ages; however, our schools and our classes also vary significantly by out-of-school factors such as the socioeconomic levels of communities and racial as well as gender demographics that schools house but do not cause.

Putting efficiency and management first often ignores and even works against individual student needs and the corrosive impact of inequity embodied by individual students and groups of students.

Putting 25-35 students in a classroom, building a highly structured and sequential curriculum, evaluating all students against those standards, and compelling teachers to maintain the same instruction and assessment across every grade level can address the priorities of efficiency and management.

But these deficit-based practices accomplish those goals at the expense of large segments of student populations.

It is counterintuitive to admit that no coherent and definable thing such as third-grade standards really exists, since we have spent forty years determined to create and recreate those standards, to test all students against them, and to ignore that “all students will” does not and cannot happen, especially in a system that ignores and perpetuates the inequities our students embody through no fault of their own.

Yet, no such thing as third-grade standards exists as we construct them and as we use them to label and manage students.

Eight- and nine-year-old children are biologically and environmentally incredibly diverse, especially in the ways they learn and respond to the world.

Despite our efforts to limit or control human autonomy, even children are compelled to be autonomous; they retain some ability to want to learn, to choose whether or not to comply with teacher expectations.

Teaching without a deficit lens is an option, however, possibly even within the system we have, although a new system would be far preferable.

First, teaching can begin with individual students, focusing on the qualities, strengths, and knowledge they bring to any classroom.

Once a teacher knows the makeup of abilities among any group of students, the teacher can design new and review materials and experiences that allow all students to incorporate their strengths and interests into acquiring new and better learning. Teachers can accomplish these strength-based lessons through whole-class, small-group, and individualized instruction, making concessions to efficiency and management only after putting students’ strengths first and addressing inequity.

As a final example of the problem of seeing the Covid-19 impact on education as somehow unique (instead of magnifying existing flaws in the system), consider the concerns raised about inequity in administering the SAT and Advanced Placement (A.P.) tests in modified forms for the remote necessities of the pandemic.

Online and modified SAT and A.P. tests have not created some new inequity; they are the mechanisms of inequity that have always existed and helped drive the deficit lens of public schooling.

Standardized testing has always measured inequity, but that testing has also always perpetuated that inequity by labeling many students as deficient as learners while the metric, in fact, mostly measures disparities in social class, gender, and race.

There is an ugly irony to calling for helping students catch up in the wake of the Covid-19 pandemic. The move to remote teaching and learning is one of the few common experiences among our students, who enjoy or suffer the consequences of privilege and disadvantage whether in school or at home despite a pandemic.

In other words, if we remain trapped in deficit language, students are sharing the same “behindness” of having moved to remote courses and having reduced instruction.

Ultimately, trying to help students catch up keeps our judgmental gaze on the student, a deficit lens, in fact. The problem with the impact of the pandemic is the same as before Covid-19 changed our world—inequity.

Pathologizing students further because of the pandemic once again allows the systemic inequities in our communities and schools to be ignored, to remain.

Ketchup was never a valid vegetable in public school lunches, and trying to catch up students in the wake of Covid-19 is yet another way to further malnourish our students.