
Consuming Education and Unintended (Ignored) Consequences

As I have noted often, the roots of the accountability era—President Reagan's commissioning of the A Nation at Risk report—are clearly connected to commitments to free market forces as central to education reform.

Over the past thirty years or so, parental choice has been promoted through a variety of market formats (vouchers, tuition tax credits, charter schools), and accountability driven by standards and high-stakes tests has increasingly morphed from academic incentives into financial incentives—starting with school report cards and exit exams for students before expanding to linking teacher retention and pay to student test scores and, now, calls to add teacher education to the value-added mania.

Many have begun to confront the negative impact of focusing high-stakes accountability on test scores, but those concerns tend to be about narrowing the curriculum and expectations by teaching to the test or about the lack of credible research supporting value-added methods of evaluating teachers or teacher education programs.

While those concerns are powerful and accurate, something more insidious is rarely examined: the unintended and ignored consequences of creating in education a culture of competitiveness among teachers about student test scores.

Whether value-added methods are used to determine teacher retention or merit pay, those policies are creating a system of labeling and ranking teachers, and thus, pitting teachers against each other for a finite number of jobs or pool of compensation.

The result of those policies is that each teacher must now not only prioritize her/his students’ test scores, but also seek ways in which her/his students can score higher than students in other teachers’ classes.

If Teacher A, then, finds ways in which to raise her/his students’ scores, she/he is incentivized to implement those practices while not sharing them with the wider community of teachers.

Yes, value-added methods (VAM) further reduce education to teaching to the test, but even more troubling is that VAM codifies a culture of competition that consumes the very community needed so that all students and all teachers excel.

Competition is often barbaric—as we witnessed at the end of the 2015 Super Bowl, when the Seahawks and Patriots were reduced in the closing seconds to the sort of fighting not accepted in the sport of football.

Schools, teaching, and learning are increasingly like those closing seconds—the circumstances are reduced, the stakes are high, and everyone becomes desperate to grab “his/hers,” without regard to others.

In education, then, the market forces us into the barbarism that formal education has been trying to overcome for decades.


What Are Tests Really Measuring?: When Achievement Isn’t Achievement

High-stakes standardized testing must be the most resilient phenomenon ever to exist on the planet. Joining high-stakes standardized testing in that (dis)honor would be the persistent but misleading claim that test scores are primarily achievement (and a growing future candidate for this honor is the claim that test scores by students, labeled “achievement,” are also credible metrics for “teacher quality”).

Let’s start with a couple of statistical breakdowns of what test scores constitute:

But in the big picture, roughly 60 percent of achievement outcomes is explained by student and family background characteristics (most are unobserved, but likely pertain to income/poverty). Observable and unobservable schooling factors explain roughly 20 percent, most of this (10-15 percent) being teacher effects. The rest of the variation (about 20 percent) is unexplained (error). In other words, though precise estimates vary, the preponderance of evidence shows that achievement differences between students are overwhelmingly attributable to factors outside of schools and classrooms (see Hanushek et al., 1998; Rockoff, 2003; Goldhaber et al., 1999; Rowan et al., 2002; Nye et al., 2004).
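The shares quoted above partition neatly, which is worth making explicit: if roughly 60 percent of score variance traces to family background and only 10-15 percent to teachers, then teacher effects are dwarfed by out-of-school factors. A minimal sketch of that arithmetic, using only the rough figures from the passage (the partition is illustrative, not a fitted model):

```python
# Illustrative partition of test-score variance, using the rough shares
# quoted above from the cited literature (approximations, not a model).
variance_shares = {
    "student/family background": 0.60,  # mostly income/poverty related
    "schooling factors": 0.20,          # of which ~0.10-0.15 is teachers
    "unexplained (error)": 0.20,
}

total = sum(variance_shares.values())
teacher_share = 0.125  # midpoint of the 10-15 percent teacher-effect range

print(f"shares sum to {total:.2f}")
# Teacher effects are roughly one-fifth the size of family-background effects.
print(f"teacher vs. family ratio: {teacher_share / 0.60:.2f}")
```

Note how small the ratio is: even taking the teacher-effect estimate at face value, evaluating teachers by raw student scores attributes to them variation they mostly do not control.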

Just 14 per cent of variation in individuals’ performance is accounted for by school quality. Most variation is explained by other factors, underlining the need to look at the range of children’s experiences, inside and outside school, when seeking to raise achievement.

Next, consider this from the UK:

Differences in children’s exam results at secondary school owe more to genetics than teachers, schools or the family environment, according to a study published yesterday.

The research drew on the exam scores of more than 11,000 16-year-olds who sat GCSEs at the end of their secondary school education. In the compulsory core subjects of English, maths and science, genetics accounted for on average 58% of the differences in scores that children achieved.

While the genetics claim is potentially dangerous, and certainly controversial, the article offers some important clarifications:

The findings do not mean that children’s performance at school is determined by their genes, or that schools and the child’s environment have no influence. The overall effect of a child’s environment – including their home and school life – accounted for 36% of the variation seen in students’ exam scores across all subjects, the study found….

Writing in the journal, the authors point out that genetics emerges as such a strong influence on exam scores because the schooling system aims to give all children the same education. The more school and other factors are made equal, the more genetic differences come to the fore in children’s performance. The same situation would happen if everyone had a healthy diet: differences in bodyweight would be more down to genetic variation, instead of being dominated by lifestyle.

Plomin said one message from the study was that differences in children’s performance were not merely down to effort. “Some children find it easier to learn than others do, and I think it’s appetite as much as aptitude,” he said. “There is a motivation, maybe because you like to do what you are good at.”

Genetics, he said, caused people to create, select and modify their environment, and so nature drives nurture, which in turn reinforces nature. A child with a gift for maths seeks friends who like maths. A child who learns to read easily might join a book club, and work through books on the shelves at home.

Additional points drawn from this research present some strong cautions about continued reliance on not only standardized tests, but also uniform national standards:

“Education is still focused on a one-size-fits-all approach and if genetics tells us anything it’s that children are different in how easily they learn and what they like to learn. Forcing them into this one academic approach is going to make some children confront failure a lot and it doesn’t seem a wise approach. It ought to be more personalised,” he said.

“These things are as heritable as anything in behaviour, and yet when you look in education or in educational textbooks for teachers there is nothing on genetics. It cannot be right that there’s this complete disconnect between what we know and what we do.”

Finally, consider this research on the disconnect between test scores and student abilities:

To evaluate school quality, states require students to take standardized tests; in many cases, passing those tests is necessary to receive a high-school diploma. These high-stakes tests have also been shown to predict students’ future educational attainment and adult employment and income.

Such tests are designed to measure the knowledge and skills that students have acquired in school — what psychologists call “crystallized intelligence.” However, schools whose students have the highest gains on test scores do not produce similar gains in “fluid intelligence” — the ability to analyze abstract problems and think logically — according to a new study from MIT neuroscientists working with education researchers at Harvard University and Brown University.

In a study of nearly 1,400 eighth-graders in the Boston public school system, the researchers found that some schools have successfully raised their students’ scores on the Massachusetts Comprehensive Assessment System (MCAS). However, those schools had almost no effect on students’ performance on tests of fluid intelligence skills, such as working memory capacity, speed of information processing, and ability to solve abstract problems….

Instead, the researchers found that educational practices designed to raise knowledge and boost test scores do not improve fluid intelligence. “It doesn’t seem like you get these skills for free in the way that you might hope, just by doing a lot of studying and being a good student,” says Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research.

So should we be shocked when students passing high-stakes reading tests in Texas admit they cannot read?:

A female classmate of Tony’s says she can’t get through the stories she reads in school unless someone explains them to her. She’s passed all her state tests, too. How? She says she uses classroom-taught “strategies” on her English reading test and that if she underlines and highlights enough and narrows down her options, she has a better chance of guessing right by playing the odds. She failed her math state test because of the word problems, so she employed her English strategies there on the retry attempt and passed.

Or that the most recent analysis of the teaching of writing in middle and high schools has found that best practice in writing hasn’t occurred because of accountability and high-stakes testing?:

Overall, in comparison to the 1979–80 study, students in our study were writing more in all subjects, but that writing tended to be short and often did not provide students with opportunities to use composing as a way to think through the issues, to show the depth or breadth of their knowledge, or to make new connections or raise new issues…. The responses make it clear that relatively little writing was required even in English…. [W]riting on average mattered less than multiple-choice or short-answer questions in assessing performance in English…. Some teachers and administrators, in fact, were quite explicit about aligning their own testing with the high-stakes exams their students would face. (Applebee & Langer, 2013, pp. 15-17)

Our educational world has been turned over wholesale to testing, despite ample evidence that test scores are many things (markers of privilege, markers of genetic predispositions, markers of teaching-to-the-test), among the least of which are student achievement and teacher quality.

If we don’t have the political will to de-test our schools, the evidence is clear that the stakes associated with testing must be greatly lessened and that the amount of time spent teaching to the tests and administering the tests must also be reduced dramatically.

NAEP? Nope: Why (Almost) Everyone Will Misread (Again) Data on Gaps

Let the data orgy begin!

NAEP data have been released and I anticipate almost as much time and money will be wasted on the data as has been wasted on administering the tests, scoring the tests, and creating the handy web link to all that data—notably the predictable link to gaps. [For the record, most of these data charts can be prepared without any child ever taking tests; just use the socioeconomic data on each child and extrapolate.]

Take a moment and scroll through the gray space between myriad groups in both math and reading.

There, enjoy it?

While you’re at it, look at the historical gaps between males and females in the SAT.

Males on average outscore females in reading and math (though females outscore males in writing, the one section of the SAT that doesn’t count for anything anywhere, hmmmm).

The problem, of course, is that standardized test data are simply metrics for social conditions that we pretend are measures of learning and teaching.

It is a particularly nasty game, but it seems few are going to stop playing any time soon. “Achievement gap”* has now ascended to the point of being classified as a subset of Tourette syndrome among politicians and education reformers.

The problem with persisting in lamenting achievement gaps and then addressing those gaps with new standards and more testing is that these solutions primarily measure the gaps while also contributing to them:

  • Standardized testing remains biased by class, race, and gender.
  • Standardized test scores remain mostly a reflection of any child’s home (from about 60% to as much as 86%).
  • Schools and the classes students take are more often than not a reflection of the communities and homes children are born into; thus, school/learning quality is determined by a child’s socioeconomic status, but those schools do not change that status.
  • Even if affluent children and impoverished children were provided equal learning opportunities (which they are not), the gap would not close (go back and look at the handy NAEP charts on gaps, by the way).

The short point is something different has to be done in both the lives and schools of children in poverty (as well as racial and language subgroups overrepresented in poverty) if those data-point gaps are ever going to be reduced.

David Berliner (2013) illustrates what those differences should entail, using the PISA data often instrumental in ranking the educational quality of countries:

Let me look at inequality and schooling internationally: Do countries with greater income inequality generally do worse on achievement tests than countries where income inequality and poverty is lower? The answer is yes (Condron, 2011). Larger income disparities within a nation are associated with lower scores on international tests of achievement. For example, on the 2006 mathematics tests of the Program on International Student Achievement, with a mean score near 500, Finland scored above all other nations (548), and substantially beat the United States of America (474). But Finland is a country with low inequality and a very low childhood poverty rate. But suppose that Finland had the same rate of childhood poverty as the United States of America, and the United States of America had the same rate of childhood poverty as Finland. What might the scores of these two nations be like then? If one statistically adjusted each nation’s scores using the poverty rate of the other, then Finland’s score is predicted to be 487, a long way from the top position it had attained. The score for the United States of America would have been 509, quite a bit better than it actually did. Clearly, inequality within a nation matters. If large numbers of youth in a nation are poor, then achievement test scores are likely to be lower. If there were a reduction in the poverty rate of a nation’s youth, achievement scores are likely to go up….

To those who say that poverty will always exist, it is important to remember that many Northern European countries such as Norway and Finland have virtually wiped out childhood poverty. (pp. 205, 208)
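The counterfactual Berliner describes—swapping each nation's childhood poverty rate and re-predicting its score—can be sketched as a simple linear adjustment. The 2006 math scores (548 and 474) come from the passage; the poverty rates and the points-per-poverty-point slope below are hypothetical placeholders for illustration, not Berliner's actual estimates, and his fuller model yields somewhat different predictions (487 and 509):

```python
# Sketch of a poverty-adjusted score counterfactual in the spirit of
# Berliner (2013). Slope and poverty rates are hypothetical, chosen only
# to show the direction of the adjustment.

def adjusted_score(actual_score, own_poverty, counterfactual_poverty,
                   points_per_poverty_point=-2.0):
    """Shift a score linearly by the change in childhood poverty rate."""
    return actual_score + points_per_poverty_point * (
        counterfactual_poverty - own_poverty)

# Hypothetical childhood poverty rates (percent), illustration only.
finland_poverty, us_poverty = 5.0, 22.0

# Finland's score falls when given the (higher) US poverty rate...
print(adjusted_score(548, finland_poverty, us_poverty))  # 514.0
# ...and the US score rises when given Finland's (lower) rate.
print(adjusted_score(474, us_poverty, finland_poverty))  # 508.0
```

The point of the exercise is directional, not precise: under any reasonable slope, reranking nations after adjusting for childhood poverty narrows or reverses the Finland–US gap, which is exactly Berliner's argument that poverty, not schooling, drives much of the international ranking.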

Thus, if we are bound and determined to persist in our fetish for test scores and remain committed to raising test scores (instead of actually alleviating inequity or providing all children with wonderful and rich school days that would end in learning and happiness), guess what?

We need to do something different than what we have been doing for thirty-plus years!

First, end the standards-testing rat race.

Second, end childhood poverty.

Reference

Berliner, D. C. (2013). Inequality, poverty, and the socialization of America’s youth for the responsibilities of citizenship. Theory Into Practice, 52(3), 203–209. https://doi.org/10.1080/00405841.2013.804314

* Please see my series on “achievement gaps”:

Achievement Gap Misnomer for Equity Gap, pt. 1

Achievement Gap Misnomer for Equity Gap, pt. 2