My Life in Trumplandia Began in 1961

My first jobs were at the country club where my mother worked and on the golf course where we lived—a working-class family of rednecks who saw building a house there as making it, achieving the American Dream.

On rainy and cold days, all the pro shop and greens-keeping workers milled around the clubhouse. I vividly recall one of those days when a member of the grounds crew explained to me in careful detail that black people (he preferred the racial slur) were the consequence of Cain being banished for murdering Abel and then mating with apes.

It’s biblical, he proclaimed.

This experience, I must emphasize, was not an outlier. This was normal for my life, having been born in 1961 in Woodruff, South Carolina.

Such blatant and casual racism was pervasive among my white family, friends, and community.

So Roseanne Barr’s recent racist Twitter rant and the entire rise of Trumplandia—these are not in any way shocking, but they are incredibly burdensome, a heaviness that will never approach the weight carried by those who are the targets of racism and bigotry but that certainly drags me closer and closer to fatalism.

I also know fatalism quite well.

In my late teens and throughout college and young adulthood, my relationship with my father grew increasingly antagonistic, often punctuated by heated arguments spurred by his racism.

Over years of arguing, I simply gave up and became a quiet and passive visitor to my parents’ house. I called less and less often; I visited almost exclusively on required holidays.

The ennui grew from the tension between the natural love felt for parents—and the incredible debt I felt for the many sacrifices they made for me—and the inexcusable ideologies my parents espoused, often relentlessly.

My parents were Nixon apologists, faithful Republican voters their entire lives.

They also were increasingly strapped for money, and their last decades were characterized by heart disease and just surviving the consequences of being working-class children of the 1940s-1950s (smoking and eating as many Southerners did).

My parents were the poster-couple for self-defeating politics, decades before the mainstream media became obsessed with understanding the disenfranchised white voter. And finally, my parents paid the ultimate cost for grounding their political and economic lives in racism.

At the very least, a healthcare system connected to universal insurance and a robust social safety net would have extended my parents’ lives, lives that ended very badly and with their life’s earnings nearly exhausted.

The house that represented their achieving the American Dream is the very last thing remaining—a depressing monument to their stubborn self-defeating ideologies, their racism.

Our last decade together was the most depressing. My daughter dated, married, and then had a daughter with a black man.

I am now the grandfather of two biracial grandchildren.

It wasn’t a hard decision, but it was hard—giving up on your parents as you recognize that this family of yours deserves your complete devotion. To remain passive and silent was nonetheless to be complicit.

Everyone in my immediate family, except me, became entirely estranged from my parents as I attempted to meet some bare minimum of obligations while my father’s health deteriorated dramatically and then my mother had a stroke.

The last six months of my parents’ lives thrust them once again into the center of my life. The fatalism to which I had resigned myself was set aside as their reduced circumstances demanded we all recognize their essential humanity, despite their own role in having come to these unnecessary and desperate ends.

No one wants to admit their parents are flawed or even horrible people—just as most white people do not want to admit they are complicit in white privilege and racism.

My parents’ deaths during the beginning of the Trump administration carry an awful symbolism in the same way my parents’ house does now as we rummage through all my parents’ stuff—throwing away most of it—in preparation to sell this crumbling statue dwarfed by the desert of their tarnished beliefs.

I carry in my 57 years another layer of exhaustion at the mainstream media trying to understand Trump voters—white angst grounded in the racism that social norms refuse to acknowledge—and the current wrestling with Barr, including some who are calling for explaining her rant as somehow connected to her mental health.

That layer of exhaustion has the face of the grounds crew member explaining to me that black people came from Cain mating with an ape; it has the face of hundreds of white people in my family, my community.

I do not need anyone to explain this to me. It is my life.

A life already well acquainted with fatalism resting against love and deep appreciation, a life rendered heavy, nearly too heavy to carry, certainly too heavy to move.

Yes, I gave up on changing my parents’ minds, shaking their souls in the name of human dignity as I looked into the eyes of my grandchildren.

How, then, to make strangers see the inhumanity in their racism, see their hatred and bigotry as self-defeating as well as entirely unwarranted?

Fatalism is a powerful narcotic.


Free Speech, Free Market, and the Lingering “Rigid Refusal”

In the documentary Corridor of Shame, which explores the historical inequities of school funding in South Carolina along lines of race and social class, Senator Lindsey Graham (R-SC), speaking at an MLK Day event in 2005, claims: “We have a disparity of funding in a region of our state…. The reason we have disparity in funding is not cause we are prejudiced at the governmental level. It’s because we collect taxes based on property value. And our property value in those counties are pretty low because there’s no industry.”
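The funding mechanism Graham describes can be made concrete with a back-of-the-envelope calculation; the figures below are entirely hypothetical, chosen only to show how tying school revenue to local property value produces disparity even when both districts levy the identical tax rate.

```python
# Hypothetical illustration of property-tax-based school funding.
# Every number below is invented; none comes from the documentary or the essay.

def per_pupil_funding(total_property_value, tax_rate, students):
    """School revenue per student when funding is tied to local property value."""
    return total_property_value * tax_rate / students

TAX_RATE = 0.01  # the identical 1% levy applied in both districts

# A district with industry and a large property-tax base...
wealthy = per_pupil_funding(2_000_000_000, TAX_RATE, students=5_000)
# ...and a rural district with little industry and a small base.
rural = per_pupil_funding(300_000_000, TAX_RATE, students=5_000)

print(f"Wealthy district: ${wealthy:,.0f} per pupil")  # $4,000
print(f"Rural district:   ${rural:,.0f} per pupil")    # $600
```

Note that no one in this arithmetic has to be “prejudiced at the governmental level” for the gap to appear; the disparity is built into the structure itself.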

Graham’s denial of systemic racism represents what Ta-Nehisi Coates called “elegant racism” while confronting the “oafish racism” of Cliven Bundy and former L.A. Clippers owner Donald Sterling:

The problem with Cliven Bundy isn’t that he is a racist but that he is an oafish racist. He invokes the crudest stereotypes, like cotton picking. This makes white people feel bad. The elegant racist knows how to injure non-white people while never summoning the specter of white guilt. Elegant racism requires plausible deniability, as when Reagan just happened to stumble into the Neshoba County fair and mention state’s rights. Oafish racism leaves no escape hatch, as when Trent Lott praised Strom Thurmond’s singularly segregationist candidacy.

Elegant racism is invisible, supple, and enduring. It disguises itself in the national vocabulary, avoids epithets and didacticism. Grace is the singular marker of elegant racism. One should never underestimate the touch needed to, say, injure the voting rights of black people without ever saying their names. Elegant racism lives at the border of white shame. Elegant racism was the poll tax. Elegant racism is voter-ID laws.

Graham acknowledges inequity, but uses “prejudiced” instead of “racist,” and casually rejects systemic racism.

As Coates explains, whites in the U.S. are more apt to acknowledge oafish racism while almost always employing elegant racism, such as denying systemic racism; therefore, Graham’s obfuscation is a powerful and effective political ploy, especially in the South.

In the span of a few days recently, this distinction played out in a public way, with the NFL instituting a new policy about players protesting during the National Anthem and Roseanne Barr having her ABC sit-com canceled after a racist outburst on social media.

The NFL Anthem policy and Barr’s show cancelation have two important elements in common: what they represent in terms of how the U.S. confronts and understands racism, and how many in the U.S. have a deeply flawed understanding of free speech.

First, when former NFL quarterback Colin Kaepernick initiated protests during the National Anthem, the public and political response tended to misrepresent his actions. Kaepernick and other players were protesting systemic racism, inequitable policing of blacks often resulting in death, during the Anthem.

Notably, Barr’s oafish racism, comparing a person of color to an ape, has resulted in a similar outcome for Barr and Kaepernick—the loss of work—although the former is a racist and the latter is protesting racism.

While Kaepernick and other protesting NFL players have been condemned for being political (disregarding that they are taking credible stands against a reprehensible social reality), Barr has a history of being bigoted.

Writer Roxane Gay has examined that history and then the recent cancelation, in fact.

Also significant about these two situations is that the new NFL policy does in fact limit when and how NFL players can express themselves, while Barr was perfectly free to share her comments with an incredibly wide audience.

That comparison leads to the now common feature of the public discussion of the NFL policy and Barr’s cancelation: claims that both are about free speech. Since the NFL and ABC are not the government, however, neither of these situations is an issue of free speech.

As Katherine Timpf explains:

First of all, this is in no way a free-speech or First Amendment issue. The First Amendment protects us from facing consequences from the government over our speech, not consequences from our peers or our employers. Yes, what Barr said, although abhorrent, absolutely was constitutionally protected speech, and, of course, it should be. After all, giving the government the power to decide what is and is not “acceptable” speech would be giving the government the power to silence whatever kind of speech it felt like silencing, which would be very dangerous indeed. Anyway, the point is, a free-speech-rights violation would be someone trying to, say, arrest Barr for her comments, not firing her for them. Her rights were in no way violated in this case. ABC simply exercised its own rights as a private company to decide whom it does and does not want to associate with, and it’s my view that no one should blame its executives for making the decision that they made.

Therefore, the NFL policy on the National Anthem and the cancelation of Barr’s sit-com are not about free speech but the free market. Both the NFL and ABC are hedging that their actions preserve their audiences, their bottom line.

And what those concerns about their audiences reinforce is that the public has a much lower tolerance for oafish racism (Barr) than for confronting elegant racism (NFL protests). The NFL believes its audience either denies or cannot see systemic racism, and thus does not support the so-called politics of NFL players who protest, while ABC believes that continuing to give an oafish racist a major platform will erode its audience.

Here is where we must confront the problem with trusting the free market since doing the right thing is linked to the moral imperative of the majority, the consumers. Currently in the U.S., that majority remains insensitive to systemic inequity and injustice; therefore, elegant racism survives—even bolstered ironically when oafish racism is shamed and seemingly blunted.

When each oafish racist is given their due, those denying systemic racism have their worldview confirmed since they see individual punishment as justice.

These actions by the NFL and ABC reflect that whites in the U.S. are still in the early adolescent stage of racial consciousness. Even the ability to confront oafish racism isn’t fully developed yet.

Many in the media called Barr’s slurs “racially insensitive,” showing the same sort of refusal to call a lie a lie that now characterizes the mainstream media. But a few in that media are calling Barr’s words “racist,” and ABC folded under the weight of that fact—although we should be asking why Barr had this second chance at all, considering her history of bigotry.

As a people, white America is not adult enough, however, to move past finger-wagging at oafish racists and to acknowledge systemic racism because, as Coates recognizes, “to see racism in all its elegance is to implicate not just its active practitioners, but to implicate ourselves.”

James Baldwin’s “Lockridge: ‘The American Myth’” remains a chilling warning then: “This rigid refusal to look at ourselves may well destroy us; particularly now since if we cannot understand ourselves we will not be able to understand anything.”

That anything, as the NFL and ABC have exposed, is racism—the cancer destroying our democracy and our free market.

As consumers, we have a moral obligation to tell the NFL it is wrong; we will not stand for systemic racism. And we must tell ABC that canceling Barr’s sit-com is a start, but it isn’t enough.

As citizens, we have to look at ourselves in the mirror of the voting booth—something we have failed to do yet in the good ol’ U.S. of A.


More on Rejecting Growth Mindset, Grit

When I posted a recent study on growth mindset—Study finds popular ‘growth mindset’ educational interventions aren’t very effective—on my blog under the title Debunked!, growth mindset advocates quickly bristled at that title, notably in one Tweet.

Several patterns in the subsequent Twitter discussion are worth addressing in a format more detailed than Tweets.

First, I have been a consistent critic of both growth mindset and grit, a critique best captured in the following posts:

I immediately shared these posts as part of the discussion—often guided by Wormeli’s thoughtful and welcomed concerns about my stances.

Next, however, many advocates (mostly for growth mindset) offered typical rebuttals, including (1) arguments that both growth mindset and grit in practice are often counter to the intent of Carol Dweck (growth mindset) and Angela Duckworth (grit), noting that both have raised concerns about those misuses and misconceptions, (2) chastising me for “conflating” growth mindset and grit, and (3) requesting practical alternatives to growth mindset and grit practices.

To the first point, I want to be clear that I am strongly aware of the gender problems inherent in me, a white male academic, challenging Dweck and Duckworth, including critiques that can be and have been viewed as attacking them personally.

I do think it is fair to address the character of those scholars advocating character education for children (see this on Duckworth, for example), but I also have taken care to monitor gender biases inherent in how we police women scholars versus men scholars.

But, while I am aware that both Dweck and Duckworth have raised concerns about the misuses of growth mindset and grit, I contend that both scholars have reaped a great deal of financial and professional capital primarily from that misuse, and neither has refused those profits. I find their cautions hollow, then.

I reject the second point—that I conflate growth mindset and grit—and recognize that growth mindset advocates often seek ways to distance themselves from the grit movement and that research has begun to challenge both growth mindset and grit research by Dweck and Duckworth, although far more challenging claims have been made against Duckworth’s research.

In short, I absolutely recognize that growth mindset and grit are not the same, and may not even be on the same level of validity and credibility as research.

However, while I do not conflate the two, I do highlight in my critiques that both are grounded in deficit ideologies: both, I contend, treat growth mindset/grit as the dominant or even exclusive quality causing success in student learning (ignoring the power of systemic influences) and then create an environment in which some students (too often black, brown, and poor) are defined in deficit terms—that they lack growth mindset/grit.

Yes, growth mindset and grit are unique approaches, but they share the failure of being complicit in deficit practices. And while the science of growth mindset may be more solid than the science of grit, both are prone to the problem of scientific racism—the failure to unpack “high-quality research” for biases.

Now, to the final point, I would recommend Paul Gorski’s work on equity practices, specifically this second edition which directly confronts both growth mindset and grit: Reaching and Teaching Students in Poverty: Strategies for Erasing the Opportunity Gap. Here, also, are some starting points with Gorski’s work:

Ultimately, then, I do reject growth mindset and grit, both as programs that are misused and thus harmful to the students who need formal education the most. I also see little room to justify the research behind either, or to excuse Dweck or Duckworth even when they raise cautions about the misuses.

My concerns are driven by an equity lens that recognizes and confronts the problems masked by narrow views of research and science as well as the myopia inherent in accountability that demands in-school-only approaches to teaching, testing, and reform that tend to be driven by bootstrap ideologies.

Teaching and learning as well as success and failure are incredibly complex. Often in education, our rush to find the key to success and failure in order to improve teaching and learning is ruined by a missionary zeal corrupted by biases—both of which must be confronted and resisted.

Growth mindset and grit fail as overzealous programs, and students are better served by equity practices couched in efforts to alleviate the systemic forces that shape how they live and learn regardless of their character.

Does Your Academic Institution Value Diversity, Equity? (Probably Not)

Several years ago, my university was forced to acknowledge it has a gender problem. As a selective liberal arts university, the institution had already begun addressing its race and diversity problems among students admitted and faculty hired.

Two gender concerns could not be ignored: Women were paid less than men at the same ranks, and faculty attrition was overwhelmingly among women professors, who constitute only about 30% of the faculty.

A gender equity study was commissioned, but when the report was issued, a group of male faculty circulated an open letter challenging the methodology of the report, raising concerns about a lack of empirical data and expressing the need for quantitative versus qualitative methods.

This response certainly had an image problem—white male faculty calling into question a gender equity study—and the concerned faculty did eventually withdraw the letter in deference to the good of the university community.

However, this study and the response illustrate a serious problem in academia, the pervasive power of traditional structures (expectations about what data matter, what types of research matter, and a lingering argument that objectivity can be achieved) to serve as a veneer for entrenched, and thus rendered invisible, sexism, racism, and classism.

A parallel example: when my university seeks to increase the diversity of the faculty, that effort is always contested with “Let’s just hire the best candidate,” again often voiced by white male faculty [1].

“Best,” of course, like quantitative methods and empirical data, is a veneer for the embedded biases that have been normalized (and thus rendered seemingly invisible to the power structure itself and to those who benefit from the bias).

White and male privilege, then, are institutionalized in higher education (see here and here for ways those privileges exist, again, invisibly to white men). Despite the popular claim that higher education is some liberal indoctrination factory, higher education is incredibly traditional and conservative at its core; only the edges appear liberal.

But, I can feel many wanting to interject, how can calling for high-quality research to address gender equity on campus and expecting candidates for open faculty positions to be the best constitute flawed practices in academia?

Let me offer another example, one that calls into question the grounding of those arguments themselves, the claims of fidelity to high standards.

Another traditional practice in higher education is the use of Student Evaluations of Teaching (SET), feedback gathered from students and then used in various ways to evaluate faculty for tenure and promotion.

Notably, a significant body of research [2] has revealed that SET lack validity and negatively impact women, faculty of color, and international faculty (in the U.S.).

Concurrently, the use of SET positively impacts the existing and skewed white male faculty at most universities, who disproportionately dominate higher ranks and salaries.
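The mechanics of that advantage can be illustrated with a toy simulation; the specific numbers here (a 0.3-point rating bias, the noise level, forty respondents) are invented for illustration and are not drawn from the research cited above.

```python
import random

def observed_rating(true_quality, group_bias, rng):
    """One student's 1-5 rating: true teaching quality plus noise plus group bias."""
    score = true_quality + rng.gauss(0, 0.7) + group_bias
    return min(5.0, max(1.0, score))  # clamp to the survey's 1-5 scale

def average_set(true_quality, group_bias, n_students=40, seed=0):
    """The single number a tenure committee reads off the SET form."""
    rng = random.Random(seed)
    ratings = [observed_rating(true_quality, group_bias, rng) for _ in range(n_students)]
    return sum(ratings) / len(ratings)

# Two instructors teaching EQUALLY well (identical true quality, 4.0 of 5),
# but one group's ratings carry a constant -0.3 bias. The direction follows
# the SET research summarized above; the magnitude is invented.
print(round(average_set(4.0, group_bias=0.0), 2))
print(round(average_set(4.0, group_bias=-0.3), 2))
```

With identical teaching, the biased group’s average simply sits lower, and it is that lower number the evaluation process treats as “evidence.”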

Guess what happens when concerns are raised about SET based on high-quality empirical data and quantitative studies? The same faculty crying foul over gender equity reports and hiring practices toss up their hands and say, “Oh, well, we have to have something.”

As Colleen Flaherty explains:

While some institutions have acknowledged the biases inherent in SETs, many cling to them as a primary teaching evaluation tool because they’re easy — almost irresistibly so. That is, it takes a few minutes to look at professors’ student ratings on, say, a 1-5 scale, and label them strong or weak teachers. It takes hours to visit their classrooms and read over their syllabi to get a more nuanced, and ultimately more accurate, picture.

For example, my university’s self-evaluation form and the connected chair evaluation directly instructs in the teaching evaluation section: “Give particular emphasis to evidence of teaching quality, which could include numerical results from student opinion survey forms, written comments from student opinion survey forms, and comments from faculty or other consultants visiting your classes.”

“Evidence” is bolded and then the first example is “numerical results from student opinion survey forms.” There are clear biases here that privilege an instrument invalidated by a body of high-quality research—exactly what some faculty deemed missing in our gender equity study.

Junior faculty explain, often in private, that they are aware numerical data from the SET are the most important element of their case for tenure and promotion. As well, our Faculty Status Committee has provided workshops directly detailing which data from those forms are most influential, providing, as the committee claims, ways to distinguish faculty from each other.

Virtually every college and university has a diversity and equity statement and a perpetual formation and reformation of diversity and equity committees.

No statement or committee can make existing institutional sexism, racism, and classism disappear—especially if those words and that work are forced to work within existing biased structures.

“Research is reviewed in a rigorous manner, by expert peers,” writes Flaherty. “Yet teaching is often reviewed only or mostly by pedagogical non-experts: students. There’s also mounting evidence of bias in student evaluations of teaching, or SETs — against female and minority instructors in particular. And teacher ratings aren’t necessarily correlated with learning outcomes.”

As long as calls for “high-quality” and “best” to guide policies and practices remain selective—and clearly in the service of the existing inequities and lack of diversity—we must admit the real commitment is not to “high-quality” or “best,” but to the status quo.

While not the only litmus test, a powerful way to determine if your academic institution values diversity and equity is if it continues to implement SET. Almost all do, so the answer remains, probably not.

See Also

Is Your University Racist? Bedelia Nicola Richards


[1] See how “merit” can work in the service of privilege in this reconsideration on Jordan Peterson:

I met Jordan Peterson when he came to the University of Toronto to be interviewed for an assistant professorship in the department of psychology. His CV was impeccable, with terrific references and a pedigree that included a PhD from McGill and a five-year stint at Harvard as an assistant professor.

We did not share research interests but it was clear that his work was solid. My colleagues on the search committee were skeptical — they felt he was too eccentric — but somehow I prevailed. (Several committee members now remind me that they agreed to hire him because they were “tired of hearing me shout over them.”) I pushed for him because he was a divergent thinker, self-educated in the humanities, intellectually flamboyant, bold, energetic and confident, bordering on arrogant. I thought he would bring a new excitement, along with new ideas, to our department.

[2] See:

Blue Scholars

Throughout the early 2000s, a conservative student group at my university was very aggressive—attacking faculty through online forums (using anonymous screen names), creating lists of faculty that conservative students should avoid, and sponsoring an inordinate number of Cultural Life Programs (CLP). This group had significant outside (also anonymous) funding as well.

Once, the conservative antagonist Ann Coulter was a sponsored speaker on campus by this group. I mentioned this in a class, noting her lack of credibility, and a student responded with, “But her books have footnotes.”

I think about this exchange often because the student was recognizing the conventions of scholarly work, conventions that are apt to supersede, in a superficial way, the credibility of the scholarship or the scholar. For the student, footnotes denoted credibility—without the student considering whether or not the sources were credible, whether or not the conclusions and claims made by Coulter were credible.

In this era of Trumplandia, the tired but resilient claim that universities are liberal and that conservative scholars are nearly absent or at least ostracized is once again gaining momentum. As well, the resurgence of the oppressed white male has gained momentum.

Those contexts are also driven by calls for free speech, allowing all sides a voice, and mostly superficial arguments about the tension between academic freedom and politically correct speech and concepts such as safe spaces.

Here, the post title, “Blue Scholars,” is not yet another addition to the “quit lit” genre, but an investigation of the race and gender implications of respectability politics in the work of scholars.

Consider the issues raised in the following two Tweets:

The expectations around social scientist Crystal Marie Fleming—chastising driven by respectability politics, aimed not at what she claims but at her profanity—are quite distinct when compared to calls for civil discourse as a response to Jordan Peterson, a public scholar who has been thoroughly discredited while also being quite popular outside of academia.

Fleming is facing the academic and public stigma about working blue—the use of profanity superseding the content of her discourse. Peterson, a misogynist cloaked in academic garb and discourse, benefits from calls for civil discourse, a subset of respectability politics, because his language and the language of his detractors allow reprehensible ideas a stage more prominent than they deserve.

Fleming’s experience as a scholar parallels Colin Kaepernick’s confronting arguments that his message was not the problem, but how (and when) he conducted his protests.

Beneath calls for respectability politics and civil discourse, then, are the interests of white and male privilege; the existing power structure always benefits from a default demand for respect and for civility, the antithesis of protest.

Language and content, as I have examined in terms of stand-up comedy, are always about race, gender, and social class. The how of language, invariably, becomes the focus as soon as any marginalized group becomes confrontational, critical, empowered.

“Don’t speak or write that way” and “Don’t act that way” are always about the status of power—not about right or wrong, credible or baseless.

The criticism leveled at Fleming and the calls for civil discourse to allow Peterson’s vile arguments are windows into the failure of academia, an Ivory Tower trapped still in Medieval paradigms of authority, rhetoric, and deference.

In Kurt Vonnegut’s God Bless You, Mr. Rosewater, the titular character of the novel, Eliot Rosewater, implores:

“Go over to her shack, I guess. Sprinkle some water on the babies, say, ‘Hello, babies. Welcome to Earth. It’s hot in the summer and cold in the winter. It’s round and wet and crowded. At the outside, babies, you’ve got about a hundred years here. There’s only one rule that I know of, babies—:

“‘God damn it, you’ve got to be kind.’” (p. 129)

A moral imperative wrapped in blasphemous language.

I prefer the moral imperative, and I prefer the critical scholar working blue while rejecting the false calls for civility that foster scholars pandering to the worst among us.

If there are words that should give us pause, they are “respect” and “civil discourse”—not the seven words you can’t say on television.

Be Informed, Not Ideological

The Onion has created a dark humor Groundhog Day response to school and mass shootings in the U.S.: ‘No Way To Prevent This,’ Says Only Nation Where This Regularly Happens.

Public discourse and social media discussions suffer from something as predictable after school shootings as well—a fruitless clash of ideological claims, often bereft of evidence or historical context.

As an educator and a scholar, I feel compelled to advocate for safety in our society and our schools; therefore, I routinely address the research base on gun violence and school safety through my Twitter feed, on Facebook, and in my blogging.

Here’s a pattern I witness each time.

I post something about gun violence and school shootings, and someone comments with a claim that the school shootings are the result of a decline in morals, occasionally tossing in a reference to taking God and prayer out of schools (this last part is, by the way, entirely false, as forced prayer has been deemed unconstitutional in public schools, but everyone in those schools is free to pray without interference).

This popped up after the shooting in Santa Fe, Texas, so I simply responded by asking what evidence exists that the U.S. was ever a moral/ethical country and thus how we could prove a decline.

The person openly stated that they had no proof, and just believed it to be true—conceding that I had the right to believe whatever I wanted.

Herein is the problem: Most people believe and argue as ideologues and thus assume everyone else is arguing as an ideologue also—reducing public and social media debate to little more than a shouting match absent evidence.

The worst extremes of being ideological, for example, are racism and sexism. Racism is the idea that some races are superior to others, and racists, then, impose that idea onto the world instead of drawing conclusions about race from evidence. Sexism functions the same regarding sex/gender.

The ideologue, then, can often be discredited by evidence—except that those functioning by ideology alone refuse to move from being ideological to being informed by that evidence.

Science, often misunderstood, is a discipline designed to build better understanding through a variety of ways of thinking, reasoning:

“In inductive inference, we go from the specific to the general. We make many observations, discern a pattern, make a generalization, and infer an explanation or a theory,” Wassertheil-Smoller told Live Science. “In science, there is a constant interplay between inductive inference (based on observations) and deductive inference (based on theory), until we get closer and closer to the ‘truth,’ which we can only approach but not ascertain with complete certainty.”

We start with some idea—I think this is true about the world, or human behavior—and then we put that idea to a test. The outcome of that testing creates some foundation for anticipating how the world will work, how humans will behave.

However, those ideas grounded in evidence are then always subject to the consequences of further evidence—if the evidence reinforces the idea, it survives; if the evidence contradicts the idea, it must change.

Ideologues, resistant to evidence, fall victim to logical fallacies—flawed thinking, for example:

A leading candidate would be “attribution error.” Attribution error leads us to resist attempts to explain the bad behavior of people in the enemy tribe by reference to “situational” factors—poverty, enemy occupation, humiliation, peer group pressure, whatever. We’d rather think our enemies and rivals do bad things because that’s the kind of people they are: bad….

This is attribution error working as designed. It sustains your conviction that, though your team may do bad things, it’s only the other team that’s actually bad. Your badness is “situational,” theirs is “dispositional.”…

Another cognitive bias—probably the most famous—is confirmation bias, the tendency to embrace, perhaps uncritically, evidence that supports your side of an argument and to either not notice, reject, or forget evidence that undermines it.

To refuse continually interrogating our ideas about the world against the evidence is to commit to faulty thinking; attribution error and confirmation bias are just a couple of the most powerful ways people become mired in false ideology and resistant to credible ideas.

Being ideological instead of informed has dire consequences. Ideological thinking helped create a healthcare crisis: patients believed antibiotics cure every sort of illness, and the medical field compounded the error by allowing patient demand to drive bad medical practice.

Antibiotic-resistant disease is the child of ideological over informed behavior.

The gun debate and the pursuit of safety also suffer from ideological flaws.

For example, many people argue for gun ownership, and against gun regulation, because they believe guns in the home protect their family and property.

Two aspects of this argument are important.

First, this argument conflates safety with gun ownership without investigating whether or not this is a fair association.

The personal and family safety—self-defense—argument is both rational and irrational. To desire safety is entirely rational; to cling to guns in that pursuit, once you are informed and not ideological, becomes irrational.

Thus, second, gun ownership for safety has many outcomes more common than self-defense—domestic violence, suicide, and accidental shootings (see research listed here).

At the root of many people being ideological and not informed is our basic human nature; we are causal machines in pursuit of survival.

Humans are constantly jumping from correlation to causation because we are predisposed to making those inferences at the unconscious level, split-second decisions once necessary to survive.

Consider, again, our rush to make medical claims not based in evidence: People think being cold causes colds; however, colds are the result of the presence of viruses. (It seems worth noting we can experience cold with our senses and viruses are not recognizable to the bare senses.)

Extreme cold can lead to hypothermia, and can reduce our resistance to bacteria and viruses. But cold weather doesn’t cause colds.

To be ideological (and wrong) is easier because there is some seemingly concrete way to jumble correlation with causation; to be informed requires a willingness to step back from what we believe.

As great failures of ideology, then, we demand antibiotics and cling to guns because we have made flawed associations with both in pursuit of perfectly good outcomes—health and safety.

To be informed, and not ideological, means that we must be willing to identify what it is we are trying to understand. And then we must be willing not only to seek out evidence but also to recognize that evidence even as it goes against our initial idea—that which we have always believed.

Liars and Racists

If Thomas Jefferson impregnated his slave, Sally Hemings, as historians claim, Jefferson was a rapist. No slave had the power of sexual consent or rejection; at best, slaves functioned within a repressive culture of “reduced circumstances.” [1]

About Andrew Jackson, Tim Morris explains:

Jackson was an unrepentant slaveholder and the power behind the legislation that forced five peaceful American Indian tribes from their homelands and triggered the Trail of Tears, a 1,000-mile death march that would leave 4,000 of 16,000 Cherokees dead along the way.

Jackson was a virulent racist.

South Carolina’s shame, Ben Tillman, was, as Will Moredock writes, a racist, terrorist, and murderer:

The then 29-year-old Tillman led the members of the Sweetwater Sabre Club, a.k.a. the Edgefield Redshirts, against a local militia group, all black. Several African-American militia men were killed in a pitched battle with red-shirt-wearing white terrorists. After the militia surrendered, five of them were called out by name and executed. A few weeks later, when vigilantes captured a black state senator named Simon Coker, Tillman was present when two of his men executed the prisoner while he was on his knees praying.

In more recent history, Bill Clinton was an adulterer and a liar. His life as a sexual predator is undeniable.

Today, Donald Trump leads the U.S. as a serial liar and a racist. He has a history as a sexual predator and has bragged about sexual assault.

When I was a child and teenager, I was routinely hit and punished for my attitude, my tone—even when what I argued was, in fact, true, valid.

Of the failures by my father I still struggle against, this is one of the worst lessons he taught me: The credibility of what you claim is always secondary to how you make your claims, and you should always defer to authority even when authority is wrong and you are right.

That is the sort of tone policing bullshit that is the refuge of those in authority who realize they have no real right to that authority.

So I now witness the U.S. drift increasingly into the sort of environment I have rejected my whole life.

To call a liar, a liar, especially in jest, is somehow the offensive thing—not the lies and the liar.

To call a racist, a racist, is somehow the source of racial discord—not the racism or the racist.

Those in authority who know they have no real right to that authority are encouraging tone policing as a distraction.

Sarah Huckabee Sanders is a liar; that is the offensive thing.

Trump is a racist and a liar; that is the offensive thing.

Trump’s support is significantly driven by racism; that is a fact, and the offensive thing.

Let us be vigilant about naming liars and racists.

Let us not be derailed or dissuaded by tone policing.

The offensive thing is the thing itself—never the ones brave enough to name it.


[1] See from Beware the Bastards: On Freedom and Choice:

In Margaret Atwood’s The Handmaid’s Tale, Offred (June), the eponymous handmaid of the tale, reveals that “[t]he circumstances have been reduced” (p. 8) for the younger women of Gilead, a post-apocalyptic theocracy of sorts. These seemingly fertile women have become extremely precious for the survival of the white race and paradoxically the embodiment of a perverse slavery for procreation.

Atwood has written at length about being indebted to George Orwell—those who control language control everything and everyone—and that her speculative novel includes a quilting of human actions drawn directly from history, not fabricated by Atwood.

How have humans kept other humans in literal and economic bondage? Often by exploiting token members of the group being exploited.

Thus, in The Handmaid’s Tale, a few women are manipulated to control other women. The handmaids are trained by Aunts, who instill the propaganda:

There is more than one kind of freedom, said Aunt Lydia. Freedom to and freedom from. In the days of anarchy, it was freedom to. Now you are being given freedom from. Don’t underrate it….

We were a society dying, said Aunt Lydia, of too much choice. (pp. 24, 25)

Throughout the novel, readers must navigate how Offred (June) weaves the overlap of her own original ideas and vocabulary as that intersects with the propaganda of Gilead:

Will I ever be in a hotel room again? How I wasted them, those rooms, that freedom from being seen.

Rented license. (p. 50)

“Freedom” and “license” are exposed as bound words, the meanings contextual.

As Offred (June) continues to investigate rooms, she discovers a powerful but foreign phrase:

I knelt to examine the floor, and there it was, in tiny writing, quite fresh it seemed, scratched with a pin or maybe just a fingernail, in the corner where the darkest shadow fell: Nolite te bastardes carborundorum.

I didn’t know what it meant, or even what language it was in. I thought it might be Latin, but I didn’t know any Latin. Still it was a message, and it was in writing, forbidden by that very fact, and it hadn’t been discovered. Except by me, for whom it was intended. It was intended for whoever came next. (p. 52)

The power to control language includes defining words, but also denying access to language—forbidding reading and writing, literacy, to those in bondage.

And then, Offred (June) explains about her life before Gilead:

We lived, as usual, by ignoring. Ignoring isn’t the same as ignorance, you have to work at it.

Nothing changes instantaneously: in a gradually heating bathtub you’d be boiled to death before you knew it….The newspaper stories were like dreams to us, bad dreams dreamt by others. How awful, we would say, and they were, but they were awful without being believable. They were too melodramatic, they had a dimension that was not the dimension of our lives.

We were the people who were not in the papers. We lived in the blank white spaces at the edges of the print. It gave us more freedom.

We lived in the gaps between the stories. (pp. 56-57)

And from that previous life of “ignoring” the other since it wasn’t about them, Offred (June) finds herself the procreation slave of a Commander, in “reduced circumstances” where she realizes: “There wasn’t a lot of choice but there was some, and this is what I chose” (p. 94).

Her previous life of “ignoring” has been replaced by something seemingly more awful, but nearly exactly the same as she explains about the Ceremony: “One detaches oneself” (p. 95).

Even in Gilead, Offred (June) again becomes the other woman, lured into an infidelity characterized by playing Scrabble with the Commander, who reveals to her that Nolite te bastardes carborundorum is slang Latin for “Don’t let the bastards grind you down” (p. 187).

Adolescent language as rebellion has become a life-or-death slogan for Offred (June).

As her relationship with the Commander becomes increasingly trite and complex, Offred (June) declares, “Freedom, like everything else, is relative” (p. 231).

Mama’s Boy: Aftermath

I have never liked Mother’s Day—as I have never liked any holidays, special days. The burden of celebrations and gifts.

This sort of ceremony and tradition has always felt forced, insincere, superficial.

As I grew older and my mother grew older, buying her gifts became more and more difficult because what do people in their 50s, 60s, 70s, and beyond need given to them?

It is Mother’s Day 2018. My first since my mother died in December 2017 just a few months after my father died in June of the same year.

This Mother’s Day feels even more burdensome than normal, of course.

As I have examined, those of us left living have been sifting through all my parents’ stuff, throwing away most of it—presents rendered just more trash.

As I have examined, I am a churning mess of anxiety, in part, as a biological and environmental gift of my mother.

She died over the course of about six months, slowly and fitfully after a stroke and then stage 4 lung cancer. The stroke took her ability to communicate, but worst of all, it supercharged her anxiety.

It was horrible to witness.

It wasn’t a fair thing for anyone to endure on the way out.

I don’t have much left to say except I am more convinced than ever that these holidays, these designated moments to celebrate and give gifts—this is a real failure of human imagination.

For gifts, I had begun to give my mother plants, living plants in pots that could be transferred and maintained. I just could not buy her another shirt she didn’t really want and certainly didn’t need.

When my father-in-law died 7 years ago, his daughters found stacks of gifts, mostly shirts, if I recall correctly, never opened, never worn.

Just resting in his dresser.

Somehow I thought the plants were a best case approach to gift giving, to this damned circus of stuff that we have reduced our human condition to in the name of love.

But they weren’t.

What my mother needed, what my mother deserved, what everyone deserves, was her human dignity.

Especially in the last years and then final months, she needed and deserved high-quality and affordable health care.

Instead, her deteriorating body and my father’s even more dramatic decline were hellish burdens on them and everyone around them. And this wore heavily on my mother, who believed her stroke killed my father at last (in a way, it did of course, but mostly, his life was at its end and she had kept him alive longer, if anything, than his frailness really supported).

My nephews and I are still trapped in the calloused and mind-numbing labyrinth of bureaucracy surrounding my parents’ living and dying, the most evil part being the insurance system designed more to deny healthcare, to deny human dignity, than anything else.

Dignity, I suspect, seems too abstract, and health care, too mundane.

But if all we can muster are a few designated days, some really awful cards, and then an endless stream of things people really never wanted or needed, we may be better served to consider the real value of human dignity and the essential role something as mundane as high-quality and affordable healthcare for everyone plays in that dignity.

To live as if every day were a holiday, to live for others as if we all deserve the full fruits of human dignity.


Recommended

I Ask My Mother to Sing, Li-Young Lee

Eating Together, Li-Young Lee

The Gift, Li-Young Lee

the philosophy of gerunds (my mother is dying)

my mother has returned to where she began

fragility (and then i realize)

Negotiating Meaning from Text: “readers are welcome to it if they wish”

[Header Photo by Shelley Pauls on Unsplash]

Yesterday, I finished Jeff VanderMeer’s The Southern Reach Trilogy. As full disclosure, I should add “finally” since I plowed through with glee Annihilation, warmed to Authority after adjusting to the different style/genre and main character, but sputtered through Acceptance out of a sort of self-imposed commitment to finish the trilogy.

On balance, I can fairly say I may have almost no idea what the hell happened in these novels, and I certainly have only some faint urges about what the trilogy means—especially in the sorts of ways we assign meaning in formal schooling such as English courses.

Now only a few years away from 60, having taught for over 30 years, I am afforded something almost no students are allowed: I read entirely by choice, and thus, I can quit any book at any time with no consequences (except my own shame at having not read a book).

I still on occasion highlight and annotate the books I read. But no tests, no papers (except I do often blog about the books I read).

Traditionally, fictional texts and poetry have been reduced in formal schooling—in English courses—to mere vehicles for “guess what the text means,” or more pointedly “guess what the teacher claims the text means.”

Text meaning in English courses, then, is often located in the authority of the teacher, not in the text itself or the student.

As a high school English teacher, I was always careful to avoid propagandizing students toward “the” singular authoritarian meaning of a text, but I also felt compelled to make students fully aware of the traditional expectations (New Criticism, Advanced Placement testing, etc.) of couching all claims of meaning in the text itself.

Students still often balked at how one meaning held credibility and others did not.

One approach to this challenge I used was to ask students to read William Carlos Williams’s “The Red Wheelbarrow,” and then to visualize a wheelbarrow. I went around the room and had the students identify the position of the wheelbarrow in their visualization.

I also shared that I always thought of wheelbarrows leaned against a tree because I was chastised growing up about leaving wheelbarrows flat, where rain water could accumulate and rust out the tub.

From here, we discussed that the poem gives some details—”red,” “glazed with rain/water”—but nothing about its physical position. Meaning, then, could work from those text details, but students’ visualization of the wheelbarrow was a personal response, not an element for claims of academic meaning.

Here, I also stressed that students should not think the distinction between meaning and personal response meant that their responses did not matter, or mattered less. However, in formal situations such as testing or assigned critical analysis, most assessments would draw an evaluative judgment, honoring text-based meaning over personal response.

Yet, I remain deeply concerned about how formal schooling, especially narrow versions of literary analysis essays and high-stakes testing, erodes and even poisons students’ joy in reading text by continuing to couch text meaning in the authority of the teacher, which is often a proxy for the authority of the critic (and not the author, or the students as readers).

Authors, I often warned my students, did not write their fiction and poetry so teachers could assign them and then have students analyze the text for literary techniques and the ultimate meaning or theme. Many celebrated authors have loathed English courses, and equally loathe the literary analysis game.

Author Sara Holbrook, for example, recently confessed I can’t answer these Texas standardized test questions about my own poems:

These test questions were just made up, and tragically, incomprehensibly, kids’ futures and the evaluations of their teachers will be based on their ability to guess the so-called correct answer to made up questions….

Texas, please know, this was not the author’s purpose in writing this poem.

This tyranny of testing supplants not only the authority of students as readers, but also the authority of the writer who constructed the text!

And Hannah Furness reports:

Ian McEwan, the award-winning author, has admitted feeling “a little dubious” about people being compelled to study his books, after helping his son with an essay about his own novel and receiving a C.

McEwan explained:

“Compelled to read his dad’s book – imagine. Poor guy,” McEwan added.

“I confess I did give him a tutorial and told him what he should consider. I didn’t read his essay but it turned out his teacher disagreed fundamentally with what he said.

“I think he ended up with a C+.”

Meaning couched in the authority of the teacher trumps, again, students constructing meaning and the author as an agent of intent.

And finally, consider Margaret Atwood discussing her recently reimagined The Handmaid’s Tale as a serial TV drama:

When I first began “The Handmaid’s Tale” it was called “Offred,” the name of its central character. This name is composed of a man’s first name, “Fred,” and a prefix denoting “belonging to,” so it is like “de” in French or “von” in German, or like the suffix “son” in English last names like Williamson. Within this name is concealed another possibility: “offered,” denoting a religious offering or a victim offered for sacrifice.

Why do we never learn the real name of the central character, I have often been asked. Because, I reply, so many people throughout history have had their names changed, or have simply disappeared from view. Some have deduced that Offred’s real name is June, since, of all the names whispered among the Handmaids in the gymnasium/dormitory, “June” is the only one that never appears again. That was not my original thought but it fits, so readers are welcome to it if they wish.

Having taught The Handmaid’s Tale for well over a decade in A.P. Literature, and also having written a book on Atwood, I felt my stomach drop when I first read this—forcing myself to recall that I had taught as authoritative what Atwood contested: June as the original given name of Offred. The source of that, for me, was a published critical analysis, in fact.

This caution offered by Atwood, I believe, speaks to our English classes, where text is too often reduced to an assignment, to a game of guess what the teacher wants you to say this text means.

As teachers of English, of course, we have many responsibilities. Making students aware of traditional and text-based expectations for assigning meaning to text is certainly one of those responsibilities.

But this must not be the only way in which we invite students to read, enjoy, and then draw meaning from text.

Choice in what they read as well as a wide variety of ways for students to respond to text—these must become the expanded set of responsibilities we practice in our classrooms.

Occasionally, if not often, we should as teachers be as gracious as Atwood, providing the space for students to read and then respond with their own authority in a class climate grounded in “readers are welcome to it if they wish.”

Comedy Is Not Pretty: In Black and White

Mix all the colors of light and we see white; the absence of light is black.

Mix all the colors of pigment and we see black; the absence of pigment is white.

This paradox of how we see color often is the source of debate; I’ve heard students complain about being taught different facts in art and physics classes. But it also serves as a useful metaphor for the problem of color as a foundation of race and racism.

When I was young and still discovering and shaping who I am (and necessarily coming to terms with race in the deep South), I was profoundly influenced by stand-up comedians—George Carlin, Richard Pryor, and Steve Martin among the most influential.

Martin’s 1979 comedy album, Comedy Is Not Pretty, is a prescient title for two contemporary stand-up comedians whose routines, viewed together, capture nearly perfectly everything that is wrong with contemporary understanding about race and racism: Dave Chappelle and John Mulaney.

Having recently watched Chappelle’s and Mulaney’s newest specials, I was struck by how many of their bits were similar—both used their wives’ minority status to tread on dangerous material, both depended on meta-jokes based on reactions to their routines, both wove political humor in with the autobiographical, both worked blue, and each addressed race.

However, these two men and their routines are also profoundly different—Chappelle is very much a black comedian (think Pryor), and Mulaney is very much white (think Jerry Seinfeld).

And I imagine anyone knowing these comedians finds this distinction a bit simplistic, even bordering on crass, but I want to argue here that race becomes the defining element of their work, and as such, exposes a central problem in public discourse about race and racism: Many whites are apt to resist discussions of race and racism with “Why does everything have to be about race?” or “I don’t see color.”

Let’s start with a few clips, a shortened version of Mulaney’s Trump skit (here on TV, but expanded in his new special) and a couple clips from Chappelle on Netflix:

Mulaney’s political humor is indirect; it is metaphor—in a similar way his profanity is rare and his special includes one direct reference to being white.

Chappelle’s political humor, profanity, and race, by comparison, are direct, even blunt, and pervasive. Consider especially the second clip above.

Because of his whiteness, his privilege, Mulaney is afforded the space of being indirect while Chappelle, even as he acknowledges his wealth privilege, cannot risk these subtleties.

The paradox of race/racism in human behavior is parallel to the paradox of color in light/pigment: For whites, race seems always invisible because white is the norm of U.S. culture (the absence of race is white), but for blacks, race is a constant reality, something always visible (the presence of race is black).

The media rarely identify race for whites, but nearly always do for black and brown people—especially in criminal situations.

Whites, then, watching Mulaney are apt to see the routines as not about race (even though the entire routine is imbued with whiteness) and mostly not political (although, again, his entire routine is a political commentary); those same whites, we can guarantee, would see Chappelle as racial (if not, to misuse the term, racist) and strongly political.

The problem with race/racial/racism as that intersects with political is that everything in human behavior includes both, but the norms make one invisible to the dominant race (white) and omnipresent to the marginalized race (black). And thus, all human behavior is political either by omission (maintaining those norms) or by confrontation (changing the norms).

Mulaney, in his whiteness and the primary state of omission, becomes a seemingly less radical comedian; Chappelle, in his blackness and confrontation, becomes a seemingly more radical comedian.

I include “seemingly” because, as Chappelle acknowledges, both comedians work with wealth privilege—even as that wealth does not afford Chappelle the ability to rise above his being black (see his skit about being pulled over by the police while a friend is driving for him).

Almost 40 years past Martin’s visual gag (and he too may seem less political in his whiteness), Chappelle and Mulaney offer by comparison comedy that is not pretty, but is pretty sharp in terms of modeling the lingering problems with race and racism in the U.S.