“Evaluating the Comparability of Two Measures of Lexical Diversity”

I’m excited and grateful to share the news that my article “Evaluating the Comparability of Two Measures of Lexical Diversity” has been accepted for publication in the applied linguistics journal System. The article should appear in the journal in the next several months. I would like to take a minute and explain it in language everyone can understand. I will attempt to define any complex or unusual terms and to explain processes as simply as possible, but these topics are complicated, so getting to understanding will take some time and effort. This post will probably be of interest to very few people, so I don’t blame you if you skip it.

What is lexical diversity?

“Lexical diversity” is a term used in applied linguistics and related fields to refer to the range of vocabulary displayed in a given text. (Related terms include lexical density, lexical richness, etc., which differ in how they are defined and calculated — for example, some systems weight words by their relative rarity.) Evaluating that range seems intuitively simple, and yet developing a valid, reliable metric for such evaluation has proven unusually tricky. A great many attempts to create such metrics have been undertaken, with limited success. Some of the more exciting attempts now utilize complex algorithmic processes that would not have been practically feasible before the advent of the personal computer. My paper compares two of them and provides empirical justification for a claim other researchers have made about their mechanism.

Why do we care about it?

Lexical diversity and similar metrics have been used for a wide variety of applications. Being able to display a large vocabulary is often considered an important aspect of being a sophisticated language user. This is particularly true because we recognize a distinction between active vocabulary, or the words a language user can utilize effectively in their speech and writing, and a passive vocabulary, or the words a language user can define when challenged, such as in a test. This is an important distinction for real-world language use. For example, many tests of English as a second language involve students choosing the best definition for an English term from a list of possible definitions. But clearly, being able to choose a definition from a list and being able to effectively use a word in real-life situations are different skills. This is a particularly acute issue because of the existence of English-language “cram schools” where language learners study lists of vocabulary endlessly but get little language experience of value. Lexical diversity allows us to see how much vocabulary someone actually integrates into their production. This has been used to assess the proficiency of second language speakers; to detect learning disabilities and language impairments in children; and to assess texts for readability and grade-level appropriateness, among other things. Lexical diversity also has application to machine learning of language and natural language processing, such as is used in computerized translation services.

Why is it hard to measure?

The essential reason for the difficulty in assessing diversity in vocabulary lies in the recursive, repetitive nature of functional vocabulary. In English linguistics there is a distinction between functional and lexical vocabulary. Functional vocabulary carries grammatical information and is used to create syntactic form; it includes categories like articles (determiners) and prepositions. Lexical vocabulary delivers propositional content and includes categories like nouns and verbs. Different languages have different frequencies of functional vocabulary relative to lexical. Languages with a great deal of morphology — that is, languages where words change a great deal depending on their grammatical context — have less need for functional vocabulary, as essential grammatical information can be embedded in different word forms. Consider Latin and its notorious number of versions of every word, and then contrast it with Mandarin, which has almost no similar morphological changes at all. English lies closer on the spectrum to Mandarin than to Latin; while we have both derivational morphology (that is, changes to words that change their parts of speech/syntactic category, the way -ness changes adjectives to nouns) and inflectional morphology (that is, changes to words that maintain grammatical functions like tense without changing parts of speech, the way -ed changes present to past tense), in comparison to a language like Latin we have a pretty morphologically inert language. To substitute for this, we have a) much stricter rules for word order than a language like Latin and b) more functional vocabulary to provide structure.

What does this have to do with assessing diversity in vocabulary? Well, first, we have a number of judgment calls to make when it comes to deciding what constitutes a word. There’s a whole vast literature about where to draw the line between one word and another, utilizing terms like word, lemma, and word family. (We have a pretty good sense that dogs is the same word as dog, but what about doglike or He was dogging me for days, etc.?) I don’t want to get too far into that because it would take a book. It’s enough to say here that in most computerized attempts to measure lexical diversity, such as the ones I’m discussing here, all constructions that differ by even a single letter are classified as different terms. In part, this is a practical matter, as asking computers to tell the difference between inflectional and derivational morphology is currently not practical. We would hope that any valid measure of lexical diversity would be sufficiently robust to account for the minor variations owing to different word forms.

So: the simplest way to assess the amount of diversity would simply be to count the number of different terms in a sample. This measure has been referred to in the past as Number of Different Words (NDW) and is now conventionally referred to as Types. The problem here is obvious: you could not reliably compare a 75-word sample to a 100-word sample, let alone a 750-word sample. To account for this, researchers developed what’s called a Type-to-Token Ratio (TTR). This figure simply places the number of unique words (Types) in the numerator and the number of total words (Tokens) in the denominator, to generate a ratio that is 1 or lower. The highest possible TTR, 1, is only possible if you never repeat a term, such as if you are counting (one, two, three…) without repeating. The lowest possible TTR, 1/tokens, is only possible if you say the same word over and over again (one, one, one…). Clearly, in real-world language samples, TTR will lie somewhere in between those extremes. If half of all your terms are new words, for example, your TTR would be .50.
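In code, the core computation is trivial. Here is a minimal sketch (the function name is mine, and real studies tokenize far more carefully than a whitespace split):

```python
def ttr(text):
    tokens = text.lower().split()  # Tokens: every running word (naive whitespace split)
    types = set(tokens)            # Types: unique word forms
    return len(types) / len(tokens)

print(ttr("one two three four five"))  # 1.0: no term repeats
print(ttr("one one one one"))          # 0.25, i.e. the 1/tokens floor
print(ttr("the cat sat on the mat"))   # ~0.83: only "the" repeats
```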

Sounds good, right? Well, there’s a problem, at least in English. Because language is repetitive by its nature, and particularly because functional vocabulary like articles and prepositions are used constantly– think of how many times you use the words “the” and “to” in a given conversation– TTR has an inevitable downward trajectory. And this is a problem because, as TTR inevitably falls, we lose the ability to discriminate between language samples of differing lengths, which is precisely why TTR was invented in the first place. For example, a 100-word children’s story might have the same TTR as a Shakespeare play, as the constant repetition of functional vocabulary overwhelms the greater diversity in absolute terms of the latter. We can therefore say that TTR is not robust to changes in sample size, and repeated empirical investigations have demonstrated that this sensitivity can apply even when the difference in text lengths are quite small. TTR fails to adequately control for the confounding variable it was expressly intended to control for.

A great many attempts have been made to adjust TTR mathematically (Guiraud’s Root TTR, Somers’ S), but none of them have worked.
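To give a concrete sense of what those corrections look like, here is a minimal sketch of Guiraud’s Root TTR, which divides Types by the square root of Tokens rather than the raw token count (function name mine; as noted above, corrections of this family still drift with sample length):

```python
import math

def root_ttr(tokens):
    # Guiraud's Root TTR: Types / sqrt(Tokens). The square root dampens
    # the length effect but, empirically, does not remove it.
    return len(set(tokens)) / math.sqrt(len(tokens))
```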

What computational methods have been devised to measure lexical diversity?

Given the failure of straightforwardly mathematical attempts to adjust TTR, and with the rise of increasingly powerful and accessible computer programs for processing text, researchers turned to algorithmic/computational models to solve the problem. One of the first such models was the vocd algorithm and the metric it returns, D. D stands today as one of the most popular metrics for assessing diversity in vocabulary. For clarity, I refer to D as “VOCD-D” in this research.

Developed primarily by the late David Malvern, Gerard McKee, and Brian Richards, along with others, the vocd algorithm in essence assesses the change in TTR as a function of text length and generates a measure, VOCD-D, that approximates how TTR changes as a text grows in length. Consider the image below, which I’ve photographed from Malvern et al.’s 2004 book Lexical diversity and language development: Quantification and assessment. (I apologize for the image quality.)

[Figure: ideal TTR curves plotted over token counts, from Malvern et al. 2004]

What you’re looking at is a series of ideal curves depicting changing TTR ratios over a given text length. As we move from the left to the right, we’re moving from a shorter to a longer text. As I said, the inevitable trajectory of these curves is downward. They all start in the same place, at 1, and fall from there. And if we extend these curves far enough, they would eventually end up in the same place, bunched together near the bottom, making it difficult to discriminate between different texts. But as these curves demonstrate, they do not fall at the same rate, and we can quantitatively assess the rate of downward movement in a TTR curve. This, in essence, is what vocd does.

The depicted curves are ideal in the sense that they are idealized mathematical constructions used for curve fitting. Curve-fitting procedures are statistical methods that match real-world data, which is stochastic (that is, involves statistical distortion and noise), to approximations based on theoretical concepts. Real-world TTR curves are in fact far more jagged than this. But what we can do with software is match real-world curves to these ideal curves to obtain a relative value, and that’s how vocd returns a VOCD-D measurement. The algorithm contains an equation for the relationship between text length, TTR, and VOCD-D, processes large collections of texts, and returns a value (typically between 40 and 120) that can be used to assess how diverse the vocabulary is in those texts. (VOCD-D values can really only be understood relative to each other.) The developers of the metric define the relationship between TTR, N (the number of tokens), and D at a given point along a TTR curve as TTR = (D/N)[(1 + 2N/D)^(1/2) − 1].

Now, vocd uses a sampling procedure to obtain these figures. By default, the algorithm takes 100 random samples of 35 tokens, then 100 samples of 36 tokens, then 37, and so on, up to a final round of 50-token samples. It averages the TTRs at each sample size, fits the curve, and returns a VOCD-D figure. The idea is that, because different segments of a language sample might have significantly different levels of displayed diversity in vocabulary, we should draw samples of differing sizes taken at random from throughout each text, in order to ensure that the obtained value is a valid measure. (The fact that lexical diversity is not consistent throughout a given text should give us pause, but that’s a whole other ball of wax.) Several programs that utilize the vocd algorithm also run through the whole process three times, averaging all returned results together for a figure called Doptimum.
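Putting the pieces together, here is a rough sketch of that procedure. This is not the actual CLAN/vocd implementation; the function names are mine, and it assumes numpy and scipy are available:

```python
import random
import numpy as np
from scipy.optimize import curve_fit

def ttr_model(n, d):
    # Malvern et al.'s ideal curve: TTR = (D/N) * ((1 + 2N/D)^(1/2) - 1)
    return (d / n) * (np.sqrt(1 + 2 * n / d) - 1)

def estimate_d(tokens, trials=100, sizes=range(35, 51)):
    # Average the TTR of `trials` random samples at each sample size,
    # drawing without replacement from anywhere in the text.
    mean_ttrs = []
    for n in sizes:
        ttrs = [len(set(random.sample(tokens, n))) / n for _ in range(trials)]
        mean_ttrs.append(sum(ttrs) / trials)
    # Fit the ideal curve to the empirical means; the fitted parameter is D.
    popt, _ = curve_fit(ttr_model, np.array(list(sizes)), np.array(mean_ttrs), p0=[50.0])
    return popt[0]
```

Running estimate_d three times on the same text and averaging the results would correspond to the Doptimum figure mentioned above.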

VOCD-D is still affected by text length, and its developers caution that outside of an ideal range of perhaps 100-500 words, the figure is less reliable. Typical best practices involve combining VOCD-D with other measures, such as the Maas Index and MTLD (Measure of Textual Lexical Diversity), in order to make research more robust. Still, VOCD-D has shown itself to be far more robust across differing text lengths than TTR, and since the introduction of widely-available software that can measure it, notably the CLAN application from Carnegie Mellon’s CHILDES project, it has become one of the most commonly used metrics to assess lexical diversity.

So what’s the issue with vocd?

In a series of articles, Philip McCarthy of the University of Memphis’s Institute for Intelligent Systems and Scott Jarvis of Ohio University identified a couple of issues with the vocd algorithm. They argued that the algorithm produces a metric which is in fact a complex approximation of another measure that is a) less computationally demanding and b) less variable. Specifically, McCarthy and Jarvis argued that vocd‘s complex curve-fitting process actually approximates another value which can be statistically derived from a language sample based on hypergeometric sampling. Hypergeometric sampling is a kind of probability sampling that occurs “without replacement.” Imagine that you have a bag filled with black and white marbles. You know the number of marbles and the number of each color. You want to know the probability that you will withdraw a marble of a particular color each time you reach in, or what number of each color you can expect in a certain number of pulls, etc. If you are placing the marbles back in the bag after checking (with replacement), you use binomial sampling. If you don’t put the marbles back (without replacement), you use hypergeometric sampling.

McCarthy and Jarvis argued, in my view persuasively, that the computational procedure involved in vocd simply approximates a more direct, less variable value based on calculating the odds of any individual Type (unique word) appearing in a sample of a given length, which can be accomplished with hypergeometric sampling. VOCD-D, according to McCarthy and Jarvis, ultimately approximates the sum of the probabilities of a given type appearing in a sample of a given length. The curve-fitting process and repeated random sampling merely introduce computational complexity and statistical noise. McCarthy and Jarvis therefore developed an alternative algorithm and metric. Though statistically complex, the operation is simple for a computer, and this metric has the additional benefit of allowing for exhaustive sampling (checking every type in every text) rather than random sampling. McCarthy and Jarvis named their metric HD-D, or Hypergeometric Distribution of Diversity.
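To make the marble logic concrete: for each type, you can compute the chance that it shows up at least once in a fixed-size sample drawn without replacement, and sum those probabilities across every type in the text. Here is a minimal sketch of that style of calculation (my own structure and naming, not Gramulator’s; the 42-token sample size is my assumption about the conventional default):

```python
from collections import Counter
from scipy.stats import hypergeom

def hd_d(tokens, sample_size=42):
    n = len(tokens)
    total = 0.0
    for freq in Counter(tokens).values():
        # hypergeom.pmf(0, n, freq, sample_size) is the probability that a
        # type occurring `freq` times is entirely absent from the sample.
        total += 1 - hypergeom.pmf(0, n, freq, sample_size)
    return total
```

Note that the loop visits every type exactly once: this is the exhaustive sampling described above, with no randomness anywhere in the procedure.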

(If you are interested in a deeper consideration of the exact statistical procedures involved, email me and I’ll send you some stuff.)

McCarthy and Jarvis found that HD-D functions similarly to VOCD-D, with less variability and less computational effort. The latter isn’t really a big deal, as any modern laptop can easily churn through millions of words with vocd in a reasonable time frame. What’s more, McCarthy and Jarvis explicitly argued that research utilizing VOCD-D does not need to be thrown out, but rather that there is a simpler, less variable method to generate an equivalent value. But we should strive to use measures that are as direct and robust as possible, so they advocate for HD-D over VOCD-D, as well as calling for a concurrent approach utilizing other metrics.

McCarthy and Jarvis supported their theoretical claims of the comparability of VOCD-D and HD-D with a small empirical evaluation of that equivalence. They did a correlational study demonstrating a very strong relationship between VOCD-D and HD-D, supporting their argument for the statistical comparability of the two measures. However, their data set was relatively small. In a 2012 article, Rei Koizumi and Yo In’nami argued that McCarthy and Jarvis’s data set suffered from several drawbacks:

(a) it used only spoken texts of one genre from L2 learners; (b) the number of original texts was limited (N = 38); and (c) only one segment was analyzed for 110-200 tokens, which prevented us from investigating correlations between LD measures in longer texts. Future studies should include spoken and written texts of multiple genres, employ more language samples, use longer original texts, and examine the effects of text lengths of more than 200 tokens and the relationships between LD measures of equal-sized texts of more than 100 tokens.

My article is an attempt to address each of these limitations. At heart, it is a replication study involving a vastly larger, more diverse data set.

What data and tools did you use?

I used the fantastic resource the International Corpus Network of Asian Learners of English (ICNALE), a very large, very focused corpus developed by Dr. Shin’ichiro Ishikawa of Kobe University. What makes the ICNALE a great resource is a) its size, b) its diversity, and c) its consistency in data collection. As the website says, “The ICNALE holds 1.3 M words of controlled essays written by 2,600 college students in 10 Asian countries and areas as well as 200 English Native Speakers.” Each writer in the ICNALE data set writes two essays, allowing for comparisons across prompts. And the standardization of the collection is almost unheard of, with each writer having the same prompts, the same time guidelines, and the same word processor. Many or most corpora have far less standardization of texts, making it much harder to draw valid inferences from the data. Significantly for lexical diversity research, the essays are spell-checked, reducing the noise of misspelled words, which can artificially inflate type counts.

For this research, I utilized the ICNALE’s Chinese, Korean, Japanese, and English-speaking writers, for a data set of 1,200 writers and 2,400 texts. This allowed me to compare results between first- and second-language writers, between writers of different language backgrounds, and between prompts. The texts contained a much larger range of word counts (token counts) than McCarthy and Jarvis’s original corpus.

I analyzed this data set with CLAN, in order to obtain VOCD-D values, and with McCarthy’s Gramulator software, in order to obtain HD-D values. I then used this data to generate Pearson product-moment correlation matrices comparing values for VOCD-D and HD-D across language backgrounds and prompts, utilizing the command-line statistical package SAS.
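The actual correlations were run in SAS, but the analysis itself is straightforward. Here is a rough Python equivalent, assuming a hypothetical file with one row of scores per text (the file name and column names are mine):

```python
import pandas as pd

# Hypothetical layout: one row per text, with the writer's language
# background, the prompt, and the two lexical diversity scores.
scores = pd.read_csv("ld_scores.csv")  # columns: background, prompt, vocd_d, hd_d

for (background, prompt), group in scores.groupby(["background", "prompt"]):
    r = group["vocd_d"].corr(group["hd_d"])  # Pearson product-moment r
    print(f"{background} / {prompt}: r = {r:.2f}")
```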

What did you find? 

My research provided strong empirical support for McCarthy and Jarvis’s prior research. All of the correlations I obtained in my study were quite high, above .90, and they came very close to the figures obtained by McCarthy and Jarvis. Indeed, the extremely tight grouping of correlations across language backgrounds, prompts, and research projects strongly suggests that the comparability McCarthy and Jarvis observed is the result of the mathematical equivalence they identified. My replication study delivered very similar results using similar tools and a greatly expanded sample size, arguably confirming the previous results.

Why is this a big deal?

Well it isn’t, really. It’s small-bore, iterative work that is important to a small group of researchers and practitioners. It’s also a replication study, which means that it’s confirming and extending what prior researchers have already found. But that’s what a lot of necessary research is — slowly chipping away at problems and gradually generating greater confidence about our understanding of the world. I am also among a large number of researchers who believe that we desperately need to do more replication studies in language and education research, in order to confirm prior findings. I’ve always wanted to have one of my first peer-reviewed research articles be a replication study. I also think that these techniques have relevance and importance to the development of future systems of natural language processing and corpus linguistics.

As for lexical diversity, well, I still think that we’re fundamentally failing to think through this issue. However well VOCD-D and HD-D work in comparison to a measure like TTR, they are still text-length dependent. The fact that best practices require us to use a number of metrics to validate each other suggests that we still lack a best metric of lexical diversity. My hunch (or more than a hunch, at this point) is not that we haven’t devised a metric of sufficient complexity, but that we are fundamentally undertheorizing the notion of lexical diversity. I think that we’re simply failing to adequately think through what we’re looking for and thus how to measure it. But that’s the subject of another article, so you’ll just have to stay tuned.

tl;dr version: Some researchers said that two ways to use a computer to measure diversity in vocabulary are in fact the same way, really, and provided some evidence to support that claim. Freddie threw a huge data set with a lot of diversity at the question and said “right on.”


who and what is the university for?

A couple of weeks ago, I went to the University of Illinois at Urbana-Champaign with activist friends of mine. We went to protest in support of Dr. Steven Salaita and the several unions and student groups who were rallying for better labor conditions, for the principle of honoring contracts, for collective bargaining rights, for recognition by the administration, and for respect. It was a beautiful, brilliant rally; I estimated 400 people, many more than I had thought to hope for. And it posed the simplest question facing academics today: who and what is the university for?

The labor unions in attendance that day were fighting for better conditions and more honest, direct bargaining with the university administration, as labor unions in Illinois have fought for decades. Some fought for fair pay and transparent, equitable rules for advancement and compensation. The school’s young graduate union, the GEO, fights simply to be recognized by the university, in an academic world in which universities could not survive without graduate student labor. What was remarkable about the event was how easily and naturally these labor issues coincided with the fight for Dr. Salaita. Some might mistake these issues for disconnected and separate, but in fact they are part of the same fight. The fight for Dr. Salaita is about Palestine, and about academic freedom. But it is also about labor and the rights of workers. It’s about faculty governance in a university system that has seen ceaseless growth in higher administrators and an attendant growth in the cost of employing them. It’s about recognizing that a university is not its endless vice provosts and deputy deans, nor its sushi bars and climbing walls, nor its slick advertising campaigns, nor its football team, nor its statuary. A university is its students and its teachers. To defend Dr. Salaita is to defend the notion that, in an academy that crowds out actual teaching and actual learning in myriad ways, the actual teachers in the academy must preserve the right to hire other teachers, and to honor those commitments once they are made.

The people at the protest also rallied around a simple truth: that calls for civility are in fact calls for servility. The argument used against Dr. Salaita has been, primarily, the call for civility, a concept that has no consistent definition beyond “that which people in power want against those without power.” Civility is the discourse of power; it is a tool with which those with less power are disciplined by those asking for even more. The activists, union organizers, and rank-and-file members were not being civil. They were being passionate, honest, and righteous. And they were making demands, because as Frederick Douglass said, “Power concedes nothing without a demand. It never has and it never will.” Those who insist on civility are calling for a world in which such a demand can never be made.

In the Chronicle‘s profile of Corey Robin, Corey lays out the Salaita case in the plainest terms: “An outspoken critic of Israel, speaking in an inflammatory way about it, being punished and drummed out of the academy—that’s what’s happening.” We who stand with Salaita stand for the right to tell that plain truth, that Salaita is being punished for his controversial political views, and not because of nonexistent rules of decorum that are enforced without consistency or honesty. We have to draw the lessons that Tithi Bhattacharya and Bill Mullen have identified, and we have to hold onto the plain truth of the bullying nature of those with power, even as they obfuscate with appeals to civility.

Universities can be beautiful places. I’ve been privileged to see them at their best. But universities can also be cruel places. Cruel, because there are many within them who talk about the ideals of intellectual and academic freedom while using their institutional and disciplinary power to prevent the exercise of that freedom. That is particularly true when some are seen as jumping the line, as stepping outside of the rigid hierarchy of professional advancement by working outside of the top-down structures of academic publishing. I am not in any sense a big deal, and would never claim to be, but I have been fortunate enough to have had the opportunity to reach a large audience and engage with many brilliant people. When I get opportunities like writing for The New York Times or guest blogging for Andrew Sullivan, or when one of my pieces gets wide attention and many thousands of hits, I should be able to merely feel gratitude. But I am always made keenly aware that calling attention to oneself brings risk, in a world of unequal power and a brutal labor market. I say this only as a witness, just as one of many who faces this dynamic. I tell you this not because I am uniquely burdened in this way but because I have seen it again and again with adjuncts and graduate students at all kinds of programs, people who have tried to engage publicly and politically out of a sense of democratic and moral obligation, only to find themselves labeled and threatened for having done so. While people like Nick Kristof call for academics to speak out publicly and share their work and ideas, they are rarely around to witness how nails that stick up get hammered down.

Many faculty I know are firmly, fully dedicated to the spirit of honest, passionate inquiry. But I’m sorry to say that many faculty are instead dedicated to the rigid hierarchies of employment and unequal economic power. The faculty in my own program have never been anything but supportive of me, demonstrating respect for my right to exist as an independent, political mind with ideas about pedagogy, and research, and politics. But in my wider dealings with professors and academics, at conferences and gatherings and especially online, I have been shocked and saddened by how many faculty are willing to threaten those beneath them in the academy’s hierarchy, whether directly or obliquely, over perceived slights to their authority. With the job market as bad as it has ever been, too many of the lucky few who enjoy the protection of tenure are willing to use that disparity in power to discipline graduate students and untenured faculty who disagree with them or fail to show the deference they believe they are owed. The results are predictable: thousands of adjuncts and graduate students who feel forced into silence and acquiescence out of fear of having no chance at a career at all.

Steven Salaita is the type of professor who has never asked for permission, who has refused to be docile in the face of those who would force docility on him with their calls for civility. That is precisely what is needed in a world with an increasing chasm between those at the top and those at the bottom. What’s needed now is not docility but passion, not deference but demand. Each member of the academic community has to make a choice: you can stand either for the principles of passionate, sometimes intemperate exchange that are written into the very fabric of liberal education and deliberative democracy, or you can stand in support of a servile, silent university. You can use the systems of reward and advancement within the university to reward graduate students and untenured faculty who are passionate, risk-taking, and unafraid, and in doing so nurture a professoriate that is willing to fight for its future in the university of the 21st century. Or you can contribute to a climate of fear in which adjuncts, pre-tenure TT faculty, and graduate students are forced into servility, forever chasing fads in research and pedagogy, conceding to every new demand from administration, and refusing to fight for what they need and believe in, leading inevitably to the total and irrevocable deprofessionalization of the university. You can choose to be a profession of the passionate and the committed or you can choose to be a profession of the passionless and the docile. That’s the choice and those are the stakes and don’t believe anyone who tells you different.

Who and what will the university be for? In the most obvious, most likely telling, the university will be an army of redundant administrators, lording over hideously expensive luxury dorm rooms and dining halls, for students who have been rendered docile and silent through the service mentality, taught by precarious, contingent instructors who have no job security or autonomy, all in service to a neoliberal agenda that seeks to preempt dissent among the youth so that it will not have to crush dissent among adults. That is the most plausible future for the university. But there is another potential future, one articulated by those who rallied for Salaita and for labor at UIUC, whose passion and commitment humbled and inspired me. That future is one where the students and teachers who make up the actual university recognize their shared problems and their shared power, who band together to oppose the corporate takeover of our universities, and work to build something better, a university with fair and equitable labor practices for all instructors, faculty governance over instruction and hiring, and commitment to the principles of free exchange of ideas and academic freedom. That may not be a likely future; it may not, at this point, even be a possible future. But it is one worth fighting for.

there’s a zillion people writing about the same stuff these days

… and we need to factor that into our worries about plagiarism.

[screenshot of the Wade joke in question]

While I appreciate the concern of the eagle-eyed tipster who pointed this out to me — do I think funny and talented writer Jeb Lund plagiarized a goofy joke I had told in the comments on Deadspin five months earlier? I do not! Of course not. I think it’s a funny joke that two different people who thought of the same reference came up with separately. I’m damn sure I’m not the first to use that reference to call someone old. I may not even have been the first to make that joke about Wade and the Heat. How could I be sure of such a thing? There’s millions and millions of people watching the same sports, following the same news, and accessing the same media. And as this example demonstrates, a lot of us have been marinating in the same pop culture for a long time. Similar thinking is inevitable.

It’s an odd time to think about plagiarism. On the one hand, it’s proving remarkably difficult to get accountability for people like Fareed Zakaria who have demonstrated repeated, unambiguous acts of plagiarism. On the other hand, there’s also a lot of misplaced suspicion, in my opinion, particularly given that economic incentives compel writers to all write about the same stuff. I’ve had writer friends grumble about one piece or another looking too much like theirs, and I’m not quite sure what to say; they’re aggregating the same video or essay that emerged from the same events as everybody else. Sometimes, people accuse others of patchwriting when all I’m seeing is a different summary of the same material. With stand-up comics it’s even worse. It seems like every day there’s a new joke-stealing scandal. If it’s repeated and consistent, then for sure, that’s a problem. But if it’s one or two times? There’s 10,000 of you guys! Of course people think up the same jokes. We have to be able to simultaneously call out the egregious, repeated cases like Zakaria or Benny Johnson and be careful in our accusations about specific incidents. It’s not always easy, and accusations of plagiarism will always have an element of uncertainty to them. Adults just have to use their discrimination and make adult judgments. Luckily, people like Zakaria and Johnson make it easy for us.

Anyway, Lund’s piece is funny and correct, so check it out.

racism is asphalt, racism is a bullet

You will have already been deluged with analysis about the grand jury’s refusal to indict the police officer who shot Michael Brown in Ferguson, so I’ll be brief. I guess the essential thing that has to be repeated, again and again, is that this outcome, and so many like it, are the result of a system functioning the way it is intended to function. Racism is baked right into the foundation of the system. When racist outcomes happen they happen not because of the evil in the hearts of individuals but because our social, economic, and legal systems have been designed to deliver those racist outcomes. You can imagine a world where a few things break differently and Darren Wilson does not kill Michael Brown. If you try really hard, you can imagine a world where a grand jury does indict him. But you can’t imagine a world where police officers aren’t an immense danger to young black men. You can’t imagine a world without Michael Browns, without Darren Wilsons.

Every one of those grand jurors might have hearts of purest gold. The outcome was predetermined precisely because the outcome did not rely on the individual character of the jurors. We have police aggression against black people because the white moneyed classes of this country have demanded aggressive policing and the moneyed control our policy. We have police aggression because the War on Drugs provokes it and we still have a War on Drugs because the War on Drugs puts vast amounts of tax dollars in the hands of police departments and a voracious prison industrial complex. We have police aggression against black people because centuries of gerrymandering and political manipulation have been undertaken with the explicit purpose of empowering some people and disenfranchising others. None of that can be solved through having pure hearts and pure minds. Racism is not a problem of mind. Racism cannot be combated by individuals not being racist. A pure heart makes no difference. In response to systemic injustice, you’ve got to change the systems themselves. It’s the only thing that will ever work.

How you go about doing that, I don’t pretend to know. I don’t blame well-meaning white people for reaching for emotive responses in a situation of such awful emotional devastation for our people of color. But the reflexive return to the language of privilege checking, where opposition to racism is fundamentally a matter of attitude and ideas, is indicative of why there’s been so little progress for so long. For 30 years or more, we have opposed racism emotionally rather than structurally, and the consequences are what they are. I ask you to consider two very different responses to this decision:

[embedded tweet 1: mendelson]

[embedded tweet 2: bouie]

I don’t even know what the first tweet means. I really don’t. All I know is that it defines white privilege, first and foremost, as a matter of emotion, as a matter of what its author thinks and feels. And that’s exactly the problem. Another definition of white privilege is being so steeped in the language of emotive politics that you think the system cares whether you as an individual are terrified or outraged. I promise: whether you as a white person feel outraged, terrified, delighted, or indifferent, the system that ensures cyclical state violence against black men is utterly unconcerned with how you feel. It just doesn’t matter. An 18-year-old got shot to death by the cops and nothing has happened. Who fucking cares if you feel outraged rather than afraid?

The second tweet, in contrast, says the opposite: it doesn’t matter if you understand your white privilege and it doesn’t matter if you tweet that understanding and it doesn’t matter if you retweet others who understand, too. I am not indicting people for not “doing something” — I don’t know what they should do and I don’t know what to do myself. I’m not exactly shaking the foundations of the system out here, am I. I am not indicting people for failing to actually create change in a system that has resisted it vociferously for decades. But I am indicting them for refusing to consider the possibility that their emotional and psychic relationship to racism simply doesn’t matter. If we are ever going to figure out how to do something about all this, it will only come from an acknowledgment that good white people being good has done nothing to prevent a world where Michael Brown lay dead in the street for hours. Until that second sentiment is more popular among them than the first, the outrage of white people will never be a force for change.

against the five paragraph essay

There are many commonplaces in teaching and pedagogy. One of these commonplaces is accessible templates or forms that students can use to gain control over complex and intimidating learning tasks. These consistent formats demonstrate the essential “moves” of particular learning tasks, which the students can apply to their own work. Ideally, they will then let go of those formal constraints, having learned to perform these moves on their own. However, sometimes these forms have negative unintended consequences, and their weaknesses outweigh their benefits. One such example is the five paragraph essay. I believe that the five paragraph essay should be abandoned as a teaching tool.

One major reason to abandon the five paragraph essay is that it leads to formulaic writing. This is unsurprising, given that the five paragraph essay is a formula. When we talk about the five paragraph essay, we are not just talking about five paragraphs. We are usually talking about an introduction that begins with a broad theme and ends with an explicit thesis statement, three body paragraphs that use topic sentences to articulate specific arguments in favor of that thesis followed by evidence, and a conclusion that restates the thesis and broadens outward. This set format does indeed provide students with a consistent, reliable system that they can use again and again. But that merely results in consistently, reliably stale, uninspired essays. This is bad enough from the perspective of the teachers who must read and grade these essays, but it is truly destructive for the students themselves. By so associating the act of writing with rules and restrictions, teachers risk dulling the inventiveness and fun that can inspire students to become lifelong writers.

Another reason to oppose the five paragraph essay is that these tasks bear little resemblance to the writing tasks most students will have to undertake at work and in college. Whether in classes or at the workplace, in their later lives our students are much more likely to undertake researched writing, such as in a white paper or researched argument, or formal correspondence such as business letters and emails. In their private lives, students will likely be writing genres like reviews, personal narratives, and fiction, none of which are likely to be deeply enriched by practicing the five paragraph essay. What’s more, composition increasingly includes important aspects of working with different mediums and modes, such as incorporating visual design elements or interactive features for online text. This divide between the skills learned in composing five paragraph essays and the writing tasks our students undertake regularly again serves to disconnect writing pedagogy from their real-world writing needs. This is not to say that persuasive writing is unimportant to students, or that they are uninterested in it. Rather, the persuasive writing students undertake will have far more in common with the blog posts and essays that they read compulsively online than it does with the five paragraph essay.

In addition to these first two reasons to oppose the five paragraph essay, another strong argument against its use is that students may never take the necessary steps beyond it. Very few would ever argue that the five paragraph essay is the only form students need or should learn. Instead, they argue that the five paragraph essay gives students the opportunity to practice important aspects of writing with a form that works, which they can then abandon as they become more mature and confident. But years of experience teaches me that many students never undergo that evolution. Instead, they cling to the five paragraph essay, having been told by overworked middle school and high school teachers that the five paragraph form is the correct way to write an argument. Students frequently have difficulty understanding the difference between formal rules of basic syntax (every sentence must have a subject) and guidelines that can be helpful or harmful depending on context (make your thesis statement the last sentence of your first paragraph). Given the rise of standardized tests, it is hard to blame K-12 educators for sticking to the reliable five paragraph essay. We also cannot blame students for sticking with a form they have practiced again and again. But we can, as a community of educators and policy makers, deliberately move away from the five paragraph essay as a teaching tool.

In conclusion, the five paragraph essay harms more than it helps and should be abandoned in our writing pedagogy. In its place, we should concentrate on tasks rather than forms, and on global writing skills such as mechanics, rhetoric, and style rather than on satisfying conventions that go in and out of style. This will enable students to see writing as inventive, freeing, and fun, as in this Peter Suderman essay. Teaching meta-skills and broader habits of mind, rather than formal conventions and restrictive rules, will serve our students not only in the writing classroom, but beyond.

our educational problems are not problems of will

Mikhail Zinshteyn at The Atlantic: “What if U.S. students took fewer tests that measured their ability to understand academic concepts far more deeply than current tests permit?”

Uh, yeah! What if! That’d be pretty sweet! Pardon me for getting all 2006 on you, but… and a pony. I’m all for calling for better assessments — especially when done in service to testing fewer students, less often, as Zinshteyn is here — but this is functionally the same as saying you want your cellphone to be faster, have more features, and be cheaper. Sounds nice, but the difficult part is actually making it happen. I don’t mean to be harsh to Zinshteyn here. Talk like this is very common in popular education media. It’s reflective of two problems. The first is the Big Think phenomenon, where the only kinds of ideas that get bandied about in educational debates are very big ideas that call for all kinds of revolutionary change or dramatic improvement, when the history of education is a history of gradual change, failed revolutions, and incremental improvement. The second is the habit of thinking that our educational problems are problems of will — that we simply have to decide to fix them — rather than problems of ability, or inability. It’s the Green Lantern Theory of education.

In writing my dissertation on the CLA+, I’m frequently frustrated by those who assume that criticisms of these kinds of assessments are necessarily political or self-interested, rather than practical or empirical. From the most apolitical, ruthlessly empirical standpoint, there are many problems with large-scale assessments. That doesn’t make them not worth attempting. But it does mean that we need to reckon with that difficulty when we propose making sweeping changes to public policy based on them, and it means that we need to make those changes in a way that is minimally invasive. The standardized test movement has done neither of these.

Besides, as I’ve said many times: we already have the tools necessary to dramatically reduce the total testing load on our students. Those tools are called inferential statistics, and they are very powerful indeed. With careful sampling, stratification, and responsible inference, we can understand state and national trends with remarkable accuracy, and use those trends to drive policy responsibly. The NAEP is the single most effective educational assessment we have in this country today. It is not a census measure. Instead, it uses sampling and inference appropriately, and it does so with remarkable explanatory power while placing minimal burdens on students, teachers, and schools. Such assessments are well within our ability to create. Typical reasons to oppose that kind of sampling include a) not understanding the power of inferential statistics or b) wanting to sell pricey tests and test materials for tax dollars, to as large a customer base as possible. It’s almost as if the profit motive and what’s best for our schools and students are not well aligned!

Update: The thing about the Atlantic’s educational coverage is not just that it’s so filled with problems. It’s that they’re always the same problems. And those problems are such massive cliches that I don’t understand how nobody at the magazine ever thinks to say to themselves, “gee, maybe disruptive innovators utilizing dynamism with Web 3.0 by teaching a kid to code through MOOCs taught by thinkfluencers won’t solve all of our problems.” It’s exemplified in that awful Graeme Wood piece on Minerva, which reads like somebody watching Silicon Valley and not getting that it’s satirical: it’s the absolute and utter credulity towards those who use the right buzzwords. These people are selling you something, Atlantic crew. Try a little skepticism. Every once in a while. Just for fun.

modern computer games just have way too much going on

So I got Dragon Age: Inquisition as a present for myself for getting through the bulk of my job applications. It’s the first time I’ve bought a computer game that wasn’t on sale in a long time. So far there are some things I like about it (the world, the graphics, the art direction, the music) and some things I really am aggravated by (the terrible camera, the user interface in general). Having gotten past the lousy tactical camera, I am mostly enjoying it. (Seriously, what is the point of a tactical camera if you can’t zoom out enough to see everyone involved in a battle? Ugh.) But what it really has me thinking is that computer games are just throwing in way, way too much stuff.

Take this game. It’s pretty conventional as far as contemporary RPGs go: you have a little band of adventurers, you wander around killing orcs and such, you find loot, you gain experience points and levels, you get more badass, you kill stronger creatures. It’s a very story-heavy RPG with a lot of reading and dialogue, which I like, but those core mechanics are similar to most other RPGs. The thing is, there’s just layers and layers of stuff to do and find. In the game, you’re trying to gain territory and influence for your particular faction. So you go out and find predetermined places where you can build camps, expanding your influence. When you set up a new camp, you get a requisitions officer who gives you a list of materials they want you to get and stuff they want you to build. You also have to go around and find these rifts, where you kill bad guys and then use your unique magic power to close the rifts. (You are some sort of Chosen One, like the protagonist in every other video game these days.) You also find these skulls which you look through, and if you look hard enough you reveal these shards, which you then have to go and pick up. I don’t even really know what the shards do. You also find these mosaic pieces, which I definitely don’t know what they do. You also find these weird places where you, like, put down some sort of staff and claim that territory, I guess? It’s weird because you are already claiming territory with the camps, so I dunno. You also find these puzzles which are based on constellations. This is all in addition to the typical RPG stuff — there’s all of the weapons and armor and monsters to kill and figuring out which magic sword is best and choosing spells from a skill tree and fetching milk for the little old lady whose son’s corpse you’re going to find in the woods with a letter for her and a pendant which gives you +3 charisma. It’s the full, triple-A RPG experience plus all the other stuff.

All the other stuff, inevitably, includes crafting, because there is nothing video game developers like more now than shoehorning crafting into every conceivable game. I think eventually all new games are going to coalesce into a single game called CraftQuest. By crafting I mean mechanics where you have to go around collecting junk with which to make more junk. Apparently, video game developers are under the impression that what the average Joe wants when he gets off a long day of work is to enjoy the sweet escapism of living the life of a cobbler. Crafting is in like literally every game and genre now. There’s crafting in third-person action games like Tomb Raider, RPGs like DAI and Skyrim, first-person shooters like the Far Cry series, MMOs like World of Warcraft…. They’re all the same basic thing. I can’t tell you how many games have directed me to hunt some form of ungulate so I can get its leather and a rib bone in order to make a papoose for carrying more grenades, or whatever. Only really you always have to kill, like, twelve yaks and four birds and a crocodile. It’s endless. And the complexity! I mean in Dragon Age I am dimly aware that I can craft both weapons and armor, but also improve the armor I already have, which is on a different system than just making new armor, I think, and also there are runes which are involved somehow, which are all very different. I’m coding in R nowadays to do natural language processing but I’m utterly baffled by the complexity of these crafting menus. Oh, and you need to make potions. By getting recipes. And trolling the woods for these very specific types of berries. Constantly.

It’s all a real buzzkill, in so many games. You’re always some sort of badass on some sort of badass adventure, but you spend more time finding the right kind of beaker to hold the sap of the TumTum tree so that you can make a potion which you’ll never use. I tried Far Cry 3 for a bit and despite being this dude with a flamethrower on a revenge quest I spent like 90% of my time stitching together goat hide to make some sort of knapsack. It’s a drag, man.

This does not even get into the weird battle map deal in this game, where you order personality-free automatons to the digital boonies to, like, seek out the blessings of the minor nobility so that you can earn some sort of prestige points that are totally inscrutable. Like literally you look at a map and these guys are like “I should go talk to this dude,” and so I click “OK Bob” and then the computer is like “he’ll be back in 20 minutes” and in 20 minutes I get a little notification that’s like “5 more cool points for you!” It feels totally arbitrary and made up and dumb. Why am I doing this? None of it is animated or makes me feel like the story is moving along. I’m not doing anything. It’s like the developers thought that what I really wanted was the experience of being a middle manager. I expect the next Dragon Age will just have you populate cells on a spreadsheet.

There’s more different kinds of symbols on the map than there are in a Chinese dictionary. I have no earthly idea what half of them are for. There’s big arrows and little diamonds and grayed-out exclamation points and question marks and solid gold exclamation points and question marks. There’s something that looks like a little temple and Xs marking every spot and something that looks like the eye of Sauron. I’m totally baffled by it. I end up feeling like I have less information after I look at the map than I had before. I feel like I’m learning calculus.

The thing is, none of this is unique to Dragon Age. It’s not even particularly bad in that regard. The last game I played a significant amount of before this one was Assassin’s Creed 4. It’s a good game, a really fun pirate adventure with great graphics and some fun mechanics. But it’s also an endless laundry list of tasks. It’s got all the crafty craftiness your grandma could desire, every zone a veritable Jo-Ann’s Fabrics. You gotta do the typical hunting to get the raw materials to make your own clothes, because when you’re a famous pirate captain you still have to sew your own vests, so you tromp around in the swamp shooting monkeys out of the trees with a blowgun. You have to upgrade your swords and your guns and your clothes. Your boat has got like a million things to upgrade, both in terms of how good it is and in terms of pure aesthetics. There’s all kinds of jimjaws and treasures you have to discover. There are songs you chase along the rooftops, people to pickpocket, board games to play, brawls to get into in bars. There are these animus fragments, which are just like sparkly floating things they want you to collect so they can force you to shimmy up trees. There are these vantage points you have to reach to unlock new parts of the map, which are always populated by an eagle because realism. You have people to assassinate, being that it’s about assassins and all, but there’s also all kinds of faction missions that get you, I think, keys that open locks that get you a special suit of armor. Which is separate, if I remember correctly, from these Mayan-inspired artifacts you get by solving puzzles. You dive underwater with primitive scuba gear and, like, wrestle sharks and shit, when you aren’t sailing around looking for dozens or hundreds of guys on little life rafts like the USS Indianapolis just went down in the Caribbean in 1600. It’s exhausting. I know for a fact I’m leaving like a half-dozen things out, here. And I’m not even talking about the actual story, the actual quest.

And that’s not even to mention a very similar mechanic to the weird battle map junk in Dragon Age. See, you can capture ships, which you can then send on voyages around the map, where the mechanic is literally “this ship has a 75% chance and then you’ll get 150 monies,” and then you send the ship and then a couple hours later it’s like “your ship came back! Have some points!” Why am I doing this? Why is this fun? What is the purpose? Oh, and there’s also this whole dealie where you also live in the present day and go to work in some goofy software company and take part in weird corporate politics, because when I think of compelling gameplay, I think of chatting with Marge from HR.

I know a lot of people would respond to this by asking why I’m complaining about more game. And I get it — some of this stuff, you can ignore. But there’s so many beeping alarms and endless notifications and nag screens that it just drains my interest. I get wanting more content, but at some point, you’re sending the message that you don’t actually trust your core game to provide enough entertainment. When I play Dragon Age I just want to cast wingardium leviosa and kill goblins and shit. I don’t want to worry that I haven’t sent an emissary to sweet-talk Lord Steve in a castle they never bothered to render. When I play Assassin’s Creed I want to do era-inappropriate parkour on the rooftops of some ancient city. I don’t want to hunt in the forest for an upgrade to my needlepoint set so that I can knit some new hat. Let me play the game, dudes.

I’m just saying: sometimes less is more, you guys.

immigration reform is humanitarian intervention

Although it has a distinct feeling of too little, too late, President Obama deserves considerable praise for his immigration speech and his administration’s apparent decision to pursue legitimate reform. (Dara Lind has the details.) Since I don’t believe in nation states, I don’t believe in the legitimacy of their borders, and even if we must have states for now, the only policy remotely consonant with human rights is unfettered immigration and emigration for all. Since I’m never going to get that, I’ll take the president protecting millions of people who have done nothing wrong in living in this country, working in our economy, and paying taxes into our coffers, often without receiving typical government benefits for fear of being discovered.

I do want to make one point: if you are a liberal internationalist, a humanitarian interventionist, you better be out there beating the drum for this reform every day. You better be going on cable news, spending all of your political capital trying to make this happen. You better take to the op-ed pages and Twitter and every other way you have to communicate. And when you do, you better use all of that same moralizing language you do when you’re making your constant calls for war. You better be just as aggressive in suggesting that people who oppose your preferred policy just don’t care about the lives of people who could be saved as you are when you advocate for cruise missile strikes. You better follow through.

Because one of the most straightforward, direct, and achievable forms of humanitarian intervention, and one of the cheapest, is to welcome people with open arms into our country. The fact that this kind of humanitarianism is so rarely considered, when people are looking for ways to save the world with violence, tells you a lot about them and what they really care about.

fighting stigma vs the insatiable need for CONTENT

I bet if you polled the writers who all wrote the same post about Michael Phelps reportedly having sex with a woman who was born intersex, almost all of them would say that being born intersex does not immutably determine someone’s gender identity and that it’s therefore no big deal if he did. But the fact that they wrote the posts inevitably contributes to the perception that this is a big deal. I assume that Phelps, as a successful and famous single athlete, has lots of sex, and good for him. Mostly I don’t think anyone would think of it as news. Writing a post about this sexual partner of his definitely sends the message that there is something unusual or prurient going on here, even if that cuts against the protests of the writers. Unfortunately, because the need for CONTENT is insatiable, and overworked bloggers and aggregators are constantly looking for easily produced posts, the urge to write overwhelms the urge to diminish stigma by not writing.

Culture is more powerful than politics, particularly when it comes to issues like sexual practice. I’ve often felt that there’s a divide between what people explicitly say as a matter of sound politics and how they feel in an emotional and social sense. If someone you know starts dating a transgender person, would that change how that person is viewed by friends and acquaintances? I think even many progressive people who say and believe all the right things would inevitably view such a person in a different light, even against their own best wishes. That’s the kind of cultural change that takes longer than it does for people to develop explicit beliefs, and is as important as, or more important than, those beliefs in the long run.

Of course, by using these posts as a means to make a meta-point, I am also casting attention onto Phelps and the woman he had healthy, mutually pleasurable intercourse with. The mind is a complex maze.

it’s pretty simple, really

“Social justice” is an awkward term for an immensely important project, perhaps the most important project, which is to make the world a more equitable, fair, and compassionate place. But the project for social justice has been captured by an elite stratum of post-collegiate, digitally enabled children of privilege, who do not pursue that project as an end, but rather use it as a means with which to compete, socially and professionally, with each other. In that use, they value not speech or actions that actually result in a better world, but rather those that result in greater social reward, which in the digital world is obvious and explicit. That means that they prefer engagement that creates a) outrage and b) jokes, rather than engagement that leads to positive change. In this disregard for actual political success, they reveal their own privilege, as it’s only the privileged who could ever have so little regard for actual, material progress. As long as they are allowed to co-opt the movement for social justice for their own personal aggrandizement, the world will not improve, not for women, people of color, gay and transgender people, or the poor.

Update: Nota bene.

collective blame for Palestinians, never for Israelis

If you want to see exactly why our media is, comprehensively and in every sense, complicit in the oppression of the Palestinian people, you could do much worse than observing the way in which the attack on a synagogue last night is being discussed. Because almost without exception, in every kind of publication, across the terribly limited ideological spectrum that is permitted in our corporate media, you will find one assumption: that all Palestinians bear the moral and practical responsibility for acts committed by any Palestinians, no matter how small their number. Two lunatics walk into a synagogue and do something terrible, and millions of Palestinians will be made to suffer for it. Generations of Israelis undertake a campaign of violence and apartheid against the Palestinian people, and no one will endorse collective blame or collective punishment against them, and thank god for that. Here, in our liberal media, the basic logic of collective blame will be endorsed again and again, if only against the Palestinians, and our liberals will find themselves throwing up their hands, which is all they seem good for anymore.

Consider, for example, this missive by Charles Pierce. Here he is, beard-scratching, stuffed with portentous wisdom, affecting a pose of exhaustion and melancholy. “This was purely an act of religious war,” writes Pierce, compelling the question: by whom? By which religion? Where was the convocation of Palestinian Muslims where this religious war was agreed upon? When did the Palestinians get together and vote for the actions of these two men? How can two men armed with knives ever represent the will of millions? Pierce’s little essay is a master class in hiding the subject; the two men are barely mentioned, his piece instead filled with “it”s, “the choice of targets” without the choosers. No time to stop and ask: who are these men, and who do you claim they represent, and how, and why?

That the Israeli people are not responsible for the murderous, cruel, and illegal occupation of the Palestinian people by the Israeli government is a matter of media consensus. And, indeed, it’s true: despite the fact that the Israeli government is an elected government, despite the fact that Israel’s democratic polity is in the main vociferously committed to the greatest crime of this young century, no individuals deserve to bear the blame for the sins of their government or their military. That is as clear a lesson as any the 20th century taught us: the notion of collective blame is the first step on the staircase to genocide. And yet when it comes to the actions of two desperate madmen, there is no similar consensus, for the simple, plain, unavoidable fact that to the American media, Palestinians don’t deserve human rights because Palestinians are not human.

You will hear two types of rhetoric in the days to come: you will read the flat language of dehumanization and subjugation, the calls for extermination, the straightforward endorsement of the destruction of the Palestinian people, which is perfectly common and not even noteworthy in our media. You are allowed to call for death from above for the Palestinians, you are allowed to call for the ethnic cleansing of Palestine, you are allowed to endorse permanent apartheid, and retain your post at tony publications like the Atlantic. On the other side, you will hear the sighing chorus, dispensing cliches about a cycle of violence, lying about the moral equivalence of the occupied and the occupiers who brutalize them. The latter is almost worse than the former, because it’s so much less honest.

Pierce writes about “the ex post facto justifications of both sides,” but as he well knows, there will be no justifications of these murders, not from anyone with prominence, an audience, or power. Every time the IDF kills civilians, or undertakes more banal tortures like spraying schools with the overwhelming scent of excrement, the internet alights with justifications, with defenses, with deflections, with support. I run in a pretty radical circle, when it comes to Palestine, and I know no one — not a soul — who will defend these killings. That is the most consistent and clear dynamic, here in America, when it comes to this issue: one side is permitted to mourn its dead without the noise of justification.

There is one fact that precedes all others in this debate: the cause of violence is the occupation. The occupation is a crime of historic proportions. The government of Israel is to blame. Only the government of Israel can end the occupation, and thus only the government of Israel can stop the violence. I find the reflexive demands for condemnation of atrocities like this so thoroughly phony I can’t stand it, coming as they do from those who will justify the Israeli slaughter of innocents again and again. These murders were vile, the murderers pathetic, in every sense, moral cretins whose violence was as ineffective as it was indefensible, human meat cleavers in a world of laser-guided bombs. What a world, indeed, when the deaths of 2,000 civilians were greeted with hand waving and throat clearing, but the deaths of 4 are expected to shake the heavens. I would like to build that world, but it comes first and only through an accounting for the 2,000, and human rights can only be defended when they are for all humans, everywhere, forever. So you think the world should be unanimous in condemning this atrocity, this crime. Me too. Maybe you should ask yourself: what have you done to stain the fragile sheet of peace?

things having to do with Jonathan Franzen ranked from best to worst

Best Thing Having to do with Jonathan Franzen: that Ben Marcus essay in Harper’s

This essay by Ben Marcus in which he destroys Franzen’s weird, destructive, dumb war on experimental fiction and William Gaddis. So good, so richly deserved.

Really Very Good: The Corrections 

That was a good’un. It’s grown on me a lot. Seriously. You should check it out.

Pretty Darn Good: The Twenty-Seventh City

I mean, yeah, in some ways it’s the novel of a self-involved young dude writerly type. But the truth is (and a lot of people don’t want to hear this) a lot of great novels have gotten written by self-involved young dude writerly types. Life isn’t fair.

I Actually Barely Remember This One: Strong Motion

I know Franzen has complained that this one is unfairly criticized. I don’t remember enough about it to say. That’s generally not the best sign.

Overpraised and Kind of Obnoxious: Freedom

Overlong. Derivative of itself. I sometimes felt like I could hear Franzen sweating to himself, “I’m the man who wrote The Corrections!” as he wrote it. Probably in a Moleskine. While wearing that tweedy blazer of his.

Comprehensively Awful: His Opinions on Books and Novels and Deep Stuff Like That

Let me summarize his essay containing Big Thoughts About The Novel Today, Which Is A Big Topic, “Perchance to Dream”: waaaaaaaaaaaaaaah!

(Yes, he for real named an essay about the future of the novel “Perchance to Dream,” proving once again that we live in the funniest of all possible worlds.)

Like Seriously Maybe the Worst Thing That I’ve Ever Read: “Mr. Difficult”

The woooooooooooorst. Wounded, self-impressed, childish, clumsy, banal when it thinks it’s being profound, foot-stomping, aggrieved, pretentious, guilty of the sins it thinks it’s indicting, entitled, desperate for approval, disdainful of the tastes of others, utterly safe while pretending to be dangerous, filled with absolute and utter terror at the possibility that some people might derive enjoyment and meaning from something that its author doesn’t like, sanctimonious, obtuse, badly misinformed, pitched against a dead and defenseless writer whose sole crime was not writing books that existed to fan the already towering flame of Jonathan Franzen’s unbearable self-regard, and worst of all, unsympathetic in precisely the worst possible way that any act of writing can be, in that it so thoroughly fails to bother to look for a friendly reading that it twists its author’s mind into a great and powerful machine for missing every possible point that does not direct his shambling, awkward, awful essay towards its dumb foregone conclusion. I don’t believe in violence against people, but this essay, itself, deserves a slap across its smug, self-deluded face.

Literally the Worst Thing That Has Ever Existed on Earth: The Tweets You in the Hot Take Crew are Furiously Composing About How Annoying It Is That You Must Share This Great Fertile Globe with Another Book By Jonathan Franzen

I don’t know you. But there is literally nothing in your life you could do that would be a bigger waste of time. Like the Buddha said, set it free.