Doug Hesse is the latest compositionist to take to the Chronicle to defend the honor of the field against someone arguing that students can’t write. Hesse is a genuinely kind and considerate scholar – I talked to him at a conference once, where he sat gabbing enthusiastically with grad students before I realized who he was – but I find his essay unpersuasive. Some of it is just the conventions of the genre; yes, “college students today can’t write” is a hoary old cliché, but that doesn’t mean the statement isn’t true. More fundamentally, I don’t agree with the titular claim that we know what works in composition. And I don’t agree because I don’t think we know what it means to “know” in composition.
Hesse’s article says that his perspective on what works is “informed by decades of research.” He provides no citations for the laundry list of claims he makes, which is fine – it’s an essay, not a journal article. But what the average reader might not understand is that the research Hesse refers to is, in its overwhelming majority, lacking in the kind of statistical and methodological controls that have been steadily developed in the social sciences over the past century. Adequate sample sizes, stratification of those samples, use of control groups, reporting of statistical significance levels and effect sizes, adjusting for the distortions of alpha common to nested data through multilevel modeling, use of techniques like quadratic regression for data that doesn’t conform to the assumptions of typical multivariable models like linear regression and ANOVA – these things are almost unheard of in contemporary composition scholarship. Indeed, much of the research published in the most exclusive composition journals lacks even the most basic statistical controls.
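To make concrete what one of those controls actually involves, here is a minimal sketch of effect-size reporting, the standardized mean difference known as Cohen’s d. The scenario and the scores are entirely invented for illustration – hypothetical holistic essay ratings from a control and a treatment section – and this uses only Python’s standard library:

```python
# Illustrative sketch only: one of the statistical controls named above
# (effect-size reporting) in its simplest form. All data here is invented.
from math import sqrt
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                     / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

# Hypothetical holistic essay scores (1-5 scale) for two sections.
control = [3.0, 3.5, 2.5, 4.0, 3.0, 3.5, 2.5, 3.0]
treatment = [3.5, 4.0, 3.0, 4.5, 3.5, 4.0, 3.0, 3.5]

d = cohens_d(treatment, control)
print(f"Cohen's d = {d:.2f}")
```

The point of reporting d rather than just “the treatment group scored higher” is that it expresses the difference in units of the data’s own variability, which is what makes results comparable across studies – the kind of comparability that replication and meta-analysis depend on.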
There are, of course, a host of problems with contemporary social science and its methods. I am not at all naive about those problems. And despite a frequent accusation, I’ve never disdained strong qualitative work. A good ethnography or case study can fill in the gaps left by large-scale quantitative approaches, in a way that truly deepens our knowledge. The problem is that we don’t have that quantitative work, in college writing, and that which exists tends to come from research fields other than our own. On a fundamental level, I believe that claims about what works and what doesn’t in writing education writ large have to emerge from carefully planned and executed empirical work that reflects a vast body of research on how to make generalizable observations about student performance. The claims Hesse endorses mostly don’t.
Instead, if they are like the vast majority of composition scholarship, they come from very small-n studies utilizing convenience samples from instructors studying their own classes, involve that instructor-researcher making subjective observations based on vague and underdeveloped criteria, and utilize no statistical controls whatsoever to account for the vast number of ways educational research can go wrong. Such studies can provide insight when they are paired with rigorous large-n quantitative work, but on their own, they are not a way to “know” anything about our students and how best to teach them. This is my position.
Why have compositionists disdained that quantitative work? As has been documented in articles like Richard Haswell’s “NCTE/CCCC’s Recent War on Scholarship” and Susan Peck MacDonald’s “The Erasure of Language,” composition scholarship was once welcoming to quantitative and empirical work, but saw that side of the research tradition shrivel. That shriveling was in large measure the consequence of the “cultural turn” in composition, in which cultural studies essentially ate the field. Scholars like James Berlin, Elizabeth Flynn, and Carl Herndl developed a research literature that insisted that quantitative scholarship was a tool of hegemony and bigotry. To put it in Flynn’s terms, “feminist critiques of the sciences and the social sciences have also made evident the dangers inherent in identifications with fields that have traditionally been male-dominated and valorize epistemologies that endanger those in marginalized positions.”
That vocabulary is neither idle nor accidental: the attitude is not merely that quantitative (“scientistic”) research is limited or contingent, which is of course true, but that it actively endangers students and scholars from marginalized backgrounds. By 2005, Richard Fulkerson could write of composition, “in point of fact, virtually no one in contemporary composition theory assumes any epistemology other than a vaguely interactionist constructivism. We have rejected quantification and any attempts to reach Truth about our business by scientific means.”
Composition’s public spaces, at conferences and on listservs, are generally deeply antagonistic to any arguments for rigorous, large-n quantitative studies. Even carefully worded appeals to a both/and approach that values qualitative or theoretical work while it pursues more quantification often meet with defensiveness and politicized rejection. The field does see periodic calls for more empirical work and more quantification, but somehow those rallying calls never seem to inspire more space in our prominent journals, which of course is profoundly relevant to what is professionally valuable to individual academics.
To take a pertinent and depressing example, the premier journal College Composition and Communication published a special issue in 2012 documenting all the exciting new methodological possibilities for the field, including various forms of quantitative research. The journal then proceeded to publish almost no numbers-based scholarship whatsoever for the next four years. When I was coming up in my MA program I was told that I should look at Research in the Teaching of English as the kind of journal I might one day write for, but in the past half decade or so RTE has published almost no large-scale quantitative work at all. The annual 4Cs conference has become something of a joke for those of us who favor quantitative approaches, in that it has adopted an extremely loose approach to what counts as writing research… unless that research involves statistics. Young academics eager for recognition as researchers – and the professional incentives that come with it – cannot help but notice these dynamics.
It’s not hard to understand the social dynamics of this equation. When one side says “we value the narrative, theoretical, and qualitative work being done, but we need to balance it with rigorous quantitative work to understand our students and their learning,” and the other side says “quantification is part of a lineage of racism, sexism, and colonialism,” it’s not hard to guess what the prevailing sentiment will be in today’s humanities programs.
This dynamic – a divide between the methodological arguments of quantitative scholars and the political arguments of cultural studies and critical theory scholars – plays out in far more realms than just composition. I identify as an applied linguist because the majority of my graduate training, my independent reading and tutoring, my research, and my professional life have occurred in that field. The divide there is stark and growing too: the overlapping fields of ESL, TESOL, and applied linguistics are marked by a growing inability of differing groups to communicate with each other. In my experience, those fields feature small and shrinking groups of quantitative scholars undertaking empirical investigations into linguistic and assessment data and a larger group of scholars who call the former tools of colonialism. I have been in academic spaces, in the real world and online, where the two sides differ so greatly in vocabulary, methods, and culture that they are essentially unable to communicate, but are forced alongside each other by the small and shrinking professional and academic landscape.
Ultimately, the definition of knowledge and how it is produced are the product of incentives. The professional incentives in composition today simply cut too hard against quantification.
Suppose a young compositionist were to ask me for career advice and said that they wanted to do an empirical dissertation. I would tell them to identify some small and impossibly specific sub-population of students and conduct a case study on two or three of them, to grab a convenience sample of whatever students were ready at hand, to assign arbitrary and vague codes to what they’ve found, and to draw vast conclusions from their sample of two or three students, despite the fact that the kind of researchers who love case studies typically disdain generalization. I would tell them that because experience tells me that’s the kind of empirical work that gets people jobs. I would caution them against conducting a large-n quantitative study, because experience suggests that this is a bad way to go about getting a job. Write a dissertation like that in composition and the most likely outcome is a lot of defensiveness and dismissal in your job interviews, the weird, “you think you’re better than me” attitude that academics who don’t do quantitative work often harbor for those who do.
I’ve said this before: from a purely careerist perspective, it would have been far more lucrative for me, as someone working in composition, to write a dissertation on the rhetoric of Doctor Who than an examination of the history and theory of standardized tests of collegiate learning. I don’t say that to insult scholars in pop culture studies or to suggest that there is no value in that kind of inquiry. But in general the field seems to have inverted the expected relationship between what is a common approach and what is a niche concern. I value and respect work in critical pedagogy, cultural studies, minority rhetorics, pop culture studies, “the digital,” and critical theory. But that work has so crowded out efforts to rigorously examine how students learn to write that I’m surprised Hesse can be so confident in what he knows.
I don’t disagree with most of Hesse’s prescriptions for composition. In particular, when he writes, “Students learn to write by writing, by getting advice and feedback on their writing, and then writing some more,” I want to applaud. Writing is like playing a sport or learning a musical instrument: there is no substitute for repetition. You must practice! Students need to be writing, a lot. I would personally prefer that they be working at much smaller scales than is typical in contemporary composition classrooms, taking apart their own paragraphs, finding what doesn’t work, and rewriting them until they’re polished and strong. But yes, there is simply no substitute for practice, for repetition, in training young writers. What I question is Hesse’s apparent confidence that students are actually getting that much opportunity to practice.
My own take is that in fact in many introductory composition courses students produce very little writing over the course of a semester. Indeed, my limited, anecdotal impression is that the average American composition class involves a few weeks of unfocused intro-to-argumentation lessons, a couple of short papers, and then weeks spent on podcasting, Photoshop, pop culture analysis, and whatever else the instructor finds fun to teach. Indeed I’ve been at conferences where composition grad students bragged to each other about how little actual writing their writing students do. In part this is a labor issue: with so many composition instructors coming from the ranks of adjuncts and grad students, there is a widespread fear of angering students by giving them too much work. In part it is an incentives issue: it’s far better to go on the job market as a compositionist who teaches podcasting and video game design than it is to go on the job market as a compositionist who teaches writing as it is traditionally understood. Universities may fund writing programs in the understanding that writing papers is important, both for other college learning and professionally, but it is not universities who hire professors. It’s other professors, and they gauge status and esteem through what’s novel, current, and “innovative,” not what’s pedagogically necessary.
Of course, there is some fact of the matter about what is actually happening in our composition classes. We don’t have to rely on my impression vs. Hesse’s. We could use quantitative methods to better understand what actually gets taught in composition programs. But that requires all of the stuff that composition has disdained for decades, those techniques and standards that people like Flynn insist are just an excuse for patriarchy and white supremacy. To get that work done, established scholars like Hesse must actually fight for its presence in the field – fight for it as peer reviewers at journals and conferences, fight for it in the hiring and tenure process, fight for it even when it is politically and socially uncomfortable to do so. My experience in composition tells me they are deeply resistant to doing so. After all, once you’ve climbed to the top of a reward structure yourself, why would you try to change precisely what it rewards?
Hesse dings Joseph Teller for spreading “lore” and refers to “overwhelming empirical evidence” against him. But in fact, as someone with a broad grasp of what composition has called knowledge for decades, I find the evidence very far from overwhelming. I would in fact call it lore myself. Pull aside different compositionists and ask them what we definitively know as a field that we didn’t know ten years ago; you will more likely get a lecture on the hegemony of knowledge than consistent answers based on a shared reading of rigorous and replicated research. There are ways that human beings can answer questions about how students learn to write effectively – never perfectly, never without doubt, but constructively, with greater and greater confidence over time. The question for Hesse and the field he defends is whether or not to use them.