Recent Blog Posts
I have written a (futile, naturally) plea to my university to get out of the football business. Check it out.
Validity, as you likely know, typically refers to how well an experimental or test instrument actually measures the construct it is meant to measure. If an experiment is meant to test vocabulary knowledge, for example, then its validity depends on how well it actually assesses and reports the vocabulary knowledge of its participants. In the social sciences, there is often a great deal of controversy over what constitutes validity in experiments.
One popular concept of validity is ecological validity. This refers to the “real world” nature of tests or research instruments. How well do they approximate the actual conditions under which the assessed construct is encountered in real life? Like many issues within methodology and epistemology, both the term and the concept are debated, not only as a matter of what constitutes ecological validity but also of whether we should make ecological validity a criterion of research success at all. Many will tell you that what really matters is just external validity– that is, does the construct assessed in the research port into the contexts we actually care about, regardless of whether the research conditions match real world conditions? So in the vocabulary example, ecological validity would be concerned with how well our research conditions matched real world, natural language production of vocabulary. External validity, in contrast, would just concern whether we could accurately generalize from the findings to a real world context.
So consider the current debates about testing in K-12 education. Many parents, teachers, and concerned parties worry that the rise of endless testing fails because it lacks ecological validity– that is, that so much testing removes learning from the real world contexts we actually care about in the first place. Some in the world of standardized testing have replied that these tests are, in fact, authentic. That is, they are ecologically valid in that they approximate the conditions under which students will surely be tested in their future lives. They argue, not entirely unreasonably, that testing will be a part of a student’s educational life, whether those tests are standardized and enforced from above or not. So a test might not authentically assess a given construct in real world application, but it would authentically reproduce conditions that are a given in education.
That’s true, I suppose, but the problem with it seems obvious: once you’ve made that leap, you can justify literally any new educational assessment under the theory that the students will encounter it in the future. It’s tautological: we have to test students in order to prepare them for the tests they’ll have to take.
That thinking, to me, is indicative of a broader problem in the whole school reform debate: the way the snake starts to eat its own tail, and we lose sight of just what we cared about in the first place.
So probably my best instinct is just to point out that this long post by Tom Scocca on smarm was followed, just a few posts later, by a post titled “Monkey Teaches Man to Play His Favorite Game.” It’s a perfect argument in and of itself, and if I were smarter, I’d just leave it at that.
Neetzan Zimmerman wrote that post. He’s Gawker’s biggest money generator, a very talented viral aggregator, and one of the most consistently smarmy writers on the internet. Post after post of his reaches for that empty, facile uplift that is the bread and butter of sites like Upworthy and Buzzfeed, the kind of sites that produce the content Scocca hates so much. A very large portion of what Zimmerman does– and thus a large portion of the cash that comes into Gawker– is of the “when you see what this autistic mother did to save this dying seal pup, you won’t believe your eyes” variety. And that subsidizes Tom Scocca’s writing. Don’t take it from me! Take it away, Farhad Manjoo: “Mr. Zimmerman’s dominance is part of Gawker’s plan. By earning so much traffic on his own, he effectively subsidizes the rest of the staff, liberating them to pursue deeper, longer, more experimental pieces.”
Now this is a plan I have defended in the past. Not just in general, but for Gawker specifically. Because they publish a lot of good stuff, stuff you can’t get elsewhere. So I don’t mind having to wade through the soul-destroying bullshit that Zimmerman and a few other Gawker writers post to get to the good stuff. You gotta pay the bills. And this dynamic isn’t gonna change. I hate Buzzfeed and Upworthy and I wouldn’t hand Henry Blodget a glass of water in the desert, but this is reality: you want to read stuff online? Great. You’re gonna get the smarm that your cousin insists on posting on Facebook every quarter hour. So you’d like to see a little self-knowledge, a little acknowledgement of Scocca’s implication in all this, under the rippling waves of self-regard. But, nope. That’s the problem with “snark,” or whatever you want to call it: it destroys self-knowledge. If Scocca has ever written a self-critical word, I have yet to read it, and that makes him a far less useful intelligence than he otherwise could be.
Just look at some of the comments on that post. A commenter writes,
Finally! Someone has called out Eggers and Upworthy. Just, thank you.
Now, pardon me, but you have to have suffered some kind of massive brain injury to think that Dave Eggers and Upworthy have gone without criticism. I mean, really. Sometimes I think Nick Denton grew Dave Eggers in a lab just to give bloggers someone to make fun of. Upworthy is so hated that even that idiot cousin I mentioned is starting to get annoyed with it. This is the kind of cluelessness that blank, omnidirectional criticism produces: you end up arguing flatly untrue things, and pretending that people and publications that are actively reviled aren’t. You end up unironically arguing that there’s not enough negativity on the internet. And by the way– this comment is close to the Platonic ideal of smarm. Bravo!
Now I said some of these things in the comments to the piece, and as was to be expected, I got a lot of wagon-circling pushback. You see– this will blow your mind– people don’t agree about what’s snark and what’s smarm, and their designation of each has everything to do with what they like and which team they’re on. Crazy! It’s almost as if the world is a complicated place, and that attempts to get beyond the flat reality of human subjectivity and disagreement are useless. (Great for generating traffic, though. Just not “Monkey Teaches Man to Play His Favorite Game”-level traffic.)
The commenters really loved Scocca’s piece. They always do. One of the funny things about Scocca’s exquisitely polished stance as The Last Honest Man On the Internet is that most of what he writes is pure red meat for Gawker commenters. Look at his piece on how white people ruin everything. It was vintage Scocca: right on the facts and on the substance, analytically adroit, not quite as good as he clearly thought it was, and perfectly designed to flatter the sensibilities of the average reader of Gawker. Nothing could have played to their self-image more; each of the white people loudly celebrating it was sure that it was actually about those other white people, which of course destroyed its entire point. In any event, it was portrayed, by the hundreds and hundreds of connected people who praised it, as somehow a work of unpopular daring. That might be Scocca’s real genius: to write pieces that appear risky but which are actually money in the bank, perfect for the kind of people who read Gawker or the Awl.
Of course, no group is more susceptible to self-flattery and fake “risky” pieces than our chattering class, the savvy, connected set who define the agenda for what proles like you and me get to read on the Internet. If you’d like to understand the difference between fake negativity and real negativity, please take a moment and drop Scocca’s piece into the search bar of Twitter. Because I have never seen such a tsunami of empty plaudits in my life. There are just dozens and dozens of toothless, banal compliments, precisely the kind of pious, lukewarm positivity that Scocca thinks he’s lampooning. And I genuinely wonder: do they not see the flatly self-defeating nature of this? Can they not understand that every empty piece of obligatory, transactional praise they deliver totally undercuts the point Scocca thinks he’s making?
The truth is that bloggers and journalists like Scocca, elites who write for other elites, like negativity only in the abstract. A paean to negativity that results in nothing but praise is by definition an empty shell, a useless work designed to be celebrated by a wagon-circling, self-interested class of influence peddlers who endorse the idea of independence but punish it whenever they actually find it. An actually risky essay is one that most people hate. An actual willingness to be actually negative means that most people will not like you. But they never, ever, ever allow themselves to even consider that the way they talk up criticism and being oh-so-tough is totally disarmed by the fact that they all treat writing like a cool party where they hang out and trade praise with each other.
My challenge to Tom Scocca would be to write a piece that doesn’t result in hundreds of people kissing his ass. If he did that, he might understand what it means to actually be negative, to actually be independent. But he’s a paid-up insider working for a powerful publication, living in the same fortress of Twitter followers and media friends they all do, and so the thought won’t ever occur to him.
Here’s the reality: writers are self-interested creatures, desperate for approval, whose self-esteem and financial security are dependent on being liked. They produce criticism. That criticism is sometimes too harsh and sometimes not harsh enough. Sometimes that criticism is productive and sometimes it’s not. Sometimes affected negativity is well done and sometimes it’s not. Sometimes you should be jokey and sometimes you should be serious. Sometimes you should play to the crowd and sometimes you’ve actually got to risk writing something that will make people dislike you. The reality is that there’s no blanket formula. There isn’t too much snark and there isn’t too much smarm. There isn’t too much positivity and there isn’t too much negativity. You’ve just got to pick your way through what’s out there and criticize what deserves to be criticized and praise what deserves to be praised. That’s all you can do. That’s adult life. Grow up.
This is me, doing what Scocca told me to do, and doing what almost no one else is doing, applying my critical faculties to his self-aggrandizing call to be more critical. So tell me, keepers of the sacred flame of negative criticism: how did I do?
As I’ve written many times, the plight of the adjunct in American colleges and universities is a true nightmare, and a profound stain on the character of those colleges and universities. You could read all about it in more detail and at greater length than I can present here, so it’s enough to say that adjuncts are terribly treated, teaching classes for low pay and without benefits in a university system that does almost nothing to protect them or their interests. It must change, and I hope the nascent efforts by adjuncts to unionize will prompt that change. The status quo is practically unsustainable and morally indefensible.
At the same time, there are many ways in which the adjunct crisis, and particularly the relationship between adjuncts and tenure-track faculty, demonstrates why labor issues can be so dysfunctional.
After all, what’s the oldest story in management’s attempts to oppose the union? Turning the non-unionized labor against the unionized, rather than against management. I’ve read countless pieces in the last few years that argue for the plight of the adjunct in a way that declares tenure-track faculty the enemy, rather than the administrations that largely determine labor conditions for contingent faculty. Please don’t misunderstand: faculty who oppose better conditions for adjuncts are simply wrong, and need to be criticized. I am frequently horrified by the thoughtlessness of many in the ranks of the tenured when it comes to conditions for adjuncts, and yes, as people have repeatedly pointed out, there’s great hypocrisy in otherwise left-wing professors failing to support adjunct or graduate unions. But dispassionate labor organizing requires an intense focus on management, and management in the university means the administrators who are gradually transforming colleges along neoliberal grounds. Again, I have no patience whatsoever for tenured faculty who fail to support or actively obstruct improving conditions for adjuncts, and I have argued with such people many times. But from the most dispassionate position imaginable, the actual power to determine adjunct labor conditions comes from administration, not from faculty.
In part, this is an example of a common human tendency: to yell at the people you have access to. Many within the faculty are willing to listen; they can be engaged with. The administration, to many academics, appears to be an entirely faceless monolith, one to which they have no access. I have no doubt that the case for the moral and practical necessity of improved conditions for adjuncts needs to be made to faculty, but I also find the relentless focus of these pieces on faculty to be misplaced. It is the neoliberal ideologues in positions of administrative power who are working relentlessly to curtail labor power within the university. They are the bosses.
My guess is that the failure to place the adjunct-TT faculty dispute in the tradition of divides between unionized and non-unionized labor stems from a continuing refusal to see academic work as simply work. One thing I’ve learned, over time, is that many of the people who want to appear the most cynical about the American university system are in fact not cynical enough. In reading and talking with adjuncts and others who are critical of the political economy of the academy, I find an emotional edge to the criticism that reflects a sense of betrayal. Anger is perfectly warranted, and yet the particular flavor of that anger is indicative of an emotional commitment to the university that belies the cynicism many attempt to project.
It is precisely an overly romantic view of our university system that prompts the really intense negative emotions: they stem from a continued belief in the traditional values that are associated with the university. But the university never really had those values. I’m someone who grew up in academic culture, surrounded by academics and administrators my entire life, so I never had a romantic view of the academy in the first place. The American university system is just a widget factory. That’s all it’s ever been. It is an American workplace, and in an American workplace, the default state is exploitation of labor. That’s not defeatism. The working conditions in widget factories can be improved, through hard work and organization. It’s just an acknowledgement that a sense of unique betrayal stems from a flawed concept of what the American university system always has been.
There is a strong chance that, in the coming decades, we will see the conditions of tenure-track faculty and adjuncts converge. It’s just that, when they do, it will much more likely be the result of the death of tenure, with all college teaching performed by contingent labor. As a student at a school sprinting in the direction of the neoliberal education ideal, I have no doubt that administrators writ large would love to crush tenure-track faculty and their unions. That is a very distinct possibility. It would engender a kind of equality, sure, but only an equality of misery. The question for us is whether that’s a kind of equality we want to pursue. If you think I’m being unfair in asking that question, I suggest you immerse yourself in the writing of many who have recently gone through the tenure-track hiring process. Their identification of the problems with the system is commendable. But it is frequently matched with rage and resentment that are not conducive to any kind of organized improvement of the lives of adjuncts. If we aren’t careful, if we don’t speak carefully, it will be far easier to hurt the tenure-track professors than to actually make a material positive improvement in the lives of adjuncts.
We’re living in a time of broad and deep immiseration of workers. Across the country and around the world, the conditions of workers as workers seem to get worse and worse. Stagnant wages are combined with increasing costs of living in medicine and housing. Organized labor has been crushed, with worker power at low ebb. The petty degradation of workers grows and grows. What twists the knife, for me, is the lack of solidarity. The broad decline in working conditions should prompt more worker organizing and more support between workers. If anything, we’ve seen a rise in petty resentment and anger between workers. I am profoundly ambivalent about the “99% vs. 1%” frame, and yet it has the elementary advantage of reflecting the huge divide between a tiny, ludicrously privileged population of haves and an ever-growing population of have-nots. Destroying the professoriate would only eliminate one of the last remaining good jobs for those who are not rentiers and plutocrats. We have to work to make working conditions better for adjuncts rather than worse for the tenured, and that will take a focused and dispassionate message and a pragmatic strategy.
I disagree with Matt Yglesias on many things. (I would say “we disagree on many things,” but I’m sure he doesn’t spend any time thinking about me at all.) I’m also not a fan of his writing as writing, as craft. So I might be expected to be the natural audience for this takedown of his prose. Certainly, the passages that CJ Ciaramella identifies are very bad writing and deserve criticism. But his proposed fixes are little better, and they echo the worst thing ever to happen to American writing, which is American minimalism. I don’t want to ascribe a philosophy to Ciaramella, but he relies on classic pieces of writing advice that slot comfortably into minimalism: cut the fat, drop the ten-cent words, be active and not passive, avoid abstraction, etc. It’s hoary, and old, and misguided advice.
Worse still is Ciaramella’s endorsement of the martial school of writing, where writing is like fighting, because men write! And men fight! And by god, are we not men, who want to write about bullfighting and gladiators and tits and such? It’s not just that I find that kind of writing almost uniformly useless, but also that we are certainly not facing a deficit of that kind of writing. Trust me: the world is not suffering from a lack of young white dudes who are eager to demonstrate their power through their writing. They crowd MFA programs and coffee houses– the cool, manly kind of coffee houses, not the queer kind. There are few cheaper commodities in American writing than men who try to write with “force,” the way Ciaramella wants them to.
The martial style, at least, tends to be confined to a particular demographic. Minimalism, on the other hand, is universal. American minimalism is a writing philosophy that has dominated our country’s writing, both fiction and nonfiction, more than any other. Minimalism suggests that the major problem with most writing is that there’s too much: too much length, too many words, too many ideas, too many syllables, too much fancy-pants abstraction. Writers like Sherwood Anderson helped to popularize the style, but it is most deeply and commonly associated with Ernest Hemingway. (Although he is occasionally lumped in, I would argue rather strenuously that F. Scott Fitzgerald does not belong in the same category.) A more contemporary example is Elmore Leonard, whose recent death spurred a big push for more minimalism. In nonfiction, minimalism is the default assumption of many house styles and handbooks. The minimalist ethic is taken to an extreme in the most famous book on writing style ever, William Strunk and E.B. White’s The Elements of Style, still a college campus mainstay and beloved of countless editors. (And, actually, of the Unabomber.)
In fiction or nonfiction, the relentless push in minimalism is to cut, cut, cut– to cut words, to cut syllables, to cut abstraction, to cut ideas. Minimalism shuns subordination, shuns the passive voice, shuns flowery description (or sometimes just description), shuns abstraction. Minimalism’s many converts ascribe poor writing to too much abstraction, which they contrast with the concrete; to too much jargon, which they contrast with the brute simplicity of shorter words; to too much hedging, which they contrast with the manly power of unapologetic expression. Minimalist teachers and editors buy red ink by the barrel. There’s always more to cut. Ornamentation, sentiment, complication, self-doubt, metaphor, artifice, art– all to be pruned. What should be left is sentences like “the boat was small. The water was clear. The day was cold.”
Now some might argue that my take is actually a parody of minimalism, an unfair exaggeration of its traits. But that’s part of the trouble: the style, even when well used, drifts uncomfortably close to self-parody, and given the way ideas travel, a certain exaggeration of any style’s maxims is inevitable. Hemingway was clearly a giant of a writer, and at his best, he wrote with exquisite precision. But in my estimation, he spent more of his career at his worst than at his best, and at its worst, I find his work nearly indistinguishable from parody. His style often works best in small chunks. “Indian Camp” is a great short story, in large measure because it is a short short story; I wouldn’t want to read it stretched out much further. Twenty pages into The Old Man and the Sea, I’m convinced it’s a work of genius. By the end, I’ve been so bludgeoned with the monotony that I want to read just about anything else. Leonard was also a master. But he was careful to work in a limited arena, in a particular set of genres, narratives, and registers. And in both cases, really diving into either writer’s best work will disabuse you of the notion that they followed all of the cramped rules you encounter so often on the internet.
There are differences in what makes for good writing in fiction and in nonfiction, though the conversation is largely the same. The primary difference is that American fiction eventually rejected minimalism. It took writers like Saul Bellow to rescue the American novel from an army of young writers telling us that the boat was small and the water was clear. But in nonfiction writing, minimalism never went away. The drive to cut everything, to say less rather than more, and to eliminate nasty business like ideas and emotion predominates in expository writing advice. I have skimmed through far more freshman composition textbooks than I care to admit, and I’ve read dozens of essays on writing well, and a clear majority of them replicate the tired old mandates about what not to put in and what to take out. So what good does dusting off that old copy of Strunk and White do, when we never really left Strunk and White? If minimalism were the cure, bad writing would be as rare as polio. Minimalism is not some forgotten brilliance we should return to but rather the closest thing we have to a writing orthodoxy.
As someone who teaches college writing, I encounter the assumption of the superiority of the shorter sentence all the time. Most frustrating are the dudes (always dudes) who have taken the minimalist creed to such an extreme that writing with artificial brevity is almost existential. They are easy to identify. In peer review, they are always critiquing writing as “limp” or “flaccid,” in implied contrast to their own prose– which, I guess, they see as hard and throbbing and veiny and engorged. (I don’t say this to my undergrads, but I find it a good rule of thumb: avoid critiquing writing with terms that could reasonably be used to describe a penis.) Some use that style better, some do it worse, but as we move through a semester’s worth of writing purposes and contexts, its deep inadequacy becomes clear. What I tell the young men who are enamored of this approach is that what works in a hard-boiled detective novel does not always work in, say, a research paper for BIO 212. “The Krebs cycle is complex. ATP is the output. The oxidation of acetates, the key.” In large measure, this is because no human beings actually communicate that way. There’s irony in the artifice of minimalist style. Nothing in writing seems more affected than prose that is written to seem devoid of affect.
It’s hard to get them to listen. Look, simply saying “I often teach students who demonstrate particular weaknesses as writers” is just another way of saying that I teach writing, and I’m glad to do it. When I came to undergraduate education, I myself was a bad writer who thought he was a good writer, and it took a lot of work from patient and sympathetic teachers to change. The deeper issue with these students is that they are so often unteachable. Part of that is their tendency to embrace the idea that their prose is a part of their souls and that to alter it is to give in to a fallen world. I appreciate that romantic streak, and I will certainly take it over apathy. But it can become a destructive obstinacy in a context where they are always being told to get more minimal. It’s hard, as a teacher, to suggest to a young writer that his problem can’t be solved by cutting more when the internet keeps telling him that it can.
Could we run Matt Yglesias through the Strunk and White bootcamp? Sure. “The demand is too low. The rent, too high. The debt is unimportant. You make too much money. There’s dill in my salad.” But for a stylist as uninterested in being a stylist as Yglesias, I’m not sure it matters. Yglesias, I think, belongs to that school of writers who want their prose to be merely a conduit for their ideas. I think that’s a mistake– Ciaramella is right to suggest that ideas are inseparable from their expression– but it’s his prerogative. What Yglesias needs is not a summer in the Strunk and White reeducation camp but, well, an editor. (Seriously, bro– those typos are embarrassing.)
In the meantime, I expect a lot of bad writing to be produced thanks to bad writing advice. There are times and places for minimalism, but advice in that direction almost always mistakes what is obvious for what is the actual problem. Yes, you might have written a bad novel. And yes, you might have included a lot of adverbs. And the adverbs might have been a bad idea. But the adverbs are not why the novel was a bad novel. People identify obvious, explicit things like adverbs or the passive voice as the problem with writing because they are easy to identify and easy to remove. But they are very rarely the actual problem. The actual problems are almost always far deeper and far less easily fixed.
This is not to question the value of teaching and advice, merely the value of a particular school of advice. I’ve personally seen many bad writers become competent ones, and I’ve seen some competent ones become good ones. I’ve read drafts of novels and stories before they’ve been workshopped and then read them after, and they’ve often been immeasurably improved. But the value of a good class or a good workshop lies in its specificity, its individual attention to the individual piece of writing. The advice towards minimalism that is so common attempts to speak to everyone, and in doing so, speaks to almost no one.
A while back, I wrote a piece about writers I love, in part to make an argument about the kind of writing I want to read. None of them writes in a minimalist style. On the contrary, they each give me what I want most as a reader: writing that is teeming, unpredictable, risky, alive.
It’s not unusual for me to feel exasperated by things I read online, but reading this piece on “sentient code” from VentureBeat had me throwing my hands in the air. It’s everything wrong with popular writing about artificial intelligence: filled with hype, vague where specificity is needed most, and generally credulous in a breathless, gee-whiz style that should not appear in professional journalism. You’d hardly know, from reading the piece, just how much of a failure AI has been throughout its 60-year history.
As a corrective, here are some resources that display an appropriate skepticism towards AI, amidst all the hype.
Just to offer a brief followup to my recent piece on the myth of a STEM shortage, I want to focus a bit more on the uncontroversial value of being a star. Maybe the essential point is the one my engineer friend was making: companies may be hungry for STEM workers, but if so, they are hungry for a small number of the highest achievers from prestigious programs. The numbers simply do not show anything like a broad shortage of STEM majors in the labor market, but I’m very willing to concede that there are firms who feel they have trouble filling certain positions. My guess is that this is an example of artificial scarcity: they’ve set the bar so high that there are few people who can clear it. And I think they take a kind of pride in that. Saying “our standards are so high we can’t fill all of our positions” is exactly the kind of chest-beating that is popular in tech culture, in startup culture.
Now some of the people who claim a STEM shortage might argue that this is the point. “We should be producing more STEM stars! We should have higher standards so that we can fill these positions and give all of our young people access to the good life.” But there are certain unalterable problems with that. The first is that “star” is a relative term. You might raise the average or the ceiling, but in either case, companies are still going to fight over a certain percentage near the top. Part of the problem with the notion of education as a cure for all of our social and economic ills is the willful avoidance of the fact that social sorting is almost always about relative position, not absolute. I find it profoundly naive to see companies like Google and Facebook as merely being interested in hiring people with a certain skill set, regardless of how well those people perform relative to others. So you see the essential problem: when everybody wants to hire the same 10% of workers, the rewards for those who can make it up there are great, but from a societal standpoint, it’s a zero sum game.
And that brings us to the second problem: not everybody can make the top 10%. That’s true in terms of the nature of percentages, of course, but it’s also true in terms of human ability. This is a point that most people take for granted in their day-to-day lives, and yet it is one of the most controversial things I consistently argue: human beings are substantially unequal in their abilities, thanks to a variety of factors that are largely out of their control, and education has never been demonstrated to be capable of eliminating that inequality. In my “day job,” I read pedagogical and educational research. In my self-destructive online nattering, I follow and join political debates about educational policy. In both cases, it is truly remarkable how rarely people discuss the bare fact that human beings do not possess equal ability. You can read thousands and thousands of words online about educational policy without once encountering an admission that not everyone can become an intellectual or educational star. The admixture of variables that contribute to this inequality is controversial, but I cannot take seriously arguments that do not concede its existence. Yet there’s this big hole in our educational debate. Liberals don’t want to admit to it because they have confused the equal dignity and value of all people with equal ability. Conservatives don’t want to admit to it because they are invested in bootstraps mythology and because education has become the cudgel with which they beat on unions and teachers. Yet I’m willing to bet that most everyone, when operating outside the political and intellectual space of debate, moves through the world as if it were plainly true that different people have different aptitudes for certain intellectual tasks.
The consequences for this particular argument are clear. The argument that there is a STEM shortage, after all, is not merely that a shortage exists, but that it is both to the benefit of our economy and to our students if they go into STEM fields. But in a context where individual students are always going to be competing with each other for the same jobs, in fields where the drive to automate and cut labor costs is almost existential, and in a world where human beings are substantially unequal in their abilities, majoring in a STEM field is in no way a guarantee of employment or material security. Yes, it’s great to have an engineering degree from MIT. But the whole point of MIT is that they exclude the vast majority of people from attending. That’s as much of a reason these firms want to hire MIT students as the actual education. Meanwhile, the notion that the average student from an uncompetitive state school (like the one I went to) will reap an economic benefit from getting a STEM degree as opposed to something they actually want to study remains utterly unproven.
I think this discussion is important because it points to a basic contradiction within our educational philosophy: as a society, we imagine education both as the means through which we can elevate essentially all of our people into better quality of life– and at the same time, as a sorting system for identifying our most talented and deserving. Education is cast both as an equalizer and as a stratifier. That tension is palpable at college campuses, given two of our central anxieties: we are worried both about the high number of people who fail out of school and about grade inflation. We are worried that college is both too hard and too easy. That’s education, writ large, a series of controversies driven by incompatible goals. This basic, non-negotiably paradoxical character is a major source of the dysfunction of our educational debates, and no real progress can be made until we acknowledge it.
The uncomfortable reality that our political commentators and our people must be forced to face is that we cannot simultaneously pursue reward for those who best succeed in competitive educational and professional systems and work for equality. It is either the “meritocracy” or higher standards of living for everyone. We cannot have both.
I have, in the past, been accused of being an “edunihilist” because of my pessimistic take on the power of education to solve social problems and because I know that not all students are of equal ability. On the contrary: I believe very strongly that everyone can learn and have their lives deeply enriched through attending school. I have taught students at the elementary, middle school, high school, and college levels. I have taught students in special education and in mainstream. I have taught private school kids and public. I have taught stars and I have taught struggling students. In each and every case, I have found value and growth. What I have not found, and what the empirical literature has not found, is that every student can be made to succeed on the standardized tests and limited definitions of achievement that the forces of education privatization are pushing relentlessly. In a world where we define education broadly and look for growth and improvement that aren’t tied to capitalist interests, there is reason for hope. In a world where we think of education as the effort to achieve constant improvement for everyone on standardized tests, there is no such hope.
If that sounds cruel, I would agree. I would also argue that this is the central reason for why we should abandon the myth of meritocracy altogether. The playing field is not level, it never will be, and the pretense that we can make it so hurts far more than it helps. It’s time to separate people’s material security from our flawed perception of their merit once and for all.
I wrote a post over at Medium aggregating the evidence that there’s no STEM shortage.
“My idea of the end of the world would be the hive, the hive mind. Sven Birkerts has a wonderful description of how the horizontal, linked world is gradually evolving in that direction, to where nobody is ever really alone. Nobody is able to just sink deep into his or her own imagination or feelings. Where just the constant pressure of the horizontal connections keeps you from descending below a certain depth. If he’s right about that, it’s the end of individuality as I’ve known it and come to admire and treasure it.” -George Scialabba
All my deepest fears.
“I have been, for over twenty years, entirely persuaded by Garry Wills’s argument: that the Gettysburg Address, given by Abraham Lincoln on this day, exactly 150 years ago, was the great catalyzing rhetorical act in the–probably inevitable–transformation of the United States’s imagination of itself from a localized republican culture which accepted diverse, and unequal, communities, to a nationalized republican one which demanded the equal treatment of all its individual members.” — “The Greatest Presidential Speech of All Time,” In Medias Res
Here are two important ideas from statistics that I wish would filter out a bit more, so that people could better evaluate the statistics they encounter in the news.
1. It’s not the sample size, it’s the sampling mechanism. Well, OK. It’s somewhat the sample size, obviously. My point is that most people who encounter a study’s methodology are much more likely to remark on the sample size– and pronounce it too small– than to remark on the sampling mechanism. I can’t tell you how often I’ve seen studies with an n = 100 that have been dismissed by commenters online as too small to take seriously. Depending on the design of the study, and the variables being evaluated, 100 is often a very large sample size. Under certain circumstances, an n of 30 is sufficient to draw broad conclusions about populations. We can’t say with 100% accuracy what a population’s average for a given trait is when we use inferential statistics. (We actually can’t say that with 100% accuracy even when taking a census, but that’s another discussion.) But we can say with a chosen level of confidence (usually 95%, by convention) that the average lies in a particular range, which can often be quite small, and from which we can make predictions of remarkable accuracy– provided the sampling mechanism was adequately random. By random, we mean that every member of the population has an equivalent chance of being selected for the sample. If there are factors that make one group more or less likely to be selected for the sample, that is statistical bias (as opposed to statistical error).
Part of this is because of the declining influence of sample size in reducing statistical error as the sample grows. Because calculating confidence intervals and margins of error involves placing n under a square root sign, the marginal value of additional observations shrinks rapidly: you have to quadruple the sample size to cut the statistical error in half, so going from n = 100 to n = 400 buys you as much error reduction as going from n = 400 to n = 1,600. Given the outlay of resources necessary for attracting truly large samples, it’s often not worth it to get samples of the size that people intuitively see as “big enough.”
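To make the square-root relationship concrete, here is the textbook formula for the margin of error of a sample mean (standard inferential statistics, not anything particular to any study discussed here), where z* is the critical value for the chosen confidence level, σ the standard deviation, and n the sample size:

    \text{MOE} = z^{*} \cdot \frac{\sigma}{\sqrt{n}}

With 95% confidence (z* ≈ 1.96) and σ = 10, a sample of n = 100 gives a margin of error of about 1.96; quadrupling to n = 400 gives about 0.98. Doubling the precision required quadrupling the sample.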
Now compare a rigorously controlled study with an n = 30, drawn with a random sampling mechanism, to, say, those surveys that ESPN.com runs all the time. Those very often get sample sizes in the thousands, sometimes hundreds of thousands. But the sampling mechanism is a nightmare. They’re voluntary response instruments that are biased in any number of ways: underrepresenting people without internet access, people who aren’t interested in sports, people who go to SI.com instead of ESPN.com, on and on. The value of the 30-person instrument is far higher than that of the ESPN.com data. The sampling mechanism makes the sample size irrelevant.
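Here is a minimal simulation sketch of that comparison, using only the Python standard library (the population size, the 30% fan share, and the opinion rates are all invented for illustration):

    import random

    random.seed(0)

    N = 1_000_000
    # Toy population. Only "fans" visit the sports site and answer its poll,
    # and fans differ systematically from everyone else on the question asked.
    people = []
    for _ in range(N):
        is_fan = random.random() < 0.30
        answers_yes = random.random() < (0.80 if is_fan else 0.40)
        people.append((is_fan, answers_yes))

    true_rate = sum(yes for _, yes in people) / N

    # Tiny but random: every member of the population is equally likely to be picked.
    random_sample = random.sample(people, 30)
    random_estimate = sum(yes for _, yes in random_sample) / 30

    # Huge but biased: only fans ever respond to the web poll.
    web_sample = [p for p in people if p[0]][:100_000]
    web_estimate = sum(yes for _, yes in web_sample) / len(web_sample)

    print(f"true rate:         {true_rate:.3f}")       # ~0.52
    print(f"random, n=30:      {random_estimate:.3f}")  # noisy, but centered on the truth
    print(f"biased, n=100000:  {web_estimate:.3f}")     # ~0.80, precisely wrong

The random 30-person estimate bounces around the true value from run to run; the hundred-thousand-person web poll lands confidently on the wrong answer every time.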
Sample size does matter, but in common discussions of statistics its importance is misunderstood, and the marginal value of increasing sample size diminishes quickly as samples grow.
2. For any reasonable definition of a sample, population size relative to sample size is irrelevant to the statistical precision of findings. A 1,000 person sample, if drawn with some sort of rigorous random sampling mechanism, is exactly as descriptive and predictive of the ~570,000 person population of Wyoming as it is of the ~315 million person population of the United States. I have found this one very hard to wrap my mind around, but it’s the case. The formulas for margin of error, confidence intervals, and the like make no reference to the size of the total population. You can think about it this way: each time you pull a member at random from some population, the odds of your sample being unlike the population go down, regardless of the size of that population. The mistake lies in thinking that the point of increasing sample size is to bring it closer in proportion to the population. In reality, the point is just to increase the number of draws, to reduce the chance that the luck of previous draws has left us with an unrepresentative picture.
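Here is a quick simulation sketch of that claim, again with toy numbers: polls of n = 1,000 drawn without replacement from a Wyoming-sized and a US-sized population in which exactly half the members hold some trait:

    import random
    import statistics

    random.seed(1)

    def one_poll(pop_size, n=1000):
        # Person i holds the trait iff i is even, so the true rate is exactly 0.5.
        chosen = random.sample(range(pop_size), n)  # sampling without replacement
        return sum(1 for i in chosen if i % 2 == 0) / n

    for label, pop_size in [("Wyoming, N ~ 570,000", 570_000),
                            ("United States, N ~ 315,000,000", 315_000_000)]:
        estimates = [one_poll(pop_size) for _ in range(500)]
        print(f"{label}: mean = {statistics.mean(estimates):.3f}, "
              f"spread = {statistics.stdev(estimates):.4f}")

Run it and the spread of the estimates is essentially identical for both populations– about 0.016, which is what σ/√n predicts– even though one population is more than 500 times the size of the other.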
The essential caveat lies in “for any reasonable definition of a sample.” Yes, testing 900 out of a population of 1000 is more accurate than testing 900 out of a population of 1,000,000. But nobody would ever call 90% of a population a sample. You see different thresholds for where a sample begins and ends; some people say that anything larger than 1/100th of the total population is no longer a sample, but it varies. The point holds: when we’re dealing with real-world samples, where the population we care about is vastly larger than any reasonable sample size, the population size is irrelevant to the error in our statistical inferences.
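The textbook way to capture both the caveat and the main point is the finite population correction from survey sampling (standard material, not the author’s notation), which scales the standard error when you sample without replacement from a population of size N:

    \text{SE} = \frac{\sigma}{\sqrt{n}} \sqrt{\frac{N - n}{N - 1}}

With n = 900 and N = 1,000 the correction factor is √(100/999) ≈ 0.32, a huge gain in precision; with n = 900 and N = 1,000,000 it is about 0.9996, which is to say nothing. Whenever the sample is a small fraction of the population, the correction is indistinguishable from 1– which is exactly why population size drops out of real-world inference.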
Inferential statistics are powerful things.
The Texas Sharpshooter Fallacy functions as both a joke and a warning. The idea is simple but powerful. A Texan decides he wants to prove his shooting prowess to his friends. He takes out his handgun and empties its magazine into the side of his barn. He then paints a bulls-eye where his shots are clustered.
This joke is connected to empirical investigation in a very important way: when we go look for any connection, rather than a specific one that’s generated from theory and hypotheses, we are likely to find something. Right now, I’m working with a very large, data-rich corpus of essays by second language writers. A lot of my research involves using computer programs to mine large collections of texts for their patterns and features. To say that this work is bolstered by the ready availability of data would be understating things: it is largely only possible thanks to that availability. The collection of this corpus, under controlled conditions, is itself an artifact of the power of computing and the internet, as is my ability to access it. But access to data invites temptation. I have a spreadsheet with tons of information under a large number of categories. It would only take a couple clicks for me to generate correlations for all of the quantitative data in the spreadsheet. Similarly, it would be trivially easy for me to run a chi square test on all of the categorical data and look for associations.
To make matters worse, the nature of statistical significance tells us that if we look for significant relationships between enough variables, we will eventually find some simply by random chance. I’ve never felt the temptation too keenly. But were I in desperate need of a publication, and aware that studies that show a correlation or association are far more likely to be published than those that don’t? I might be tempted to just see what’s in there. And while I can’t be sure, often when I read studies from education or experimental psychology– fields that, unlike my own, typically require statistically significant results in order to publish– I suspect that someone’s gone barn hunting. There are some statistical checks that we can do to help ascertain when someone’s done this sort of thing, but ultimately we are at the mercy of researchers to responsibly report the order of events in their chain of research and to tell us important details like how many variables they used in a multiple regression or fed into an ANOVA.
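To see how quickly the barn fills with bulls-eyes, here is a small simulation sketch: twenty variables of pure noise, every pairwise correlation tested at the conventional p < .05 threshold (toy data and standard conventions, nothing drawn from the corpus work above):

    import itertools
    import math
    import random
    import statistics

    random.seed(7)

    # Twenty "variables" of 100 observations each, all pure noise:
    # by construction there is nothing real to find here.
    data = [[random.gauss(0, 1) for _ in range(100)] for _ in range(20)]

    def pearson_r(x, y):
        mx, my = statistics.mean(x), statistics.mean(y)
        sx, sy = statistics.stdev(x), statistics.stdev(y)
        return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((len(x) - 1) * sx * sy)

    # Fisher-z approximation: for n = 100, |r| above ~0.20 is "significant" at p < .05.
    critical_r = 1.96 / math.sqrt(100 - 3)

    hits = [(i, j) for i, j in itertools.combinations(range(20), 2)
            if abs(pearson_r(data[i], data[j])) > critical_r]

    print(f"{len(hits)} 'significant' correlations out of 190 tests run on pure noise")

With 190 tests and a 5% false positive rate, you should expect somewhere around nine or ten “findings” per run– every one of them a freshly painted bulls-eye.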
All of this, incidentally, is another reason why it’s profoundly misguided to speak of only being interested in empiricism, not theory or ideology, a la Ezra Klein. There is no such thing as empiricism without theory; theory is necessary to generate, analyze, and understand data. Insisting on the necessity of theory doesn’t spring from any aesthetic or romantic commitments or place one on a humanistic-empirical divide. We embed empiricism in theory not because we choose to but because there is no alternative. The assumptions inherent to methods, methodologies, and epistemologies all impact both the collection and analysis of empirical results. And while we might identify the Texas Sharpshooter as a particularly pernicious or dishonest failure to interrogate the theories that underlie our empirical work, that failure exists on a continuum of sins that occur when we refuse to acknowledge the social, political, and theoretical framework in which all empiricism is embedded. We must remember that more data does not liberate us from the need for careful and skeptical epistemology; indeed, more data only makes the careful consideration of epistemological questions more important.
… but, nevertheless, something I think is true: the one indispensable trait for a reader is humility. The one indispensable trait for a writer is arrogance.