my n+1 piece on Hartford

Gang, I’ve got a new piece out for n+1 today about my old home, Hartford, Connecticut. It’s a subject of great personal and emotional commitment for me. You should read it. It’s really good.

Posted in Popular & Digital Writing | 8 Comments

is/ought

Today, Hamilton Nolan argued that Twitter is public. That is true. I mean this in the old-fashioned sense; that is, it accurately reflects reality. I know because I have a web browser open right now and it enables me to access public Twitter feeds. Predictably, some people are mad at Nolan, calling him condescending or saying he lacks empathy. But Nolan links to some people who are under the impression that Twitter is not public, and so he is saying that Twitter is public. Whether it ought to be public is a separate question, although given the many ways to write privately, the many ways to communicate with others privately, and the fact that Twitter has an option that easily enables you to share your thoughts privately, I find it hard to be mad that Twitter is public. Setting that aside: if your Twitter is public, it’s public. Is, not ought to be.

You can’t yell at people for being condescending when there is controversy about what they are supposedly being condescending about. Take it from someone who has been routinely yelled at for quoting public Twitter feeds: this is not an obvious notion for many people.

The failure to distinguish “is” from “ought” is a large and growing problem for what you might call the online social justice movement. This was part of my point when I talked about the juvenile theory of the moral universe shared by those who think they can uncritically support the carceral state, when it comes to preventing sexual violence, without fear of contributing to the carceral state’s violence and oppression. It would be much better to live in a universe where that is true– it ought to be true. But it isn’t true. Likewise with those who question why it is the responsibility of those who are right to teach those who are wrong, the people who say that it shouldn’t be women who have to teach men not to be sexist or people of color who have to teach white people not to be racist. This is bizarre. It is up to good people to argue with the bad because otherwise bad wins. “Ought” does not enter into it. There is no should. Good people have to do the political work of creating change or there will be no change.

I have been saying this for a long time: the myth is that the internet is by and for everybody. The reality is that the internet is by and for a tiny sliver of the world’s people. But that sliver believes in the myth. They believe it so, so much. And the myth enables them to hide from the fact that they are just a sliver, and that their concerns are not the world’s concerns, and that everyone in the world does not in fact share their assumptions. When people wonder why it is the responsibility of the good to argue against the bad, they betray a mindset that has mistaken the sliver for the world. Only someone who has lost perspective could imagine that the good people so outnumber the bad that we can safely say that it is not the responsibility of the good to argue with the bad. People argue about trigger warnings as if everyone agrees what behavior is permissible, and yet the large majority of the undergraduates I’ve taught treat words like “feminism” or “white supremacy” as their triggers, and I have taught many, and they are not some reactionary fringe.

I think, frankly, that the elect are mad because while they say they want their Twitter to be readable by everybody, what they really want is for their Twitter to be readable by every Somebody, and people like Nolan remind them that the great unwashed is out there in the public too. There is no button on Twitter to make your feed only readable by those with the right kind of zip code, no box to check that makes you invisible to that majority of people who don’t know the steps to the affluent dance.

Politics is not a way to create a perfect universe. It is a typically futile attempt to make a broken world just slightly less broken. The people in the sliver know that; they catalog the brokenness every day. Yet they forget that; they imagine that every “is” secretly hides an “ought.” And since they believe that the sliver is the world, they only need to burrow deeper into the sliver to avoid the wider world– until the brokenness of the world, inevitably, shakes their little house.

You can get mad at me, but I’m not saying that this is how it should be. I’m just saying it’s so.

Posted in Uncategorized | 3 Comments

the limits of puzzlebox fiction

This post contains spoilers for the first season of True Detective.

The moderate, qualified disappointment being expressed about the ending of the first season of HBO’s True Detective, and with it the end of that storyline and those characters, seemed highly predictable to me from the earliest stages of the show’s ecstatic critical reaction. You could argue that this is merely a consequence of that very ecstasy. The serial format of current middlebrow television, and the enormous attention devoted to these shows by professionals and amateurs alike, are uniquely suited to producing disappointment. But I actually think the problem lies in the limits of True Detective’s real genre, which is puzzlebox fiction.

Early in Lost’s run, when it was attracting reviews as rapturous as the first episodes of True Detective, I argued that the show would inevitably disappoint its fans. Early on, both Lost’s creators and its many supporters argued endlessly that the show would not turn into Twin Peaks, David Lynch’s brilliant, notoriously maddening mystery series. But as I argued at the time, Lost’s real analog was not Twin Peaks, but rather The X-Files, and I suspected it would suffer the same fate. Each was based on the development of complex mythologies, and each rewarded viewers with small nuggets of revelation that provided limited answers to existing mysteries. The problem for X-Files, and for Lost, is that this dramatic architecture is essentially a pyramid scheme: you can never pay off as many mysteries as you’ve built, because once you run out of mysteries, there’s no more show. So X-Files would constantly entice viewers with incredibly! important! episodes! that revealed the truths! But in order to keep the show going, they also had to create brand new mysteries as they solved old ones. I remember sitting in the theater, feeling cheated by the first X-Files movie. Some old mysteries were paid off, but on net I didn’t feel any closer to the truth that Fox Mulder and Dana Scully had been chasing.

A similar dynamic afflicted Lost, and resulted in its ending, which is generally but not universally considered a disappointment. As I said at the time, this was an inevitable result of the vast number of plot threads and mysteries and characters and developments that the show had spooled out for years. No ending could adequately address all of them, certainly not in a satisfying way. So the show’s creators decided simply to let them loose in an abstract, spiritual finale. The years of speculative theories (they’re in Purgatory, it’s a military experiment, they were abducted by aliens, they crashed on the Garden of Eden) presumed that there was a coherent, literal explanation to the mysteries, but instead the mysteries were ultimately of a more symbolic nature. Some people hated that and some enjoyed it. (I never liked the show, personally.)

I would argue that True Detective, despite its pedigree, its status as a limited-run series of 8 episodes, and its resolute dedication to realism, had the same problem as Lost. After all, the enormous public engagement and commentary on the show was largely dedicated to crackpot theories, the great fun of trying to piece together convoluted explanations of plot points both large and minute. That’s the fun of puzzlebox fiction, and why it has such obvious commercial appeal: the participatory nature of solving the puzzle fits perfectly with the current way many people engage with fiction, which is by analyzing it in a way once reserved for critics and academics. The problem is that as you generate more and more outlandish theories, the expectations about the real conclusion become impossible to meet. Reality will always be a disappointment in relation to imagination. (I remember watching the original Hangover movie and realizing that the story was becoming less and less interesting as it went along; no actual explanation could be nearly as funny as the imagined possibilities.) Lost and The X-Files at least had the advantage of taking place in explicitly fantastical universes; for a show as stuffed with realist self-importance as True Detective, any attempt to pay off all of the clues would have been too ridiculous. Thus the disappointment among some.

In the post-Shyamalan Hollywood, there’s a great market for intricate, twisty fiction that features improbable reversals and showy reveals. But there’s a supply problem: these stories are hard, really hard, to pull off, and so most twist movies are bad. (Like, say, most of Shyamalan’s movies.) We are spoiled by once-in-a-generation genius like Chinatown into underestimating how difficult it is to stitch together these complex narratives without ending up with a totally implausible story. At its worst, you end up with Ocean’s Twelve, maybe the most insulting movie I’ve ever seen, where nobody particularly expects anything to make sense and plot twists and reversals are inserted with such breezy disdain for the audience that the movie seems to scold you for ever taking its plot seriously in the first place.

There’s been a movement afoot, recently, to dismiss complaints about plot holes and oversights as missing the point. (See, for example, this piece by Film Crit Hulk.) While I sympathize, and think that theme and character development are ultimately more important than strict adherence to plot logic, there are problems with this attitude. First, it’s particularly ill-suited as a defense when it comes to puzzlebox fiction, which is appealing in large measure precisely because of the intricacy and care of its plot points. You can’t hop from one foot to the other. Second, it’s problematic in a world where the conventional wisdom is that character must always be revealed through plot– the ubiquitous “show, don’t tell” dictum that is still given out like wisdom from the mountain. If writers are obliged to only reveal character through plot, then plot has to matter. Indeed, the most frustrating aspect of lots of puzzlebox fiction is the way in which character motivations become nonsensical after one too many twists.

My own disappointment, ultimately, is more thematic than narrative: I find the notion that “the light is winning” to be entirely unearned and not supportable from the plot, and if it’s intended as a clearly false summation, that is not sold nearly well enough. But the plot problems are real problems. Chris Orr has a very effective consideration of what’s missing from the finale, most importantly and glaringly the total failure to really explain the extent of the show’s central conspiracy, and how exactly a clearly mentally-damaged landscaper who lives in squalor in the woods could have coexisted easily with the power players we are to believe were a part of the conspiracy. I do want to say, though, that Orr is setting himself up for future disappointment when he speaks about the finale’s failings as a specific problem with this show or this episode. It is no coincidence that the first few episodes were the best; they always are. It’s the inevitable outcome in puzzlebox fiction.

What’s ultimately to blame for all of this is neither the massive amounts of attention that we now devote to pop culture, nor the failings of the show’s creators. It’s our persistent, forgivable belief in the omnipotence of the Great Artist, the frequently-unrewarded conviction that we have among us creators of such genius that they can square the circle and connect all of these many plot lines into a realistic, satisfying, and surprising conclusion. Every once in a while, we get lucky in that regard. But generally, I think it’d be better if writers were a bit more careful about the narrative checks they write, and audiences a bit more self-defensively skeptical about the ones they expect to cash.

Update: If you’d like to read a great mystery that comments on the nature of mystery stories and chance in the way many people hoped True Detective would, I can’t recommend Umberto Eco’s The Name of the Rose highly enough. (Even if you don’t care about True Detective at all… I still highly recommend that book.)

Posted in Literature, Popular & Digital Writing, Prose Style and Substance | 14 Comments

actual competitive behavior in charter schools

I have argued, for some time, that there’s a basic misunderstanding about why some schools and school districts are considered good. Many argue that, thanks to the vagaries of local districting and funding of public education through property taxes, the hardest to educate kids are systematically excluded from the best schools. They aren’t wrong to say so. However, they miss a key detail: in large measure, those schools are perceived to be the best precisely because they systematically exclude the hardest to educate kids. If the three most powerfully correlative demographic factors for educational success are parental income, race, and parental education, then our system of educational districting is a powerful tool for separating those demographically predisposed for success from those predisposed for failure. I am firmly in support of efforts to integrate these separate groups, for a variety of reasons. But those who suggest that doing so would result in the worst-performing students suddenly and significantly improving are almost certainly mistaking cause for effect.

I would argue that the charter school movement has, in an oblique way, provided a great deal of evidence for my position. You can find that evidence in this thorough, necessary piece of reporting from Reuters. You should really read the whole thing. It details, at great length, the extraordinary measures that administrators in charter schools take to exclude the students that are difficult to educate, thereby gaming the system to appear effective in relation to the public schools that have no similar screening process. The methods they utilize to do so are, at times, jaw-dropping; their effect, to keep out children who are poor, from racial and ethnic minorities, from broken families or bad parents, and the like. All in all, it’s behavior reminiscent of the private high school in my hometown that bragged about the test scores its students received, neglecting to mention that they had to get good scores on a standardized test to get in. All from advocates of “parental choice,” “expanding access,” or whatever other self-aggrandizing language you can think of.

In the great efforts they are expending to exclude the students that are the most difficult to educate, charter schools are lending more credence to my argument about the arrow of causation in our perception of school quality than I could ever generate.

The big question is why anyone would be surprised. Recently I mentioned Campbell’s Law, and pointed out that we don’t have to excuse efforts to cheat standardized assessment measures, but we do have to expect them, if we’re being rational students of human behavior. It’s a similar story here: for decades, ed reformers have insisted that the key to improving American education is to open it up to competition. “Competition,” like “innovation” or “disruption,” is one of those educational bywords that seems to have no particular character outside of whatever policy the person using it wants to advocate. But to the extent that it actually refers to real-world competitive behavior, let’s be clear: these administrators, as hypocritical and destructive as they are, are living up to that mantra. Cheating, or rule-bending, or playing it fast and loose, or following the letter but not the spirit, or juking the stats, or whatever– all are a part of what actual competitive behavior entails. Outside of the heady confines of Davos, principals working hard to keep poor, needy kids out of their charter schools while they bleat on about no child being left behind is a great example of how people actually compete. Be careful what you wish for.

Posted in Education | 6 Comments

for want of data

One of the hardest parts of being a researcher is getting access to data. This is particularly acute if you, like me, work in research fields where there is very little grant funding available, making it difficult to offer language users incentives to provide samples created under the controlled conditions that rigorous research requires. (There’s lots of money in education research, but it comes with a ton of strings attached, which I may write about someday.) I think this is changing, and as someone who does a lot of corpus linguistics, the availability of focused corpora like the ICNALE is a great boon. Also, broader, less prescribed corpora like the MICUSP are potentially useful if you’re prepared to do some work to standardize what you’re looking for. Still, now and for the foreseeable future, it’s hard to get access to the kind of texts that you need, particularly if you’re looking at a specific task, group, or instrument.

I’ve been feeling this difficulty acutely of late, as I experienced a setback with my dissertation along these lines. (I’m not trying to be arch. I’ll write about it soon; I just want time to process the next step and speak with fairness and friendliness about it.)

I actually thought of this in regard to the Satoshi Nakamoto story that has been lighting up the internet today. Some have expressed doubt about Newsweek’s outing of Dorian S. Nakamoto as the Satoshi Nakamoto, in part because of the comparison between these two emails. If I had access to more of the writing of Dorian S. Nakamoto, it would be easy for me to use the kind of textual processing tools I use every day to assess the similarity in writing styles between the two. Computerized textual processing is still remarkably limited, in many ways, and can’t readily assess things like stylistic or rhetorical quality. But one thing this software is very good at is analyzing texts for their indicative features and comparing them to a reference corpus. Expressed with the necessary caveats and with a responsibly-generated statistical probability, you could do a useful analysis very quickly.
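To make the idea concrete, here is a minimal sketch, in Python, of the sort of comparison I mean. Everything in it is a stand-in: the two sample strings are hypothetical, the function-word list is a toy feature set, and real stylometric work would use far richer features, much longer samples, and a proper reference corpus before venturing any probability.

import math
import re
from collections import Counter

# A small set of common function words. Frequencies of words like these are a
# standard (if crude) stylometric feature, since writers use them out of habit
# rather than because of topic.
FUNCTION_WORDS = [
    "the", "of", "and", "to", "a", "in", "that", "is", "it", "for",
    "on", "with", "as", "but", "be", "at", "by", "this", "not", "or",
]

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def profile(text):
    # Relative frequency of each function word, normalized by text length.
    tokens = tokenize(text)
    counts = Counter(tokens)
    n = max(len(tokens), 1)
    return [counts[w] / n for w in FUNCTION_WORDS]

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical stand-ins for the two writing samples being compared.
sample_a = "I am no longer involved in that project and I cannot discuss it further."
sample_b = "The design of the system is such that the network does not rely on trust."

print("Function-word profile similarity: %.3f" % cosine(profile(sample_a), profile(sample_b)))

Even with good samples, a number like that only means something when set against a reference corpus of comparison authors, which is exactly the data that’s missing here.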

Unfortunately, I’m not aware of any other available writing from Dorian S. Nakamoto, and this one email is not nearly sufficient to make a responsible comparison. I’m not saying that you shouldn’t do an eyeball test or use common sense to compare them. I’m just saying that this is not nearly enough textual information to do a responsible statistical, automated analysis. Of course, it wouldn’t surprise me if someone does one with just that email anyway– or if someone already has!

Posted in Language and Linguistics, Prose Style and Substance, Tech Stuff | 1 Comment

nowhere to turn

Like many people, I was both convinced and discouraged by this recent piece by Adolph Reed. Reed is one of the most eloquent and useful commentators we have today on the left, and a credible and intelligent critic of the kind of obsessive word policing that many now mistake for left-wing politics. Similarly, I was very impressed by David Atkins’s recent piece at Digby’s blog. It eloquently summarizes decades of political and policy failures by American liberals and the Democrats, and comes from someone who works in electoral politics frequently. He voiced many opinions that I’ve expressed in the past, in particular about the folly and failure of procedural equality. What both Reed and Atkins make clear is a crucial point that anyone on the left-hand side of American politics has to reckon with: these developments were the product of choices, intentional political and policy choices by Democratic and liberal leaders. We cannot ascribe these failures to chance.

Inevitably, the pushback began, and inevitably, it came in service to the idea that anyone on the left must support Democrats above all else. This missive came from Michelle Goldberg writing, of course, for the Nation, still putting the “less” in bloodless after all these years. Goldberg accused Reed of nihilism and insisted, as so many do, that all roads lead to the Democrats. Says Goldberg, expressing the Stockholm Syndrome the broad American left has long felt toward the Democrats, “yes, for liberals, there is only one option in an election year, and that is to elect, at whatever cost, whichever Democrat is running.”

It’s the sort of statement that has made me give up the self-identifier of liberal, after many years of using it. And it makes me wonder what left-wing critics have to do, rhetorically, to not be condescended to, if someone as eloquent and respected as Adolph Reed cannot escape the paternalistic sighing of liberals. But what it really made me think about was Ned Lamont.

The 2006 Lamont campaign was everything that establishment Democrats say they want. I’m a Connecticut native. I argued and voted for Ned Lamont in the primary and in the general election. Everything about the primary campaign played out according to the script that people like Goldberg constantly endorse: liberal activists in a liberal state generated enthusiasm for a more liberal candidate against a connected member of the Democratic establishment, someone who was a career centrist (which in international and historical context makes him actually a conservative) who had a record of militarism, coziness with corporations, and collaboration with Republicans generally and President Bush specifically. Lamont is no Eugene Debs. But he offered a substantially superior alternative to Lieberman, in a state that had 18 years to learn why Lieberman had to go. So we went through the official channels, had a primary campaign, and won it.

So: did the establishment Democrats praise us for “electing better Democrats”? Did they rally around Lamont, and use his campaign as a way to move the national party further to the left, in the way they suggest they’re open to in the abstract? No. No, of course not. On the national level and the state level, Democrats and media liberals excoriated Connecticut Democrats for having the gall to reject a made man like Lieberman. If you’d like a perfect distillation of the Very Serious opinions of the Very Serious people who helped lose the Lamont general election campaign, you can find none better than this absurd, self-important piece by Jacob Weisberg. It really runs all the bases, including an illustration of a donkey dressed as a hippie and an invocation of Hubert Humphrey. And it’s perfectly indicative of what it was like to be one of those liberal Connecticut Democrats who campaigned for Lamont– Weisberg is condescending, imperious, and convinced that the people with whom he disagrees are not just wrong but have deeply flawed characters. That was all part of the tenor of Connecticut politics at the time, with establishment Democrats railing against the liberals who had voted for their interests and worked within the system the way people like Goldberg constantly insist we should. Meanwhile, the national Democrats, having pledged to respect the primary process, reneged on that deal once their guy lost, and Democrats like Chuck Schumer stumped vociferously for Lieberman. The usual establishment Connecticut Democrats banded together with Connecticut’s sizable plutocrat faction and elected Lieberman on a third party ticket.

They were wrong, and we were right. We were 100%, unquestionably, unambiguously right. Lieberman was a horrid Senator, both before and especially after the 2006 campaign. Lieberman is genuinely one of the most loathsome politicians of the last 25 years. By the end of his last term, he was an immensely unpopular politician in Connecticut who couldn’t have gotten elected student body president. It turns out that the people who best understood what was in Connecticut’s and the nation’s best interests were Connecticut’s liberals and lefties. It turns out that the billionaires in Greenwich didn’t actually have the best interests of the working man in mind. (Who could have known!) It turns out that someone who had invested great effort in the support of neoconservatives and the War on Terror was not the best representative for an anti-war state in a country turning very quickly against militarism and the two brutal conflicts that were raging. (No one could have predicted!) What’s the counterfactual? The Republican candidate got less than 10% of the vote. Connecticut was not going to elect a Republican senator in 2006. Lamont would have voted for Obamacare too, and if anything, would have been a stronger advocate for it. Lieberman was the lamest of lame ducks by the end of his term, a pariah in his state and vastly less powerful in the Senate than he had been. There was no upside.

We did everything that they say we need to do, and not only did they not support us, they worked hand-in-hand with lifelong Republicans to undercut us. And they did it with the nastiest, most personal form of political insults and degradation. I know. I was there. Have these establishment Democrats come out with mea culpas? Did Jacob Weisberg suddenly start supporting left-wing primary opponents of moderate Democrats? Did the people who constantly tell us to elect better Democrats start some national conversation about how badly the Democrats fucked up? No. Of course not. Advocating for Democratic candidates means never having to say you’re sorry.

Think the Lieberman-Lamont race was some sort of one-off? It isn’t. A few months ago there was a brief conversation about an Elizabeth Warren primary against Hillary, and people absolutely lost their minds about the idea. For a hypothetical left-wing primary challenge, years in advance! And it’s not like Warren is Trotsky. Or take the 2012 presidential campaign. You can check my record: I voted for and supported Obama in 2008. I wrote dozens of blog posts articulating why. I had the stickers and the buttons. By the time of the 2012 election, it had become clear that on a variety of issues, Obama did not support my political convictions. That’s how politics are supposed to work. You’re supposed to vote for people who will advance your interests. After 4 years, I could not in good conscience say that Obama still did. The reaction to my saying so was poisonous, aggressive, and voiced in the strongest terms of personal condemnation. Like many, I was called a traitor, objectively pro-homophobia, and all the usual names by the professional hippie-punching “liberal” bloggers.

The idea of primarying the candidate you don’t like– so often invoked by liberal critics of the left– was, with Obama in 2012, immediately and vociferously dismissed by the usual suspects, who derided the very idea, even if attempted symbolically, as the worst kind of political idealism. This is essentially the dynamic in any prominent race that features a more left-wing challenger to a connected Democrat, thanks to the preemptive surrender to electability arguments. So I am so, so tired of being told by liberals that our solution is to primary people. That solution is only ever embraced in the abstract; it is ridiculed in the particular. Please: until you actually show that you are willing to support an actual primary here on planet Earth, rather than in the purely abstract space, stop telling people to elect better Democrats. It has no connection to reality.

I violated Goldberg’s dictum in the first election I ever voted in, actually. In 2000, I did not vote for Lieberman for Senator in Connecticut. I couldn’t do it; he did not represent my interests. Does that eject me from the ranks of the Very Serious? Even for Lieberman? How bad, exactly, does it have to get before I am allowed to demur? Is there literally no limit, as long as the person wears the blue hat? Well, I’m sorry: I didn’t vote for Joe Lieberman. And I wouldn’t vote for Rahm Emanuel, and I won’t vote for Hillary Clinton, who not only voted for the Iraq war, but was one of its most important and influential architects. For years, we have been told that Ralph Nader is a detestable figure, because in a very convenient analysis for Democrats, he is supposedly responsible for a war that he actively and vociferously opposed. (The comprehensive awfulness of the Gore campaign, even given a hostile media and Florida, is never discussed.) Yet in short order, we will be told that it is our absolute duty to vote for a woman who made it safe for Democrats to support that same war. If your political calculations require you to square that circle, fine. But please, spare me the sanctimony.

I hear people complain that socialists and other leftists don’t engage with politics and thus write themselves out of the political conversation. Can you blame them? The people who push the “Democrats uber alles” line are contemptuous towards activism and non-partisan political organizing. They are dismissive of actual primary campaigns and hateful towards third parties. They claim to think that liberals should hold Democrats’ feet to the fire, but they go crazy if you criticize Obama about Social Security or killing civilians in Yemen. And this is not merely a conflict of opinions; I have never in my life been excoriated with the same anger, derision, and glee as I have been by partisan Democrats. I have said many times: many who count themselves liberals demonstrate far deeper anger and hatred towards left-wing critics of Democrats than they do towards conservatives. That is my no-bullshit experience.

So I don’t know what to do. I mean I have no idea what these people want. They have shut off every conceivable angle of approach for those of us who are to the left of Ben Nelson, and they’ve done so in terms so rhetorically violent that it’s no surprise many young lefties abandon electoral politics forever. I have been arguing with Democrats my entire adult life, as an antiwar activist in the early 2000s, with the Lamont campaign in 2005 and 2006, and online for the last six years. And I have no idea what aggressive Democrats want from me. Like Reed and Atkins, I have never said that electoral politics should be abandoned or that there is no difference between the parties. But it is eminently clear that the Democratic party does not offer a credible alternative to plutocracy, austerity, or perpetual war. So I simply do not know what people want me to do. And since there is seemingly no internal critical mechanism among those people at all, no sense of shame for giving us NSA overreach or drone escalation or the Afghanistan surge or proposed Social Security cuts or Bob Rubin or deregulation, I don’t believe I will ever in my life see Democrats say, “you know, the hippies were right.”

Update: I’m digging through old media on this still, but several readers have pointed out that most prominent Senate Dems in fact supported Lamont, at least as far as public statements go, including Chuck Schumer. Those that didn’t were largely centrist or conservative Dems like Ben Nelson. This is a very important correction for this piece. Some have suggested that several Democrats who initially pledged to back Lamont reneged on that pledge later on in the campaign, but I am not finding evidence of that at present. I do think it’s important to continue to have a discussion about when it is in the best interests of liberals to abandon Democratic candidates, as Alex Pareene recently did in the pages of Salon. I apologize for my error and I appreciate the effort of those who let me know.

Memory can be a funny thing. I suspect that the experience of living through the general election in Connecticut, and my interaction with many state Dems then, colored my perception of national Democrats.

Posted in Uncategorized | 21 Comments

Campbell’s Law, statistical caveats, and standardized testing

A colleague of mine sent along this excellent piece by John Ewing of Math for America. It’s a very necessary and important critique of the use of value added models in education assessment. (It’s also– are you listening, Nick Kristof?– an excellent example of academic writing that is accessible and understandable.) I highly, highly encourage you to read the whole thing; it’s only five and a half pages. I want to pull out a couple pieces of this argument because I really think it’s immensely important for our policy debates.

First is Ewing’s reference to Campbell’s Law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to measure.” This can be seen in a variety of arenas, with The Wire’s depiction of police stat-juking as a perfectly indicative example. In education, the most notorious examples are the widespread cheating observed in school systems like Washington DC during the Michelle Rhee era. What’s essential to understand is that Campbell’s Law is descriptive, not normative: this corruption will happen regardless of the question of whether it should happen. A lot of ed reform types respond to cheating in the face of pressure to improve on standardized tests by groaning about the immorality of our public educators. I certainly don’t condone cheating. But when the ability to remain employed is tied to measures that educators have little ability to actually control, you are directly incentivizing cheating, and in fact often leaving people with the choice between cheating and losing their jobs, with no other alternative. I’m not excusing cheating under those conditions. I’m arguing that it’s inevitable.

Second, Ewing elegantly distills the evidence that value added measures of teacher performance are, in a perverse way, an argument against the assumptions that undergird the ed reform movement: that teacher quality is a static value, which can be validly measured and quantified, and be reliably and consistently separated from student inputs, and that this quality will demonstrate over time which teachers to pay better and which to fire. As Ewing points out, the actual result of many or most value added models is to show wildly inconsistent results for individual teachers and schools. I really can’t overstate this: if we take value added models seriously in the way that ed reformers want us to, the only reasonable conclusion is that static teacher quality does not exist. These models return such wildly divergent results for individual teachers that, if you really believe they accurately reflect teacher quality, you must conclude that teacher quality is so variable from year to year and class to class that it is next to impossible to predict, and thus to fairly reward. Extant value added models are either invalid or unreliable indicators, or teaching quality is not stable.
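To make the fork concrete, here is a toy simulation, with numbers I have simply made up: give every teacher a fixed “true” effect, then add yearly classroom-level noise that is large relative to the spread of true effects, as critics argue is the case in practice. The year-to-year correlation of the resulting estimates comes out low even though underlying quality never changes at all.

import random
import statistics

random.seed(0)

N_TEACHERS = 500
TRUE_SD = 1.0    # spread of the fixed "true" teacher effects (arbitrary units)
NOISE_SD = 2.0   # assumed classroom/measurement noise added each year

true_effect = [random.gauss(0, TRUE_SD) for _ in range(N_TEACHERS)]
year1 = [t + random.gauss(0, NOISE_SD) for t in true_effect]
year2 = [t + random.gauss(0, NOISE_SD) for t in true_effect]

# With these assumed numbers, the expected correlation is roughly
# TRUE_SD**2 / (TRUE_SD**2 + NOISE_SD**2) = 0.2, despite perfectly
# stable underlying quality.
print("Year-to-year correlation of estimates: %.2f" % statistics.correlation(year1, year2))

None of this is Ewing’s analysis; it simply illustrates that an unreliable measure of a stable quantity looks exactly like a reliable measure of an unstable quantity, which is why the divergent results cut against the models either way.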

I say all of this from the perspective of social science. I am not trying to make a political plea here; I am speaking in the most ruthless sense of empirically justifiable policy. It’s a consistent and major frustration of mine that, in these debates, arguments about the soundness (or lack thereof) of teaching assessment regimes are so often dismissed as politicized defenses of the status quo. Ewing is a mathematician making a mathematician’s argument, and anyone defending value added models must address his concerns.

Finally and most importantly, there’s Ewing’s discussion of the way in which statistical and empirical caveats get sanded away over time. This is a constant frustration, and I think it’s important in light of recent discussions about the mutual mistrust and lack of understanding between academics and journalists. Ewing rightfully goes after the Los Angeles Times for its notorious publication of LA teacher value added metrics. As Ewing points out, the reporters involved waved briefly at the many caveats and limitations that the developers of these metrics include, then completely ignored those caveats and limitations. This is a dynamic I’ve found in the popular media, when it comes to empirical research, again and again. It’s a facet of simply reading abstracts or press releases without going into the actual methodology, and it’s very destructive.

So take the Collegiate Learning Assessment (CLA+), the subject of my dissertation. I have a complicated relationship to the test. I have a great many criticisms of this kind of instrument in general, and I think that there are specific dangers with this test. At the same time, the procedures and documentation of the CLA+ are admirable in the way that they admit limitation, argue against a high-stakes approach to using the test, and are scrupulous in saying that the mechanism is specifically designed to measure education across instructors and departments, and cannot be used to assess between-department differences. This is a point made again and again in the supporting documentation of the CLA+; the entire purpose of its mechanism is to show aspects of collegiate learning working in concert, and thus it cannot be used to assess the instructional quality of particular majors or instructors.

For these reasons among others, I see the CLA+ as a potentially better test than some of the proposed alternatives. As one tool among others, it could demonstrate the actual intellectual growth that I deeply believe goes on at college campuses, and enable students to show this growth to employers in a brutal labor market. But in order for the test to actually function the way its creators intended, actual, on-the-ground implementation at universities requires administrators to understand these caveats and limitations and draw conclusions accordingly. As someone who is observing such an implementation in real time, I can say it’s not at all clear that the people pushing for this test here at Purdue really understand these limitations or take them seriously. You cannot hope to draw real-world value from any test or empirical research without scrupulously understanding and taking seriously the limitations that the developers or researchers announce. Otherwise, you end up making a significant investment in terms of resources and effort and getting a lot of statistical noise back for your trouble.

What has been remarkable for me, as I have gradually acquired an education as a quantitative empirical researcher, is how more and more knowledge leads to greater skepticism, not less. This is not research nihilism; I am a researcher, after all, and I believe that we can learn many things about the world. But I cannot tell you how easy it is for these metrics to go wrong. They are so manipulable, so subject to corruption, and so easily misinterpreted if you aren’t rigorously careful. So I genuinely beg (beg) the popularizers and explainers out there: take these caveats and limitations very seriously. They are not the fine print; they are, often, the most essential part of the text. The work of explaining complex research to the public is noble and essential in our society. I am a bit cynical about the ability of the web to make prominent figures in the media more accessible, at this stage in the internet’s development, but I maintain a streak of hope in that regard. So I use my limited platform to ask people like Matt Yglesias and Ezra Klein and Dylan Matthews and those at Wonkbook and The Atlantic and Slate and The New York Times to please, please, please be skeptical and rigorous readers of this research. Used carefully, empirical research can improve our policy. But it’s always easier to simply become a nation of stats jukers, moving the metrics to move the metrics and avoiding uncomfortable conclusions.

Posted in Education, Popular & Digital Writing, Prose Style and Substance | 4 Comments

academic writing: who are we talking about, here?

So there’s been another spate of journalists complaining about academics lately, which is a cyclical exercise. This time, they’re really working the angle where they claim to show respect to academics in the abstract, but use this ostensible respect as a way to heap more disrespect onto actual academics. The latest flavor: how come academics use those big words? Why aren’t academics working harder to explain their work to journalists? Why aren’t academics just like journalists? Inquiring minds want to know. So under the guise of trying to get more engagement from academics, you get Nick Kristof yearning for imaginary academics by deriding actual academics, Josh Marshall dredging up his grad school stories from decades ago to complain about actual academics, and so on. I think Corey Robin did a good job of replying to Kristof, but really, I think the questions essentially answer themselves: they’re expressed with so much derision, it’s no wonder that many academics retreat from contact with journalists. Why put up with the abuse? The Dish’s headline for their post aggregating this stuff was “Can the Ivory Tower Be Stormed?” Sounds like an invitation for a mutually respectful interchange to me!

Now it happens that I think tons of academics are in fact fantastic communicators and great writers. I thought about mentioning some here, but then I realized the pointlessness of that sort of exercise: that academic writing is deliberately obscure and incomprehensible is one of those things that journalists just know, so they don’t feel a need to mention any actual, contemporary academic writers who are willfully obscure. This whole extended conversation has featured a really bizarre lack of specificity and evidence. Which academics are deliberately obscure? How so? In what articles or books? One of the frustrating things about the way the academy is discussed in the media is the existence of things that “everybody knows” that aren’t ever really ascribed to individual people, making these points essentially irrefutable. That college professors hate to teach undergraduates is the most common and the most toxic. In my experience, the average college professor loves to teach and is deeply committed to his or her undergrads. But when there’s no evidence cited, there’s no possibility for rebuttal. Same with the charge of deliberate obscurity: nobody bothers to cite anybody, with the possible exception of some postmodernists who have been out of fashion for 20 years, so there’s no way to push back against the claim.

I also think that what frequently gets derided by journalists as jargon or obscurity are necessary specialized vocabularies that have been developed to reflect the incredible complexity of our world. I read some of these pieces yesterday in the computer lab at the Oral English Proficiency Program where I work. As I did so, some colleagues of mine were analyzing the waveform of one of their research subjects. They had to utilize a complex and specialized vocabulary to meaningfully address the issues of phonetics, phonology, audiology, syntax, and psycholinguistics that their particular task required. If they had been forced to have the conversation in a plain style, it would have been far less efficient and far less precise. Of course, we should be able and willing to discuss complex ideas in simpler terms and be ready to engage the public with our work. But we also, in our journals and at our conferences, have to be able to utilize a vocabulary of sufficient complexity to effectively and efficiently communicate intricate ideas.

If journalists actually want to know why there’s not a better interface between academics and the media, they should start by engaging in self-criticism, which would be the best way to create positive change. They would recognize that many within our media have enthusiastically embraced the anti-intellectual, anti-academic aspects of contemporary American culture. They would admit that academics rarely reach out to journalists because journalists are, as a class, frequently hostile to academics. They would consider how much resentment there is on the part of the media over the tendency of academics to privilege slow, careful knowledge-making over the immediacy and lack of corroboration that are the habits of journalists. They would admit that many journalists jealously guard their role as creators of knowledge, and don’t like sharing that task with academics. And they could ask if perhaps the routine trumpeting of the supposed uselessness of academics is in fact a self-fulfilling prophecy, that the reflexive tendency of the media to see academics as irrelevant actually contributes to the problem. Instead of writing thousands of words about how academics are failing journalists, maybe journalists could spend five minutes considering how they’ve failed academics.

In my tiny, low-traffic way here, I’m trying to present some academic ideas for a popular audience, at least some of the time. I have a couple of pieces in progress here that do just that. But if you start from the assumption that I’m an irrelevant figure as a grad student and academic, then there’s no possibility that you’ll ever learn from what I write. Maybe it’s time for the other half of this relationship to engage in a little accountability.

Posted in Education, Prose Style and Substance | 4 Comments

getting past emotional truth

This post by Ann Larson discusses, briefly, the rise of the “rage of the adjunct” essay genre. I’m glad that it does, since the post itself is indicative of my frustrated, conflicted relationship with that genre. The genre is profoundly necessary, and it has finally generated some much-needed media attention for the plight of adjuncts, which is a moral stain on the contemporary university. But there are problems with the genre, and they are exemplified in Larson’s post.

First, there’s the tenuous grasp on the factual accuracy of the arguments involved. I’ve said, at length and repeatedly, that the general argument about the fundamental brokenness of the academic job market is entirely correct. I’ve also said that there is some distortion and looseness with the facts in these essays that merely makes it easier for those in power to ignore them. So look at Larson’s discussion of the rhetoric and composition job market, relative to the literature market. It happens that I frequently counsel people in my field that composition is by no means a safe haven, that a rhet/comp degree is not a guarantee of anything, that people can and do graduate into unemployment from these programs all the time. But this is insufficient for Larson, so in her piece she writes

Composition does not defy our rotten economic system; it exemplifies it.

The literature on contingency leaves no room for doubt. A 2011 special issue of College English directly addressed the issue of Composition exceptionalism. “[F]aculty teaching courses in composition,” wrote Mike Palmquist and Susan Doe, “have been affected most by [higher education's] growing reliance on contingent faculty. Nearly 70 percent of all composition courses . . . are now taught by faculty in contingent positions.”

This is frustratingly uninformative. Clearly, what’s relevant is not merely what percentage of composition courses writ large are taught by adjuncts, but what percentage of rhet/comp PhDs get long-term contracts and TT jobs relative to literature PhDs. To ignore that question is misleading. As Larson well knows, the numbers she’s discussing here include the many, many literature PhDs who graduate without jobs. She’s lumping together groups that this discussion needs to keep distinct. How many composition PhDs are unemployed one or three or five years after graduation? I’m sure that it’s far too high. I’m also sure that it’s far lower than the corresponding share of literature PhDs, which anyone who is exposed to the labor market of English can tell you. That isn’t an argument that compositionists are safe– they aren’t. It isn’t an argument that the labor market is healthy– it’s the opposite. It doesn’t change the fact that far too many in both literature and composition will go on to exploitative labor situations. But it is very important context for thinking through these issues clearly, and I know some would dismiss Larson’s piece out of hand for not talking about it.

This is the problem with speaking the “emotional truth,” a common invocation among adjunct essayists and part of a set of rhetorically counterproductive strategies that, I’m sorry to say, creep into this genre regularly. The emotional truth is invoked on the ground in the day-to-day discussions I have with adjuncts. People will make claims that I know to be factually inaccurate, or will advance ideas that I find politically misguided, and I will push back. When confronted, they will say something like “I am entitled to my anger,” leaping back and forth from a position of making a dispassionate economic analysis to a position of emotional truth that I am therefore, in their minds, obliged not to contradict. There are all sorts of ways bad arguments and misleading information get excused in these debates– “it’s agitprop! it’s not intended to be factual! it’s meant first to provoke!”– and I think each of these, while certainly understandable, is ultimately unproductive. And they have made this argumentative space one of bullying and rejection. I know young academics, including TT faculty, who would potentially be allies in this cause, but because these essays always gravitate towards anger over reconciliation, they decide instead to ignore the issue altogether. It’s simply too fraught to engage.

Most academics are people who think that facts matter, and so when you are loose with the facts, you make it harder to get their support. For example: the claim that 76% of college instructors are adjuncts. The actual figure is 41%, as Matt Bruenig has demonstrated. That’s far too high! Those people deserve steady paychecks, manageable teaching loads, long-term contracts, and benefits. 41% is the opposite of an acceptable condition. But look: that’s the truth. Tell the truth. Not the “emotional truth.” The old-fashioned kind of truth.

In her essay, Larson contrasts the current order with a very common false history of the academy. This is a myth that permeates the study of literature: English was once the noble study of literature and the human ideals contained within, but has since been devalued and debased thanks to the teaching of composition. Literature was among the most prized of the liberal arts, as it helped students reach for the deeper, more meaningful values that make human life worth living, apart from their narrow economic interests. But now, thanks to the rise of the neoliberal university, English has been reduced to the study of the practical, service discipline of teaching writing, which is a betrayal of traditional values and the particular instrument for casualizing the English professoriate.

None of that is true. The equivalence of English with the study of literature is a historical curio, almost entirely a 20th century phenomenon. The kind of close examination of texts for their symbolic and aesthetic value that we now think of as the study of literature was traditionally undertaken in the study of classics and religion, neither of which was typically conducted in the vernacular. The study of rhetoric and argumentation goes back to the very foundation of the modern research university, and is discussed as an essential aspect of liberal education by John Locke, Francis Bacon, Adam Smith, and John Milton, among others. Meanwhile, the study of great texts was directly and uncontroversially undertaken with the explicit purpose of inculcating traditional values. The association between the liberal arts and humanities and the postmodern rejection of normativity is a very new phenomenon. For most of their history, the humanities and liberal arts have been based on the exact opposite philosophy from their current non-normative assumptions. They were intended to pass on values– aristocratic values, Christian values, hegemonic values. That was their purpose.

This made sense, given the historical purpose of the university, and this is the most important point: the university of the past– the one which Larson thinks was once alive and is now dead– never existed. Larson contrasts the present with a dead past, but it’s a figment of her imagination. It’s remarkable to me how often intelligent people who have historical chops repeat this false notion of a prelapsarian American university where knowledge was pursued for its own sake. When Larson says that the university is dead, I want to ask: when was her version of the university alive? It’s easy to find people lamenting the decline of knowledge for its own sake, but harder to find documented evidence from the past that shows that the university was ever involved in its production. To whatever degree that was once true, it was true because of the initial purpose of the American university: to train and perpetuate an elite aristocracy. If colleges were places where practicality never entered into the equation, it was because their students had nothing to fear from the job market. They were already wealthy and destined for more wealth.

Now if this is a discussion of “pure research,” then of course I’m in favor of it, and like most humanists I believe that human inquiry has ended if all of it must be connected to immediate material or pecuniary gain. At the same time, I am perpetually confused by the notion that it’s a failure if humanistic knowledge is also demonstrated to have practical value. I think that the study of literature is a noble and necessary pursuit, one that does indeed have the potential to access deeper values and more expansive human good. At the same time, I think there’s lots of practicality in that study too, and I don’t see why anyone who is interested in the teaching of literature would want to deny that. Our pedagogy and our research can and must be both about generating knowledge for its own sake and for the practical good of students and humankind. That always seemed straightforward to me.

Finally, arguing that the study of composition is the wedge through which administrators have rendered English a discipline of adjuncts seems strange, given the relative (only relative) health of the rhet/comp job market. Rhetoric and composition, in some places, functions as a defender of literature because the programs help secure the funding necessary to keep literature programs viable. I don’t mistake that for a sustainable solution, nor do I doubt that there are rhet/comp scholars and faculty who are partially to blame for the current labor market. (I don’t excuse faculty in general, although I do insist on pointing out that the greater part of the blame lies with administrators and state legislators.) But I also think it’s relevant that, in demonstrating the ways that English is important for the (yes) practical needs of students, rhetoric and composition can help English departments and literature scholars. There’s nothing dishonest about using the university’s focus on capitalist goods to support and sustain humanistic research and careers. On the contrary, it strikes me as just sensible.

Worst of all is Larson’s advice, such as it is. She writes, “As public institutions are dismantled around us, those who identify as Compositionists should take the radical step of refusing to apply our knowledge and expertise in the corrupt institutions as they are.”

As I’ve said in the past: this is Underpants Gnome theory. Refuse… and then what? I’ll tell you what: if the good people who care about adjunct labor, cultural studies, and the political economy of the university drop out, then the university will simply replace them with people who don’t care about those things. It would be a great victory for conservative academics and those who really are collaborators, but not much use for current adjuncts. Larson mentions the UIC protests, one of the most invigorating developments in the university in the last several years, for me anyway. But if the protesters had taken her advice and refused to participate in their jobs, there would have been no one inside those corrupt institutions to fight back. Does Larson really think we can escape those institutions entirely and rebuild a new university system from scratch? I find that deeply unpersuasive. We need to do our best to reform the current institutions, precisely because that is where our adjuncts are working and need the help.

There are times when I feel genuine hope about these issues. The UIC strike is only a small beginning, but it demonstrates the ways in which permanent faculty and contingent labor can work together. And in making the essential argument that labor conditions and undergraduate education quality are intimately linked, the strikers and their supporters are taking up the necessary argument to create change. When Rebecca Schuman makes her brilliant suggestion to reflect adjunct labor in the US News and World Report rankings, or JD Hoff points out that the deck is stacked against adjuncts doing quality teaching, I feel energized. The universities care about the opinion of parents and about what potential students care about, so if we can use journalism and cultural commentary to demonstrate the ways in which adjunct labor hurts students, we can make real headway. But when I read posts like Larson’s, or when I engage with people who insist that I am a collaborator if I refuse to engage in useless emoting, I feel real despair.

Crisis necessitates that we respond carefully and intelligently. We cannot emote our way out of this problem. We can make real and substantial progress, but we need to tell the truth, and we need to gather what allies we have, not eject them because they haven’t ritualistically quit their jobs in a way that would only make things worse for everyone. Save the emotional truth for your diary. We need to organize, we need to unionize, and we need to strike.

Posted in Education | 13 Comments

10 addenda to 10 Ways to Share

Austin Kleon’s 10 Ways to Share Your Work and Get Discovered

Offered in sympathy and solidarity with makers, artists, and strivers everywhere.

1. You don’t need to be a genius… but it don’t hurt.

2. Think process, not product… but if you don’t end up with a product, you’ve got nothing to share.

3. Share something small every day… but make sure it’s something worth sharing.

4. Open up your cabinet of curiosities… but remember other people sometimes have fuller cabinets, and more interesting too.

5. Tell good stories… if you happen to have good stories to tell.

6. Teach what you know… but you will probably end up teaching precisely what you feel costs you the least in giving away.

7. Don’t turn into human spam… but if you don’t get enough likes and favorites, you’ll cease to exist.

8. Learn to take a punch… but also learn how to deal when they don’t bother to throw any.

9. Sell out… in the unlikely event you get the chance.

10. Stick around… but if it’s hurting you not to quit, then quit.

You’ve got to reconcile so many different things. Self-belief may be a prerequisite for success, but if so it is necessary but not sufficient. Most people who make it may be legitimately talented, but neither legitimacy nor talent is fairly distributed. Bad work can only get you so far, but the history of creation is a history of great work that was neglected, forgotten, and ignored. There are people who are deserving who will never make it; there are people who will make it who are not deserving. So if you’ve got work to do you should do the work, and I hope that you make it, but at some point the important work becomes the work of forgiving yourself for not making it. That’s just coming from me as a guy, from me as a human person, someone with no expertise other than having observed people who have both made it and people who have failed to do so, and seen the ways in which both are treated by their culture and themselves.

In the future, what we will be forced to talk about, more than anything else, is a reality that is increasingly hard to deny: that human beings are not substantially in control of the outcomes of their own lives. That fight will affect economics, politics, crime, education, and the future of the human race. And it will, eventually, affect how we talk about art. Until then, take every risk you find sensible and work as hard as you can, but understand that there will always be money in telling people they can make it and there will never be money in telling them they probably can’t. I wish you only the best.

Posted in Literature, Meta, Prose Style and Substance, The Discipline | 1 Comment

it’s good to know who you are

You are aware of the rules.

You want people to think you’re smart, but you never want to be seen as self-consciously intellectual. You benefited from great teachers your whole life but you represent yourself as self-taught. You loved college but you now argue that college isn’t worth it. You might have a graduate degree, but never a doctorate, and you now represent your master’s as some embarrassing thing you did in the past, its contribution to your earning power notwithstanding. You want to be known as well read, maybe even a reader, but not a reader reader. You want to have had exposure to “the classics” but not to be the kind of person who reads them now. You don’t mind being ignorant but you have stress dreams about appearing affected. You run all of your conspicuous cultural consumption through an exquisitely crafted defensive process, a mental machine that you are only partially in control of which eliminates from your cultural repertoire any commitments that risk being seen as showy, willfully obscure, or academic.

You are to be the kind of person who “overthinks it,” enjoying the protection of the preemptive irrelevance that term contains. You use the traditional literary analysis tools of symbolic reading and references, but you never apply them to actual literature, only pop culture– movies, or even better, TV. You will have a long conversation about the Christ allegory in Breaking Bad but would never dream of doing the same with “A Hunger Artist.” You rage against a high culture that no longer exists, but you are contemptuous towards anyone who has not absorbed a vast and subtle range of attitudes and opinions towards the middlebrow media you treat as scripture. You call True Detective operatic but disdain opera; you call violence in the latest Tarantino movie balletic but despise ballet. You are quietly patronizing towards people who enjoy CBS sitcoms and procedural dramas, but you are openly enraged by the notion that your cultural consumption should ever involve hard work. You complain that cultural disrespect towards video games and comic books narrows the realm of the possible and then insist that no one could ever read Gertrude Stein for pleasure.

More tragically, you want your life to have narrative and meaning, but you are so desperately afraid of appearing pretentious– the cardinal sin– that you have systematically eliminated from your life those aspects of it which were once most moving or treasured to you, to be rescued only, perhaps, when you become a parent, and can pawn off the work of being yourself as the work of making someone else.

We have entered a phase of the life of the mind and of culture that is the intellectual equivalent of “spiritual but not religious,” a time where everyone wants the social privileges that accrue to the smart but where no one wants to risk the social punishments that fall on intellectuals. Everyone wants to be sophisticated; nobody wants to be perceived as a sophisticate. They want to be cultured but never to be associated with what was once called “culture.” We lament ignorance but we are utterly corrosive towards the work of becoming smart. We love information; we hate to be taught. People have only those commitments that they can shroud in so many layers of plausible deniability that they can be discarded, when necessary, without real cost.

I can take all of it but the hypocrisy. I know that I’m a snob. I don’t think much of many people. I feel that way because I exist in the world, and I see the way that stupid people ruin the world every day. But I can deal with the choice to be ignorant. What I can’t deal with is the people who simultaneously deny the legitimacy of expertise while writing for publications that get millions of hits, who make feints towards egalitarianism while they are involved in the incredibly inegalitarian world of online writing, who are able to be seen waxing populist precisely because they are in decidedly non-populist environs. Don’t tell me that we’re all the same. You have to decide if you think being smart is better than being stupid, if you think education is better than ignorance. Then just live that and leave all of the bullshit kabuki out of it. If I have read you, you are no everyman. If I have read you writing about art and culture, congratulations, you are pretentious, even if you have assiduously restricted your analysis to topics covered by the AV Club and have mercilessly trimmed anything that might appear to take art, yourself, or your audience seriously. You are already pretentious, so please, drop the pretense. I cannot stand the fake, inconsistent populism. I can’t take it.

Posted in Uncategorized | 7 Comments

why I’m a crabby patty about AI and cognitive science

I’m sorry if I am a grump about artificial intelligence. It just happens to be a subject on which our media frequently appears both insufficiently educated and unwilling to learn. My frustration stems from a basic category error, which can be boiled down to this:

My cellphone is much better than my cellphone five years ago, ergo artificial intelligence/the Singularity/techno-utopia is right around the corner. 

If that’s an exaggeration, it’s not much of one. Now it happens that this is a generally unhelpful way to think about technology. Technological progress is constant, but it is stunning how unevenly distributed it is. This leads to complaints of the type “they can put a man on the moon but they can’t make a deodorant that lasts past 2 PM.” This crops up in specific fields all the time. There’s been a well-documented problem in personal electronics where battery development has not kept pace with development in processors, leading to lower effective usage time thanks to the increased power requirements of faster processors. But you can extend this observation in all manner of directions, which is why futurism from the past is often so funny.

This kind of thinking is especially unhelpful in the realm of artificial intelligence because it so thoroughly misunderstands the problem. The problem with AI is that we don’t really know what the problem is, or agree on what success would look like. With your cellphone (or any number of similar rapidly-improving technologies) we are perfectly aware of what constitutes success, and we know pretty well how to improve them. With AI, defining the questions remains a major task, and what would count as success remains a matter of real disagreement. That is fundamentally different from issues like increasing processor power, squeezing more pixels onto a screen, or speeding up wireless internet. Failing to see that difference is massively unhelpful.

If people want to reflect meaningfully on this issue, they should start with the central controversy in artificial intelligence: probabilistic vs. cognitive models of intelligence. I happen to have sitting around an outline and research materials for an article I’d like to write about these topics. The Noam Chomsky – Peter Norvig argument got press recently, and I’m glad it did, but I think it’s essential to say: this fundamental argument goes back 50 years, to when Chomsky was first becoming the dominant voice in linguistics and cognitive science, and engaged in his initial assault on corpus linguistics. And it goes back to an even older and deeper question about what constitutes scientific knowledge. I’d love to write about these issues at great length and with rigorous research, but it would be a major investment of effort and time, so I would want to do it for a publication other than here, and unfortunately, none of the places I pitched it to got back to me. (Which does not surprise me at all, of course.) I hope to someday write it. But let me give you just the basic contours of the problem.

The initial project of artificial intelligence was to create machines capable of substantially approximating human thought. This had advantages from both a pure science standpoint and an engineering standpoint; it was important to know how the human brain actually functions because the purpose of science is to better understand the world, but it was also important because we know that there are a host of tasks that human brains perform far better than any extant machine, and it is therefore in our best interest to learn how human brains think so that we can apply those techniques to the computerized domain. What we need to find out– and what we have made staggeringly little progress in finding out– is how the human brain receives information, how it interprets information, how it stores information, and how it retrieves information. I would consider those minimal tasks for cognitive science, and if the purpose of AI is to approximate human cognitive function, necessary prerequisites for achieving it.

In contrast, you have the Google/Big Data/Bayesian alternative. This is a probabilistic model where human cognitive functions are not understood and then replicated in terms of inputs and outputs, but are rather approximated through massive statistical models, usually involving naive Bayesian classifiers. This is the model through which essentially every recommendation engine, translation service, natural language processing system, and similar recent technology works. Whether you think these technologies are successes or failures likely depends on your point of view. I would argue that what Google Translate does is very impressive from a technical standpoint. I would also argue that as far as actually fulfilling its intended function, Google Translate is laughably bad, and all the people who say that you can use it for real-world communication have never actually tried to use it for that function. And there are some very smart people who will tell you it’s not improving. One of the great questions for the decade ahead is whether there is a plateau effect in many of these Bayesian models– a point at which exponentially increasing the available data in the systems ceases to result in meaningful improvements. Regardless of your view on this or similar technologies, it’s essential that anyone talking about AI reflect understanding of this divide, what the controversies are regarding it, who the players are, and why they argue the way they argue.
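
To make the divide concrete, here is a minimal sketch– my own, purely illustrative, with an invented class name and toy data, not anyone’s production system– of the kind of naive Bayes text classifier that sits underneath this probabilistic approach. Notice that there is no theory of mind anywhere in it: just word counts, Bayes’ rule, and labeled data.

    import math
    from collections import Counter, defaultdict

    class NaiveBayes:
        def __init__(self):
            self.doc_counts = Counter()               # documents seen per label
            self.word_counts = defaultdict(Counter)   # word frequencies per label
            self.vocab = set()

        def train(self, labeled_docs):
            # labeled_docs: iterable of (list_of_words, label) pairs
            for words, label in labeled_docs:
                self.doc_counts[label] += 1
                self.word_counts[label].update(words)
                self.vocab.update(words)

        def predict(self, words):
            total_docs = sum(self.doc_counts.values())
            best, best_score = None, float("-inf")
            for label in self.doc_counts:
                score = math.log(self.doc_counts[label] / total_docs)   # log prior
                total_words = sum(self.word_counts[label].values())
                for w in words:
                    # add-one smoothing so unseen words don't zero everything out
                    count = self.word_counts[label][w] + 1
                    score += math.log(count / (total_words + len(self.vocab)))
                if score > best_score:
                    best, best_score = label, score
            return best

    clf = NaiveBayes()
    clf.train([("great movie loved it".split(), "pos"),
               ("terrible movie hated it".split(), "neg")])
    print(clf.predict("loved this great film".split()))   # prints "pos"

Scaled up to web-sized corpora and far fancier statistics, this sort of thing works impressively well at many tasks. But nothing in it even pretends to be an account of how a human being understands a sentence, which is the whole point of the controversy.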

There are many people who are not interested in the old school vision of AI. They think that what we should actually care about is using computers to perform useful tasks and that we shouldn’t worry about the way human thinking works or getting computers to model it. That’s a respectable position. I think in its stronger form, it’s essentially declaring defeat in the pursuit of science and its purpose, but there are a lot of dedicated, well-connected, well-respected people who simply want to build useful systems and leave cognitive science to others. (That’s where the money is, for obvious reasons.) But even for those who are task-oriented, there are profound reasons to want to know how the human brain works. Because what some very smart people will tell you is that the fancy Big Data applications that rely on these Bayesian probability models are in fact incredibly crude compared to animal intelligence, and require a tremendous amount of calibration and verification by human beings behind the scenes. Does Amazon really know what you like? Are its product recommendations very helpful? Are they much better today than they were five years ago?

In this wonderful profile, Doug Hofstadter expresses the pessimistic view of AI very well. AI of the old fashioned school has made so little progress because cognitive science has made so little progress. I really don’t think the average person understands just how little we understand about the cognitive process, or just how stuck we are in investigating it. I constantly talk with people who assume that neuroscience is already solving these mysteries. But that’s the dog that hasn’t barked. Neuroscience has given us an incredibly sophisticated picture of the anatomy of the brain. It has done remarkably little to tell us about the cognitive process of the brain. In a very real way, we’re still stuck with the same crude Hebbian associationism that we have been for 50 years. Randy Gallistel (who, in my estimation, is simply the guy when it comes to this discussion) analogizes it to a computer scientist looking at the parts of a computer. The computer scientist knows what the processor does, what the RAM does, what the hard drive does, but only because he knows the computational process. He knows the base-2 processing system of a CPU. He knows how it encodes and decodes information. He knows how the parts work together to make the input-output system work. The brain? We still have almost no idea, and looking at the parts is not working. It’s great that people are doing all of these studies looking at how the brain lights up in an fMRI scanner when exposed to different inputs, but the actual understanding that has stemmed from this research is limited.
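
A trivial way to feel the force of Gallistel’s analogy– a toy illustration of my own, not his– is to notice that the very same bits in a computer’s memory mean entirely different things depending on the encoding scheme you assume. You can stare at the hardware state all you like; without knowing the code, you don’t know what is being represented.

    import struct

    raw = b"\x00\x00\x80\x3f"               # the same four bytes sitting in "memory"

    as_int = struct.unpack("<i", raw)[0]    # read as a little-endian 32-bit integer
    as_float = struct.unpack("<f", raw)[0]  # read as a little-endian 32-bit float

    print(as_int)     # 1065353216
    print(as_float)   # 1.0

Same physical state, two entirely different contents. Neuroscience, roughly speaking, is in the position of someone who can see the bytes but doesn’t yet know what scheme the brain uses to encode anything at all.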

Now people have a variety of ways to dismiss these issues. For example, there’s the notion of intelligence as an “emergent phenomenon.” That is, we don’t really need to understand the computational system of the brain because intelligence/consciousness/whatever is an “emergent phenomenon” that somehow arises from the process of thinking. I promise: anyone telling you something is an emergent property is trying to distract you. Calling intelligence an emergent property is a way of saying “I don’t really know what’s happening here, and I don’t really know where it’s happening, so I’m going to call it emergent.” It’s a profoundly unscientific argument. Next is the claim that we only need to build very basic AI; once we have a rudimentary AI system, we can tell that system to improve itself, and presto! Singularity achieved! But this is asserted without a clear story of how it would actually work. Computers, for all of the ways in which they can iterate prescribed functions, still rely very heavily on the directives of human programmers. What would the programming look like to tell this rudimentary artificial intelligence to improve itself? If we knew that, we’d already have solved the first problem. And we have no idea how such a system would actually work, or how well. This notion is often expressed with a kind of religious faith that I find disturbing.

C. elegans is a nematode, a microscopic worm. It’s got 302 neurons. We know everything about it. We know everything about its anatomy. We know everything about its genome. We know everything about its neurology. We can perfectly control its environment. And we have no ability to predict its behavior. We simply do not know how its brain works. But you can’t blame the people studying it; so much of the money and attention is sucked up by probabilistic approaches to cognitive science and artificial intelligence that there is a real lack of manpower and resources for solving a set of questions that are thousands of years old. You and me? We’ve got 80 billion neurons, and we don’t know what they’re really up to.

Now read this post from Matt Yglesias. I just chose it as an indicative example; it’s pretty typical of the ways in which this discussion happens in our media. Does it reflect on any of this controversy and difficulty? It does not. Now maybe Yglesias is perfectly educated on these issues. He’s a bright guy. But there’s no indication that he’s interacting with the actual question of AI as it exists now. He’s just giving the typical “throw some more processing power at it!” response. And the most important point is– and I’m going to italicize and bold it because it’s so important– the current lack of progress in artificial intelligence is not a problem of insufficient processing power. Talking about progress in artificial intelligence by talking about increasing processor power is simply a non sequitur. If we knew the problems to be solved by more powerful processors, we’d already have solved some of the central questions! It’s so, so frustrating.

I am but a humble applied linguist. I understand most of this on the level of a dedicated amateur, and on a deeper level in some specific applications that I research, like latent semantic analysis. I’m not claiming expertise. And I think there is absolutely a way to be a responsible optimist when it comes to artificial intelligence and cognitive science. I am not at all the type to say “computers will never be able to do X.” That’s a bad bet. But many people believe we’re getting close to Data from Star Trek right now, and that’s just so far from the reality. Journalists and writers have got to engage with the actual content. Saying “hey technology keeps getting better so skeptics are wrong” only deepens our collective ignorance– and is even more unhelpful in the context of a media that has abandoned any pretense to prudence or care when it comes to technology, a media that is addicted to techno-hype.

OK, so my short version is almost 2000 words. It’s a sickness.

Posted in Language and Linguistics, Tech Stuff | 38 Comments

black armbands for the Taliban

So it seems to me to be straightforwardly and superficially the case that the Olympics, while a lot of fun, are a corrupt nightmare as an institution. Their apolitical stance, meanwhile, is absurd on its face; everything about their presentation is drenched in competing nationalism, and as usual, the denial that an event or expression is political is just a way to play politics. Like most people, I’ve been inspired not just by the implicit politics of competitors like Jesse Owens but by the explicit protest of Tommie Smith and John Carlos. I think an international stage like the Olympics is a great opportunity to demonstrate against repressive governments. All of that said, I think the current hand-wringing about the IOC’s denial of Ukrainian athletes’ request to wear black armbands, in support of the protesters involved in the brutal conflict in Kiev, demonstrates the Western media’s inconsistent and self-interested support for political protest.

To the degree that the support of a powerless American like me matters, I support the protesters in Kiev, although I have necessarily limited ability to really understand them. But that’s just to say that I have the politics I do. If the right to protest is merely dependent on the whims of white social progressives in the western media, then it is no right at all, and instead merely another way in which the west assumes its right to dictate political right and wrong. So take the thoughts of Travis Waldron of ThinkProgress, an organization which I would nominate as perfectly typical of general liberalish sentiment.

He writes,

The IOC gains plenty out of the perception that its Games foster global harmony and understanding, which is why it has no interest in making that perception a reality. Reality is ugly. In the world where the Olympics don’t naturally cure all ills and problems, living up to the Olympic Charter requires action or at least the accommodation of it. It means allowing “Free Tibet” posters in Beijing and LGBT rights protests in Russia. It requires standing up to governments that jail dissenters and kill protesters. It means accepting more John Carloses and Tommie Smiths, more “Under Protest” banners, and more questions about appropriate costs, security policies, and basic rights.

So I ask: would Waldron support the right of athletes from Afghanistan to wear black armbands in support of the Taliban, to memorialize their dead? If not, why not? After all, the supporters of the Taliban see them as an organic resistance movement fighting against a corrupt and undemocratic government that was enforced from above by imperialists. I have no idea how popular the Taliban are in Afghanistan, although an armed resistance movement could not have persisted for decades, including the past 12 years against the might of the American military, without some popular support. It happens that I also don’t like the Taliban, at all. But both of these points are irrelevant, if we’re really making a claim about the principle of a right to protest by all peoples. Waldron maintains that the armbands would be an apolitical memorial to everyone killed in the protests, but I doubt even he would defend defining such a display as really apolitical, if pressed. You can’t sneak out of these issues that way.

Would Waldron support, say, the right of athletes from the Palestinian territories to commemorate the Munich attacks of the 1972 games? I doubt it! Would the IOC ever tolerate a commemoration of the Viet Cong’s resistance against destructive occupying forces from France and the United States? I doubt it! Waldron name checks Free Tibet protesters, but doesn’t consider Chinese counter-protests, which would be inevitable given that the occupation of Tibet is very popular in China. I think that’s a mistake, but again, my opinion is irrelevant. Waldron mentions LGBT protests in Russia, but says nothing about Ugandans protesting in countries where gay marriage is legal. He’s presented an argument about the principle of a right to protest and yet only invoked issues that his American readership is likely to agree with.

Now if the idea is just “I’m Travis Waldron [or any of the other people who are mad about this] and I like the Kiev protesters and not the Putin government,” then fine. Just say that, then. Say that you support the right of protest when you agree with the protesters and not on other occasions. But when you claim to argue from principle, but only actually support that principle when it supports the interests of the United States government, you’re full of it. Just as the US government was full of it when it claimed to support democratic elections in the occupied territories and then dropped that support the minute elections resulted in a majority for Hamas. Be a part of the ample American propaganda effort, or argue for the neutral principles of free protest, but don’t do one and call it the other.

Posted in Popular & Digital Writing, Rhetoric | 1 Comment

have you heard the Good News?

People get on me for being a curmudgeon about techno-utopianism. And I don’t doubt that there are lots of great benefits to technology that I enjoy every day. But so many people who write about technology make it so, so hard.

To wit, “The Dawn of the Age of Artificial Intelligence.”

The advances we’ve seen in the past few years—cars that drive themselves, useful humanoid robots, speech recognition and synthesis systems, 3D printers, Jeopardy!-champion computers—are not the crowning achievements of the computer era. They’re the warm-up acts.

Humanoid robots, speech recognition and synthesis, and 3D printers are not examples of artificial intelligence by any definition. Self-driving cars and Watson (the Jeopardy-playing computer) are not artificial intelligence under the original intent of artificial intelligence, which was to approximate human cognitive function, under the sensible theory that human cognitive function has been massively successful at changing the world.

How can we be so sure? Because the exponential, digital, and recombinant powers of the second machine age have made it possible for humanity to create two of the most important one-time events in our history: the emergence of real, useful artificial intelligence (AI) and the connection of most of the people on the planet via a common digital network.

I am not sure what the words “exponential,” “digital,” or “recombinant” explain here. Processing power has increased exponentially over its history, but has actually slowed in its growth, as we’ve reached plateaus in heat dissipation, and this is likely to slow growth in mobile chips just as it has already done with CPUs. Alternatives like quantum computers appear years away from practical implementation. Current technologies are digital, but then, so is the abacus. Recombinant, I suppose, refers to the dream that computers will become self-teaching, despite the lack of a clear explanation of how this process will work beyond the very specific and limited ways in which computers are already self-teaching. As for the greater claim that AI will lead the way to the next great stage of human evolution, well, I have aggregated links so that you can read people who know much better than me that AI has been one of the great scientific failures of the last century.

Trivial uses of AI include recognizing our friends’ faces in photos and recommending products. More substantive ones include automatically driving cars on the road, guiding robots in warehouses, and better matching jobs and job seekers. But these remarkable advances pale against the life-changing potential of artificial intelligence.

Again, I’m not sure why some of these are considered AI at all. Facial recognition, automated product recommendations a la Amazon, and matching jobs and job seekers are all impressive but based upon the Bayesian probability techniques that computers have been using successfully for decades. Self-driving cars can be considered a form of very limited artificial intelligence, but you’re talking about a domain which is remarkably rule-bound and which involves very little need for the kind of “learning” that is typically imagined in these discussions.

To take just one recent example, innovators at the Israeli company OrCam have combined a small but powerful computer, digital sensors, and excellent algorithms to give key aspects of sight to the visually impaired (a population numbering more than twenty million in the United States alone). A user of the OrCam system, which was introduced in 2013, clips onto her glasses a combination of a tiny digital camera and speaker that works by conducting sound waves through the bones of the head. If she points her finger at a source of text such as a billboard, package of food, or newspaper article, the computer immediately analyzes the images the camera sends to it, then reads the text to her via the speaker.

This does sound somewhat impressive– if it works as advertised, and if it is vetted by impartial, external experts who give it a rigorous and skeptical review. I would recommend that anyone remember examples like Siri or Google Translate, both of which have attracted breathless praise from the media but eventual, grudging admission from the people who actually use them day-to-day that they are remarkably limited tools, even while admitting how impressive they are technologically.

Reading text ‘in the wild’—in a variety of fonts, sizes, surfaces, and lighting conditions—has historically been yet another area where humans outpaced even the most advanced hardware and software. OrCam and similar innovations show that this is no longer the case, and that here again technology is racing ahead.

They don’t actually show that, because you haven’t shown it yet.

Digital technologies are also restoring hearing to the deaf via cochlear implants and will probably bring sight back to the fully blind; the FDA recently approved a first-generation retinal implant. AI’s benefits extend even to quadriplegics, since wheelchairs can now be controlled by thoughts. Considered objectively, these advances are something close to miracles—and they’re still in their infancy.

Not to be repetitive, but cochlear implants are not AI, and saying “digital technologies… will probably bring sight back to the fully blind” is pure assertion and entirely lacking in specificity. Controlling wheelchairs with thoughts, even if we accept the unproven premise that they are currently viable technologies for most quadriplegics, is in fact the opposite of AI.

There is no better resource for improving the world and bettering the state of humanity than the world’s humans—all 7.1 billion of us.

This is a statement without content.

Our good ideas and innovations will address the challenges that arise, improve the quality of our lives, allow us to live more lightly on the planet, and help us take better care of one another.

Of course, “we” are also responsible for the challenges that have arisen, the inequality of our lives, our heavy footprint on this planet, and our own failure to take care of one another.

It is a remarkable and unmistakable fact that, with the exception of climate change, virtually all environmental, social, and individual indicators of health have improved over time, even as human population has increased.

I tip my hat to this deployment of “unmistakable,” for its sheer audacity. First, the notion that virtually all environmental indicators have improved over time, even excepting climate change, is so ludicrous that I don’t know how to process it. We’re poisoning the oceans with carbon dioxide, threatening to leave vast expanses of water incapable of sustaining life. We’re staring down another mass extinction, caused by human beings. An incredibly lax regulatory culture, across a large variety of countries, leaves people questioning whether they can breathe the air or drink the water. And that’s all before pointing out that excepting climate change from a list of environmental ills is like excepting a knife between your ribs in a review of your current health. As far as the social and individual indicators go, well, I’ve long since given up convincing the teleologists out there that progress is neither constant nor inevitable. But I will insist on saying that, just to pick a few, the fires that are burning Kiev and Caracas and Cairo and Thailand and Syria, etc etc etc, probably should count as indicators of social health, even if you insist that things are good on balance. Or perhaps the fact that the advantage of family wealth lasts 10 to 15 generations might be a negative indicator. Individual indicators of health? You mean like physical health? Well we have an obesity epidemic you might have heard of. I could go on.

The economist Julian Simon was one of the first to make this optimistic argument, and he advanced it repeatedly and forcefully throughout his career. He wrote, “It is your mind that matters economically, as much or more than your mouth or hands. In the long run, the most important economic effect of population size and growth is the contribution of additional people to our stock of useful knowledge. And this contribution is large enough in the long run to overcome all the costs of population growth.”

Sounds great. Where’s the proof? How is this a responsible statement?

We do have one quibble with Simon, however. He wrote that, “The main fuel to speed the world’s progress is our stock of knowledge, and the brake is our lack of imagination.” We agree about the fuel but disagree about the brake. The main impediment to progress has been that, until quite recently, a sizable portion of the world’s people had no effective way to access the world’s stock of knowledge or to add to it.

In the industrialized West we have long been accustomed to having libraries, telephones, and computers at our disposal, but these have been unimaginable luxuries to the people of the developing world. That situation is rapidly changing. In 2000, for example, there were approximately seven hundred million mobile phone subscriptions in the world, fewer than 30 percent of which were in developing countries.

By 2012 there were more than six billion subscriptions, over 75 percent of which were in the developing world. The World Bank estimates that three-quarters of the people on the planet now have access to a mobile phone, and that in some countries mobile telephony is more widespread than electricity or clean water.

So let’s look at the industrialized West, where everybody with a library card has been able to get access to troves of information for decades. Do all of our people contribute meaningfully to the stock of useful information? No, of course not. Even with those libraries and the free public education and access to the internet. A tiny sliver of the population, actually, is contributing to the stock of useful information. You could argue– I am arguing– that a big part of the reason that more people aren’t is because of social and economic inequalities that prevent them from doing so. But the idea thrown around here, that increasing access to the internet is going to suddenly result in this vast explosion of people powering an all-candy-themed-app economy and in so doing result in utopia, is worse than unscientific. It’s an impediment to the incredibly hard work, tough choices, and inter-class conflict that real progress will require.

Today, people with connected smartphones or tablets anywhere in the world have access to many (if not most) of the same communication resources and information that we do while sitting in our offices at MIT. They can search the Web and browse Wikipedia. They can follow online courses, some of them taught by the best in the academic world. They can share their insights on blogs, Facebook, Twitter, and many other services, most of which are free. They can even conduct sophisticated data analyses using cloud resources such as Amazon Web Services and R, an open source application for statistics. In short, they can be full contributors in the work of innovation and knowledge creation, taking advantage of what Autodesk CEO Carl Bass calls “infinite computing.”

They can do those things. Most of them don’t do those things, and the vast preponderance of history– you know, history, that stuffy old subject we should throw out so that we can teach kids to code– tells us that increasing access to knowledge is woefully insufficient to result in actual productive use of that knowledge by those who struggle under economic exploitation.

Until quite recently rapid communication, information acquisition, and knowledge sharing, especially over long distances, were essentially limited to the planet’s elite. Now they’re much more democratic and egalitarian, and getting more so all the time. The journalist A. J. Liebling famously remarked that, “Freedom of the press is limited to those who own one.” It is no exaggeration to say that billions of people will soon have a printing press, reference library, school, and computer all at their fingertips.

Millions of people now have a printing press, reference library, school, and computer all at their fingertips. The number of them who do the things that the authors of this piece want people to do fits comfortably in a rounding error. When books ceased to be hand-copied by monks in monasteries, books stopped being the purview of the aristocratic and clerical elite, and yet most of the world’s population remained illiterate for hundreds of years. When televisions became ubiquitous, people wrote breathless pieces arguing that they would give all students the same skills as those taught in the most expensive schools, and yet the vast inequalities in our education system only deepened. And since I was in high school, we’ve been cramming internet-enabled computers into our places of learning, and arguing, against the evidence, that the inevitable result would be the rapid democratization of knowledge and creation. We have seen nothing of the kind, and we have no satisfactory reason to believe that this will change in the days ahead. It’s great that you can go on Wikipedia and use it to write your own blog. Most people are not doing even that, and that is a far cry from the absurd revolutionary utopianism of this piece and hundreds more like it.

We believe that this development will boost human progress. We can’t predict exactly what new insights, products, and solutions will arrive in the coming years, but we are fully confident that they’ll be impressive. The second machine age will be characterized by countless instances of machine intelligence and billions of interconnected brains working together to better understand and improve our world. It will make mockery out of all that came before.

I encourage you to read this paragraph out loud, and understand it as encompassing the spirit of the piece. The sound you hear as you read will be the sound of nothing at all.

Posted in Tech Stuff | 6 Comments

computer scientist, poet, or ship breaker, it’s you against the bosses

This piece from Mike Konczal on the revelations that major tech firms colluded against their workers is, like most of Konczal’s work, a must-read, and I encourage you to really sink your teeth into what this says about the balance (or “balance”) between employers and employees in contemporary America. At the risk of descending into full-on “the man with only a hammer” territory, I actually think this situation is more reason to think that there isn’t a STEM shortage. When I go on (and on and on, I know) about why I think there’s no STEM shortage, I’m frequently told that the fact that these firms are pushing for more H-1B visas is compelling evidence that there is a shortage. Why would these massively successful tech firms, where almost anyone would want to work, have to push for more skilled worker visas if there wasn’t a shortage? But the lengths that these firms went to in order to artificially suppress wages should suggest to us that they’ll go to great lengths to reduce their labor costs. If you’re a tech company, H-1B visas are a great bit of leverage to use against domestic employees who want more money, particularly considering how vulnerable the workers on such visas are once they’re here.

And let’s think about this in the broader sense: I’ve been saying for some time that people push the myth of the STEM shortage as a way to blame individuals for the bad macroeconomic conditions that hurt them. “Hey, you should have gotten a computer science degree, but you didn’t– that’s why you’re unemployed.” But think about these workers who were colluded against by their employers. Sure, by almost any metric imaginable, they’re winners, and I’m sure if you showed me their tax rates, I’d argue for them to go higher. But still: the people who followed every Davos Man dictum for a good life, the people who David Brooks fantasizes about as he settles in to sleep at night, were nevertheless treated as antagonists to be manipulated and controlled by their employers. These people did everything “right,” according to the conventional wisdom, and yet they were exploited and defrauded.

It’s just one more little piece of evidence in what has become the single most important story in domestic American politics of my lifetime: the total collapse of worker power, under the direct blessing and conscious effort of our government and ruling class. The Reagan/Thatcher plan was to break labor, not just unions but workers in general, and they’ve succeeded beyond their wildest dreams, and never stopped the breaking. I could pile charts and graphs and citations and stories here to make that point. The balance between workers and employers isn’t a balance at all; it’s the utter domination of the former by the latter, and only getting worse. Konczal points out how this sort of thing undermines the ECON 101 argument against the minimum wage. I myself constantly hear theoretical arguments, based on simplistic principles, about why eventually the pendulum has to swing back. But they seem to have no credible connection to actually-existing reality.

And yet there’s simply no organized political campaign to claw back worker power. The Republicans and Democrats simply bicker among themselves about how much austerity is best. It’s maddening, depressing, and destructive.

Posted in Education, Tech Stuff | 4 Comments