data presentation

This is the sort of thing that I should learn to just keep to myself, but here goes.

The just-released issue of Research in the Teaching of English (46.3) has a study I really admire, “Placement of Students into First-Year Writing Courses.” (In the unfortunate custom of academic articles, the PDF is gated, though an abstract is available here.) The researchers attempt to determine empirically whether ACCUPLACER, an automated first-year composition placement system developed by ETS, places students appropriately. Success is defined in part by an individual student’s performance in the first-year composition class that ACCUPLACER has chosen. Following their analysis, the study’s authors are skeptical of ACCUPLACER, which doesn’t surprise me. (ACCUPLACER is based on item response theory. I’ve been trying to get some thoughts together on item response theory in writing assessment for a while now, but it’s slow going.) Interestingly, they find the SAT Writing section to be a better predictor of a student’s FYC performance.

This kind of research, which attempts both to determine best practices in composition studies and to justify empirically what we are doing in composition, is the type that I myself am interested in producing. The rising call for better quantitative assessment makes this kind of research necessary, whether we like it or not. I understand and largely agree with skeptical attitudes towards reductive measures of writing proficiency, but we simply cannot afford to ignore the loud calls for greater assessment. We need to be able to justify our pedagogy, and we need to be able to do it at least in part with numbers. We can certainly do so while maintaining a critical and skeptical eye towards those numbers; that is, after all, an appropriately rhetorical consideration of audience. If we don’t perform this kind of assessment ourselves, the people calling for it will find someone who will. (OK, sermon over.)

Anyway, I’m just a bit bugged by some of the data presentation here, especially because I dig the study. Check out this correlation matrix.

First, I’m never really sure why publishers don’t just put the 1.0 self-correlations in, rather than dashes, as here. I guess it’s a stylistic choice, given the redundancy involved, but as an elementary principle I don’t see why you wouldn’t just include the information. Second, from an “eyeballing it” standpoint, I’d like at least abbreviations listed along the top, not just reference numbers. Also, I’d really like some notation on the chart that tells us which correlation coefficient is being used. As these are ordinal scales, the Spearman rank-order coefficient would probably be most appropriate, but given conventions in psychometrics and educational research, I’m assuming it’s Pearson’s r. Would be nice to know. I’m sure people with a better eye for stats can suss it out from the data, but they shouldn’t have to.
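For what it’s worth, here’s roughly the presentation I have in mind, as a quick sketch in pandas. The variable names, scores, and relationships below are entirely made up for illustration (this is not the article’s data); the point is just a full correlation matrix with the 1.0 self-correlations on the diagonal, abbreviated labels along the top, and the coefficient named explicitly:

```python
# Hypothetical sketch of a clearer correlation-matrix presentation.
# All data here are simulated; nothing is taken from the study.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200

# Made-up placement measures with some built-in association
accu = rng.integers(20, 121, n)                                       # ACCUPLACER score
satw = np.clip(accu * 5 + rng.normal(0, 60, n), 200, 800).round()     # SAT Writing score
fyc = np.clip(accu / 30 + rng.normal(0, 0.7, n), 0, 4).round()        # FYC grade, 0-4 scale

df = pd.DataFrame({"ACCU": accu, "SATW": satw, "FYC": fyc})

# Spearman rank-order correlations (arguably more appropriate for ordinal scales)
print("Spearman rank-order correlation matrix:")
print(df.corr(method="spearman").round(2))

# Pearson r, for comparison
print("\nPearson correlation matrix:")
print(df.corr(method="pearson").round(2))
```

Printing both versions also makes plain how much (or how little) the choice of coefficient matters for a given dataset, which is exactly the thing a reader shouldn’t have to reverse-engineer from the table.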

As for this, well… I’m just gonna leave this here. You tell me.
