Study of the Week: To Remediate or Not to Remediate?

Today’s Study of the Week comes from researchers at my own university, the City University of New York, and concerns an issue of profound professional interest to me: the success of students who are required to attend remedial math classes in our community colleges. CUNY is a system of vast differences between and within its institutions, playing host both to programs at senior colleges full of well-prepared students who could succeed anywhere and to many severely under-prepared students who struggle and drop out at unacceptable rates. In this diversity of outcomes, you have a microcosm of American higher education writ large, which like seemingly all things American is plagued by profound inequality.

Here at Brooklyn College, fully two thirds of undergraduates are transfer students, the vast majority of them having come from the CUNY community college system. (Those who get a sufficient number of credits from the community colleges must be admitted to the institution under CUNY policy, even when they would ordinarily not have met the necessary academic standards.) Typical academic outcomes data for these students, as distinct from the third who start and finish their careers at Brooklyn College, are vastly different, and in a discouraging direction. Since my job entails demonstrating to people in positions of power that students are learning adequately here, and explaining why unfortunate numbers may look that way, this difference is important. But CUNY policy and the overall rhetoric of American higher education militates against this nuance. Indeed, the recent adoption of the Pathways system of credits is based on a simple premise: that a CUNY student is a CUNY student, a CUNY class a CUNY class, and a CUNY credit a CUNY credit. This assumption of equivalence across the very large system makes life easier for students and administrators. It is also, I would argue, empirically wrong. But this is a question far above my pay grade.

In any event, the fact is that CUNY colleges host many students who lack the level of prerequisite ability we would hope for. Today’s study asks an essential question: is the best way to serve CUNY community college students who lack basic math skills to send them to non-credit bearing remedial algebra classes? Or is it to substitute a credit-bearing college course in statistics? The question has relevance far beyond CUNY.

Algebra is a Problem

When we’re talking about incoming students who fail to meet standards, we’re also talking about how they fared at the high school level; the failure to meet entrance requirements for college corresponds with a failure to meet graduation requirements for high school. One of the biggest, most intractable problems with getting students to meet standards is algebra. A raft of evidence tells us that algebra requirements stand as one of the biggest impediments to students graduating from high school in our system. Here in New York City, the pass rate for the relevant sections of the Regents Exam has fluctuated with changes to standards: 65% of students passed the Algebra I exam in 2014, 52% in 2015, and 62% in 2016. Even with changing standards, in other words, more than a third of all NYC students are failing to meet Algebra I requirements – and that’s despite longstanding complaints that the standards are too low. Low standards in math might help explain why a 2012 study found that 57% of undergraduates in the CUNY system writ large were unable to pass their math requirements.

Indeed, rising graduation rates nationwide have come along with concerns that this improvement is the product of lower standards. You can see this dynamic play out in allegations against “online credit recovery” or in the example of a San Diego charter school where graduation rates and grades are totally contrary to test performance. Someone I know who works in education policy in the think tank world told me recently that he suspects that fewer than half of American high school graduates actually have the skills and knowledge required of them by math standards, as distinct from just formally passing.

The political scientist Andrew Hacker, himself of CUNY, has made the case against algebra requirements at book length in his recent The Math Myth. As Hacker says, the screening mechanism of getting through algebra, pre-calculus, and similar required courses prevents many students who are otherwise academically sufficient for higher education from attending college. He marries this argument to a critique of the funnel-every-student-into-STEM-career school of ed philosophy that has become so dominant and which I myself have argued, at length, is empirically unjustifiable, economically illiterate, and educationally impossible. Rather than trying to get every kid to be an aspiring quant, Hacker recommends replacing algebra and calculus requirements with more forgiving, practically-aligned and conceptual courses in quantitative literacy.

The question is, can we do what Hacker has asked without wholesale remaking the college system, a very large boat that’s notoriously slow to turn? That’s the question this Study of the Week is intended to answer.

The Study

Today’s study was conducted by A. W. Logue, Mari Watanabe-Rose, and Daniel Douglas, all of CUNY. They were able to take advantage of an unusual degree of administrative access to conduct a true randomized experiment, assigning students to conditions randomly in a way very rarely possible in practical educational settings. The researchers conducted their study at three (unnamed) CUNY community colleges. Their research subjects were students who would ordinarily be required to take a remedial non-credit-bearing algebra course. These students were randomly assigned to one of three groups: the traditional elementary algebra class (which we can think of as a control), an elementary algebra class where students were required to participate in a support workshop of a type often recommended as a remediation effort, and an undergraduate-level, credit-bearing introductory statistics course with its own workshop.

In order to control for instructor effects, all instructors in the research taught one section each of the various classes, helping to minimize systematic differences between the experimental groups. Additionally, there was an important quality check in place regarding non-participants. In an ideal world, true randomization would mean that everyone selected for a treatment or group would participate, but of course you can’t force participation in experiments. That means that there might be some bias if students assigned to one treatment were more likely to decline to participate. Because of the nature of this study, the researchers were able to track the performance of non-participants, who took the standard elementary algebra class. Those students performed similarly to the in-study control group, an important source of confidence in the research.

The researchers used several different techniques to examine their relationships of interest, specifically the odds of passing the course and the number of credits earned in the following year. One technique was an intent-to-treat (ITT) analysis, which is a kind of model used to address the fact that participants in randomized controlled trials will often drop out or otherwise not comply with the experimental program. It generates conservative effect size estimates by simply assuming that everyone who was randomized into a group stayed there for statistical purposes, even if we know we had some attrition and non-compliance along the way. (“Once randomized, always analyzed.”) Why would we do that? Because we know that in a real-world scenario “subjects” won’t stick with their assigned “treatments” either, and we want to avoid the overly optimistic effect sizes that might come from only looking at compliers.

(As always, if you want the real skinny on these model adjustments, I urge you to read people who really know this stuff. Here’s a useful, simply stated brief on intent to treat.)
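To make the distinction concrete, here’s a minimal sketch of ITT versus per-protocol analysis in Python. Every number in it is invented for illustration – none of this is the study’s actual data:

```python
# Toy illustration of intent-to-treat (ITT) vs. per-protocol analysis.
# Each record: (assigned_group, completed_assigned_course, passed).
# All values are made up for this example.
records = [
    ("stats", True, True), ("stats", True, True), ("stats", True, False),
    ("stats", False, False),                # assigned to stats but dropped out
    ("algebra", True, True), ("algebra", True, False),
    ("algebra", True, False), ("algebra", False, False),
]

def pass_rate(rows):
    """Fraction of students in `rows` who passed."""
    return sum(passed for _, _, passed in rows) / len(rows)

# ITT: "once randomized, always analyzed" -- everyone stays in the
# group they were assigned to, whether or not they complied.
itt_stats   = pass_rate([r for r in records if r[0] == "stats"])
itt_algebra = pass_rate([r for r in records if r[0] == "algebra"])

# Per-protocol: only students who complied with their assignment.
# This tends to inflate the apparent effect, which is why ITT is
# the conservative choice.
pp_stats   = pass_rate([r for r in records if r[0] == "stats" and r[1]])
pp_algebra = pass_rate([r for r in records if r[0] == "algebra" and r[1]])

print(f"ITT difference:          {itt_stats - itt_algebra:+.2f}")
print(f"Per-protocol difference: {pp_stats - pp_algebra:+.2f}")
```

With these toy numbers the per-protocol estimate comes out larger than the ITT estimate, which is exactly the optimism ITT is designed to avoid.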

The results seem like a pretty big deal to me: after analysis, including throwing in some covariates, they find that there is no significant difference in passing rates between students enrolled in the traditional elementary algebra class and that class plus a workshop, but there is a significant and fairly large (16% without covariates in the model, 14% with) difference in the odds of passing for those randomized to the intro stats course compared to the elementary algebra course. That is, after randomization students were 16% more likely to pass a credit-bearing college-level course than a non-credit-bearing elementary algebra course. Additionally, the stats group had accumulated a significantly higher number of total credits during the experimental semester and the following year, even after subtracting the credits earned for the stats course itself.

(Please do take a look at the confidence interval numbers listed in brackets below, which tell you a range of effects that we can say with 95% confidence contains the true average effect. Getting in the habit of checking confidence intervals is an important step in learning to read research reports.)
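For readers new to this, a 95% confidence interval for a difference in pass rates is easy to compute by hand with the normal approximation. A short Python sketch, using made-up pass counts rather than anything from the study:

```python
import math

# Hypothetical pass counts (invented for illustration, not the
# study's data): out of 200 students per arm, 112 pass intro stats
# and 80 pass elementary algebra.
n1, x1 = 200, 112   # stats arm
n2, x2 = 200, 80    # algebra arm

p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2

# Standard error of a difference in two independent proportions
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# 95% CI: point estimate +/- 1.96 standard errors
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

With these invented numbers the interval sits entirely above zero, which is what “statistically significant at the 5% level” means in this context: zero difference is not a plausible value for the true effect.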

Additionally, as the authors write, “as of 1 year after the end of the experiment, 57.32% of the Stat-WS students had passed a college-level quantitative course…, while 37.80% still had remedial need. In contrast, only 15.98% of the EA students had passed a college-level quantitative course and 50.00% still had remedial need.”

Another thing that jumped out at me: in an aside, the authors note that there was no significant difference between the groups in their likelihood of taking credits a year after the experimental semester, with all groups around 66% still enrolled. Think about that – just a year out, fully a third of all students in the study were not enrolled, reflecting the tendency to stop out or drop out that is endemic to community colleges.

Of course, none of this would be inconsistent with assuming a good deal of explanatory power of incoming ability effects, and the relationship between performance on the Compass algebra placement test and the odds of passing are about what you’d expect. Prerequisite ability matters.

In short, students who were randomly selected into an elementary algebra class with a supportive workshop attached were no more likely to pass that class than those sorted into a regular algebra class, but those sorted into an introductory statistics class were 16% more likely to have passed that course. Additionally, the latter group earned significantly more college credits in the following year than the other groups, and were much more likely to have completed a quantitative skills class.

OK. So what do we think about all of this?

First, I would be very skeptical about extrapolating these results into other knowledge domains such as literacy, writing, or similar. I don’t think all remediation efforts are the same across content domains and it’s likely that research will need to be done in other fields. Second, the fact that a supporting workshop did little to improve outcomes compared to students without such a workshop is discouraging but hardly surprising. Such interventions have been attempted for a long time and at scale, but their results have been frustratingly limited.

All in all, the evidence in this study supports Hacker’s point of view, and I suppose my own: students can achieve better results, purely in terms of moving toward graduation quickly, if we just let them take college stats instead of forcing them to take remedial algebra first. But there’s a dimension that the researchers leave largely unexplored, which is the question of whether this all just represents the benefits of lowering standards.

Are We Just Avoiding Rigor?

The authors examine many potential explanations about why the stats-taking students outperformed the other group, including potential non-random differences in groups, motivation, and similar, but seem oddly uninterested in what strikes me as the most obvious read of the data: that it’s just easier to pass Intro to Stats than it is to pass even a remedial algebra course. They do obliquely get at this point in the discussion, writing

degree progression is not the only consideration in setting remediation policy. The participants in Group Stat-WS were only taught elementary algebra material to the extent that such material was needed to understand the statistics material. Whether students should be graduating from college having learned statistics but without having learned all of elementary algebra is one of the many decisions that a college must make regarding which particular areas of knowledge should be required for a college degree. Views can differ as to which quantitative subjects a college graduate should know.

They sure can! This seems to me to be the root of the policy issue: should we substitute stats courses for algebra courses if we think doing so will make it less likely for students to drop out or otherwise be disrupted on the path to graduation?

This is not really a criticism of this research, though I’d have liked a little more straightforward discussion of this from the authors. But I will hold with Hacker in suggesting that this does represent a lowering of standards, and that this is a feature, not a bug. That is, I think we should allow some students to avoid harder math requirements precisely because the current standards are too high. Students in deeply quantitative fields will have higher in-major math requirements anyway. Of course, in order to take advantage of this, we’d have to acknowledge that the “every student a future engineer” school of educational policy is ill-conceived and likely to result only in a lot of otherwise talented students butting their heads up against the wall of math standards. But unlike most ed policy people, I am willing to say straightforwardly that there are real and obvious differences in the specific academic talents of different individual students, and that these differences cannot be closed through normal pedagogical means. That’s what the best evidence tells us, including this very study.

Hacker says that many ostensibly quantitative professions, like computer programmer or doctor, require far less abstract math skill than is presumed. I don’t doubt he’s correct. The question is whether we as a society – and, more important, whether employers – are willing to accept a world where some significant percentage of people in such jobs never had to pass an Algebra II or Calculus class. Or, failing that, can we redefine our sense of what is valuable work so that the many people who seem incapable of reaching standards in math can go on to have productive, financially secure lives?

What We’re Attempting with College is Very Hard

Colleges and universities have found themselves under a great deal of pressure, internal and external, in recent years. This is to be expected; they are charging their students exorbitant tuition and fees, after all, and despite an army of concern trolls doubting their value, the degrees they hand out in return are arguably more essential than ever for securing the good life. Though enrollment growth has slowed in recent years, over time the trend is clearly upward.

Policymakers and politicians must understand: these new enrollments are coming overwhelmingly from the ranks of those who would once have been considered unprepared for higher education, and this has increased the difficulty of what we’re attempting dramatically.

What we’re attempting is to admit millions more people into the higher education system than before, almost all of whom come from educational and demographic backgrounds that would once have screened them out from attendance. Because those backgrounds are so deeply intertwined with traditional inequalities and social injustice, we have rightly felt a moral need to expand opportunity to students from them. Because the 21st century economy grants such economic rewards to those who earn a bachelor’s degree, we have developed a policy regime designed to push ever more students toward one. I cannot help but see the moral logic behind these sentiments. And yet.

Let’s set aside my perpetual questions about the difference between relative and absolute academic performance and how they are rewarded. (Can the economic advantage of a college degree survive the erosion of the rarity of holding one? How could it possibly under any basic theory of market goods?) We’re still left with this dilemma: can we possibly maintain some coherent standards for what a college degree means while dramatically expanding the people who get them?

One way or another, everyone with an interest in college must understand that the transformation we’re attempting as a community of schools, educators, and policymakers is unprecedented. Today, the messages we receive in higher education seem impossible: we must educate more cheaply, we must educate more quickly, we must educate far more underprepared students, and we must do so without sacrificing standards. This seems quixotic to me. Adjusting curricula in the way proposed in this research, and accepting that higher completion rates probably require lower standards, is one way forward. Or we can refuse to adapt to the clear implication of a mountain of discouraging data for students at the lower end of the performance distribution, and get back failure in return.
