As Libby Nelson wrote, it seems like everybody and their brother, including your cousin Freddie, wrote about the Brookings Institution paper on student loan debt today. It's an emotional conversation. Given the limitations of our information, it's also a frustrating conversation. But it's a profoundly necessary conversation, and one that we're going to have to have more and more.
I am a little frustrated by people who think my position is somehow a betrayal of the movement to address student loan debt. Again, I support broad forgiveness of loan debt held by the federal government, which is massive. We control the world's fiat currency, a powerful central bank, and the printing press. If that's too radical, then we surely could reduce interest rates to match the rate of inflation. Why should the federal government be making a dime off of student loans, which are supposed to be a subsidy, not a predatory source of profit? (As Mike Konczal has said, raking in interest off of student loans is the opposite of a subsidy.) And for god's sake, let's restore bankruptcy protections to student debt.
I am also disappointed that some people think I’m in the “data uber alles” camp, which is certainly not true. I strongly believe that all data is generated and understood through a lens of ideology and theory, and I encourage everyone to be skeptical, careful readers of research. Empirical data is neither everything nor nothing. We should be critical of data and claims to objectivity. But we simply cannot afford to abandon interrogation of data altogether. As Amber A’Lee Frost said, if we fail to engage in data-driven arguments and critiques of same, we simply give up on fighting in that arena. We can embrace qualitative and other non-numbers based ways of understanding while still using quantitative arguments where appropriate. I have tried to talk about empiricism and research methods in this space in a way that sees the value in quantitative research while remaining aware of the many pitfalls and seductions of this kind of knowledge.
I want to be abundantly clear: when I critiqued that piece in the Awl, I was in no way trying to represent myself as an expert. I was instead trying to publicly talk through the ideas that were being considered and to bring a little of the learning I've done to bear. Five years ago, I knew essentially nothing about research methods. I've worked hard and done a lot of reading and spent a lot of time in office hours with patient and sympathetic professors. That has helped me grow in my understanding, and it has also helped me recognize the very real limits on my knowledge. For example, I was signed up to take a class in Bayesian probability theory this summer. But my statistics instructor, who has guided me through tons of complex reasoning and mathematical processes, advised me not to take the class, and he was right to; I just don't have the calculus chops. And I never will. Purdue's upper-level statistics classes for graduate students are notoriously brutal, and I would most likely have failed the class. I have read a lot of condensed and simplified explanations of Bayesian reasoning, and I think I have a fairly strong grasp of how those models work, but there's a certain level where you just have to have the math, and I don't.
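For readers who haven't seen those simplified explanations, the core of Bayesian reasoning is just Bayes' rule: you update a prior belief in light of new evidence. Here's a toy sketch of that update (the diagnostic-test numbers are invented purely for illustration, and aren't from any of the pieces discussed here):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), computed via Bayes' rule.

    prior:                P(condition) before seeing the test result
    sensitivity:          P(positive test | condition)
    false_positive_rate:  P(positive test | no condition)
    """
    # Total probability of a positive result, with or without the condition
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    # Bayes' rule: P(cond | pos) = P(pos | cond) * P(cond) / P(pos)
    return sensitivity * prior / p_positive

# A rare condition (1% prevalence) and a fairly accurate test still
# yield a modest posterior -- the classic counterintuitive result:
print(round(posterior(prior=0.01, sensitivity=0.95,
                      false_positive_rate=0.05), 3))  # prints 0.161
```

The counterintuitive part, and part of why the full machinery takes real math to handle, is that the prior dominates when the condition is rare: even a 95%-accurate test leaves you with only about a 16% chance the positive result is real.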
So I rely on others and do my best, which is what we all have to do. Because it certainly seems like data journalism is here to stay. With the rise of sites like FiveThirtyEight, I've had a few people email me to ask how I feel about data journalism. And I guess that's like asking how you feel about investigative journalism. The question is, which data journalism? Which pieces, which writers, which publications? We have to work on becoming better consumers of data, and we have to work hard on being the right kind of skeptics. I find these issues totally resistant to hard rules. Some days I'm convinced that we're all too credulous towards quantitative data, that we fail to understand the deep caveats and limitations of all studies. Some days I'm sure that we've all become too cynical and critical and that we don't appreciate the importance of good data. The truth is that we have to take each case separately and try our best.
I know that it's not realistic to expect everyone to have the time to subject everything they read to careful methodological review. We'll always have to rely on some people to do that kind of close, appropriately skeptical reading. But to the degree we're able, we need to crowdsource methodological critique, and to create a culture where it's alright to stick our necks out a little bit, the way Choire Sicha did. Even though I disagree with some of his conclusions, I value his criticism and am glad he engaged. We're all going to have to check each other's work, and we're going to have to forgive each other when we inevitably make mistakes. Because, again: this stuff is hard. I can't tell you how often I've labored for hours on a research proposal, trying to get the methodology tight, only to give it to a peer and have them identify a flaw in like five minutes. If we're all going to be steeped in data in the future, and it seems we will be, then we all have to be critics of methods and methodology. We have to watch each other's backs. There's no alternative.