I’m a teacher, linguist, and writer whose scholarly work concerns writing assessment, corpus linguistics, writing program administration, and higher education policy. I aim to inform administrators, instructors, and others about how to teach writing and language more effectively, particularly to college students and second language writers, and how to assess writing in a fairer, more constructive way. To that end, I use empirical methods to investigate how these populations write and learn, frequently drawing on methods from computational linguistics to observe real-world writing. These observations, I hope, can help us make effective administrative and pedagogical decisions. More about my research can be found below under “Research Interests.”
I completed a doctorate in English at Purdue University in May of 2015, with secondary areas in Writing Program Administration and English as a Second Language. My dissertation considered the Collegiate Learning Assessment+ (CLA+), a standardized test of college learning, and its potential impact on Purdue specifically and on higher education generally. A white paper I wrote for the New America Foundation on standardized tests of college learning can be found here. I have been published in the academic journals System, Kairos, Teacher-Scholar, and Writing Commons, along with a variety of textbooks and edited collections. I hold a Bachelor of Arts in English with a minor in Philosophy from Central Connecticut State University and a Master of Arts in English with a focus in Writing and Rhetoric from the University of Rhode Island.
I am also a regular blogger and writer. I have been published in places like Harper’s, The New York Times Magazine, Foreign Policy, The Los Angeles Times, The New Republic, The New York Times, Politico, Playboy, The New York Observer, Full Stop Quarterly, Vox, Salon, Talking Points Memo, N+1, Jacobin, Pacific Standard, In These Times, The Week, The New Inquiry, Quartz, The Huffington Post, and others. I sometimes provide professional editing and ghostwriting work as well.
My Research Philosophy
My research reflects the hybridity that I have always pursued as an academic. I believe in the practical, theoretical, and empirical value that can be derived from working within the intersection(s) of rhetoric and composition, literacy education, and applied linguistics. Rhetoric and composition gives us the traditional values of the humanities, such as the value of narrative and a skepticism toward universalizing claims to certain knowledge, as well as a focus on tailoring our communicative acts to particular audiences and purposes. Literacy education research allows us to better educate students from across the age range, and to influence policy decisions in a way that protects those students and the traditional values of liberal education and humanism. Applied linguistics provides techniques and technologies that help us investigate language use in large samples, directing our teaching and our administrative policies in a way that benefits students and educators alike.
My dissertation attempts to weave these three related traditions together. This research concerns the Collegiate Learning Assessment+ standardized test of college learning, its implementation at Purdue University, and how the CLA+ reflects traditional conflicts between the educational testing community and writing instructors, researchers, and administrators. My research traces the history of the current assessment movement within the American college system, examining the political, economic, and institutional forces that have inspired it. I then detail the history and theory of the CLA+, examining its origins and its assessment mechanism, with particular focus on its claims to validity and reliability and how they interface with the definition of these concepts in practitioner writing assessment. Using oral interviews conducted with campus administrators and other stakeholders in the test, along with documents such as meeting minutes and emails from the administrative process of selecting an assessment mechanism, I build a local history of how the test was chosen, piloted, and implemented, to demonstrate how institutions enact major policy changes, and to show how local, administrative histories can interface with national movements and trends. Finally, this research analyzes the initial results from Purdue’s early piloting of the CLA+, comparing them to national averages and to similar institutions, and reflects on the future of the university and the test. I argue that the historical, empirical, and theoretical evidence I have assembled demonstrates how national political concerns influence institution-level policy in a way that benefits ideological commitments more than students, instructors, or institutions.
Research Experience and Interests
Beyond my dissertation research, I have been happy to have the opportunity to pursue some of these interests in positions assisting the research of others. In the fall of 2014, I worked as the Assessment Coordinator for Purdue’s Introductory Composition program (ICAP), assisting Director Jennifer Bay in a large-scale assessment of the program. At more than 185 sections a year, and embedded in an institution with a very large international population, ICAP is a program of unusual scope and diversity. Developing a fair and effective assessment for such a program requires considerable care. In this role, I gained practical and theoretical knowledge in writing program administration and writing assessment. I have also worked with Dr. Tony Silva on mapping the demographic trends of international student writers at Purdue University, helping to track their success as student writers in the institution; with the late Dr. Linda Bergmann, on a project developing an online social network for young writers to collaborate on and share their writing; and with the Military Family Research Institute, editing and developing articles about an effort to provide better psychological and social support for returning National Guard members. This practical experience has been invaluable for my research identity.
My individual research uses computerized techniques to evaluate large sets of student writing, looking for patterns and features that are consistent across different types of writing and writers. In turn, this knowledge can help develop pedagogical techniques and administrative practices that help students reach their goals as writers. For example, I have worked extensively with the International Corpus Network of Asian Learners of English (ICNALE), a publicly accessible corpus of student essays written by thousands of writers from across Asia and the United States. By comparing the work of second language writers at different levels of proficiency, for example, or by examining differences between writers of different language backgrounds, research can contribute to a clearer picture of how second language writers learn best. In this effort, I use techniques such as contrastive corpus analysis and vectorial semantics, which reveal the textual features associated with different texts and groups. To make this type of analysis truly useful, we must continue to develop these techniques and tools. My project “Evaluating the Comparability of VOCD-D and HD-D” is part of this effort. The article is an empirical evaluation of two popular computerized metrics for assessing lexical diversity. My research indicates that the two are essentially interchangeable. This type of work is important because it helps lead us to more valid, more useful measures with which to evaluate writing via computerized textual processing.
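Metrics like HD-D treat lexical diversity as a sampling problem: the raw type-token ratio falls as texts grow longer, so HD-D instead estimates the expected type-token ratio of a fixed-size random sample of the text, computed analytically via the hypergeometric distribution. As a rough illustration of the underlying idea (a minimal sketch of the calculation, not the published implementation of either metric, and assuming the conventional 42-token sample size), it can be written as:

```python
from collections import Counter
from math import comb

def hdd(tokens, sample_size=42):
    """Expected type-token ratio (TTR) of a random sample of
    `sample_size` tokens, computed analytically with the
    hypergeometric distribution -- the idea behind HD-D."""
    n = len(tokens)
    if n < sample_size:
        raise ValueError("text must contain at least `sample_size` tokens")
    freqs = Counter(tokens)
    expected_types = 0.0
    for f in freqs.values():
        # P(this type is absent from a random sample of sample_size tokens,
        # drawn without replacement); comb() returns 0 when n - f < sample_size,
        # i.e., the type is guaranteed to appear.
        p_absent = comb(n - f, sample_size) / comb(n, sample_size)
        expected_types += 1.0 - p_absent
    return expected_types / sample_size  # expected TTR of the sample

# A maximally repetitive text scores near 0; a maximally diverse one scores 1.
repetitive = ["word"] * 100
diverse = [f"word{i}" for i in range(100)]
print(hdd(repetitive), hdd(diverse))
```

Because every term is computed from the full frequency distribution, the result is deterministic, which contrasts with VOCD-D's approach of fitting a curve to TTRs from repeated random samples; that difference is part of what makes an empirical comparison of the two metrics worthwhile.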