hack vs. yack meets big data vs. small data.
I’m enjoying the provocative collection of essays about digital humanities and theory in the recent issue (are we still calling it issue?) of Journal of Digital Humanities. Brought together by Natalia Cecire, the essays ask us to think carefully about the dangers of positivism in the DH turn toward computational power. Theory, which makes many DHers roll their eyes, appears in these articles as the required grounding for DH (even as a number of writers seek to unground assumptions about the digital). Perhaps, the essays suggest, theory is even the very thing that makes DH a humanities endeavor instead of a purely digital one.
I’ve been thinking particularly about Benjamin M. Schmidt’s “Theory First,” which in many ways makes this point. Like Cecire, Schmidt claims that the digital humanities must draw upon theoretical inquiries into the very nature of epistemological meaning-making from evidence, especially from “big data,” with its intimations of positivist, statistical truth claims.
What I am struck by in this wonderful piece of writing by Schmidt is that there are two debates wrapped into one here. The first has to do with what DHers call hack vs. yack, which is to say whether DH is about simply diving in and building and coding projects or whether DH is about investigating the ethical, ideological, and, yes, theoretical dimensions of computational power and its uses. As almost all these essayists note, DH is most intriguing when it is about both.
But there is also a second debate lurking here, and it has to do with scale. Schmidt calls for DH to focus on processing big data as a means of revealing how “deeper structures are readable in the historical record.” But might not small data also lead to these deeper structures? How do we know that history is statistically quantifiable? Perhaps causalities, meanings, truths, and significances concentrate in particular objects, texts, events, people? Perhaps history is ultimately uneven, and it is a dangerous distortion to even it out? Or better said, maybe different kinds of history lurk at different scales of looking and listening to the past.
My point is that computers might help us access small data too. We can go both microscopic and macroscopic in order to continue to probe the nature of evidence for historical meaning-making. I’ve gone on this rant before, but I see no reason to assume that more necessarily means more true when it comes to historical evidence. Big data might offer something profitable to marketers, the Pentagon, and corporations, but small data might matter just as much to our understanding of history.
Of course, DHers should, and must, explore the historical record through all lenses—epic and minute, qualitative and quantitative, telescopic and microscopic. Moreover, one collective project, it seems to me, might be for us not only to continue to enhance the dialectic between hack and yack, but also to think about how the digital can enable movement between scales—between the micro and the macro perceptual levels of historical analysis (not to mention the scales of justice, which Schmidt evocatively invokes in his exploration of how theory matters far more to history’s losers than to its winners).
Hacking this movement between macro and micro scales of the historical record while also yacking about it will be important work.