thinking about algorithmic racism & how we teach with/about algorithms.
DLA’s “Digital Fluencies” Series investigates what it means to develop more critical facility and engagement with digital technologies. Meetings usually combine one to three readings and a case study for hands-on exploration. Faculty, students, and staff at all levels of digital skill are welcome to attend.
Our meeting on algorithmic racism explored how we increasingly live in an algorithmic society, our everyday lives shaped by interactions with Google searches, social media platforms, artificial intelligence software, and myriad devices and programs that rely on the execution of computational algorithms. At the broadest level, we wanted to ask, updating Robert Staughton Lynd’s famous book-title phrase Knowledge for What?, algorithms for what? More specifically, we hoped to explore what it would mean to become “algorithmically fluent” and more critically aware of the ways in which algorithms reinforce or extend larger structures of racism, oppression, injustice, and misrepresentation. And how might we harness the power of algorithms for better ends in scholarship, teaching, inclusivity, freedom, and citizenship in the contemporary world?
We turned to the following readings and case study:
- Safiya Umoja Noble, “Introduction: The Power of Algorithms,” in Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018)
- Navneet Alang, “Turns Out Algorithms Are Racist,” New Republic, 31 August 2017
- Zeynep Tufekci, “YouTube, the Great Radicalizer,” New York Times, 10 March 2018
- Virginia Eubanks, “The Digital Poorhouse,” Harper’s, January 2018
- Zeynep Tufekci, “What Happens to #Ferguson Affects Ferguson: Net Neutrality, Algorithmic Filtering and Ferguson,” The Message, 14 August 2014
- Benjamin Schmidt, “Do Digital Humanists Need to Understand Algorithms?,” Debates in the Digital Humanities 2016
- Benjamin Schmidt, “Why Digital Humanists don’t need to understand algorithms, but do need to understand transformations,” Sapping Attention, 20 July 2016
- Yeshi Milner, “An Open Letter to Facebook from the Data for Black Lives Movement,” Medium, 4 April 2018
- Li Zhou, “Is Your Software Racist?,” Politico, 7 February 2018
Much of our conversation pivoted on two issues: how do we become aware of the effects of algorithms in our lives as citizens, and what kind of curricular interventions at Middlebury might best prepare students for navigating a world of algorithms?
On the former question, we returned repeatedly to the need for awareness, while on the latter we pondered how to enhance this awareness in a liberal arts college curriculum. Our overarching sense seemed to be that not everyone must become proficient in designing algorithms as a coder or programmer in order to develop a more contextual understanding of how algorithms function in the Internet and other digital technologies as currently designed. We can all learn the basic underlying histories and guiding principles of algorithmic construction that help us better identify when algorithms are causing harm, when they turn into what Cathy O’Neil calls Weapons of Math Destruction.
We can also, fascinatingly, do the reverse: we can use algorithms as a way to glimpse deeper issues of structural racism (not to mention sexism and other isms that name systemic modes of injustice, violence, and suffering). Algorithms, we learned from our readings, get designed and implemented within social conditions that are already systemically racist; is it any wonder that they then, as computational processors of data, information, and knowledge, reproduce racism? What has been most striking, as Safiya Umoja Noble and Zeynep Tufekci show, is how the particular contexts in which algorithms now dominate our lives amplify these underlying and persistent historical forces. The problem is not algorithmic thinking per se, but rather the frameworks in which algorithms are employed.
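That dynamic, a formally neutral procedure inheriting the bias of the data it is fit to, can be made concrete in a few lines. The following is a minimal toy sketch, not drawn from any of the readings: all groups, records, and numbers are invented for illustration. A scoring rule "learned" from historically biased approval decisions simply carries the old disparity forward under a veneer of computational objectivity.

```python
# Toy illustration: an "objective" rule learned from historically biased
# outcomes reproduces that bias. All data here is invented.
from collections import defaultdict

# Hypothetical historical records: (group, qualified, was_approved).
# Group "B" applicants were under-approved even when qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

# "Learn" a per-group approval rate from the biased history --
# a stand-in for any model that fits itself to past decisions.
approved = defaultdict(int)
total = defaultdict(int)
for group, _qualified, was_approved in history:
    total[group] += 1
    approved[group] += was_approved  # bools count as 0/1

rates = {g: approved[g] / total[g] for g in total}

# The learned rule mirrors past approval rates, so the historical
# disparity survives intact into the "algorithmic" decision.
for g in sorted(rates):
    print(g, rates[g])  # A 0.75, B 0.25
```

Nothing in the code mentions race; the disparity lives entirely in the training data, which is precisely the point Noble and Tufekci press on at far greater scale.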
And what are these frameworks? We noticed a few from our readings and discussion:
- Advertising as the business model for Silicon Valley. Our authors repeatedly pointed to the ways in which Google, Facebook, Twitter, and other dominant forces on the Internet are all driven by attracting attention to sell advertising. This, as Tufekci contends in the case of YouTube, seems to create algorithmic designs that intensify extremist views and controversy while hollowing out a common middle ground of cultural experience and exchange (although perhaps cat videos might sustain that common space, which is to say perhaps there are certain kinds of kitsch that create commonality!?).
- The bubble effect of social media. Because social media carves up the distinction between private and public spheres in new ways, it undercuts previous assumptions and models about shared culture. Tufekci’s work on Twitter, Facebook, and Ferguson catches the ways in which the idea of the public sphere has fragmented into a multitude of semi-public spaces. The network models that social media algorithmically generate pose new challenges for giving the public sphere and shared public culture a robust virtual life.
- State power and government regulation. A focus on advertising points to the role of the state in possibly regulating algorithmic activities. However, Virginia Eubanks uncovers ways in which state power has also been misused to exacerbate long-running problems of managing the poor rather than addressing poverty itself. Sometimes this has to do with cynical political decisions or extreme political views, but the managerial-algorithmic complex operating in both corporate-commercial Silicon Valley spaces and governmental decisions may well be as crucial to confront as the problems of consumerist economics underlying the Internet infrastructure.
- Numeracy as civic obligation. How does greater awareness of the role of algorithms in contemporary society relate to the need for increased numeracy? How might we better understand the logics and approaches of math, statistics, and numbers as part of our civic obligation when it comes to digital technologies, the Internet, and the presence of algorithms in larger systems of oppression? And how do we do so not in a Luddite fantasy of blaming the machines, but rather with a goal of liberation, or at least of reforming systems that continue to sustain regimes of racial inequity and injustice?
These are just a few issues that arose in our conversation, which also touched on how we handle the benefits and drawbacks of automation through algorithmic computation, whether the makers of algorithms are ethically responsible for their creations, at what point those who use algorithms become ethically responsible for their actions, and how we might notice or imagine alternatives to current technological systems that, despite the more wildly utopian rhetoric about digital culture, have not only reinforced long-running forces of racism but even escalated them. How can we devise practical solutions and reforms while continuing to imagine wilder, more utopian alternatives and imaginaries?
In addition to noticing the presence of algorithms in our shared lives as citizens, our conversation also turned to the classroom and curriculum at Middlebury. How do we teach in new ways to advance digital fluencies when it comes to the relationship between algorithms and racism? A debate emerged between two models: should we concentrate on core courses that explore digital fluencies around topics such as the ethics of algorithms, or should thinking about algorithms suffuse the curriculum across multiple disciplines?
Perhaps the answer is that both should happen. Core courses in Computer Science, the social sciences, information environmentalism, philosophy, and the history of technology can go deep into the many facets of algorithmic analysis. Our freshman seminars might all contain some kind of digital fluency component. There might be other moments to create cross-campus engagements with the problems and possibilities of the algorithm.
At the same time, heightened awareness of algorithmic thinking might also appear within many different disciplinary areas. The challenge would be to use the increased consciousness of what algorithms are up to in order to deepen student learning about particular fields of study. A good model for this approach might be found in Benjamin Schmidt’s effort to apply algorithmic thinking to specialized scholarship in literary studies and history (in his commentary on the Jockers/Swafford debates about the Syuzhet package and sentiment analysis of nineteenth-century European novels). Here, a seemingly esoteric scholarly disagreement cracks open a view on issues not only of algorithms but also of the history of the European novel. To be sure, this instance moves away from racism in its contemporary or historical context, but we might delve deeper into all sorts of topics, racism among them, via considerations of the algorithm in various departments, disciplines, courses, units of courses, and fields of study.
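For readers unfamiliar with the Jockers/Swafford exchange, the mechanics at issue are simple enough to sketch. What follows is a minimal illustration, not the Syuzhet package itself: a dictionary-based sentiment score per sentence, followed by a smoothing transformation. The word lists, sentences, and window size are invented. Schmidt's point is that the interpretive stakes lie less in the scoring "algorithm" than in the transformation, since the choice of smoothing reshapes the "plot arc" the method claims to reveal.

```python
# Minimal sketch of dictionary-based sentiment scoring plus a
# moving-average transformation (the contested smoothing step).
# Word lists and window size are invented for illustration.
POSITIVE = {"joy", "love", "hope", "delight"}
NEGATIVE = {"grief", "fear", "loss", "despair"}

def sentence_score(sentence):
    """+1 per positive word, -1 per negative word."""
    words = sentence.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def moving_average(scores, window=3):
    """Smooth raw scores; the window choice reshapes the 'plot arc'."""
    out = []
    for i in range(len(scores)):
        lo, hi = max(0, i - window // 2), min(len(scores), i + window // 2 + 1)
        chunk = scores[lo:hi]
        out.append(sum(chunk) / len(chunk))
    return out

sentences = [
    "joy and hope filled the morning",
    "then grief and loss arrived",
    "despair deepened in the night",
    "at last love and delight returned",
]
raw = [sentence_score(s) for s in sentences]
smooth = moving_average(raw)
print(raw)     # [2, -2, -1, 2]
print(smooth)
```

The raw scores swing sharply while the smoothed series tells a gentler story, and a different window would tell yet another; that gap between measurement and transformation is exactly what Schmidt argues humanists need to understand.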
In short, we need the history and context of algorithms to understand their workings more critically; at the same time, we might be able to use the growing prevalence of algorithms in society—and debates about their effectiveness and accuracy—as opportunities to gain deeper comprehension of the histories, contexts, methods, approaches, modes of inquiry, information, data, and knowledge that algorithms now increasingly mediate.