[I recently had the pleasure of responding to a creative and beautifully grounded talk by Kevin Hamilton of the University of Illinois, called “Beyond the Reveal: Living with Black Boxes.” Kevin spoke as part of a workshop on “Algorithmic Cultures,” hosted by Chad Wellmon at UVa’s Institute for Advanced Studies in Culture. The great Frank Pasquale also presented on themes from his new book, The Black Box Society: The Secret Algorithms That Control Money and Information, to which Siva Vaidhyanathan offered an illuminating response. My thanks to Chad and the IASC for hosting the conversation, and to Frank and Kevin for their encouragement to post these remarks. I hope Kevin will publish his terrific paper! You’ll only get glimpses of it in what is to follow.]
I want to begin from Kevin Hamilton’s own, very effective jumping-off point. By doing that, I hope to encourage some further historical and contextual thinking about these problems in much the same way Kevin did, with his situating of the “black box” metaphor in changing 20th-century conceptions of agency and work—in our evolving notions of the relation of laborers to the systems and environments they inhabit. My context is a little different, though closely aligned, because I’m thinking of modes of interpretive work, of scholarship and creativity in the humanities. I’ll also talk a bit about the formal definition of the algorithm, and why I think it’s useful—particularly for practitioners and critics of the digital humanities, but really for all scholars engaged in a discussion of algorithmic culture—to be clear on what an algorithm is and is not, especially in its connection to the kind of work we and most of our academic colleagues do.
“What do we do,” Kevin productively asks, “when the sociotechnical system we hope to study is obscured from view?” You’ve heard from him about a range of experimental approaches, all tending toward the conclusion—which resonates strongly with my own experience in digital project and platform design—that the most fruitful research paths may lie beyond or alongside the impulse to “reveal” the contents of a so-called algorithmic black box: paths that may even include making a kind of peace with our platforms and with our growing awareness of our own situated positions within them.
But I’ll ask again. Traditionally, when we become interested in obscured systems, what do we do? Well, “we” (the sort of folks, that is, in the room today) go to grad school.
Nobody lives with conceptual black boxes and the allure of revelation more than the philologist or the scholarly editor. Unless it’s the historian—or the archaeologist—or the interpreter of the aesthetic dimension of arts and letters. Okay, nobody lives with black boxes more than the modern humanities scholar, and not only because of the ever-more-evident algorithmic and proprietary nature of our shared infrastructure for scholarly communication. She lives with black boxes for two further reasons: both because her subjects of inquiry are themselves products of systems obscured by time and loss (opaque or inaccessible, in part or in whole), and because she operates on datasets that, generally, come to her through the multiple, muddy layers of accident, selection, possessiveness, generosity, intellectual honesty, outright deception, and hard-to-parse interoperating subjectivities that we call a library.
It is the game of the scholar to reverse-engineer lost, embodied, material processes, whether those processes are the workings of ancient temple complexes or of nineteenth-century publishing houses, and to interpret and fashion narratives from incomplete information. I think it’s worth emphasizing the great continuity between activities like the Facebook research Kevin has described and, say, the creation of a critical edition of poetry from a collection of inherited and discovered variants. Both are acts of scholarship that require an understanding of socio-technical systems of textual production, whether those are networked and computational, or scribal, or having to do with letterpress-to-linotype-to-digital technologies of print. Both result in a reconstruction—or, at the least, require the imaginative invocation—of missing bits and pieces. This is an invocation, in the humanities (and, I might argue, in the social sciences too), that is always partly evidence-based and partly intuitive.
What do we do when we take an interest in obscured socio-technological systems? We become their excavators and editors—we begin to experiment and play with them, and make ourselves scholars of them, in a long and honorable tradition. That’s the first point I wish to make: that there is continuity between the established concerns of the humanities and the conditions we respond to as jarring in our “algorithmic culture” today.
And this is why I think it’s crucial that we take a scholarly and creative stance toward the very notion of the algorithm, if we are going to speak of it sometimes as the imagined or cleverly deduced tenant of a computational black box, and sometimes as a stand-in for whole processes we are tempted to call black boxes simply because we do not find them immediately legible or—worse, but sadly often, on the part of academics not trained or encouraged to think of their work as procedural in nature—because we do not consider them seemly targets for humanistic decipherment.
One way to do that is to make sure we are clear on the definition of an algorithm. (Understanding its long history as a concept and object of cultural criticism is crucial too, though we won’t have time to delve into that today.)
In the simplest terms, an algorithm is any effective procedure that pursues the solution of a problem by means of a predetermined sequence of actions, or steps. It is stepwise behavior toward an end—different, though, from less formal heuristics or so-called “rules of thumb,” which proceed by open-ended, situational trial and error, rather than in constrained and finite activity according to a predetermined set of rules. Genetic and learning algorithms are getting ever closer to intuitive, human-like heuristic behavior—which is one reason the ultimate black box has been figured (not just since the mid-20th century, but with increasing frequency since the start of the Industrial Revolution) as the “technological singularity,” the moment when machine intelligence will outstrip the human and we’ll find ourselves permanently and irrevocably out of our depth. But when we speak of algorithms we are generally talking about something smaller, more discrete, and plainly man-made—the deterministic algorithm: a limited, linear, and generalizable sequence of steps designed to guarantee that the agent performing the sequence will either reach a pre-defined goal or establish without a doubt that the goal is unreachable.
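To make that distinction concrete, here is a minimal sketch of my own (in Python, and not something presented at the workshop) of the deterministic case: a binary search over a sorted list, which in a finite, predetermined sequence of steps either reaches its goal or establishes beyond doubt that the goal is unreachable.

```python
def binary_search(sorted_items, target):
    """A deterministic algorithm: a finite, predetermined sequence of steps
    that either reaches the goal (the target's position) or establishes,
    without a doubt, that the goal is unreachable (the target is absent)."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:                    # a constrained, finite loop
        mid = (low + high) // 2           # the same rule applied at every step
        if sorted_items[mid] == target:
            return mid                    # goal reached
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return None                           # goal established as unreachable

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # -> 3
print(binary_search([2, 3, 5, 7, 11, 13], 6))   # -> None
```

Contrast the rule of thumb by which we find a book on a colleague’s disorderly shelf: open-ended, situational, and with no guarantee of ever knowing when to stop looking.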
Usually these are little machinic agents, going about their business silently and beneath our notice. But we increasingly understand—not least from Frank Pasquale’s introduction and from our discussion at today’s workshop—the extent to which we, as complex bundles of thought and action, live in and among algorithmic black boxes. We live. So I’m prompted to ask: how might a subjective, human agent’s interpretation of algorithmic processes in which she, too, is an element, alter those stepwise algorithms? That is the autopoietic, entangled question that I feel Kevin’s art and research get at so nicely. In other words, what are the assumptions with which we approach algorithmic or procedural activities of all sorts, and how might those assumptions both shape and be shaped by the ways we operate in computational, new media systems that constrain and (increasingly) define us?
This is another way of suggesting, to return to the philological concerns with which I began, that algorithmic methods are productive not only of new texts, but of new readings. My old friend and colleague Steve Ramsay has argued, in a book called Reading Machines, that all “critical reading practices already contain elements of the algorithmic.” And the reverse is true: the design of an algorithm—the composition of code—is inherently subjective and, at its best, critical. Even the most clinically perfect and formally unambiguous algorithmic processes embed their designers’ aesthetic judgments and theoretical stances toward problems, conditions, contexts, and solutions.
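As a toy illustration of that reversal (mine, and not an example from Ramsay’s book), consider how even the barest word-frequency “reading” encodes its designer’s judgments: what counts as a word, which words are dismissed as noise, and whether capitalization signifies are all critical decisions written directly into the procedure.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "is", "it"}   # a judgment, not a given

def frequency_reading(text, stopwords=STOPWORDS, top=5):
    """A bare-bones algorithmic 'reading.' Every choice here is interpretive:
    the regex decides what counts as a word; lowercasing decides that 'Rose'
    and 'rose' are the same sign; the stopword list decides which words are
    beneath critical notice."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stopwords).most_common(top)

print(frequency_reading("The rose is a rose is a rose, and the Rose endures."))
# -> [('rose', 4), ('endures', 1)]
```

Change the stopword list, or decide that hyphenated compounds are one word rather than two, and the machine produces a different reading; the theory is in the code.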
There’s a certain strain in scholarship and the arts (arts “useful,” in the sense that Siva Vaidhyanathan so helpfully brought into play today, and decidedly otherwise) that never met a black box without seeing it as a kind of game: a dark game, in many cases, a rigged game, maybe, but a game nonetheless, in which we are invited to interpret, inform, perform, respond, and even compose a kind of countering ludic algorithm.
Repositioning closed, mechanical, or computational operations as participatory or playful algorithms requires what the economist and formal game theorist Martin Shubik called “an explicit consideration of the role of the rules.” This algorithmic literacy, this consciousness of the existence and the agency of the governing ruleset itself, was, for Shubik in 1972, a key factor in defining something as a game (rather than, perhaps in the vein of much of our conversation here today, a calamity or a condition).
This is of course a highly privileged position: to have the knowledge and real and perceived agency to play within and against the rules. Fostering that kind of knowledge and agency among our citizenry—extending that privilege—is one primary aim of a liberal arts education. This seems an entirely uncontroversial statement to make, when applied to the understanding of aesthetics, or of systems of policy and law. That we shy away from teaching formal procedural, computational, and algorithmic thinking in the humanities classroom, treating it as somehow separate from humanistic concerns, and that we allow it to be bifurcated from our own fields as “STEM learning” and even set ourselves in opposition to it, is both a failure of collective imagination and a failure of our individual obligations to our students.
Why? Because, if you understand the basic principles on which a piece of computer code is constructed and must act, and if you are aware that you’re playing with and in it, the most constraining algorithm in any ludic or hermeneutic system becomes just another participant in the process. It opens itself to scholarly interpretation, subjective or artistic response, and practical enhancement or countering measures. And in that way, it begins to open itself to resistance. (This is another angle on “human factors” in algorithmic systems—on human awareness and agency as what Kevin has today called “controls for the controls.”)
In games like Peter Suber’s “Nomic,” which is based on constitutional law, such countering and playful measures become the real, iterative, turn-based modification of the rules themselves. Nomic is “a game of amendment.” And I think it’s no coincidence that the inventor of Nomic has revised himself into an open-access advocate and a dedicated reformer of closed scholarly communication systems.
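Here is a stripped-down, purely hypothetical sketch (mine, and nothing like Suber’s actual ruleset) of what “a game of amendment” looks like once the rules themselves become the mutable objects of play:

```python
# Not Nomic itself, just an illustration: the ruleset is data, and the only
# move in the game is a proposal to amend that data.
rules = {
    101: "All players must abide by the rules currently in effect.",
    102: "A proposed amendment is adopted if a majority of players vote for it.",
}

def propose_amendment(rules, number, text, votes_for, total_players):
    """One turn of play: adopt the amendment if rule 102's majority threshold is met."""
    if votes_for * 2 > total_players:
        rules[number] = text              # the ruleset itself is what changes
        return True
    return False

propose_amendment(rules, 103, "Players may inspect the black box once per turn.",
                  votes_for=3, total_players=4)
print(rules[103])
```

Nothing prevents the next amendment from rewriting rule 102 itself, which is precisely the point.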
In fact, we’re always amending our algorithms. We’re always playing with and against them, even as we set them up to sharpen themselves in play against us. The encouragement I take from Frank Pasquale’s interventions, from Shubik and Suber, and from Kevin Hamilton’s talk today—to consider “the role of the rules” in our black box games—leads me (I conclude now, in a rush) to the common insight of a perhaps unlikely trio: Emily Dickinson, Gerard Manley Hopkins, and Charles Sanders Peirce.
Poets and logicians know the liberating value of constraint. We wouldn’t have a sonnet otherwise, or a proof.
As Hopkins put it, writing to his friend Robert Bridges about the inexorable logic of what he called “‘vulgar,’ that is obvious or necessary rhymes”: to compose verse, we must understand the precise mechanisms by which “freedom is compatible with necessity.” For Dickinson, the enduring artistic value that comes from operating under constraint may begin with a natural blossoming, but must end in a squeeze:
Essential Oils — are wrung —
The Attar from the Rose
Be not expressed by Suns — alone —
It is the gift of Screws —
And—working in a more mathematical and pragmatic but no less imaginative mode—C. S. Peirce interpreted algorithmic specifications not as thwarting, confounding black boxes, but as creative prompts, “in the sense in which we speak of the ‘rules’ of algebra; that is, as a permission under strictly defined conditions.” Algorithmic and combinatoric art forms, such as the work of Sol LeWitt or the OuLiPo group, show us how this functions.
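To gesture at how that permission functions, here is a small sketch of my own (in Python, and only a pale shadow of the actual practices of LeWitt or the OuLiPo writers): a strictly defined rule that does not forbid composition so much as enumerate exactly what it permits.

```python
from itertools import product

# In the spirit of a LeWitt wall drawing: a finite, strictly defined rule set
# that acts as a permission, enumerating every composition it allows.
DIRECTIONS = ["vertical", "horizontal", "diagonal left", "diagonal right"]
COLORS = ["black", "red", "yellow", "blue"]

def permitted_drawings():
    """Yield every instruction the rules permit: 'a permission under strictly
    defined conditions,' in Peirce's phrase."""
    for color, direction in product(COLORS, DIRECTIONS):
        yield f"Within a square, draw {color} {direction} lines, not touching."

def lipogram(words, forbidden="e"):
    """An OuLiPo-style constraint: admit only the words that lack the forbidden letter."""
    return [w for w in words if forbidden not in w.lower()]

print(len(list(permitted_drawings())))                            # 16 permitted drawings
print(lipogram(["attar", "essential", "oils", "rose", "wrung"]))  # ['attar', 'oils', 'wrung']
```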
I want to suggest that it’s in granting ourselves playful permission, as through the exciting lines of research we’ve heard laid out today, and in their application in the classroom, that we’ll retain real agency: not just as artists and users, but as scholars and as citizens. What would happen if—as Kevin Hamilton has prompted us to do—we more systematically questioned not just the technical and data-driven frameworks of our “algorithmic culture,” but the validity of the “black box” metaphors that have simultaneously defined and obscured them? What if we refused, in our teaching and research, to understand algorithms as closed systems—refused to accept (in his terms) the algorithmic “dead end”? And what if, as Kevin suggests is possible, we practiced against the longstanding hermeneutic impulse to direct our scholarship wholly and finally toward a “reveal” of the contents of the black box?
What if we educated and designed for resistance, through iterative performance and play?
[Note: I have more on the definition of the algorithm in an entry in the recent Johns Hopkins Guide to Digital Media (eds. Ryan, Emerson, and Robertson), and on the notion of “ludic algorithms” in a chapter of Kevin Kee’s Pastplay: Teaching and Learning History with Technology.]