This is the text of a presentation I made yesterday at a wonderful Columbia University symposium called Insuetude (still ongoing), which is bringing media archaeologists together with stones-and-bones archaeologists. I started my talk with a bit of film, as a way of time-traveling to the middle of my theme, in part for the pleasure of taking a jarring step back out. Please watch the first 90 seconds or so of The Last Angel of History, a brilliant 1996 documentary by John Akomfrah. You can catch it in this clip. Go ahead. I’ll wait.
Now—what would it mean to take an explicitly antiracist approach to the digitization of cultural heritage? To its technological recovery? To its presentation, not as static content to be received, but as active technology to be used? What would it mean to create an actively antiracist digital library?
Let us first understand the construction of libraries in general, along with their embedded activities of remediation and digital stewardship, as exercises in spatial and temporal prospect. This is work that requires practitioners and builders to develop a geospatially expansive imagination, and to see their charge as having as much to do with things speculative as with retrospect—as much, that is, with scrying for possible, yet-unrealized futures as with reflecting documented, material pasts. If we agree that our collective network of libraries, archives, and museums should be made for prospect—with spatial scope and (as C.P. Snow wrote of the community of scientists) holding “the future in their bones”—then taking up the design problem of an antiracist digital library, particularly in this country, means addressing one fundamental question.
I want to begin from Kevin Hamilton’s own, very effective jumping-off point. In doing that, I hope to encourage some further historical and contextual thinking about these problems in much the same way Kevin did, with his situating of the “black box” metaphor in changing 20th-century conceptions of agency and work—in our evolving notions of the relation of laborers to the systems and environments they inhabit. My context is a little different, though, if closely aligned, because I’m thinking of modes of interpretive work, of scholarship and creativity in the humanities. I’ll also talk a bit about the formal definition of the algorithm, and why I think it’s useful—particularly for practitioners and critics of the digital humanities but really for all scholars engaged in a discussion of algorithmic culture—to be clear on what an algorithm is and is not, especially in its connection to the kind of work we and most of our academic colleagues do.
“What do we do,” Kevin productively asks, “when the sociotechnical system we hope to study is obscured from view?” You’ve heard from him about a range of experimental approaches, all tending toward the conclusion—which resonates strongly with my own experience in digital project and platform design—that the most fruitful research paths may lie beyond or alongside the impulse to “reveal” the contents of a so-called algorithmic black box: they may even include making a kind of peace with our platforms and a growing awareness of our own situated positions within them.
But I’ll ask again. Traditionally, when we become interested in obscured systems, what do we do? Well, “we” (the sort of folks, that is, in the room today) go to grad school.
Nobody lives with conceptual black boxes and the allure of revelation more than the philologist or the scholarly editor. Unless it’s the historian—or the archaeologist—or the interpreter of the aesthetic dimension of arts and letters. Okay, nobody lives with black boxes more than the modern humanities scholar, and not only because of the ever-more-evident algorithmic and proprietary nature of our shared infrastructure for scholarly communication. She lives with black boxes for two further reasons: both because her subjects of inquiry are themselves products of systems obscured by time and loss (opaque or inaccessible, in part or in whole), and because she operates on datasets that, generally, come to her through the multiple, muddy layers of accident, selection, possessiveness, generosity, intellectual honesty, outright deception, and hard-to-parse interoperating subjectivities that we call a library.
When I was a graduate student in my mid-20s, around (gasp!) the turn of the century, I helped to found an intentionally short-lived but very interesting and effective humanities computing think tank. It was a sort of unauthorized, prototyping and tool-building offshoot of the center where I worked, UVa’s Institute for Advanced Technology in the Humanities. This was before the Scholars’ Lab existed. Only CHNM and (relative to today’s wild blossoming) a startlingly few other such digital humanities and digital history centers were in operation. This was, in fact, before “DH” existed as a term of art.
One of the many fun things for me, about establishing this think tank—alongside folks like Jerome McGann, Steve Ramsay, Johanna Drucker, Geoffrey Rockwell, Andrea Laue, Worthy Martin, and a few others—was that I got to name it! Sometimes you do, if you’re the one building the website. (Or at least, you used to.) The name I suggested was the Speculative Computing Lab—SpecLab, for short. I was so enamored with the idea—the metaphor, really, of speculative computing—that it also became the title of my dissertation. Let me tell you why, and explain why I tell this story on a panel about the future of DH centers.
[This post is re-published from an invited response to a February 2014 MediaCommons question of the week: “How can we better use data and/or research visualization in the humanities?” I forgot I had written it! So I thought I would cross-post it, belatedly, to my blog. Many thanks to Kevin Smith, a student in Ryan Cordell’s Northeastern University digital humanities course, for reminding me. Read his “Direct visualization as/is a tactical term,” here.]
Neatline, a digital storytelling tool from the Scholars’ Lab at the University of Virginia Library, anticipates this week’s MediaCommons discussion question in three clear ways. But before I get to that, let me tell you what Neatline is.
It’s a geotemporal exhibit-builder that allows you to create beautiful, complex maps, image annotations, and narrative sequences from collections of documents and artifacts, and to connect your maps and narratives with timelines that are more-than-usually sensitive to ambiguity and nuance. Neatline (which is free and open source) lets you make hand-crafted, interactive stories as interpretive expressions of a single document or a whole archival or cultural heritage collection.
Now, let me tell you what Neatline isn’t.
It’s not a Google Map. If you simply want to drop pins on modern landscapes and provide a bit of annotation, Neatline is obvious overkill – but stick around.
How does Neatline respond to the MediaCommons question of the week?
1) First, as an add-on to Omeka, the most stable and well-supported open-source content management system designed specifically for cultural heritage data, Neatline understands libraries, archives, and museums as the data-stores of the humanities. Scholars can either build new digital collections for Neatline annotation and storytelling in Omeka themselves, or capitalize on existing, robust, professionally produced humanities metadata by using other plug-ins to import records from another system. These sources can range from robust digital repositories (FedoraConnector) to archival finding aids (EADimporter) to structured data of any sort, gleaned from spreadsheets, XML documents, and APIs (CSVimport, OAI-PMH Harvester, Shared Shelf Link, etc.).
2) Second, Neatline was carefully designed by humanities scholars and DH practitioners to emphasize what we found most humanistic about interpretive scholarship, and most compelling about small data in a big data world. Its timelines and drawing tools are respectful of ambiguity, uncertainty, and subjectivity, and allow for multiple aesthetics to emerge and be expressed. The platform itself is architected so as to allow multiple, complementary or even wholly conflicting interpretations to be layered over the same, core set of humanities data. This data is understood to be unstable (in the best sense of the term) – extensible, never fixed or complete – and able to be enriched, enhanced, and altered by the activity of the scholar or curator.
3) Finally, Neatline sees visualization itself as part of the interpretive process of humanities scholarship – not as an algorithmically-generated, push-button result or a macro-view for distant reading – but as something created minutely, manually, and iteratively, to draw our attention to small things and unfold it there. Neatline sees humanities visualization not as a result but as a process: as an interpretive act that will itself – inevitably – be changed by its own particular and unique course of creation. Knowing that every algorithmic data visualization process is inherently interpretive is different from feeling it, as a productive resistance in the materials of digital data visualization. So users of Neatline are prompted to formulate their arguments by drawing them. They draw across landscapes (real or imaginary, photographed by today’s satellites or plotted by cartographers of years gone by), across timelines that allow for imprecision, across the gloss and grain of images of various kinds, and with and over printed or manuscript texts.
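To make the import pathway in point 1 a little more concrete, here is a minimal sketch of the kind of transformation a spreadsheet importer performs: mapping rows of collection metadata into Dublin Core-shaped item records that a content management system like Omeka could hold. Everything here is an illustrative assumption—the sample CSV, the `rows_to_items` helper, and the payload shape are mine, not the actual code or schema of Omeka or any of the plug-ins named above.

```python
import csv
import io

# A hypothetical spreadsheet of collection records, with column names
# loosely echoing Dublin Core fields (Title, Creator, Date, Coverage).
SAMPLE_CSV = """title,creator,date,coverage
"Map of the Shenandoah Valley","J. Hotchkiss","1862","Virginia"
"Letter from Monticello","T. Jefferson","1809","Albemarle County"
"""

def rows_to_items(csv_text):
    """Map each CSV row to a Dublin Core-style item dict.

    The nested "element_texts" shape is an approximation of how
    metadata-element values might be attached to an item record.
    """
    items = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        items.append({
            "element_texts": [
                {"element": {"name": field.capitalize()}, "text": value}
                for field, value in row.items()
                if value  # skip empty cells rather than storing blanks
            ]
        })
    return items

items = rows_to_items(SAMPLE_CSV)
print(len(items))                                       # 2
print(items[0]["element_texts"][0]["element"]["name"])  # Title
```

The point of the sketch is the shape of the work, not the schema: an importer’s job is to carry professionally produced metadata across a boundary intact, so that a tool like Neatline can annotate and narrate it rather than re-key it.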
[Update: I’ve made low-res versions of my slides and an audio reading available for download on Vimeo, Alex Gil has kindly translated the talk into Spanish, and Melissa Terras’ wonderful performance is now up on the Digital Humanities 2014 website. Finally, a peer-reviewed and formally-published version appears in a 2015 issue of DSH: Digital Scholarship in the Humanities.]
“And by-and-by Christopher Robin came to an end of the things, and was silent, and he sat there looking out over the world, and wishing it wouldn’t stop.” – A. A. Milne
Every morning, as the Virginia sun spills over the rim of the Shenandoah Valley, I dive into the water of my municipal swimming pool and think of ruined Roman baths. On either end of the lane in which I take my laps are blue tile letters, mortared just beneath the waterline by a craftsman of the century gone by. I read two words as I swim back and forth: shallow and deep, shallow and deep.
I’m here to give a talk that likewise wants to glide from shallows to depths in turn. My hope is to position our work—the work of the DH community that has nurtured me with kindness for some 18 years—less as it is lately figured (that is, less as a fragmenting set of methodological interventions in the contemporary, disciplinary agon of humanities scholarship) and more as one cohesive and improbably hopeful possibility. The possibility is for strongly connecting technologies and patterns of work in the humanities to deep time: both to times long past and very far in prospect. But I’ll swim to the shallows, too—because, by musing about the messages we may attempt to send and receive in the longest of longues durées, I mean also to encourage a searching and an active stance in DH, toward our present moment—toward engagement with the technological, environmental, and ethical conditions of our vital here-and-now.
I promised in my abstract a practitioner’s talk, and that is what you will get. I’m not a philosopher or a critic. I’m a builder and a caretaker of systems—so I will attempt to bring a craftsperson’s perspective to my theme tonight.