alternate futures/usable pasts

[While I’m cleaning up the text of a talk I gave at Harvard’s Hazen Symposium last week (see #HazenatHarvard or Merrilee’s Storify for tweets from many great presentations), I thought I’d share just the prelude and final paragraph to one that preceded it, and was really a first stab at the concept. This is from Marquette University Library in late September.]

[Update: “Speculative Collections,” the talk that followed, is now available.]

It wasn’t until I took a job in the library that I became unstuck in time. I thought I knew what time was, in that way that you think you know things, now, when you’re just out of your 20s and it’s possible you could have it all together. I thought I knew time as a young mother: how it drags and loops with repetition (sleep and milk and laundry); how quickly it passes, as little bodies grow and reach and change. I thought I knew it as a scholar. My academic training had been in classical archaeology, on the one hand, and poetry and textual criticism on the other—the meter of lyric verse and the history of print culture—with a weird stop-over in the middle to teach the design and aesthetics of video games. Each of these disparate fields has its own ticking metronome, its particular largo or accelerando. They have positionality as disciplines and different ways of positioning the objects of their study, all splayed out on timelines of their own making.

I thought I knew time, too, because I’d designed software to model it. Part of my dissertation work around (ahem) the turn of the century, in which I was grappling toward something I called Speculative Computing, had been to collaborate with a small team (Johanna Drucker, Jim Allman, Petra Michel, and many generous colleagues) in prototyping a tool for humanistic timelines. These were timelines not governed—as nearly all digital interfaces to time were then and still are—by the mechanical ticking of a scientific clock. We were funded, oddly enough, by a grant to Johanna from the Intel Corporation, which was interested in hardware requirements for the Don Draper-like transcendent moments they hoped you might have with the digital equivalent of your family’s slide carousel. They wanted to sell computers that were machines for memory, rather than just memory-machines. So they offered money (time is money) to some humanities scholars (who come cheap), to tinker with stretchy, squishy timelines, to imagine interfaces and interaction modes for the personal and uniquely human experience of time.

We created timeline tools for fiction and memoir and contested historical events, lines on which nothing could be pinned precisely, tools for sketching ambiguous causes and imprecise moments. Our Temporal Modelling Project made timelines for causal relations and visions proleptic—acts of revision and retrospect, anticipation, prediction, self-illusion, and regret. We modeled time that zips by, and time that drags its feet. We also built branching timelines, my specialty, in which the subjective observer’s standing-point—the moment of the now, my experience necessarily very different from yours even in the same instant—was like a bead: any number of beads, really, all valid imaginary nows—which could move freely back and forth along unraveling threads of time—concentrating them for a moment, maybe, into a contingent view of past, present, and future—but always in motion and part of a fabric of observation and interpretation, being perpetually unmade and made.
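The bead-on-a-thread model described above can be sketched in code. This is my own illustrative reconstruction, not the Temporal Modelling Project's actual implementation (names like `Moment` and `NowBead` are invented here): each moment may branch into several possible continuations, and each observer's "now" is a movable marker that can slide forward along a chosen thread or back toward an earlier standing-point.

```python
from dataclasses import dataclass, field

@dataclass
class Moment:
    # A point on a subjective timeline; no fixed clock value is required.
    label: str
    branches: list["Moment"] = field(default_factory=list)

    def branch(self, label: str) -> "Moment":
        # Open a new possible continuation from this moment.
        child = Moment(label)
        self.branches.append(child)
        return child

@dataclass
class NowBead:
    # A movable observer's "now": one of many valid standing-points.
    position: Moment
    trail: list[Moment] = field(default_factory=list)

    def advance(self, branch_index: int = 0) -> None:
        # Slide the bead forward along one chosen thread of time.
        self.trail.append(self.position)
        self.position = self.position.branches[branch_index]

    def recede(self) -> None:
        # Slide the bead back toward an earlier standing-point.
        if self.trail:
            self.position = self.trail.pop()

# Two observers hold different "nows" on the same branching line.
root = Moment("departure")
root.branch("letter sent")
root.branch("letter never written")

observer_1 = NowBead(root)
observer_2 = NowBead(root)
observer_1.advance(0)  # follows the thread in which the letter is sent
# observer_2 remains at "departure": the same instant, a different now.
```

Each bead's view of past, present, and future is contingent on which thread it has traveled, which is the point: the model privileges the observer's position rather than a single authoritative clock.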

So I guess I was primed to look beyond progress narratives and linear conceptions of time.

everywhere, every when

This is the text of a presentation I made yesterday at a wonderful Columbia University symposium called Insuetude (still ongoing), which is bringing media archaeologists together with stones-and-bones archaeologists. I started my talk with a bit of film, as a way of time-traveling to the middle of my theme, in part for the pleasure of taking a jarring step back out. Please watch the first 90 seconds or so of The Last Angel of History, a brilliant 1996 documentary by John Akomfrah. You can catch it in this clip. Go ahead. I’ll wait.

Now—what would it mean to take an explicitly antiracist approach to the digitization of cultural heritage? To its technological recovery? To its presentation, not as static content to be received, but as active technology to be used? What would it mean to create an actively antiracist digital library?

Let us first understand the construction of libraries in general, along with their embedded activities of remediation and digital stewardship, as exercises in spatial and temporal prospect. This is work that requires practitioners and builders to develop a geospatially expansive imagination, and to see their charge as having as much to do with things speculative as with retrospect—as much, that is, with scrying for possible, yet-unrealized futures as with reflecting documented, material pasts. If we agree that our collective network of libraries, archives, and museums should be made for prospect—with spatial scope and (as C.P. Snow wrote of the community of scientists) holding “the future in their bones”—then taking up the design problem of an antiracist digital library, particularly in this country, means addressing one fundamental question.

Where and when do black lives matter?

capacity through care

[This is the draft of an invited contribution to a forum on “care” that will appear in Debates in the Digital Humanities 2017, edited by Matthew K. Gold and Lauren Klein. It’s a capsule summary of my NEH talk, “On Capacity and Care.” (A more digestible pill?)]

The grand challenges that face (and link) little cultures and fragile creatures across the implacable Anthropocene must be met by an academy made more capable—in every sense of that open-handed word. But our perpetually erupting anxieties about data-driven research and inquiry “at scale” seem to betray a deep-seated—and ill-timed—discomfort with the very notion of increased capacity in the humanities.

There are obvious and valid reasons for humanities scholars to be skeptical of big data analysis, distant reading, or work in the longue durée: problems of surveillance and privacy; the political ends to which data mining can be put and the systems of consumption and control in which it is complicit; intractable and cascading structural inequities in access to information; and disparities in sampling and representation, which limit the visibility of historical and present-day communities in our datasets, or filter them through a hostile lens. We can further understand and respect a discomfort with vastness in fields that have, most particularly over the past half century, focused intently on the little stuff: working in bits and bobs and “small things forgotten.”

Humanities scholars make theoretical and practical advances—including advances in the cause of social justice—by forwarding carefully observed, exquisitely described jewel-box examples. Our small data add nuance and offer counter-narratives to understandings of history and the arts that would otherwise fall along blunter lines. The finest contribution of the past several decades of humanities research has been to broaden, contextualize, and challenge canonical collections and privileged views. Scholars do this by elevating instances of neglected or alternate lived experience—singular human conditions, often revealed to reflect the mainstream.

The most compelling arguments against algorithmic visualization and analysis are not, therefore, fueled by nostalgic scholarly conservatism, but rather emerge across the political spectrum. Yet they share a common fear. Will the use of digital methods lead to an erosion of our most unique facility in the humanities, the aptitude for fine-grained and careful interpretive observation? In seeking macroscopic or synthetic views of arts and culture, will we forget to look carefully and take—or teach—care?

I see the well-established feminist ethic and praxis of care, itself, as a framework through which the digital humanities might advance in a deeply intertwingled, globalized, data-saturated age. An ethic of care—as formalized in the 1970s and ’80s by Carol Gilligan, Nel Noddings, Joan Tronto, Virginia Held, and others—means to reorient its practitioners’ understanding in two essential ways. The first is toward a humanistic appreciation of context, interdependence, and vulnerability—of fragile, earthly things and their interrelation. The second is away from the supposedly objective evaluation and judgment of the philosophical mainstream of ethics—that is, away from criticism—and toward personal, worldly action and response. After all, the chief contribution, over prior directions in moral philosophy, of the feminist ethics of the 18th and 19th century that inform this work, was to see the self as most complete when in connection with others. Kantian morality and utilitarianism had valorized an impartial stance and posited that, as a man grew in judgment and developed ethical understanding, he separated himself from others. The mark of a fully developed (implicitly masculine) self was its ability to stand apart from and reason outside of familial systems and social bonds.

A feminist ethic of care—like many a DH research agenda or platform for large-scale visualization and analysis—seeks instead to illuminate the relationships of small components, one to another, within great systems. Noddings identifies the roots of care in what she calls engrossment: that close attention and focus on the other which provokes a productive appreciation of the standpoint or position of the cared-for person or group—or (I would say) of the qualities and affordances of an artifact, document, collection, or system requiring study or curation. Humanities scholars hone and experience engrossment in archival research and close reading. We perform it in explicating subjectivity. We reward each other for going deep. Yet one concern in the literature of care has been whether engrossment can become too intense. I believe the answer is the same for caregiving (nursing, teaching, tending, mothering, organizing) as it is for humanities scholarship. Real experts are those who manifest deep empathy, while still maintaining the level of distance necessary to perceive systemic effects and avoid projection of the self onto the other. In other words, empathetic appreciation of the positional or situated goes hand in hand with an increase in effective observational capacity. A care-filled humanities is by nature a capacious one.

To me, this suggests that a primary design desideratum for Anthropocenic DH and cultural heritage systems must be the facilitation of humanistic engrossment through digital reading (viewing, listening, sensing) and large-scale analysis. Let us build platforms that promote an understanding of the temporal vulnerability of the individual person or object; that more beautifully express the relationship of parts, one to another and to many a greater whole; and that instill, through depth of feeling in their users, an ethic of care—active, outward-facing, interdisciplinary, and expansive: sufficient to our daunting futures and broadened scope.

neatline & visualization as interpretation

[This post is re-published from an invited response to a February 2014 MediaCommons question of the week: “How can we better use data and/or research visualization in the humanities?” I forgot I had written it! so thought I would cross-post it, belatedly, to my blog. Many thanks to Kevin Smith, a student in Ryan Cordell’s Northeastern University digital humanities course, for reminding me. Read his “Direct visualization as/is a tactical term,” here.]

Neatline, a digital storytelling tool from the Scholars’ Lab at the University of Virginia Library, anticipates this week’s MediaCommons discussion question in three clear ways. But before I get to that, let me tell you what Neatline is.

neatline

It’s a geotemporal exhibit-builder that allows you to create beautiful, complex maps, image annotations, and narrative sequences from collections of documents and artifacts, and to connect your maps and narratives with timelines that are more-than-usually sensitive to ambiguity and nuance. Neatline (which is free and open source) lets you make hand-crafted, interactive stories as interpretive expressions of a single document or a whole archival or cultural heritage collection.

Now, let me tell you what Neatline isn’t.

It’s not a Google Map. If you simply want to drop pins on modern landscapes and provide a bit of annotation, Neatline is obvious overkill – but stick around.

How does Neatline respond to the MediaCommons question of the week?

1)   First, as an add-on to Omeka, the most stable and well-supported open source content management system designed specifically for cultural heritage data, Neatline understands libraries, archives, and museums as the data-stores of the humanities. Scholars are able either to build new digital collections for Neatline annotation and storytelling in Omeka themselves, or to capitalize on existing, robust, professionally-produced humanities metadata by using other plug-ins to import records from another system. These could range from robust digital repositories (FedoraConnector) to archival finding aids (EADimporter) to structured data of any sort, gleaned from sources like spreadsheets, XML documents, and APIs (CSVimport, OAI-PMH Harvester, Shared Shelf Link, etc.).

2)   Second, Neatline was carefully designed by humanities scholars and DH practitioners to emphasize what we found most humanistic about interpretive scholarship, and most compelling about small data in a big data world. Its timelines and drawing tools are respectful of ambiguity, uncertainty, and subjectivity, and allow for multiple aesthetics to emerge and be expressed. The platform itself is architected so as to allow multiple, complementary or even wholly conflicting interpretations to be layered over the same, core set of humanities data. This data is understood to be unstable (in the best sense of the term) – extensible, never fixed or complete – and able to be enriched, enhanced, and altered by the activity of the scholar or curator.
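The design commitments above — ambiguity-tolerant timelines and multiple interpretations layered over one shared body of data — can be illustrated with a small sketch. To be clear, this is not Neatline's actual data model or API; it is a hypothetical Python rendering of the two ideas, with invented names (`FuzzySpan`, `Layer`): dates carry earliest/latest bounds and a qualifier rather than a single clock-exact value, and conflicting scholarly readings coexist as separate layers over the same core records.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    # A core item of humanities data, shared by all interpretive layers.
    identifier: str
    title: str

@dataclass
class FuzzySpan:
    # An imprecise interval: earliest and latest plausible years,
    # plus a free-text qualifier, instead of one exact date.
    earliest: int
    latest: int
    qualifier: str = ""

@dataclass
class Annotation:
    record: Record
    span: FuzzySpan
    note: str

class Layer:
    # One scholar's interpretation, layered over the shared records.
    def __init__(self, author: str):
        self.author = author
        self.annotations: list[Annotation] = []

    def annotate(self, record: Record, span: FuzzySpan, note: str) -> None:
        self.annotations.append(Annotation(record, span, note))

# The same record can carry conflicting readings in different layers,
# without either one altering or overwriting the core data.
letter = Record("doc-017", "Undated field letter")

layer_a = Layer("curator A")
layer_a.annotate(letter, FuzzySpan(1861, 1862, "early in the war?"),
                 "written en route")

layer_b = Layer("curator B")
layer_b.annotate(letter, FuzzySpan(1864, 1865, "paper stock suggests later"),
                 "written in camp")
```

The key design choice is that interpretation lives beside the record, not inside it: the record stays extensible and open to enrichment, while disagreement between curators is preserved as data rather than resolved away.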

3)   Finally, Neatline sees visualization itself as part of the interpretive process of humanities scholarship – not as an algorithmically-generated, push-button result or a macro-view for distant reading – but as something created minutely, manually, and iteratively, to draw our attention to small things and unfold it there. Neatline sees humanities visualization not as a result but as a process: as an interpretive act that will itself – inevitably – be changed by its own particular and unique course of creation. Knowing that every algorithmic data visualization process is inherently interpretive is different from feeling it, as a productive resistance in the materials of digital data visualization. So users of Neatline are prompted to formulate their arguments by drawing them. They draw across landscapes (real or imaginary, photographed by today’s satellites or plotted by cartographers of years gone by), across timelines that allow for imprecision, across the gloss and grain of images of various kinds, and with and over printed or manuscript texts.

anthropocene abstract

I am deeply honored to have been invited to give a plenary lecture at this year’s Digital Humanities conference, planned for Lausanne, Switzerland in early July. My fellow keynoters are Bruno Latour, Sukanta Chaudhuri, and Ray Siemens, who will receive ADHO’s Zampolli Prize. This is quite a line-up! I’m not nervous at all. Why do you ask?

Now that I’ve provided an abstract for the talk, I thought I’d share it here. My subject is Digital Humanities in the Anthropocene:

This will be a practitioner’s talk, and—though the abstract belies it—an optimistic one. I take as given the evidence that human beings are irrevocably altering the conditions for life on Earth and that, despite certain unpredictabilities, we live at the cusp of a mass extinction. What is the place of digital humanities practice in the new social and geological era of the Anthropocene? What are the DH community’s most significant responsibilities, and to whom? This talk will position itself in deep time, but strive for a foothold in the vital here-and-now of service to broad publics. From the presentist, emotional aesthetics of Dark Mountain to the arms-length futurism of the Long Now, I’ll dwell on concepts of graceful degradation, preservation, memorialization, apocalypse, ephemerality, and minimal computing. I’ll discuss digital recovery and close reading of texts and artifacts—like the Herculaneum papyri—once thought lost forever, and the ways that prosopography, graphesis, and distant reading open new vistas on the longue durée. Can DH develop a practical ethics of resilience and repair? Can it become more humane while working at inhuman scales? Can we resist narratives of progress, and still progress? I wish to open community discussion about the practice of DH, and what to give, in the face of a great hiatus or the end of it all.

The talk will likely be recorded at the event and later published in one of the ADHO journals, but I will also (as usual) post the text here after I deliver it. You’ll see hints at my reading on the subject in the abstract above—from Jo Guldi and David Armitage to Steven J. Jackson, Rebecca Solnit, Shiv Visvanathan, Bruno Latour, Dipesh Chakrabarty, Timothy Morton, Susie O’Brien, Brian Lennon, Eileen Crist, and more, including a number of institutional and collective projects—but I welcome messages pointing me at things you suspect I’ll miss.