speculative computing & the centers to come

[This is a short talk I prepared for a panel discussion today with Brett Bobley, Ed Ayers, and Stephen Robertson, on the future of DH centers. The lovely occasion is the 20th anniversary celebration of the Roy Rosenzweig Center for History and New Media at George Mason University. Happy birthday, CHNM! Next year, I’ll buy you a drink.]

When I was a graduate student in my mid-20s, around (gasp!) the turn of the century, I helped to found an intentionally short-lived but very interesting and effective humanities computing think tank. It was sort of an unauthorized, prototyping or tool-building offshoot of the center where I worked, UVa’s Institute for Advanced Technology in the Humanities. This is before the Scholars’ Lab existed. Only CHNM and (relative to today’s wild blossoming) a startlingly few other such digital humanities and digital history centers were in operation. This is, in fact, before “DH” existed, as a term of art.

One of the many fun things for me, about establishing this think tank—alongside folks like Jerome McGann, Steve Ramsay, Johanna Drucker, Geoffrey Rockwell, Andrea Laue, Worthy Martin, and a few others—was that I got to name it! Sometimes you do, if you’re the one building the website. (Or at least, you used to.) The name I suggested was the Speculative Computing Lab—SpecLab, for short. I was so enamored with the idea—the metaphor, really, of speculative computing—that it also became the title of my dissertation. Let me tell you why, and explain why I tell this story on a panel about the future of DH centers. Continue reading “speculative computing & the centers to come”

johannes factotum & the ends of expertise

[This—more or less—is the text of a keynote talk I delivered last week in Atlanta, at the 2014 DLF Forum: the annual gathering of the Digital Library Federation. DLF is one among several stellar programs at CLIR, the Council on Library and Information Resources, where I have the honor to serve as a Distinguished Presidential Fellow. I began the talk with the following slide…]

[slide: johannes-factotum]

You’re probably wondering who Johannes Factotum may be. Let’s start with a story.

Grad school in English, for me, began with a scavenger hunt. I am deeply sorry to report that this was not as much fun as it might sound. In 1996, the University of Virginia Library’s OPAC had been online for only a few years, and for most, the physical card catalog reigned supreme. Journal collections were almost entirely in print or on microfiche, but above all were in the building—shared and offsite storage being mostly a thing of the future. Search engines, which were poor, were supplemented by hand-coded indices, many of which were made and maintained by individual enthusiasts. These folks were a mix of established and self-proclaimed experts who had newly gotten their hands on the means of production. What they produced were largely pages of blue and purple links on Netscape-grey backgrounds, punctuated with little icons of shoveling dudes—lists of this and that, labors of love, some of which aimed to be comprehensive. Continue reading “johannes factotum & the ends of expertise”

neatline & visualization as interpretation

[This post is re-published from an invited response to a February 2014 MediaCommons question of the week: “How can we better use data and/or research visualization in the humanities?” I forgot I had written it! So I thought I would cross-post it, belatedly, to my blog. Many thanks to Kevin Smith, a student in Ryan Cordell’s Northeastern University digital humanities course, for reminding me. Read his “Direct visualization as/is a tactical term,” here.]

Neatline, a digital storytelling tool from the Scholars’ Lab at the University of Virginia Library, anticipates this week’s MediaCommons discussion question in three clear ways. But before I get to that, let me tell you what Neatline is.

[image: Neatline]

It’s a geotemporal exhibit-builder that allows you to create beautiful, complex maps, image annotations, and narrative sequences from collections of documents and artifacts, and to connect your maps and narratives with timelines that are more-than-usually sensitive to ambiguity and nuance. Neatline (which is free and open source) lets you make hand-crafted, interactive stories as interpretive expressions of a single document or a whole archival or cultural heritage collection.

Now, let me tell you what Neatline isn’t.

It’s not a Google Map. If you simply want to drop pins on modern landscapes and provide a bit of annotation, Neatline is obvious overkill – but stick around.

How does Neatline respond to the MediaCommons question of the week?

1)   First, as an add-on to Omeka, the most stable and well-supported open source content management system designed specifically for cultural heritage data, Neatline understands libraries, archives, and museums as the data-stores of the humanities. Scholars can either build new digital collections for Neatline annotation and storytelling in Omeka themselves, or capitalize on existing, robust, professionally-produced humanities metadata by using other plug-ins to import records from another system. These could range from robust digital repositories (FedoraConnector) to archival finding aids (EADimporter) to structured data of any sort, gleaned from sources like spreadsheets, XML documents, and APIs (CSVimport, OAI-PMH Harvester, Shared Shelf Link, etc.). (For a small illustrative sketch of this last kind of harvest, see the code following this list.)

2)   Second, Neatline was carefully designed by humanities scholars and DH practitioners to emphasize what we found most humanistic about interpretive scholarship, and most compelling about small data in a big data world. Its timelines and drawing tools are respectful of ambiguity, uncertainty, and subjectivity, and allow for multiple aesthetics to emerge and be expressed. The platform itself is architected to allow multiple, complementary, or even wholly conflicting interpretations to be layered over the same core set of humanities data. This data is understood to be unstable (in the best sense of the term) – extensible, never fixed or complete – and able to be enriched, enhanced, and altered by the activity of the scholar or curator.

3)   Finally, Neatline sees visualization itself as part of the interpretive process of humanities scholarship – not as an algorithmically-generated, push-button result or a macro-view for distant reading – but as something created minutely, manually, and iteratively, to draw our attention to small things and unfold it there. Neatline sees humanities visualization not as a result but as a process: as an interpretive act that will itself – inevitably – be changed by its own particular and unique course of creation.  Knowing that every algorithmic data visualization process is inherently interpretive is different from feeling it, as a productive resistance in the materials of digital data visualization. So users of Neatline are prompted to formulate their arguments by drawing them. They draw across landscapes (real or imaginary, photographed by today’s satellites or plotted by cartographers of years gone by), across timelines that allow for imprecision, across the gloss and grain of images of various kinds, and with and over printed or manuscript texts.
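[Purely by way of illustration, and not code from Neatline or Omeka themselves: here is a minimal sketch of the kind of OAI-PMH harvest that importer plug-ins like the ones named above automate, pulling Dublin Core records from a repository so that they can become items for annotation and storytelling. The endpoint URL below is hypothetical.]

```python
import requests
import xml.etree.ElementTree as ET

# XML namespaces used by OAI-PMH responses and unqualified Dublin Core.
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

# Hypothetical repository endpoint -- substitute a real OAI-PMH base URL.
ENDPOINT = "https://repository.example.org/oai"

# Ask the repository for its records as unqualified Dublin Core.
resp = requests.get(
    ENDPOINT,
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
    timeout=30,
)
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Print a title and date for each harvested record: the sort of structured
# metadata that might become an Omeka item, and then a Neatline annotation.
for record in root.iter(OAI + "record"):
    title = record.findtext(".//" + DC + "title", default="(untitled)")
    date = record.findtext(".//" + DC + "date", default="(undated)")
    print(title, "|", date)
```

A full harvester would also follow the protocol’s resumptionTokens to page through large record sets and map fields onto item types; plug-ins like the OAI-PMH Harvester are meant to handle that housekeeping so no scholar has to write it by hand.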

a kit for hosting Speaking in Code

[Cross-posted from the Re:Thinking blog at CLIR, the Council on Library and Information Resources, where I’m honored to be serving as Distinguished Presidential Fellow. Check out all the great content at CLIR! (and see the Scholars’ Lab’s announcement, too).]

This is a belated follow-up post to last autumn’s “How We Learned to Start/Stop Speaking in Code,” in which I described the motivation for us, at the UVa Library Scholars’ Lab, to host a two-day summit on the scholarly and social implications of tacit knowledge exchange in digital humanities software development. But the timing is good!—because today, the Scholars’ Lab is releasing a web-based toolkit that any group can use to host a similar gathering. We also want to make the community aware of some venues in which distributed discussions of the social and theoretical side of DH software development can continue online: using the #codespeak hashtag on Twitter, and at the #speakingincode channel on IRC.

“Speaking in Code” was generously supported by the National Endowment for the Humanities and the University of Virginia Library, and it brought together 32 competitively-selected, advanced software developers with expertise in humanities applications of computing, for an extended conversation about the culture and craft of codework in DH. The group that met in Charlottesville last November paid special attention to knowledge and theoretical understandings that are gained in practice yet typically go unspoken—embodied in systems, techniques, interfaces, and tools, rather than in words. This is a brand of humanities work that can seem arcane and inaccessible to scholars, or worse: because its methods and outcomes are not always broadly legible, it is easily assumed to be devoid of critical thought and contextual (historical, theoretical, or literary) understanding. To quote my last post:

Communications gaps are deep and broad, even among humanities-trained software developers and the scholars with whom they collaborate. Much (not all) knowledge advances in software development through hands-on, journeyman learning experiences and the iterative, often-collaborative development of built objects and systems. Much (not all) knowledge advances in humanities scholarship through fixed and fluid kinds of academic discourse: referential, prosy, often agonistic. Continue reading “a kit for hosting Speaking in Code”

digital humanities in the anthropocene

[Update: I’ve made low-res versions of my slides and an audio reading available for download on Vimeo, Alex Gil has kindly translated the talk into Spanish, and Melissa Terras’ wonderful performance is now up on the Digital Humanities 2014 website. Finally, a peer-reviewed and formally-published version appears in a 2015 issue of DSH: Digital Scholarship in the Humanities.]

“And by-and-by Christopher Robin came to an end of the things, and was silent, and he sat there looking out over the world, and wishing it wouldn’t stop.” – A. A. Milne

Every morning, as the Virginia sun spills over the rim of the Shenandoah Valley, I dive into the water of my municipal swimming pool and think of ruined Roman baths. On either end of the lane in which I take my laps are blue tile letters, mortared just beneath the waterline by a craftsman of the century gone by. I read two words as I swim back and forth: shallow and deep, shallow and deep.

I’m here to give a talk that likewise wants to glide from shallows to depths in turn. My hope is to position our work—the work of the DH community that has nurtured me with kindness for some 18 years—less as it is lately figured (that is, less as a fragmenting set of methodological interventions in the contemporary, disciplinary agon of humanities scholarship) and more as one cohesive and improbably hopeful possibility. The possibility is for strongly connecting technologies and patterns of work in the humanities to deep time: both to times long past and very far in prospect. But I’ll swim to the shallows, too—because, by musing about the messages we may attempt to send and receive in the longest of longues durées, I mean also to encourage a searching and an active stance in DH, toward our present moment—toward engagement with the technological, environmental, and ethical conditions of our vital here-and-now.

I promised in my abstract a practitioner’s talk, and that is what you will get. I’m not a philosopher or a critic. I’m a builder and a caretaker of systems—so I will attempt to bring a craftsperson’s perspective to my theme tonight.

Continue reading “digital humanities in the anthropocene”