Thirteen years ago, I was a graduate student in English literature when the Twin Towers collapsed, a fireball erupted from the Pentagon, and a group of everyday travelers hurled a fourth hijacked commercial airliner, in self-sacrifice, into a muddy field. We got an email from our department chair. It read (I paraphrase), “this is why poetry matters.”
I had been watching people leap to their deaths from skyscrapers on the morning news. “Bullshit,” said I, a girl who had been in love with Shakespeare and Pope and Keats and Tennyson since grade school. And that was the end of any more conventional conception I may have had of my own career–the end, for me, of the profession of English.
I was, truth be told, already on the way out, toward my discipline’s methodological and material oddball fringe–specializing by then not in literary hermeneutics but in the mapping of its lessons and techniques to bibliography, scholarly editing, human-computer interaction, and humanities computing. Over time–by applying my teaching experience and past education in Education, and by learning from the side jobs in labs and centers that I held as a grad student–I built some expertise in project management and digital cultural heritage. In that way, I applied myself to work that felt more satisfyingly pragmatic to me. I couldn’t bear to spend my time happily, as a single, sensitive reader and writer–but I could happily spend it struggling: nudging and nurturing people, and helping them find ways to work effectively as teams in the protection and remediation and interpretation and sharing of stuff. Soon I was a mother and a post-doc. Then I was a member of UVa’s research faculty in Media Studies and a mother some more. Finally, I became a librarian and (heaven help me) an administrator.
In recent years, we’ve guided four separate cohorts of the graduate fellows who participate in the Scholars’ Lab’s Praxis Program through an unusual exercise. Praxis is a team-based fellowship, in which six students, from a variety of humanities and social science disciplines and in varied phases of their graduate careers, spend two full semesters working together to design, create, and launch a digital project—either “from scratch” or by building on and refining the work of the previous year’s group. They do this with the benefit of careful mentorship, smart technical instruction, and lots of free caffeine and therapy from University of Virginia Library faculty and staff.
When I was a graduate student in my mid-20s, around (gasp!) the turn of the century, I helped to found an intentionally short-lived but very interesting and effective humanities computing think tank. It was sort of an unauthorized, prototyping or tool-building offshoot of the center where I worked, UVa’s Institute for Advanced Technology in the Humanities. This was before the Scholars’ Lab existed. Only CHNM and (relative to today’s wild blossoming) startlingly few other such digital humanities and digital history centers were in operation. This was, in fact, before “DH” existed as a term of art.
One of the many fun things for me, about establishing this think tank—alongside folks like Jerome McGann, Steve Ramsay, Johanna Drucker, Geoffrey Rockwell, Andrea Laue, Worthy Martin, and a few others—was that I got to name it! Sometimes you do, if you’re the one building the website. (Or at least, you used to.) The name I suggested was the Speculative Computing Lab—SpecLab, for short. I was so enamored with the idea—the metaphor, really, of speculative computing—that it also became the title of my dissertation. Let me tell you why, and explain why I tell this story on a panel about the future of DH centers.
[This—more or less—is the text of a keynote talk I delivered last week in Atlanta, at the 2014 DLF Forum: the annual gathering of the Digital Library Federation. DLF is one among several stellar programs at CLIR, the Council on Library and Information Resources, where I have the honor to serve as a Distinguished Presidential Fellow. I began the talk with the following slide…]
You’re probably wondering who Johannes Factotum may be. Let’s start with a story.
Grad school in English, for me, began with a scavenger hunt. I am deeply sorry to report that this was not as much fun as it might sound. In 1996, the University of Virginia Library’s OPAC had been online for only a few years, and for most, the physical card catalog reigned supreme. Journal collections were almost entirely in print or on microfiche, but above all were in the building—shared and offsite storage being mostly a thing of the future. Search engines, which were poor, were supplemented by hand-coded indices, many of which were made and maintained by individual enthusiasts. These folks were a mix of established and self-proclaimed experts who had newly gotten their hands on the means of production. What they produced were largely pages of blue and purple links on Netscape-grey backgrounds, punctuated with little icons of shoveling dudes—lists of this and that, labors of love, some of which aimed to be comprehensive.
[This post is re-published from an invited response to a February 2014 MediaCommons question of the week: “How can we better use data and/or research visualization in the humanities?” I forgot I had written it! So I thought I would cross-post it, belatedly, to my blog. Many thanks to Kevin Smith, a student in Ryan Cordell’s Northeastern University digital humanities course, for reminding me. Read his “Direct visualization as/is a tactical term,” here.]
Neatline, a digital storytelling tool from the Scholars’ Lab at the University of Virginia Library, anticipates this week’s MediaCommons discussion question in three clear ways. But before I get to that, let me tell you what Neatline is.
It’s a geotemporal exhibit-builder that allows you to create beautiful, complex maps, image annotations, and narrative sequences from collections of documents and artifacts, and to connect your maps and narratives with timelines that are more-than-usually sensitive to ambiguity and nuance. Neatline (which is free and open source) lets you make hand-crafted, interactive stories as interpretive expressions of a single document or a whole archival or cultural heritage collection.
Now, let me tell you what Neatline isn’t.
It’s not a Google Map. If you simply want to drop pins on modern landscapes and provide a bit of annotation, Neatline is obvious overkill – but stick around.
How does Neatline respond to the MediaCommons question of the week?
1) First, as an add-on to Omeka, the most stable and well-supported open source content management system designed specifically for cultural heritage data, Neatline understands libraries, archives and museums as the data-stores of the humanities. Scholars are able either to build new digital collections for Neatline annotation and storytelling in Omeka themselves, or to capitalize on existing, robust, professionally-produced humanities metadata by using other plug-ins to import records from another system. These could range from robust digital repositories (FedoraConnector) to archival finding aids (EADimporter) to structured data of any sort, gleaned from sources like spreadsheets, XML documents, and APIs (CSVimport, OAI-PMH Harvester, Shared Shelf Link etc.).
2) Second, Neatline was carefully designed by humanities scholars and DH practitioners to emphasize what we found most humanistic about interpretive scholarship, and most compelling about small data in a big data world. Its timelines and drawing tools are respectful of ambiguity, uncertainty, and subjectivity, and allow for multiple aesthetics to emerge and be expressed. The platform itself is architected so as to allow multiple, complementary or even wholly conflicting interpretations to be layered over the same, core set of humanities data. This data is understood to be unstable (in the best sense of the term) – extensible, never fixed or complete – and able to be enriched, enhanced, and altered by the activity of the scholar or curator.
3) Finally, Neatline sees visualization itself as part of the interpretive process of humanities scholarship – not as an algorithmically-generated, push-button result or a macro-view for distant reading – but as something created minutely, manually, and iteratively, to draw our attention to small things and unfold it there. Neatline sees humanities visualization not as a result but as a process: as an interpretive act that will itself – inevitably – be changed by its own particular and unique course of creation. Knowing that every algorithmic data visualization process is inherently interpretive is different from feeling it, as a productive resistance in the materials of digital data visualization. So users of Neatline are prompted to formulate their arguments by drawing them. They draw across landscapes (real or imaginary, photographed by today’s satellites or plotted by cartographers of years gone by), across timelines that allow for imprecision, across the gloss and grain of images of various kinds, and with and over printed or manuscript texts.
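To make the first point above a little more concrete: the records that import plug-ins like the OAI-PMH Harvester bring into Omeka are Dublin Core metadata wrapped in a standard XML envelope. Here is a minimal sketch, in Python, of the kind of parsing such a harvester performs. This is illustrative only, not Omeka’s actual plug-in code, and the sample response and field values are invented for the example; but the OAI-PMH `ListRecords` envelope and Dublin Core namespaces shown are the real, standard ones.

```python
# A toy illustration of what an OAI-PMH harvester does: pull Dublin Core
# records out of a repository's ListRecords response so they can be
# imported as items in a collection. (Hypothetical sample data.)
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

# Stand-in for a real repository's ListRecords response.
SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Map of the Grounds, 1856</dc:title>
          <dc:creator>Unknown surveyor</dc:creator>
          <dc:date>1856</dc:date>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def parse_records(xml_text):
    """Return a list of dicts mapping Dublin Core field names to value lists."""
    root = ET.fromstring(xml_text)
    records = []
    for record in root.iter(OAI + "record"):
        fields = {}
        for elem in record.iter():
            if elem.tag.startswith(DC):
                fields.setdefault(elem.tag[len(DC):], []).append(elem.text)
        records.append(fields)
    return records

records = parse_records(SAMPLE_RESPONSE)
print(records[0]["title"])  # → ['Map of the Grounds, 1856']
```

In practice, a harvester would fetch this XML over HTTP from a repository’s OAI-PMH endpoint and page through resumption tokens; the point here is simply that well-structured, professionally produced metadata is what makes these records re-usable for annotation and storytelling downstream.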