charter-ing a path

[Cross-posted from the Re:Thinking blog at CLIR, the Council on Library and Information Resources, where I’m honored to serve as Distinguished Presidential Fellow. Check out all the great content at CLIR!]

In recent years, we’ve guided four separate cohorts of the graduate fellows who participate in the Scholars’ Lab’s Praxis Program through an unusual exercise. Praxis is a team-based fellowship, in which six students, from a variety of humanities and social science disciplines and in varied phases of their graduate careers, spend two full semesters working together to design, create, and launch a digital project—either “from scratch” or by building on and refining the work of the previous year’s group. They do this with the benefit of careful mentorship, smart technical instruction, and lots of free caffeine and therapy from University of Virginia Library faculty and staff.

Our fellows’ first challenge, though, is not the daunting one of formulating a scholarly question that lends itself to exploration through building. Nor is it the challenge of learning a new digital production method (or four, or five), nor even of designing a system that can make a meaningful technical or intellectual contribution to humanities teaching and research (like the 2011-13 cohorts’ Prism project, or the past two groups’ revival of the Ivanhoe Game). Instead, our fellows nervously draft a project charter. Continue reading “charter-ing a path”

speculative computing & the centers to come

[This is a short talk I prepared for a panel discussion today with Brett Bobley, Ed Ayers, and Stephen Robertson, on the future of DH centers. The lovely occasion is the 20th anniversary celebration of the Roy Rosenzweig Center for History and New Media at George Mason University. Happy birthday, CHNM! Next year, I’ll buy you a drink.]

When I was a graduate student in my mid-20s, around (gasp!) the turn of the century, I helped to found an intentionally short-lived but very interesting and effective humanities computing think tank. It was sort of an unauthorized prototyping and tool-building offshoot of the center where I worked, UVa’s Institute for Advanced Technology in the Humanities. This was before the Scholars’ Lab existed. Only CHNM and (relative to today’s wild blossoming) startlingly few other such digital humanities and digital history centers were in operation. This was, in fact, before “DH” existed as a term of art.

One of the many fun things for me about establishing this think tank—alongside folks like Jerome McGann, Steve Ramsay, Johanna Drucker, Geoffrey Rockwell, Andrea Laue, Worthy Martin, and a few others—was that I got to name it! Sometimes you do, if you’re the one building the website. (Or at least, you used to.) The name I suggested was the Speculative Computing Lab—SpecLab, for short. I was so enamored with the idea—the metaphor, really, of speculative computing—that it also became the title of my dissertation. Let me tell you why, and explain why I tell this story on a panel about the future of DH centers. Continue reading “speculative computing & the centers to come”

neatline & visualization as interpretation

[This post is re-published from an invited response to a February 2014 MediaCommons question of the week: “How can we better use data and/or research visualization in the humanities?” I forgot I had written it! So I thought I would cross-post it, belatedly, to my blog. Many thanks to Kevin Smith, a student in Ryan Cordell’s Northeastern University digital humanities course, for reminding me. Read his “Direct visualization as/is a tactical term,” here.]

Neatline, a digital storytelling tool from the Scholars’ Lab at the University of Virginia Library, anticipates this week’s MediaCommons discussion question in three clear ways. But before I get to that, let me tell you what Neatline is.

neatline

It’s a geotemporal exhibit-builder that allows you to create beautiful, complex maps, image annotations, and narrative sequences from collections of documents and artifacts, and to connect your maps and narratives with timelines that are more-than-usually sensitive to ambiguity and nuance. Neatline (which is free and open source) lets you make hand-crafted, interactive stories as interpretive expressions of a single document or a whole archival or cultural heritage collection.

Now, let me tell you what Neatline isn’t.

It’s not a Google Map. If you simply want to drop pins on modern landscapes and provide a bit of annotation, Neatline is obvious overkill – but stick around.

How does Neatline respond to the MediaCommons question of the week?

1)   First, as an add-on to Omeka, the most stable and well-supported open source content management system designed specifically for cultural heritage data, Neatline understands libraries, archives and museums as the data-stores of the humanities. Scholars are able either to build new digital collections for Neatline annotation and storytelling in Omeka themselves, or to capitalize on existing, robust, professionally-produced humanities metadata by using other plug-ins to import records from another system. These could range from robust digital repositories (FedoraConnector) to archival finding aids (EADimporter) to structured data of any sort, gleaned from sources like spreadsheets, XML documents, and APIs (CSVimport, OAI-PMH Harvester, Shared Shelf Link, etc.). A small sketch of that spreadsheet-based path follows this list.

2)   Second, Neatline was carefully designed by humanities scholars and DH practitioners to emphasize what we found most humanistic about interpretive scholarship, and most compelling about small data in a big data world. Its timelines and drawing tools are respectful of ambiguity, uncertainty, and subjectivity, and allow for multiple aesthetics to emerge and be expressed. The platform itself is architected so as to allow multiple, complementary or even wholly conflicting interpretations to be layered over the same, core set of humanities data. This data is understood to be unstable (in the best sense of the term) – extensible, never fixed or complete – and able to be enriched, enhanced, and altered by the activity of the scholar or curator.

3)   Finally, Neatline sees visualization itself as part of the interpretive process of humanities scholarship – not as an algorithmically-generated, push-button result or a macro-view for distant reading – but as something created minutely, manually, and iteratively, to draw our attention to small things and unfold it there. Neatline sees humanities visualization not as a result but as a process: as an interpretive act that will itself – inevitably – be changed by its own particular and unique course of creation.  Knowing that every algorithmic data visualization process is inherently interpretive is different from feeling it, as a productive resistance in the materials of digital data visualization. So users of Neatline are prompted to formulate their arguments by drawing them. They draw across landscapes (real or imaginary, photographed by today’s satellites or plotted by cartographers of years gone by), across timelines that allow for imprecision, across the gloss and grain of images of various kinds, and with and over printed or manuscript texts.
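To make the spreadsheet path mentioned in point 1 a bit more concrete, here is a minimal Python sketch that assembles a hypothetical CSV of collection records (titles, dates, descriptions, and coordinates) of the sort a plugin like CSVimport could batch-load into Omeka, where a curator might later annotate and map them in a Neatline exhibit. The column names and sample records are illustrative assumptions only, not a schema that Omeka or Neatline requires.

import csv

# Illustrative records only: Dublin Core-style fields plus a coordinate
# string a curator might later draw on when plotting items in Neatline.
records = [
    {
        "Title": "Letter from Charlottesville, 1863",
        "Date": "1863-07-04",
        "Description": "Manuscript letter describing wartime conditions in town.",
        "Coverage": "38.0293, -78.4767",
    },
    {
        "Title": "Survey map of the Rivanna River",
        "Date": "1871",
        "Description": "Hand-drawn map with annotated mill sites.",
        "Coverage": "38.0606, -78.4214",
    },
]

# Write the records to a CSV file suitable for a batch-import workflow.
with open("collection_import.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)

print(f"Wrote {len(records)} records to collection_import.csv")

The point of the exercise is not the code, of course, but the workflow it stands in for: modest, hand-curated structured data entering a professionally supported content management system, ready to be interpreted and re-interpreted in an exhibit.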

a kit for hosting Speaking in Code

[Cross-posted from the Re:Thinking blog at CLIR, the Council on Library and Information Resources, where I’m honored to be serving as Distinguished Presidential Fellow. Check out all the great content at CLIR! (and see the Scholars’ Lab’s announcement, too).]

This is a belated follow-up post to last autumn’s “How We Learned to Start/Stop Speaking in Code,” in which I described the motivation for us, at the UVa Library Scholars’ Lab, to host a two-day summit on the scholarly and social implications of tacit knowledge exchange in digital humanities software development. But the timing is good!—because today, the Scholars’ Lab is releasing a web-based toolkit that any group can use to host a similar gathering. We also want to make the community aware of some venues in which distributed discussions of the social and theoretical side of DH software development can continue online: via the #codespeak hashtag on Twitter and in the #speakingincode channel on IRC.

“Speaking in Code” was generously supported by the National Endowment for the Humanities and the University of Virginia Library, and it brought together 32 competitively-selected, advanced software developers with expertise in humanities applications of computing, for an extended conversation about the culture and craft of codework in DH. The group that met in Charlottesville last November paid special attention to knowledge and theoretical understandings that are gained in practice yet typically go unspoken—embodied in systems, techniques, interfaces, and tools, rather than in words. This is a brand of humanities work that can seem arcane and inaccessible to scholars, or worse: because its methods and outcomes are not always broadly legible, it is easily assumed to be devoid of critical thought and contextual (historical, theoretical, or literary) understanding. To quote my last post:

Communications gaps are deep and broad, even among humanities-trained software developers and the scholars with whom they collaborate. Much (not all) knowledge advances in software development through hands-on, journeyman learning experiences and the iterative, often-collaborative development of built objects and systems. Much (not all) knowledge advances in humanities scholarship through fixed and fluid kinds of academic discourse: referential, prosy, often agonistic. Continue reading “a kit for hosting Speaking in Code”

asking for it

A report published this week by OCLC Research asks the burning question of no one, nowhere: “Does every research library need a digital humanities center?” The answer, of course, is of course not.

Of course, I’m being rude. The click-bait question, as posed, had a foregone conclusion — but there’s much to recommend in the report, even if it fails to define a “DH center” in any clear way, makes an unwarranted assumption that “DH academics” and librarians exist in mutually-exclusive categories, and bases too much of its understanding of faculty and researcher perceptions on the inadequate sample of some conference-going and a couple of focus groups (however carefully convened and accurately reported).

The chief value of the report may lie in its stated and implied purposes: providing library directors with a set of options to consider (stated) and an easy citation — a bit of OCLC back-up (implied) — for the local arguments they must formulate in the event their provosts or presidents catch Library-based DH Center-itis and seem completely unwilling to entertain a model customized to the needs of the institution. Wait a minute. That will never happen.

Okay, the chief value of the report is in its clear reinforcement of the notion that a one-size-fits-all approach to digital scholarship support never fits all. Continue reading “asking for it”