Friday, December 6, 2013

Moving Past the Ultimatum

I should say, in the interest of full disclosure, that I was skeptical of Rushkoff right off the bat. Program or be Programmed? It’s the ultimatum that really gets me. As a casual reader looking for a hook, I admit that I love his style. It’s blatant and provocative, and undoubtedly interesting. I’m being taken advantage of by the elite? “They” don’t want me to understand how my devices work? Insofar as Rushkoff works to put programming on a pedestal, trying to make it seem like necessary civil disobedience, I think he succeeds.

As a critical thinker, however, I’m less sure. By the end of things, I feel a bit chilled, and not for the right reasons; I feel as though I’ve been run roughshod over and vaguely insulted; I think that credit, where credit is due, is lacking left and right.

I can’t help but think, at the end of this semester of reading, that we, as scholars and as access activists, can do better than this. In a discipline where much of the fascination lies in the in-between spaces, the up-against and the puzzles, I simply reject the notion that there are two rigidly defined choices and nothing else. Near the beginning of the introduction, Rushkoff says, “In the emerging, highly programmed landscape ahead, you will either create the software or you will be the software. It’s really that simple: Program, or be programmed. Choose the former, and you gain access to the control panel of civilization. Choose the latter, and it could be the last real choice you get to make” (7-8). He’s very concerned, all along, about the elite that controls the dominant medium of the age, and I can’t fault him for that. Media is obviously controlled by people with powerful hands. However, I think that if the discussion turns into a simple David vs. Goliath power struggle, it will end in reduction. As far as I’m concerned, program or be programmed is the kind of attitude that will create an elite/powerless dynamic just as much as the programmer vs. end user juxtaposition that Rushkoff sees happening now. That type of better-than-thou attitude is one of the things that I think turns people off from wanting to understand the technology they’re using. It’s not productive with casual users, and I don’t even think it’s productive amongst an academic audience. If we’re truly interested in the spread of thoughtful and powerful digital tools, the worst thing to do is to force them.

I struggle with this, because there are aspects of Rushkoff’s writing that I think are really valuable. He’s certainly not wrong about the potential power of programming, and I wouldn’t try to deny that media has long been harnessed most effectively by those with the money and the influence. I appreciate his appreciation of the “value-creation” potential of technology. Personally, I think there’s great value in programming, and I hope that I can continue to pick up skills that will allow my digital work to evolve.

It’s just the rigidness that I can’t allow. Is anything so black and white? What does making this type of argument even get you? People who agree would have agreed anyway, and people on the fence, like me, are apt to find things to question and to feel less friendly to the cause than they did before they started reading.


And since I’m questioning things, let me add one more thing. Rushkoff says that we’ve at least, as a society, gotten to the point of writing. We write while the elite program. The point I think he misses, though, is that writing in a digital age is not at all the same as writing in any previous age. If I have learned anything this semester, it is that our communicative actions are deeply tied to their forms. A question that needs to be asked, I think, is what is it to write in an age of programming? What aspects of programming can we become aware of; how can the technology inform our writing strategies? To put all of our academic and social eggs in the basket of programming just seems short-sighted, and it devalues the other meaningful work that’s being done, sometimes work that is “just” being done using the programs already at our disposal. 

Saturday, November 30, 2013

How I Think About How We Think

As I was reading N. Katherine Hayles’ book How We Think, I noticed that it not only aims to emphasize the collaborative nature of DH, but also that the book itself is structured to recreate that collaboration. The book is dotted with interviews and multiple perspectives; what this tells me, as a reader, is that Hayles herself sees the value in a crowded academic conversation. If we are truly to embrace the ways in which DH asks us to change our scholarly approaches, then we should also think about how those changes might manifest in even our traditional print scholarship.

I found a lot to reflect on and appreciate in How We Think, but I was most struck by a passage in Chapter 2 about the theoretical implications of coding:

On the human side, the requirement to write executable code means that every command must be explicitly stated in the proper form. One must therefore be very clear about what one wants the machine to do. For Tanya Clement…this amounts in her evocative phrase to an “exteriorization of desire.” Needing to translate desire into the explicitness of unforgiving code allows implications to be brought to light, examined, and modified in ways that may not happen with print. At the same time, the nebulous nature of desire also points to the differences between an abstract computational model and the noise of a world too full of ambiguities and complexities to be captured fully in a model. (42)

Hayles is right; exteriorization of desire is an evocative phrase, and a wonderfully challenging one. What does it mean to really lay your scholarly cards on the table? How does research change when it must be recorded step-by-step? What might we do differently when we channel our searches and queries through a computer that needs to be guided to results?

Though I would never have thought to articulate it this way, I think I have encountered this need to question the implications or functions of my research as I’ve dipped my toes into the water of digital projects. One example that immediately comes to mind is mark-up: what aspects of a text do you as a scholar choose to mark, and therefore emphasize, when you digitize? What do those focuses allow you to study? Mark-up, too, is often the first step; there are decisions that must be made before certain types of analysis or exploration can even begin. Mark-up can also impact a wide audience of readers and researchers, in terms of what can be searched and returned about a certain text or collection of texts. Encoding standards have all kinds of political weight—in some ways, it is the same type of weight that has always come with editorial decisions, but I think there are also some differences. As our possible scope widens—more texts, more search power, more computing force—the decisions that guide the possibilities become exponentially more important.
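
A tiny, hypothetical sketch of what this looks like in practice (the sentence and both encodings are invented; the `placeName` tag echoes the TEI element of that name): the same words, marked up two ways, answer a query differently. What you choose to tag is what your readers can later search for.

```python
import xml.etree.ElementTree as ET

# Two toy encodings of the same invented sentence. Version A marks only
# the paragraph; version B also tags the place name, TEI-style.
plain = '<p>Holmes hailed a cab to Baker Street.</p>'
rich = '<p>Holmes hailed a cab to <placeName>Baker Street</placeName>.</p>'

def find_places(xml_text):
    """Return the text of every <placeName> element in the document."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter('placeName')]

print(find_places(plain))  # → []
print(find_places(rich))   # → ['Baker Street']
```

The place is present in both versions, of course, but it is only *findable* in the second; the editorial decision precedes, and constrains, the analysis.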

But, on the other hand…

Does the act of coding fundamentally change the type of research question we can (or should) ask? Previous readings in our seminar have touched on the concept of meaningful failure in DH work, and I think it’s relevant in this context as well. If each step of our research must be explicitly coded, we are crafting for ourselves a specific path. Eventually, that path will either lead to fruitful results, or it will lead to a dead end that will, itself, tell us something about what we asked. Either way, though, the code has been written. It is a more solid-seeming process than some traditional avenues of research—perhaps because more evidence is left behind of our various attempts to find patterns or meaning.


Hayles is right, I think, that the world of the humanities is one full of noise; I also think that the conflict between that noise and the need to explicitly state our research desires will continue to be an important point of tension, and I don’t think that we should strive to completely erase that tension. Like following the upward arc of a bell curve, tension up to a certain threshold can push us to be better, and to inquire not only about the information in front of us, but also about our own motivations for asking the questions we ask.

Sunday, November 24, 2013

In Which a Theme is Revealed

Reading Graphs, Maps, Trees at this particular time was a strange and wonderful experience. This was, first and foremost, because it’s a provocative and earnest book, and there’s nothing I love more than scholars who respond earnestly to their chosen fields. I’ve never been one for sustaining a hip, detached façade. Also, though, there were strange overlaps between the examples used in the three sections and work I’ve done in the last year. In a class on Digital Archiving and Editions here at the University of Nebraska, I worked with a partner on mapping several of Doyle’s original Sherlock Holmes adventures. The project was a joy because I was working with texts that I have loved reading and rereading, but also because of what it taught me about the power of visualization.

Doyle’s stories are full of places—readers are taken not only in and out of buildings in greater London, but also in and out of neighborhoods and suburbs…they are, like most adventure stories, meant to be seen by readers. However, one cannot just look at a map of modern London and understand Doyle’s London. Where do these characters live? Where do they live in relation to one another? To landmarks? What types of places does Doyle take us to? Being able to represent the answers to these questions is not only a helpful aid to the casual reader, but can also open up new lines of scholarly inquiry. There are details that cannot be gleaned from texts without some added manipulation. As Moretti says:

…you reduce the text to a few elements, and abstract them from the narrative flow, and construct a new, artificial object like the maps….And with a little luck, these maps will be more than the sum of their parts: they will possess ‘emerging’ qualities, which were not visible at the lower level. (53)

Visualizations of literature—graphs, maps, trees, sine waves, word maps—all allow for a re-vision opportunity that other research tools cannot recreate. Simply put, visualizations show us a different version of the text in question than close reading, or deconstruction, or Marxist critiques, or any other interpretive lens. And why should we not think about space when we think about literature? I remember, for instance, realizing the crucial connection between the social class of Doyle’s minor characters and their relative distances from the epicenter of London—and how these distances were allowed by the rapid development of rail travel. Without going through the act of mapping out the stories, it’s a set of connections I would have missed. Perhaps those who possess not only a knowledge of late 19th-century British literature but also social history and technology would not have needed an impetus for realization, but, for the rest of us…
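
Underneath a mapping project like ours sits a small computational core: turning named places into coordinates, and coordinates into distances. A minimal sketch, with rough, purely illustrative coordinates (Charing Cross as a conventional “center” of London; Norbury, the suburb of “The Yellow Face”), using the standard haversine approximation of great-circle distance:

```python
from math import radians, sin, cos, asin, sqrt

# Approximate (lat, lon) pairs, for illustration only.
CHARING_CROSS = (51.5074, -0.1278)
NORBURY = (51.4115, -0.1222)

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) pairs, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # 6371 km: mean Earth radius

print(round(distance_km(CHARING_CROSS, NORBURY), 1))  # roughly 10–11 km out
```

It is exactly this kind of derived detail—how far from the center a minor character lives, and whether the railway makes that distance livable—that never appears on the page itself.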

There’s something very tactile about the process of creating a visualization that seems not to occur in traditional literary scholarship. Perhaps this comes from the adaptation from one medium to another; whatever the cause, I appreciate the slowing down that it requires of me. The actions are less familiar; the “abstracting” that Moretti writes about seems to work, for me, like taking a familiar painting and holding it upside down before viewing it again—it beats my brain’s familiar routes for just long enough to make space for something new.


One of the aspects of Moretti’s discussion I particularly appreciate is that of the interdependence of interpretive strategies and tools. Just as a text can perhaps become more than the sum of its parts with the addition of a visualization, a visualization without a text would lack—I think—a crucial foundation. Again—and, at this point, I’m willing to call this “the theme of the semester”—what it seems to come down to is the value of a combination of approaches. When we as scholars are willing to incorporate new strategies, our research will undoubtedly benefit, either because our new approach will reveal an undiscovered facet, or because attempting something new will reveal to us something unknown about our previous approach.

Saturday, November 16, 2013

Hybrid: More Than Just a Word for Fancy Cars

What I have always loved about embracing a new field of study is the moment when realizations start to multiply. It’s always a little bit magical; always a little bit like your brain has suddenly become more clever than it really is. Reading Lev Manovich’s Software Takes Command this week brought me some of these clarifying moments; hopefully I’ll be able to recreate some of the connections outside the insulation of my internal dialogue.

The place where Manovich starts his book—basically, with the omnipresence of software—is sound. Software is certainly an integral part of my day-in, day-out experience—I’m writing this blog post on Word, which I will then copy and paste…I could spend the whole post just proving his point that we are now in a “software society.” I’ve been teaching my students recently about how to introduce main ideas into writing, so I feel a bit guilty just jumping in, but—and this is so often the case—the interesting part starts when we take the given statement as truth and move forward. So, in that vein…

Just as adding a new dimension adds a new coordinate to every point in space, “adding” software to culture changes the identity of everything that a culture is made from. (In this respect, software is a perfect example of what McLuhan meant when he wrote, the “message of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs.”) (33)

I couldn’t help but think about this XKCD comic when I read Manovich’s introduction:

http://xkcd.com/1289/
It does seem to accurately represent the questions that we so often ask when “New Thing X” or “Exciting Gadget Y” comes to the table; my favorite recent example is the conversation swirling around Google Glass. To hear people talk, it's either the making or the breaking of the world as we know it.

And, of course, it wouldn’t be a conversation about technology, or a conversation about a conversation about technology, if we didn’t get to talk about alienation. This time, though, it’s different, because standing against the traditional “technology is a wedge that sits between the individual and others” argument is Manovich’s idea of hybridization. I’ll let him define it:

…in media hybrids, interfaces, techniques, and ultimately the most fundamental assumptions of different media forms and traditions, are brought together resulting in new media gestalts. That is, they merge together to offer a coherent new experience different from experiencing all the elements separately. (167)

So, rather than divisiveness, we have cohesion and creation. What I love about this concept is that in it I also see reflected the academic values that resonate most with me. The difference between text and video existing side-by-side and text acting in the way that a video acts is the difference between static and dynamic education. Up until recently, I’ve led a very English Studies-centric life. But, no, that’s not quite true, either. Let me try again. Up until recently I’ve led a very Creative Writing-in-English Studies-centric life. And I couldn't be doing what I'm doing now without that history, and I do value the type of learning that a single-lens focus allows for. However...

The very first thing I came to value about digital humanities, and all the many things that term might encompass, is that it seems to be much more about saying yes than other methodologies or mentalities. Interdisciplinary work? Yes. Working toward a mutually-informed print/digital dynamic? Yes. Real, intentional, thoughtful teamwork? Yes. Making meaning from unexpected or flawed results? Yes.

As always, for me, it comes down to language. As Manovich continues to explain hybridization, he says:

…in the process of hybridization…we end up with a new metalanguage that combines the techniques of all previously distinct languages… (170)

This is exciting for all kinds of reasons. The one that’s sitting in the front of my mind at present, though, is this: in an environment where new (media) experiences, to borrow Manovich’s phrase, are increasingly prevalent, I have to imagine that those experiences are going to come with a multiplicity of “mores”—more combinations, more conversations about those combinations and hybrids, more people contributing to the conversations. As a relatively young person looking into the complicated maze of academia, I can tell you that this matters—really matters. If it represents a shift in thinking, an increased openness and interest in collaboration, then let me own my idealism and tell you that all of this is very good news.

Wednesday, October 23, 2013

Bound and re:Bound

Fill in the circle completely. 
Don’t make any marks outside the box. 
Use only a No. 2 pencil.

If you went through the American public school system after the dual rise of the standardized test and the Scantron form—as I did—then instructions like those above probably sound very familiar. I’ve always hoped that no kid ever had the makings of a masterpiece in a standardized essay response—for all we know, a brilliant re-examination of the reasons for the War of 1812 has been lost forever, because to write it would have been to mark outside the box.

This, of course, qualifies as an especially extreme example of artificially restrictive boundaries, and even this has its benefits. For one, the many, nameless readers of the thousands of responses to the same generic essay prompt probably appreciate not having to read any more than is necessary. But even when constructed from some perceived need, both literal and metaphorical boundaries to writing can have pervasive consequences. I see this firsthand teaching introductory composition classes here at the university. I’ve constructed assignments that are open by design; especially when it comes to topic choice, the lion’s share of the intellectual work is left up to the student. Some of them thrive, developing interesting and unusual avenues to explore; more often, though, the freedom makes them uncomfortable. Even brainstorming seems to be constrained by the mental version of rusty pipes—once you open the tap, it takes a while for things to really get going. And why wouldn’t it? The most ruthlessly efficient way to teach writing is with a formula: three supported points equals a conclusion. After twelve years of that, I can’t blame them for looking like deer in the headlights when I introduce them to the concept of a lyric essay or a non-linear argument.

What really resonated with me in Matt Kirschenbaum’s Mechanisms is the same wrestling with boundaries, this time located in new media. He’s constantly evaluating, and then questioning, the barriers that have been established that separate traditional textual studies from digital media analysis, with some fascinating results. Early on, he quotes from a 1991 book by George Landow and Paul Delany:

So long as the text was married to a physical media, readers and writers took for granted three crucial attributes: that the text was linear, bounded, and fixed. Generations of scholars and authors internalized these qualities as the rules of thought, and they have pervasive social consequences. We can define Hypertext as the use of the computer to transcend the linear, bounded, and fixed qualities of the traditional written text. (42)

To take this view, it’s no longer just the box for the standardized test essay, but the whole expanse of print literature before it that leads to internalized boundaries. Here, new media is the liberator, the breaker of pre-established roles. And in some ways, for some situations, this is true. (I keep trying to imagine how one might translate the “write lines on the chalkboard” punishment into the 21st century. With CTRL+C and CTRL+V, after all, to reproduce a line of text is the work of seconds, not hours.)

But if it were really that simple, if the seemingly ever-expandable Microsoft Word document were all that we needed to reformulate our relationship with text, then my students should already be free. They, even more than I, should have the free and flexible relationship with digitally recorded words that keeps being promised—electronic composition is flexible! It’s changeable! The freedom is exhilarating, right?

Right. 

Except…

Except when it’s not; except when changeability doesn’t mean transparency, or removability. One of my favorite recurring images in Mechanisms is that of the computer as a ‘black box,’ that seemingly impenetrable record of what has been. The book also spends quite a bit of time with the mechanics of a bitstream; these both add up to the same idea, which is this: like picking up glitter spilled from an elementary school craft table or removing the smell of sauerkraut from a poorly-ventilated kitchen, removing the traces of computer activity is no small feat. To bring in another, more elegant image of digital resonance, this time from the text itself:

The interactions of modern productivity software and mature physical storage media such as a hard drive may finally resemble something like a quantum pinball machine…files leaving persistent versions of themselves behind at every point they touch—like afterimages that only gradually fade—and the persistent versions themselves creating versions that multiply in like manner through the system. (52)

At some point, our record-keeping became self-perpetuating. This, too, is something that my students know better than almost anyone. Try arguing the impermanence of the internet to someone being chased by an embarrassing photo. We haven’t done away with media boundaries at all; if anything, we’ve just suppressed them into the gears-and-guts layer of the composition apparatus so we don’t have to look at them anymore. 

A composition notebook only has so many pages; a terabyte of storage, though it seems vast, only has room for so many bytes. I keep wondering: will we even know what to do with uncapped storage when we finally get it?

Thursday, October 10, 2013

Back to Babel

Growing up, I had a particular habit with much-watched movies. When I got to the point where I could basically recite the lines along with the actors, I would start watching them in other languages for fun—Spanish, usually, or French if the mood struck me. It wasn’t that I was trying to actively teach myself another language, though I’m sure I picked up phrases here and there; it was more about how the movie both changed and stayed exactly the same. The story was the story that I knew so well, but by switching languages, it also felt like I was watching things unfold for the first time.

It was the same, later, when I was learning Latin (an ever-practical choice) in college. I’d read the Aeneid previously, in English, but it wasn’t the same; the essence wasn’t the same—the feeling of the words as they hung aloud in the room was markedly different.

It’s natural while thinking about technology to think about language; technology is often utilized in the service of communication, after all. It’s natural, I think, to question how technology can be made universal when the practicalities of language and location are not. I’ve seen eloquent arguments for the value of this universality; technology itself is sometimes the language that allows this type of engagement. And it does seem, when we take a wide view of things, that a swiftly-moving blanket of interconnectivity is spreading itself out; it is challenging to keep it from slipping over our heads; the buzz of the global is a distinct layer of sound that runs under everything else.

So in the midst of this, it is perfectly right that Donna Haraway should ask the question: is a common language actually what we want? What is lost when the narrative becomes universal?

She gets to this question—among others—in the delightfully provocative “A Cyborg Manifesto.” First, she crystallizes something about technology that I think is paramount to the way she cracks things open later on. Observe:

Technologies and scientific discourses can be partially understood as formalizations, i.e., as frozen moments, of the fluid social interactions constituting them, but they should also be viewed as instruments for enforcing meanings. The boundary is permeable between tool and myth, instrument and concept, historical systems of social relations and historical anatomies of possible bodies, including objects of knowledge. Indeed, myth and tool mutually constitute each other.
Furthermore, communications sciences and modern biologies are constructed by a common move — the translation of the world into a problem of coding, a search for a common language.

I am completely taken with the idea of technology acting as a preservative for a social moment. It’s the “why” behind the incredible nostalgia of outdated pieces, but I hadn’t ever had the wherewithal to conceptualize it that way. The beauty is this: now that technology—the churning, buzzing behemoth—gets forced into a moment of stillness, we can lay hands on it. To continue with the metaphor of freezing, it’s simple work to shatter something made of ice—and shatter it does. Haraway’s summation comes to this:

This is a dream not of a common language, but of a powerful infidel heteroglossia. It is an imagination of a feminist speaking in tongues to strike fear into the circuits of the supersavers of the new right. It means both building and destroying machines, identities, categories, relationships, space stories. Though both are bound in the spiral dance, I would rather be a cyborg than a goddess.

Questioning the effect of universality becomes a move of power, rather than one of weakness. To embrace a cyborg’s mentality is to seek friction between previously bounded dichotomies—not to eliminate it, but to capture what sparks may form. So, what might it mean to study like an infidel? To create meaning like a cyborg?

These are questions that all disciplines can ask of themselves. For me it means, among other things, looking towards blended media rather than away. When it comes right down to it, the book has always been a cyborg—first birthed from the oral tradition and the inception of the alphabet; then from the alphabet and the revolution of printing; now it stands reborn as the child of print and digital. The mutable boundaries are what clue us in to its definitive properties, if we look.

A common language—literally, technologically—would allow for one dialogue, it’s true, but would we in fact be saying less? Can we instead harness what comes of cacophony? 

Saturday, October 5, 2013

In which I continue to puzzle over typewriters

Several years ago, I came by an electric typewriter at a garage sale. It cost almost nothing, but it also didn’t have a ribbon, so for years it sat in a case in my closet. Finally, this summer, I got around to getting it up and running. A writer and her typewriter, together at last! (Were this a movie, there would be a nice montage of me learning which keys need to be repaired. It’s a good thing I don’t have occasion to use the letter z too often.) I’m someone who grew up writing by hand, and who now primarily writes by laptop keyboard, so it was strange how much those first strokes on the humming Smith Corona felt like a homecoming…or perhaps a unification?

Writing about the typewriter (was he writing about a typewriter ON a typewriter?), Marshall McLuhan says this:

Because he is an audience for his own mechanical audacities, he never ceases to react to his own performance. Composing on the typewriter is like flying a kite. (261)

Using the example of E.E. Cummings, McLuhan explains the more tangible writerly acrobatics that the typewriter encouraged—he doesn’t completely credit the typewriter with the free verse revolution, but it’s given some pretty significant credit. (Cummings also once wrote that "Progress is a comfortable disease," so I'm somehow not surprised that he gets brought in here as an example.)

I continue to be fascinated by this format/content relationship, especially as it pertains to writing, and I have to admit that McLuhan’s summation of “the medium is the message” is about the catchiest way to communicate the connection that I’ve read thus far. He also does an admirable job, I think, of tracing these concepts back to the root. I—numb to the real impact of the medium as apparently I am—would never have located so much power in the alphabet itself, naked even of any vehicle. I find it challenging to conceptualize the phonetic alphabet the way McLuhan does, though I think it might have more to do with the contrasting forces at play in his theory than with what he’s actually saying. Here I’m thinking mostly of the strange wave-like pattern of our advancement; the way the electric age is in some ways a…not a regression, exactly, but perhaps a corrective action? A way of imploding us back together? Whatever it is, it’s not quite on the typical axis of understanding.

But, back to my pet puzzlement of the moment:

One of the game-changing aspects of the typewriter is supposed to be the way it compresses and combines the composition-to-publication process. I suppose that was true when McLuhan was writing (which, as I had to continually remind myself, was in the 1960s), but the forward motion of the intervening years has made it even more true. I’m typing this post into a Word document before I paste it into the ‘new post’ box on Blogger, but I wouldn’t have to; I could leave myself with a single click between my thoughts (as they appear on the screen) and the wide and instantaneous realm of the internet.  

So, that much is technically true. But what does it matter? It can be challenging to be conscious of one’s own writing process, but I think I can work out this much about myself:
I’m more likely to write a really terrible, rough-from-the-edges-to-the-core draft if I’m writing by hand; I’m more likely to draft something from start to finish, without pausing to correct anything, if I’m writing by hand.
Put a keyboard in front of me, and the stakes are somehow heightened. I agonize more about individual words as they come; I am more aware of my writing as it will eventually exist in front of an audience. Why is that true, when I could just as easily hand my notebook to someone, or lots of someones?
It must be something about connectivity—the handwritten page might be more connected to me—my personal handwriting, one of the closest extensions of my own self—but the typed page seems more connected...or more connectable, to everyone else who knows the orderly lines of the alphabet.


Frankly, this whole business is getting a little eerie. But, if I’m to get to the point of Understanding Media (and doesn't it seem more and more of an uphill climb?), then those moments of awareness are necessary. To borrow from McLuhan’s amputation metaphor, I guess I’m ready for a bout of phantom-limb syndrome? 

Saturday, September 28, 2013

The Literal Shape of Things

Time for a confession: I love talking about things I don't understand just as much as the next aspiring scholar. It's fun! It's frustrating! It often feels like painting while wearing a blindfold! But we all have limits, right? Sometimes I just want to create moments where my academic comfort zones work for me. (Incidentally, if you ever need to know about the deaths of American modernist novelists, or innovations in mid-19th century prosthetics, you know who to ask....)

So, as I read Friedrich Kittler’s Gramophone, Film, Typewriter this week (incidentally another text translated from German, thankfully not nearly so dense or so inscrutable as Heidegger), I was thrilled to see not just one, but many avenues for connecting what I know of writing and what Kittler knows and theorizes of technology.

It would be presumptuous and inaccurate to reduce Kittler’s book to just one, or even several, “big ideas,” but he does return, in all three sections of his book, to the idea of form: the form of emerging technology as a re-purposing of a human neurological phenomenon, the form of technology as it shapes the content it creates, form as impetus to the pendulum that swings between real and imagined.

Now, poets generally know a thing or two about form. Even people who profess a profound indifference to poetry generally know a thing or two about poetic form—you know what a sonnet is, or a haiku, or—come on now—a limerick? You probably have a handle on stanzas, too: basic division of parts, an idea we share with music.

To write poetry is to inherently interact with the concepts of form. Even the most free-wheeling of free-verse poets, even the Beats, even Whitman with his sprawling lines and occasional disregard for margins, are working with form; to disregard a convention is still to have a relationship with it, if only in the sense that whatever you’ve done will first be understood by others in the context of those conventions.

So, then, how to connect this back to Kittler? I want to start with a Gottfried Benn quote that he uses:

The poem impresses itself better when read….In my judgment, its visual appearance reinforces its reception. A modern poem demands to be printed on paper and demands to be read, demands the black letter; it becomes more plastic by viewing its external structure (228).

In some ways, this seems a perfunctory observation: the visual appearance of poetry has been up for discussion for centuries—since Blake’s illuminated books, since ornamental drop caps first adorned copies of the Bible, in some way since poetry began to transition from a primarily verbal to a primarily written art. Even Shakespeare was thinking about it:

O fearful meditation! where, alack,
Shall Time's best jewel from Time's chest lie hid?
Or what strong hand can hold his swift foot back?
Or who his spoil of beauty can forbid?
O, none, unless this miracle have might,
That in black ink my love may still shine bright. (from Sonnet 65)

There’s something to this, then, to the physical act of putting words down. Kittler is arguing that the typewriter, specifically, is a game-changer. I think that he’s right, and not only because I think that the phrase “discursive machine-gun” is brilliant. But if he’s right about that, if the mechanization of the previously manual task of writing has fundamentally altered the future content of writing, then oughtn’t it go all the way back? We know that the rise of water-powered paper mills impacted the availability of writing, but did it also impact what was being written? Of course, if only because it impacted who was writing…

You can see how this gets digressive, and how all these ideas are tightly bound in one another. I’m still chasing the ultimate why of this, trying to figure out what it is about the set of constraints that make up the typewriter that does something to us. Is it the “automated and discrete steps” of the typing process? Is it that somehow building from letters is different than building from words?

And, to end by going forward: what about the incarnations of these basic sensory experiences that were still in the future when Kittler’s book was published? I’m especially curious about the idea of touchscreens. If to build letter by letter, keystroke by keystroke, is a fundamental shift, then what of building without any tangible representations of letters? If writing letters is to commune with ghosts, as Kafka implied in his letters, then what is it when the letters themselves are ghosts?



Saturday, September 21, 2013

My Date with Heidegger: An Epistolary Saga

My Dear Heidegger,

I hate to say this, but people warned me about you.

They said you were difficult; they said you weren't worth the trouble—all those things that people tend to say about philosophers, especially ones they don’t like.

So I tried to prepare myself: I brewed an entire pot of coffee. I made myself a batch of scones (I’ve always thought that scones are the most academic of baked goods). I readied my pile of differently-colored highlighters. I even Googled your picture—incidentally, I wonder how you would have felt about Google?—and you don’t look like a difficult man. Just serious. Thoughtful.

But, as will come as no surprise, I’m not writing to you to talk about your face.

You see, Heidegger—can I call you H.? I’m going to call you H.—when I read philosophy, I always try to figure out why the ideas are being presented in the way that they are. Structure and content go hand in hand, right? And so I asked myself this question about you. Because you really weren’t satisfied with the job that language was doing, were you? I confess that I don’t speak a word of German, and I do wonder if I’d be coming to you with a different set of questions if I’d read things as you’d written them originally. But, alas…

I’m going to cut to the chase here. What is it with the re-definitions? I’m as much a fan of the flexibility of language as the next person; I freely admit that sometimes I make up words, too. But I’ve never read anything written by anyone who liked gerunds as much as you. I read things like “Enframing is an ordaining of destining, as is every way of revealing” and the cynic in me just wants to clock out. But I didn’t want to do that to you…even though after a re-read, and another re-read, I’m not sure I understand. You know, H., you were a beautiful writer—not beautiful in the way that flowers are beautiful, or even in the way that sonnets are beautiful, but more like the beauty of a well-preserved skeleton. It’s very precise, your language, and completely dependent on a series of interlocking parts—like a spine, like the twenty-seven bones in the human hand.

I’ve been wondering, H., how much you knew about programming. You see, what you did to examine the essence of technology feels an awful lot like what programmers do to get at the physical, or instrumental, if you will, side of technology. When the code doesn’t work, you rewrite it. When the proof is broken, you fix it. And so you do here. You were creating your own way to communicate, just like the originators of symbolic logic, or the creators of HTML. You take this very magpie approach—look to the Greeks, look to science, look to poetry (a nice touch, by the way)—and you build us up to this triumphant place, where all of a sudden we are not just dealing with technology, but something more:

…enframing propriates for its part in the granting that lets man endure—as yet inexperienced, but perhaps more experienced in the future—that he may be the one who is needed and used for the safekeeping of the essence of truth. Thus the rising of the saving power appears.
            The irresistibility of ordering and the restraint of the saving power draw past each other like the paths of two stars in the course of the heavens. But precisely this, their passing by, is the hidden side of their nearness.
            When we look into the ambiguous essence of technology, we behold the constellation, the stellar course of the mystery.
            The question concerning technology is the question concerning the constellation in which revealing and concealing, in which the essential unfolding of truth propriates (338).

You—like so many others—are bringing it back to Truth. The question concerning technology, you write, is the question that holds the essential notion of Truth. And how is it that you and so many of the other early builders and makers and thinkers all zeroed in on this idea of technology as a vehicle for truth, or a way into the concept of truth, when so many of us now are just really into Angry Birds and cat GIFs? Which bus did we get on? Which constellation are we looking at? In the balance between ordering and restraint, I have to imagine that you’d be putting us squarely in the former territory—but what does that do to us going forward?
I’m trying, H., I really am, to make sense of this, but I’m just not there yet. 

If you want to send along any cosmic hints, from the afterlife, well—you know where to find me.

Best wishes,


L. 

Saturday, September 14, 2013

Claw Machines and the Question of Truth

My father used to teach an undergraduate chemistry course for non-majors that he semi-affectionately dubbed “Chemistry for Poets.” All of us—the chemists, the poets, and everyone in between—can imagine what that class was like; we can imagine—circa 1975—the kind of student that class would have attracted. I can well imagine his frequent exasperation; I know well enough how I treated my introductory geology course in college.

I bring this up not to bore you with anecdotes from my family’s past, but to serve as one example of the strife—sometimes major, sometimes minor—that occurs when the domain of the right brain and the domain of the left brain feel as though they’re enduring too much togetherness.

It’s this particular split—the two ways of thinking, the two kinds of people—that I’ve been thinking about this week as I read through Martin Davis’ The Universal Computer. As an origin story, it’s a successful outing through several centuries of history—which of course means that it problematizes and complicates all manner of concepts and objects for the reader. Even just typing this post is a more complex activity than it would have been last week—now that I finally understand what RAM is and what Random Access Memory actually means, I can imagine the well of data buzzing in my laptop, the precision of direct accessibility. (I should say that now when I try to picture the inner workings of my computer, I imagine one of those arcade claw machines, reaching for just the right file or string of code…)

Clearly, this book has just been one more enabler for my non sequitur metaphors. 

But there’s something to this intermingling of the knowledge styles, isn’t there? In fact, it seems to me that the entire story of the computer, the centuries-long grappling with questions of infinity and truth and the physicality of numbers, is one that has required a total engagement of all the brain’s reasoning. If I gleaned anything from this book, it was the sheer messiness of generating something that, from the outside, has always seemed incredibly precise. Really, what’s more imaginative than essentially trying to recreate the brain outside of the body?

To get back to the actual history for a moment, let me share this brief quote from Davis:

“[Leibniz] dreamt of an encyclopedic compilation, of a universal artificial mathematical language in which every facet of knowledge could be expressed, of calculational rules which would reveal all the logical interrelationships among these propositions. Finally, he dreamed of machines capable of carrying out calculations, freeing the mind for creative thought” (4). (Emphasis added.)

It is remarkable to me that Leibniz didn’t envision his logical work, his “wonderful idea” of a symbolic alphabet, as being itself “creative thought.” Regardless of its eventual use or the background of its inventor, the initiation of a new language is always a creative act. For all that language has grammar, and for all that grammar is a logical structure, the impetus must always be more nuanced than that. (Besides, anyone who argues that the rules of English grammar are 100% logical is delusional and should not be trusted.)

Really, he seems to have been a paradoxical thinker in a number of ways. The ideas he put forth—the exhaustive compendium of human knowledge, the universal characteristic, the eventual automation—were creative: new ways of answering mathematical questions. At the same time, Leibniz saw everything—everything—as part of God’s best possible world, each action and connection in some way necessitated.
I wonder about the moment when he realized that he wasn’t going to see all his questions answered; I wonder if he ever questioned the combination of his religion and his research, the ways that they complement and contradict each other.

Leibniz wasn’t even close to the first one to ask questions that stretched beyond his time, but I think his style of questioning—proposing concepts that seem both supremely rational and supremely out-of-reach—is what we continue to rely on.


It also seems to me that we’re all trying to talk about truth—truth as it lives inside the human mind, or truth as it lives inside a set of numbers; truth as something a machine can enable us to realize, or something we can teach it to recognize. If we bear that in mind, our future questions might all have more interesting answers. 

Friday, September 6, 2013

As it turns out, "what's in a name" is still pretty interesting.

Sometimes it seems as though there are as many answers to the question “What is digital humanities?” as there are people who call themselves digital humanists. Reading through the many definitions and explanations that exist is, luckily, one of those times when—like with so many moments in DH—the structure is just as relevant as the meaning contained within.

Let me elaborate.

If you were to Google “What is digital humanities?”, as I have, you would find a happy cacophony of suggestions. You would read pieces that sound like straight philosophy; you would read pieces that sound like rebels thrilled to have finally found a microphone; you would read pieces so technical they become hard to parse. Thinking about all of these different approaches to the same question should tell you almost as much about what DH is as what the pieces are actually telling you. This is not the eye of the needle; this is the floodgate.

When I read pieces like “The Digital Humanities Manifesto 2.0” (incidentally, the best use of clip-art-style graphics that I’ve seen in a long time), or Rafael Alvarado’s more restrained “The Digital Humanities Situation,” I can’t help but try to picture myself somewhere other than the sidelines. To me, it seems that both the beauty and the curse of the state of DH stems from its expansive set of possibilities. Coming fresh into this discipline (or maybe I’d rather call it a confederation of disciplines all flying the same set of methodological flags), it’s hard to know where to look first, let alone where to jump in.

It should be said, of course, that the hesitation I feel was born from my own mind, and not from any perceived closing-of-the-ranks from any writing I’ve read on this topic—or any of the lovely DH professionals I’ve talked with. The field is vast, yes, but it’s marked (I think) by a deep enthusiasm; the kind that wants nothing more than to pull you in and make sure that you, too, are excited about what could be next.

I may have found my way in, though, and it was beautiful to read something and feel, if not a full-on, mad-scientist “EUREKA!” moment, then at least a moment of clarity. To spread my cards out on the table for a minute: I see myself working in a library or archive, once I’ve gotten all my degrees in a row, and in the past I’ve struggled to articulate why a trained poet (if there is such a thing) not only wants to be a librarian, but also a librarian with a hand in the murky world of DH. On the surface, it doesn’t seem to connect. In my mind, the creative work of poetry and the creative-yet-logical work of digital scholarship pair perfectly with each other, but that’s probably deserving of its own post later on. To get to the point: there’s a place in the Manifesto (can you tell that I loved it? It’s a joy to read, how could I not love it?) where the conversation comes around to the idea of curation. Let me quote a little of it for you:

Curation also has a healthy modesty: it does not insist on an ever more possible mastery of the all; it embraces the tactility and mutability of local knowledge, and eschews disembodied Theory in favor of the nitty-gritty of imagescapes and objecthood….
Curation means making arguments through objects as well as words, images, and sounds….
Curation also implies custodial responsibilities with respect to the remains of the past as well as interpretive, meaning-making responsibilities with respect to the present and future. In a world of perpetual data overload, it implies information design and selectivity….(9).

This may not seem like a groundbreaking excerpt, but for me it might as well have had a giant neon border. “See here?” it said to me, “THIS is what you’ve been waiting for. Here’s the reasoning you’ve been trying to articulate for the past year.” And it really is. Curation is obviously not the only skill I can or will latch onto, but it speaks to my sensibilities as a poet and my strengths as a scholar. What is poetry if not the selective promotion of some details and not others? What is storytelling if not a means to preserve what we find most important? Flash fiction and haikus from the New York Times came to be for some of the same reasons that DH came to be: a need to push on traditional boundaries of genre coupled with an unwillingness to completely divorce from traditional guiding principles; clever people getting bored with a horizon they can comfortably discern. 


One more connection with my home base of creative writing, if you will: I would argue that what separates an average poet (or prose writer, or essayist, etc.) from a great one is the ability to revise. The initial shape of an idea is not nearly so important as the shape that we leave it in. In that vein, I find the world of DH compelling because it seems to be another discipline that has embraced the guiding principle of revision. Whether you want to think about it from the vantage point of individual projects growing and morphing, or whether you want to go big-picture and think about the fact that digital editions are essentially revisions of traditional print books, it seems to be a deeply embedded concept. 

So, to all those throwing this party under the Big Tent (another great Google search waiting to be made, by the way): Thanks for inviting me! It's good to be here.