Information Technologies and Mnemonic Technê

I posted this picture on Instagram as a joke, saying that I’m excited about working with this book as soon as the semester has ended. One of the things I realized I didn’t do very well in “Architects of Memory” was to highlight key technological moments when/where a small number of people “invented” a type of search, retrieval, or storage technique that became important for later technologies to function. Thomas Hughes calls these sorts of important techniques/technologies “reverse salients” because they changed the resources available for future decision making. I was trying to avoid technological determinism in information technologies while still valuing the materialisms that are important for world building.


For example, I wrote on Mortimer Taube and his Uniterm technique, which indexed documents with a “post-coordinate” system. Post-coordinate systems index terms according to access points created by the co-occurrence (Uniterms) of two words in one document. The Uniterm is a hybrid of two words that become meshed because of their continued co-occurrence at the document and the corpus level. Uniterms don’t actually have a label; they just become a variable point for search technologies to work with. To my mind these conceptual bits work similarly to a scheme or a trope, with the added caveat that they can be easily used to scaffold bigger algorithmic systems. Standardized conceptual techniques like Uniterms help us understand algorithmic bias more clearly. My book described how Taube generated the techniques so that they would speak to users of systems. The advantage of reading technical documents from early information scientists is that they produced numerous appeals/discourses to help legitimize the effectiveness of their systems. I was illuminating theoretical objects to better understand the effects of search engine biases. The term I used to describe those time-bound human/machine hybrids was mnemonic technê.
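To make that more concrete, here is a minimal sketch (in Python, with an invented mini-corpus) of how a post-coordinate index works in principle: single terms point to the documents they appear in, and coordination happens at search time by intersecting those postings. This is a simplified modern illustration, not a reconstruction of Taube’s card-based system.

```python
from collections import defaultdict

# Minimal sketch of post-coordinate indexing in the spirit of Taube's
# Uniterm approach. The documents below are invented for illustration.
documents = {
    "doc1": "guided missile telemetry report",
    "doc2": "missile propulsion test report",
    "doc3": "telemetry antenna design notes",
}

# Build the index: each single term points to the set of documents
# that contain it (the unlabeled "variable points" described above).
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def coordinate(*terms):
    """Return the documents in which all of the given terms co-occur."""
    postings = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*postings) if postings else set()

# The pairing of terms is created at query time, not assigned in advance
# by an indexer; that deferral is what makes the system "post-coordinate."
print(sorted(coordinate("missile", "report")))    # ['doc1', 'doc2']
print(sorted(coordinate("telemetry", "report")))  # ['doc1']
```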

After reading reviews and talking with a few readers, I realized that idea wasn’t as clear as I had tried to make it. The historical work I completed got read as a history of information technologies, but not necessarily as a way to understand how rhetoric coordinated public memory through information technologies. One rhetorician asked, “Where’s the rhetoric?” Readers from the information sciences have commented on the idiosyncrasies of rhetoric’s disciplinary terminology while applauding work that better highlights their discipline’s history.

So the next book I’m writing is trying to do a better job of showing the cultural biases performed by these algorithmic shorthands, which are easier to understand as early information technologies and which become technical building blocks for scaffolding later information systems. F.W. Lancaster (the book I posted) is fairly well known for assessing the earliest databases that were designed for research. He spent much of his career comparing why one type of encoding technique is better than another at helping audiences search, which consequently points directly to the intellectual/economic/cultural biases of the time period. The challenge is to locate language/interventions that escape user testing because they’ve become so deeply ingrained in computational infrastructure.
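Lancaster’s comparisons were typically framed in terms of recall and precision: run the same query against collections indexed by different techniques and measure what each returns. A minimal sketch of that kind of comparison, using invented result sets and relevance judgments rather than anything from his actual studies, might look like this:

```python
def recall_precision(retrieved, relevant):
    """Return (recall, precision) for one search."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    recall = len(hits) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical results for the same query under two encoding techniques.
relevant_docs = {"d1", "d2", "d3", "d4"}
scheme_a_results = {"d1", "d2", "d7"}              # e.g., controlled vocabulary
scheme_b_results = {"d1", "d2", "d3", "d8", "d9"}  # e.g., free-text terms

for name, results in [("scheme A", scheme_a_results), ("scheme B", scheme_b_results)]:
    r, p = recall_precision(results, relevant_docs)
    print(f"{name}: recall={r:.2f}, precision={p:.2f}")
```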

I’ll add that trying to do this hasn’t been easy. It’s been tough to write in the first place, and reviewers don’t always know where to place it. For instance, Meredith Johnson and I have a piece on road-construction algorithms that create shorthands for imaging places/temporality. We tried to pull off the same theoretical argument, and it took a while to convince the reviewers to move forward with it. Anyway, I’m excited to be reading this as soon as grading is done.
