Curating Entextualized Timeframes with AudioAnnotate: Rearranging the Audiotextual Features of Warren Tallman’s Retirement Speech

Jason Camlot (Concordia University)

The goal of my short talk today is to analyze the generic audiotextual features of a cassette recording Warren Tallman made of his formal farewell address to the University of British Columbia, first read to an audience of students, faculty, and friends in a UBC lecture theatre on 27 November 1986, and then published the next year in Line journal as a twelve-page article entitled “How To Play Career, and Win.” Among these audible generic features are edits, excerpts, sound-drops, overloads, and interspliced commentary inserted with the aim of contextualizing and situating the more stable and recognizable audiotext that is a documentary recording of his public reading of the farewell paper or speech that he wrote for the occasion of his retirement.

Warren Tallman was a professor and literary organizer at UBC. With Robert Creeley he organized the Vancouver Poetry Conference, and he was a mentor to the poets of the Canadian avant-garde poetry movement known as TISH. This audio recording is held in the SoundBox literary audio collection at UBC Okanagan.

The generic features of audiotexts are largely discernible in their located sound, often as part of a process of excerption (or entextualization), recontextualization, and generic consolidation in performed speech or reading. Richard Bauman usefully defines “entextualization” as a process of “bounding off a stretch of discourse” and “endowing it with cohesive formal properties” so that it becomes objectified and “extractable from its context of production.” Recontextualization, he continues, “amounts to a rekeying of the text, a shift in its illocutionary force and perlocutionary effect—what it counts as and what it does.”

In the case of documentary spoken recordings, recontextualized speech is generically instantiated in recorded performances, acts of expression “framed as display” and open to “interpretive and evaluative scrutiny by an audience both in terms of its intrinsic qualities and its associational resonances.” The generic features of an audiotext become discernible in the located, contextualized sound displayed in a recorded speech performance. The recording of Warren Tallman’s retirement speech is a fascinating example of an audiotext that highlights such contextualized location through deliberate acts of mediated interpolation and recontextualization. One might say that, through pause edits and the will to situate the meaning of an already temporally overdetermined genre, the retirement speech, Tallman has created an artifact that shows us the situating sutures of the time-based medium of sound recording.

At the centre of this cassette recording we have a lengthy documentary recording of Tallman’s reading of a written paper entitled “How to Play Career, and Win” before an audience consisting of (according to Tallman’s account in one of his recorded interpolations) “a more or less full house [of] undergraduates, graduate students, TAs, English Department colleagues, a scatter of UBC Art History, Philosophy and Soc. profs, as well as profs from SFU, Capilano and Langara, and a substantial number of scruffy downtown Powell Street, Commercial Street, Octopus books, … types slinking in and out of the Western Front” (02:30). The retirement speech proper begins 6 minutes and 19 seconds into the full audiotext and lasts 37 minutes and 22 seconds (or so). So, a simple answer about the nature of this audiotext is that it’s a 37-minute (or so) recording of Warren Tallman delivering a paper at UBC on the 27th of November 1986. Tallman himself does some work in one of the audiotext’s many dubbed interpolations to define the genre of this primary portion of the audiotext he is preserving when he refers to it as “the paper, the talk, the speech, the address” (03:30).

But to describe it this way is to leave out some of the most interesting features of the recording as an audiotext, because this long documentary chunk is framed at either end by a series of recorded snippets of contextualizing explanation and commentary by Tallman (and, in one case, by a colleague). Further, the documentary recording of the talk is on a few occasions subject to corrective interjections because, as Tallman says, there were “places where I’ve had to dub in missing sentences” due to failures in the original recording process. The AudioAnnotate annotations I have prepared give us some idea of how it is composed of multiple, temporally interwoven segments. There are approximately fourteen sections in all, alternating between different times and locations: namely, Tallman’s Bellavista Sitting Room on December 14th, 1986, that same address on November 17th, 1986, the UBC lecture hall Buchanan A100 on November 27th, 1986, and, on the second side of the tape, the UBC classroom BU3323 on November 3rd, 1986. Each segment of the audiotape is initiated by the sound of a pause-button click, which I have also annotated. These pause-clicked segments at times constitute disruptive, microgeneric interpolations wherein Tallman comments upon sounds produced in one time and space with sounds produced in another.
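The alternating structure described above can be sketched in code. What follows is a minimal, hypothetical model (not AudioAnnotate’s actual data format) in which each pause-clicked segment carries its position on the cassette timeline plus the place and date of its original recording event; the offsets for the speech proper are derived from the timings stated above (a 6:19 start and a 37:22 duration), while the other segments are illustrative placeholders.

```python
# A hedged sketch of the cassette's interspliced structure: segments on one
# tape timeline, each tagged with its original recording event, then regrouped
# by event. The Segment model and most timings are illustrative assumptions.
from dataclasses import dataclass
from collections import defaultdict


@dataclass
class Segment:
    tape_start: int  # offset on the cassette timeline, in seconds
    tape_end: int
    location: str    # where the segment's sound was originally produced
    recorded: str    # date of the original recording event


segments = [
    # Opening dubbed commentary (placeholder boundaries)
    Segment(0, 379, "Bellavista Sitting Room", "1986-12-14"),
    # The speech proper: starts at 6:19 (379 s), lasts 37:22 (2242 s)
    Segment(379, 2621, "UBC Buchanan A100", "1986-11-27"),
    # Closing dubbed commentary (placeholder boundaries)
    Segment(2621, 2700, "Bellavista Sitting Room", "1986-12-14"),
]

# Cluster the tape's alternating segments back into their recording events.
events = defaultdict(list)
for seg in segments:
    events[(seg.location, seg.recorded)].append(seg)

for (location, date), segs in events.items():
    total = sum(s.tape_end - s.tape_start for s in segs)
    print(f"{location} ({date}): {len(segs)} segment(s), {total} s on tape")
```

Grouping by recording event in this way is what makes it possible to treat the interpolations from the Bellavista Sitting Room as one continuous layer of commentary, even though they are scattered across the tape.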

While one can annotate such audio using tools like Audacity, or even transcription software, I am beginning to discover that one of the benefits of AudioAnnotate’s IIIF approach to the presentation of annotations, in a case such as this, is that it allows one to curate temporality in clearly demarcated ways. If a recording is composed of interspliced segments from distinct recording events, each with its own timeframe and location, then this tool allows one to reassemble the temporal pieces that originally went together, thus restoring, and listening to, timeframes that are otherwise dispersed across a temporally intermixed audio timeline. The ability to cluster and curate time in this manner enables nimble movement within and across annotated segments, and offers new ways of listening to complex audio artifacts. In short, the annotation-layers function afforded by IIIF, and made easily presentable via AudioAnnotate, allows one to see, play, and hear the entextualizations of documentary spoken recordings in unique ways for the purpose of analysis.
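For readers curious about the machinery underneath, the IIIF ecosystem builds on the W3C Web Annotation model, in which an annotation targets a temporal span of an audio canvas via a Media Fragment suffix (#t=start,end). The sketch below, with placeholder URLs and the segment timings discussed above, shows how one annotation layer per recording event could be assembled; it is an illustration of the general standard, not AudioAnnotate’s internal implementation.

```python
# A hedged sketch of W3C Web Annotations for time-based audio, grouped into
# layers by recording event. Canvas URL and notes are illustrative placeholders.
import json


def time_annotation(canvas_url, start, end, note):
    """Return a minimal Web Annotation targeting seconds start..end of a canvas."""
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "motivation": "commenting",
        "body": {"type": "TextualBody", "value": note, "format": "text/plain"},
        # Media Fragment syntax: #t=start,end (seconds on the canvas timeline)
        "target": f"{canvas_url}#t={start},{end}",
    }


canvas = "https://example.org/iiif/tallman-cassette/canvas/side-a"  # placeholder

# One layer per recording event: toggling a layer lets a listener follow a
# single timeframe (e.g. the sitting-room commentary) across the mixed tape.
layers = {
    "Bellavista Sitting Room, 1986-12-14": [
        time_annotation(canvas, 0, 379, "Dubbed contextualizing commentary"),
    ],
    "UBC Buchanan A100, 1986-11-27": [
        time_annotation(canvas, 379, 2621, "Documentary recording of the speech"),
    ],
}

print(json.dumps(layers, indent=2))
```

Because each layer gathers only the annotations belonging to one recording event, playing a layer in sequence approximates the restored, continuous timeframe that the cassette’s splicing had dispersed.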