Any Noise At All: A Brief Discussion on a Taxonomy of Electronic and Electroacoustic Music
Edited and expanded by Dustin Ragland,
Academy of Contemporary Music at The University of Central Oklahoma
“Brahms makes object: beginning, middle, end. Art and artists predict what’s going to happen, predict and understand change…Then tape arrived. Whole medium opened up. Can record any sound on tape. Idea of piece not limited by orchestral instrumental sounds. Opened up sound world; direct access to sound. Now could use any noise at all.”
This brief set of observations comes from a then-18-year-old Nicolas Collins, in notebooks published online in 2011 from his studies with Alvin Lucier at Wesleyan University, in a class titled “Introduction to Electronic Music.” While Collins is quick to preface his notebooks with his age and idealism, they are a helpful starting place for understanding how we communicate forms, ontologies, and “acoustemologies” to one another as we compose, perform, and reflect within electronic music. Most importantly, they are a reminder that we face a constant accretion of definitions, sub-genres, and negative dialectics that often obscure with the purpose of exclusion. Definitions offer clarity where they provide communities of similar curiosities, and virtue where they help to re-configure the possible with respect to sound and silence. Sub-genres point to a social reactivity to natural and built environments (note how often certain cities are tied to certain timeframes, which are tied to certain technologies, which are tied to broader movements in the visual and sonic arts). Negative dialectics can help us to build Adorno’s “constellations,” to understand the “…’more’ which the concept is equally desirous and incapable of being.” Yet each of these can serve to pile on semiotic gravity that creates more sluggishness than inquiry.
This summer, Gil Trythall posed a straightforward question to the SEAMUS mailing list: “can anyone direct me to a taxonomy of e-music?” The responses were thoughtful and quite varied. The format of an email thread provides an external pressure: brevity. To devolve into thesis-length email responses would suffocate discussion. However, as Kevin Austin remarked, “that topic is probably three or four doctoral theses long,” and one way of playing creatively with brevity (as we professors know) is to recommend longer texts. In addition, there is some presumption at work in all of us who responded, and I readily admit some presumption in this longer reflection. However, this is not intended as redaction criticism; rather, I hope to creatively interrogate my presumptions in response to the question.
Included in Gil’s original question is the assertion: “at this time it appears that musique concrète and DJ dance music are both ‘electronic.’ ” Embedded in this idea is the claim that there is some value in delimiting various neighborhoods of genre within the larger oblasti of cultural-social situations. Also embedded in the sentence is the assertion that there even is a delineation to define. Depending upon one’s musical context(s), asserting similarity between “high” and “low” electronic music might be a supportive, though subversive, claim. In other contexts, it might appear helpful to border off what Chad Clark, in a tweet, calls “USB music,” which I take to be music that has little imaginative cost, few sonic checks and balances in its production, and no expectation of persevering beyond its immediate moment in the market.
There is little doubt that the question holds fascination and controversy in close proximity, and often phase-changes one into the other. What I present below is a collection of SEAMUS community responses to the original question, with the reading resources included and some slight commentary in between. I’ll close with some thoughts on the potential of the discussion, and some short, personal answers I’d give to the question. The responses below are edited for clarity.
A brief answer from Dave Gedosh points to the importance of semiotics, and its relation to music, in this discussion:
Here are a few books I’ve read personally that have recently (Schaeffer obviously not recent, but rather seminal) approached these questions in their own unique way: Pierre Schaeffer, In Search of a Concrete Music; Michel Chion, Audio-Vision: Sound on Screen; Brian Kane, Sound Unseen; and Curtis Roads, Composing Electronic Music: A New Aesthetic. These are specific to acousmatique, but connections can be made to encompass other categories of computer music and electronic music. And one I haven’t read yet that looks interesting: Tara Rodgers, Pink Noises: Women on Electronic Music and Sound.
And then there are numerous journal articles regarding semiotics and music that deal specifically with electroacoustic music. I compiled a folder of about 35 of these a few years back, including some of Denis Smalley’s articles on spectromorphology, which is, despite his attempt to say it isn’t, semiotics. Semiotics might be an excellent place to start.
What I find fascinating in this response is Smalley’s assertion on spectromorphology: “it is not a compositional theory or method, but a descriptive tool based on aural perception. It is intended to aid listening…” Electroacoustic music (the milieu for his commentary, though we might expand it to electronic music as a whole) needs listening skills that are not only prior to the listening act (top-down, as it were), but listening skills that emerge from the very act of listening itself.
Kevin Austin offered two thoughtful responses to the initial question, the first of which dovetails nicely with the idea of electroacoustic music as a kind of listening act as much as a broad genre or process. His initial response:
As I have seen it, a long-standing and, for some, an evasive question.
Part of the immediate semantic difficulty (as it is a question of semantics at base) is in the definition/limits of the word “music,” the less problematic “e” [as in electronic, or electric, or electro-acoustic], and the aesthetic concept[s] within and surrounding “style.”
In the “old days,” many an axe got ground on the “stony” definition of “music.” Is the ringtone on a phone “music?” For EDM, IDM, and house styles, start with the Wikipedia entries.
These are about “DJ dance music.” Mostly.
However, I am not sure that that is the question. Please accept my apology if my assumption is presumptuous.
Musique concrète is a somewhat older term for what (in Europe and Québec) is generically called “acousmatique.” This word has become complicated by some popular use converting its meaning away from being a language-specific notion; read Pierre Schaeffer, et al., on “fixed media.”
A useful read on the subject of “acousmatic music” is the recently published Treatise on Writing Acousmatic Music on Fixed Media by Annette Vande Gorne, published by Musiques & Recherches (Volume IX, 2018) and now in English translation: http://www.musiques-recherches.be/. In its 70 or so pages is a kind of “proposed formulation” of acousmatic music concepts, processes, aesthetics, limitations, and boundaries.
As with its ancestors, this treatise places the microphone and recording at the start and center of the aesthetic. This is not a “negotiable” condition. The audio files are created from sounds picked up by a microphone, and do not become (as in the EDM tradition) objects for presentation and manipulation using traditional music[al] conceptions. This is not music that is to be played via a sampler and plugins, into/onto a metric grid.
There has been some argument about “why not?” and the answer has been (more or less) “because” acousmatic music is not EDM. While this may sound like a religious statement… (fill in the rest of the sentence with your own mythology).
The American tradition from the 1960s and 1970s, as you know, was largely to conceptualize electronic music as being an extension of the Western European instrumental tradition. Wendy Carlos, among others, used “synthesizers” to create extensions of existing (or possible) Western instruments, using electronics.
It was inevitable that the American keyboard synthesizer tradition was going to merge with popular music in the recording studio, presaged by the rapid technical evolution of disco, and the early adoption of keyboard synths as new timbral instruments.
This merger, which began with the Beatles, Kraftwerk, and Pink Floyd, among others, follows in the imprints of pop music. MIDI was introduced to simplify the interconnection of simplified electronic music instruments.
“Electroacoustic,” as you will remember, was a term introduced in the mid-1970s that attempted to describe a different conceptual and perceptual mode of hearing, listening, and sonic creation. The generic term “sound design” is the one I use to explain to people what electroacoustics is.
There was a time when the word “music” did not adequately encompass the hearing of sound for its own values. Music was the perception of patterns of musical objects, whether those objects were in a North or South Asian raga, a Beethoven symphony, or a Carter string quartet.
Pierre Henry’s Variations for a Door and a Sigh, for example, did not aim to stimulate the same kinds of perceptual centers that Ives’s The Unanswered Question did.
Kevin’s first response here gets to the heart of how I perceive this question as well: how do we speak well about electronic music? How do we enroll all of the complex history and semiotic activity into meaningful discourse without flattening the necessary complexity? Further, if this is primarily a listening act, why not change the assumption that rational terms define the objects of listening for us prior to our encounters with them? Especially if those rational terms carry enormous cultural specificity: “sound design,” in most of my own contexts of musical life, refers either to a specific focus, in a compositional or production stage, on the individual design of an instrument’s timbre (usually synthesized, though not always), or to a film-centered role that works with sounds ranging from the musical score to diegetic and non-diegetic sound placement. However, I quite love the appellation “sound design” as a way to explore what electroacoustics is.
Two brief book responses were offered by Konstantinos Karathanasis and Nicholas Cline, each with a unique approach to the initial question: Leigh Landy’s Understanding the Art of Sound Organization and Joanna Demers’ Listening Through the Noise. Landy’s book is intertwined with the EARS (ElectroAcoustic Resource Site) project and is a “study of sound-based music.” It is in line with some recent attempts to employ computation and neural networks to analyze genre and to organize communities of sound, but remains less algorithmically or mathematically driven. Demers’ work sits more firmly in line with other works of musical aesthetics, in the vein of Scruton or Kivy, and it has the advantage of directly addressing the tensions between experimental music received in “classical” environments and that received in “popular” environments, by focusing on various genres’ sonic signatures and not merely the reception of them.
My own book suggestions are an attempt to bring iterations of electronic music within contemporary popular contexts to bear on the discussion, while also recognizing (alongside Kevin and Dave) the myriad language games at play (in the strong Wittgensteinian sense). Ryan Diduck’s Mad Skills is a “cultural history of MIDI,” narrowing the focus to one technology’s impact on electronic music creation. Attack Magazine’s Secrets of Dance Music Production is a useful read not only because it requires getting past the title a bit, but also because it presents genre borders in the middle of its discussion of process and composition, not as sitting “top-down” on them. Live Wires, by Daniel Warner, works from the implicit thesis that electronic music is the bridge between musical styles regarded as “experimental” and those relegated to “popular” cultures and contexts. While not an exhaustive or systematic taxonomy, it is the closest work among my suggestions to answering the question of a timeline for electronic music. Novak and Sakakeeny’s Keywords in Sound is a collection of essays that digs closer to the beating semiotic heart that Dave and Kevin point to in their responses. A collection of signifiers from sound studies (“Silence,” “Noise,” and “Acoustemology” among them) is expanded in essays, and the book is much more for those exploring the definitions themselves (and the very possibility thereof).
The goal of this whole exploration has been to present some of the thoughtful conversation that happened in the wake of the original question, and it’s worth bringing this to a close with Kevin’s response to a foundational question: how to define a delimiting word like “music” itself. He says:
But to the more specific point, a framework for the “taxonomy” has to be set before the research continues.
Is soundscape [to be defined] electronic music? Soundscape itself has an extensive, and incomplete, taxonomy. There was an international committee set up to try to clarify what the parameters would be. Hmmmm… Not so easy.
And algorithmic composition? Is music composed by computers using Markov chains a genre? Neural nets? Rolling dice?
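As a brief editorial aside for readers less at home with that last set of questions: “composed by computers using Markov chains” often means something as modest as the minimal sketch below, written in Python, in which a first-order chain learns which pitch tends to follow which in a source phrase and then walks those observations to produce new material. The source melody, function names, and sixteen-note output length here are hypothetical illustrations for this essay, not a reconstruction of any particular composer’s or artist’s method.

import random

def build_transitions(melody):
    """Count which pitch tends to follow which in the source melody."""
    transitions = {}
    for current, following in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(following)
    return transitions

def generate(transitions, start, length=16):
    """Walk the chain: each new pitch is drawn from the pitches
    observed to follow the previous one."""
    result = [start]
    for _ in range(length - 1):
        choices = transitions.get(result[-1])
        if not choices:          # dead end: restart from the opening pitch
            choices = [start]
        result.append(random.choice(choices))
    return result

if __name__ == "__main__":
    # Hypothetical source phrase as MIDI note numbers (roughly C major)
    source = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72]
    chain = build_transitions(source)
    print(generate(chain, start=60))

Whether the output of such a walk constitutes a genre, a process, or merely a tool is, of course, precisely the taxonomic question at issue.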
Kevin’s questions are not abstract: Autechre’s NTS Sessions is eight hours of highly algorithmic computer music, and it has already sold out of its LP version. That the responses to a simple email question have been so varied, passionate, and widely read reminds me of Tara Rodgers reflecting on Moog’s early copywriting:
The evocative phrase of the Moog ad, “world of sound,” is more than marketing rhetoric; it is a useful way to describe an affective realm of music-making and audio-technical practice that integrates imaginary, embodied, and social modes of experience.
Questions of what denotes electronic music, and what denotes music eo ipso, are not going away, but are becoming ever more granular. Those grains of definition can be analyzed: this brings us clarity with respect to history, colonization, process, and tonal/metric assumptions. Those grains of definition can also be synthesized: we form imaginative new ways of speaking about the music we create, and new ways of considering the possibilities therein.