DAY 1 — Thursday, August 16th
Session Chair: Steven Naylor. Click on linked titles to read full articles published in eContact! 15.2 — TES 2012.
Recent advances in live sound processing technology make it increasingly possible for acoustic instrumentalists and vocalists to interact with electronic systems and sounds in live performance. These advances allow for the creation of musical works that utilize the nearly infinite sonic palette that is available through electronic means without relegating acoustic instrumental resources to the realm of fixed-media samples. The most significant effect of this technological shift is that it allows pre-existing musical traditions (which have historically been developed only through purely acoustic means) to enter the electroacoustic arena and develop with the resources that sound-processing technology provides.
This new era of live electro-instrumental performance coincides with the greatest period of globalization and cross-cultural exchange in the history of the world. While the technology that has made electro-instrumental music possible does not itself exhibit characteristics of any one culture or geographical region, this technology has developed largely among composers and researchers working in the Western experimental art music tradition and it is arguably this musical tradition that has been most significantly altered by this technology. Very recently, however, many musicians around the world have sought to explore other musical traditions (notably those of Indonesia) using live electro-instrumental resources.
In order to describe and understand the contemporary æsthetic effects of globalization in the arts, curator and theorist Nicolas Bourriaud developed the biological analogy of the radicant in his eponymous 2009 book. A radicant is a plant, like an ivy or strawberry vine, which grows nonlinearly outward and without one single trunk or root for nourishment. Rather, radicant plants lay their roots over all of the soil that they traverse, drawing nutrients from multiple places simultaneously and even allowing roots that no longer provide nourishment to wither and dry up. Radicant artists and musicians exhibit this ability to re-root themselves and be rooted in multiple cultures at the same time (either through literal geographical relocation or through contact via the Internet), drawing inspiration from and working within the artistic and musical traditions of these diverse cultures simultaneously. Most importantly, radicant artists and art forms are not defined or fixed solely in their cultural or historical origins, but can develop authentically and significantly across or between cultures.
For radicant musicians, the defining characteristics of a particular culture’s music are not fixed properties, like souvenir snapshots taken by a visiting tourist, but rather dynamic constellations of sounds and techniques, which may be altered each time a musical work is performed. In this sense, radicant theory is particularly apropos to the field of live electro-instrumental performance (rather than fixed media pieces, or pieces which only use pre-recorded samples), since each such performance requires the natural reinvention and rekindling of a musical tradition that is inherent to the performance of an acoustic instrument.
Few art forms better represent the radicant analogy than the diverse musical traditions of Indonesia. The tuning and structure of gamelan music, in particular, have influenced many significant American and European composers in the past century. Conversely, the large influx of Western tourists to Indonesia in recent decades has significantly altered the performance practice of much traditional gamelan music, particularly on the island of Bali.
This presentation explores the development of musical traditions within live electro-instrumental performance through the lens of radicant theory, with an emphasis on musical traditions that have their earliest roots in Indonesia. It features the work of Marc Chia (a.k.a. One Man Nation), founder of The Future Sounds of Folk project; Ensemble Gending, a Javanese gamelan based in Utrecht, Netherlands, who recently hosted a workshop for composers to explore the use of live electronics with the gamelan ensemble; and my own work with Gamelan Cahaya Asri (a Balinese ensemble based in Wisconsin) in my piece Grattage: Baris Tunggal for gamelan gong kebyar and Max/MSP, which was premiered at the 2012 Society for Electroacoustic Music in the United States (SEAMUS) conference.
Lawton Hall makes music and media art in the American Midwest. He tends to emphasize his geographical roots in bios and such, though he thinks that place,
history and geography are increasingly complicated concepts. His study of the writings of James Tenney and close relationship with Pauline Oliveros have
fueled a desire to put consciousness and perception at the center of his creative work. Lawton holds a Bachelor’s in Music from Lawrence University, where
he studied composition with Asha Srinivasan and John Mayrose, and new media art with John Shimon and Julie Lindemann. He has studied Balinese gamelan with
I Dewa Ketut Alit Adyana and horn with Tod Bowermaster and James DeCorsey. His music has been presented across North America and Europe and he has
collaborated with numerous ensembles, musicians and artists across the United States. Recently, Lawton worked at STEIM, Amsterdam.
Projecting the screen during a performance is a practice that has been embraced by live coding musicians as a means to provide an audience with insight into the players’ actions. The TOPLAP group’s manifesto contends that incorporating the code into the visual presentation gives “access to the performer’s mind, to the whole human instrument.” This paper explores the challenges of creating visual live coding pieces to accompany the Cybernetic Orchestra, a laptop orchestra that uses pulse-based timing and live coding environments like ChucK and SuperCollider as its primary instruments. Other topics discussed include the æsthetics of projected code, the use of natural language and invented code to engage audiences and an outline of techniques for creating intersections between visuals and music. The presentation features audio/visual excerpts from performances and a demonstration of the live coding environments used to create live coded visual accompaniment.
Creating “Orbit, a Scalable Laptop Composition”
by Ian Jarvis
This paper presentation reviews the author’s process of, and reflections on, expanding his creative practice into live laptop performance and human-computer interaction design through the creation of Orbit, a laptop composition scaled for multiple formats. Action Research was employed to create the composition, with special reference to Latour’s notion of the nonmodern and Merleau-Ponty’s phenomenology. The main points of the discussion include: 1) the impact of scaling the piece for multiple formats, including a solo performer, a laptop ensemble and a geographically distributed laptop ensemble; 2) the impact of the modes of human-computer interaction involved, including live coding, uniquely designed gestural controllers and user interfaces, and physical computing. The Cybernetic Orchestra, McMaster’s laptop orchestra, has been the key research and creation resource: as the project extends to multiple formats, the structure of the laptop orchestra has provided the technical foundation and further creative resources. The piece is the major research project completing the author’s MA in Communication and New Media at McMaster University. The overarching interest is the reciprocal relationship of humans and technology in the creation of sonic art and music: how technology comes to be experienced as an extension of the body and is integrated into our overall habituated understanding.
Ian Jarvis is a sound artist, composer, songwriter and media producer from Toronto. Much of his work combines elements of acousmatic art, soundscape composition, algorithmic art and popular music, and is influenced by the implications of technology on creative practices and on the development of personal identities. Recent works are included on the NAISA Deep Wireless 8 CD, the Cybernetic Orchestra’s debut CD ESP.beat and have been presented at the Hamilton Art Crawl, the Sheridan Gallery and the 2012 Toronto Electroacoustic Symposium. He creates “The Becoming of an Audiophile” for NAISA webcast, and composes and produces various audio projects under the names Audio Being and frAncIs.
Session Chair: Mitch Renaud.
Under Living Skies: Aural character in creative practice
by Eric Powell
There exists a special relationship between space and sound. Sound, by definition, cannot exist without space — waves of physical energy vibrate air particles contained within the volume of a space, creating acoustic sound. This relationship between sound and space can be inverted, allowing sound to activate space, thus highlighting the unique nuances or peculiarities within any physical area. This paper discusses the research methods and creative approaches applied in composing a site-specific concert entitled Under Living Skies. This piece is based on two central questions: What is the sound of Saskatchewan? and How can the unique aural character of this province be presented to an audience? In order to provide a context for my own work, this paper draws from the writing and creative work of researchers and artists such as Bernie Krause, Charlie Fox, Christina Kubisch and R. Murray Schafer. These comparisons will explore the cultural history of sounding-in-space — comparing a variety of approaches to location-based artistic expression and examining the ways we interact with space through sound.
Under Living Skies is an ongoing project focused on developing a musical composition of and for a specific geographic location. The discussion encompasses the project’s two previous instalments, as well as the ongoing compositional and research processes used to bring the 2013 site-specific performance event to fruition. The first instalment was the 2010 performance of a concert piece combining a chamber ensemble with an 8-channel fixed-media score composed of rural, urban and industrial field recordings collected from around the Canadian province of Saskatchewan, exploring the compositional balance between environmental sounds and instrumental voices. The second instalment was a period of intensive audio-based mapping research: over the summer of 2011, my team and I created a library of impulse responses, instrumental interventions and other documentation from locations with a unique sonic character, allowing me to perform acoustic analysis of these locations. The current stage of the project, discussed here in more detail, synthesizes findings from the two previous iterations, developing new compositional approaches, heightened listening practices and an understanding of the sounding character of these places.
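The impulse-response work described above can be illustrated with a toy sketch: convolving a dry recording with a measured impulse response (IR) lets one audition a sound “inside” a documented location, and the IR’s decay hints at the room’s reverberant character. This is an invented, minimal illustration (signals as plain Python lists), not the project’s actual analysis toolchain.

```python
# Toy sketch: auditioning a location's acoustics via its impulse response.
# Invented illustration, not the Under Living Skies toolchain.

def convolve(dry, ir):
    """Direct-form convolution; output length = len(dry) + len(ir) - 1."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

def decay_time(ir, sample_rate, floor_db=-60.0):
    """Rough decay estimate: time until the IR last exceeds floor_db
    relative to its peak (a crude stand-in for an RT60 measurement)."""
    peak = max(abs(s) for s in ir)
    threshold = peak * 10 ** (floor_db / 20.0)
    last = max((n for n, s in enumerate(ir) if abs(s) >= threshold), default=0)
    return last / sample_rate

# A unit impulse played "through" a two-tap room comes back as the IR itself
wet = convolve([1.0], [0.5, 0.25])   # [0.5, 0.25]
```

In practice one would use FFT-based convolution on recorded audio files; the direct form above simply makes the underlying operation visible.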
This paper explores the variety of approaches necessary to create a piece that both influences and is influenced by environmental sound. This involves the application of both artistic and scientific methods: integrating geologic, topographic, acoustic and psychoacoustic analysis with elements of acoustic ecology, contemporary music practices and new media technology. Using these means, it becomes possible to create a piece of music that truly resonates within the site, performing music of and for the land.
Eric Powell is a sound artist and composer working with a wide variety of presentation methods including composing for stereo and multi-channel tape,
performing with acoustic instruments and live electronics, as well as creating site-specific and interactive installations. In 2008, he received his MFA in
electroacoustic composition from Simon Fraser University. He is a founding member of the sound art organizations Electricity is Magic and Holophon Audio
Arts, and sits on the board of the Canadian Association for Sound Ecology (CASE). His work has been heard throughout Canada, Mexico, the USA and Europe
with recent presentations at Toronto’s New Adventures in Sound Art, Tangente Dance Company in Montreal, and the Prague Quadrennial. Much of this work has
been made possible through the generous support of the Saskatchewan Arts Board.
On faith, work, leisure & sleep is an interdisciplinary work consisting of a series of six pieces for piano, pianist’s voice, text, electronics and video. A collaborative project by Canadian contemporary art music composer Emilie Cecilia LeBel and Brazilian-Canadian pianist Luciane Cardassi, it represents the composer’s interest in combining visual art with concert music and the pianist’s interest in incorporating multimedia works into her repertoire. The composer will present this new work to illuminate her creative practice and to discuss the challenges and rewards surrounding the collaborative process. The question of how a composer can successfully create music in the current complex and varied landscape arose during the course of this project. How does a contemporary composer honour the western art music tradition, recognize the influence of exposure to a variety of music, acknowledge the presence of extra-musical influences and uphold their own voice? How does one create engaging art in the present, while looking both backwards and towards the future? The composer will discuss how confronting these issues over the course of the project led her to embrace technology, adopt a collaborative working process with the performer and incorporate interdisciplinary work, while attempting to honour her own voice and the æsthetics of her discipline’s tradition.
Emilie Cecilia LeBel is a Canadian composer presently based in Toronto. Her compositions have been performed across Canada and internationally. As a
contemporary art music composer, Emilie works with acoustic instruments and electroacoustic mediums to create both discrete and mixed compositions. She
also creates intermedia projects working with electronics, video, photography and acoustic instruments. Many of her recent projects have focused on
collaborations with performers, incorporating contemporary art music with electronics and video. Emilie is completing doctoral studies in composition at
The University of Toronto. She has studied at the University of Victoria, Harris Institute for the Arts and York University. She holds an Honours Diploma
in Audio Engineering, a Spec. Hons BFA in Music Composition with a minor in Visual Art and an MA in composition and ethnomusicology. Emilie has
participated in a variety of workshops and residencies including New Adventures in Sound Art’s Sound Travels Festival and Deep Wireless Festival, The Banff
Centre, Quatuor Bozzini Composers’ Kitchen, Arraymusic Young Composers’ Workshop and The Leighton Artist Colony at The Banff Centre.
Bird on a Wire, my project to commission works for recorder and electronics, took off a second time in Summer 2011 with eight new pieces in 8-channel surround. The composers had a wide range of experience with electronics, in terms of type and complexity — some had never worked with live processing; others had never tried multichannel composition. The composers also covered a large spectrum of genres, from acousmatic to noise by way of instrumental and free improvised music. The one thing all composers did have in common was that none of them had written for the recorder before, so a large part of what they would learn about the instrument came from me.
What I had observed in the first set of pieces I commissioned was that collaboration (a meeting at the beginning of the compositional process, one in the middle and, if possible, one when the work was finished) excited me the most about the process. It was clear to me that for Bird on a Wire II — Flocking Patterns I would try to create a situation in which there could be an even greater degree of interaction (and time in an 8-channel studio), such that the pieces would be not only the composers’ reflections on the recorder in an 8-channel environment, but might also, in some way, define our musical relationship. My aim was not to infiltrate their compositional process in any forceful way — I had two quite different goals: a) to invite the composers, by introducing them to my personal way of using the instrument (which is quite different from anything presented in an orchestration book), to write with my own sounds and techniques in mind, as “idiomatically” for me as possible; and b) to expand my Zone of Proximal Development. I like to borrow this term from Lev Vygotsky, who defined this zone as “the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers.”
In this presentation, I will first describe broadly, from my perspective, how successful or frustrated I was in those goals, using a number of the pieces as examples. This includes relating my experience of “recognizing” my voice in the music (both recorded and to be played) as well as how the electronics sometimes allowed me to hear through the kaleidoscope of another’s ear. Then, I will expand on the idea of collaboration as a developmental tool for artists and more specifically for performer/composer groups using technology. In certain cases, I provided technological scaffolding (another term from psychology, developed by Vygotsky’s followers) — be it in terms of descriptions of instrumental technique or building computer patches — allowing the composer to experiment with ideas previously beyond their reach. Other times, the composers provided me with a structure within which to reach greater performance “heights.” Either way, the process allowed us to expand our comfort zones, something that would not have happened without the time and effort we put into joining forces. In this second section, I will mingle my own impressions about this learning with the reactions I have received from the composers.
Throughout this presentation, I will play excerpts of the pieces from Bird on a Wire II — Flocking Patterns to illustrate the results of composing “idiomatically” and collaboratively in sound. The recording is in 5.1.
Recorder player and composer Terri Hron comfortably migrates from performance to composition, exploring acoustic and electronic sounds in both written and
improvised situations. Bird on a Wire, Terri’s ongoing performance project to commission, perform and record new pieces for recorder and
electronics has yielded two evening-length concerts of music and two albums. Terri’s compositions span the range from chamber orchestra to acousmatic
pieces. Her current projects include Sharp Splinter, a cycle for instruments and electronics based on her family archive, and a new work for
Spiritus Chamber Choir, Calgary. Her own experience as a performer and her interest in working on individuality has led Terri to research creative
collaboration between composers and performers, especially within settings involving technology. Terri is the recipient of numerous scholarships, prizes
and residencies, including: the Social Sciences and Humanities Research Council of Canada, The Banff Centre, and the Canada Council for the Arts.
Session Chair: Kevin Austin.
Tracing Conceptual Structures in Listener Response Studies
by Adam Basanta
Studies concerning listener responses to electroacoustic music are surprisingly rare, with such scholarship characterized as “an exception rather than the rule” (Landy). The major contributions to this area of research (Landy, Weale and McCartney) centre on the relationships between composer, work and listener, with regard respectively to accessibility, tensions between intention and interpretation of narrative development, and the cultural constitution of the listener. While these studies, as well as their aims and methodologies, are of great value to the scholarly understanding of listener engagement with electroacoustic music, I would like to suggest a complementary aim and methodology to these previous efforts, one which focuses on tracing the conceptual structures governing the relationship between listener and work. This approach will be illustrated through discussion of an online pilot study conducted in November 2010.
The pilot study in question can be briefly described as an online survey, in which participants — varying in age, sex, physical location, as well as degree of familiarity with electroacoustic music — were provided with an open, largely unguided forum in which to respond in writing to excerpts of electroacoustic music specifically composed for this purpose by the author. Although the study bears several methodological similarities to McCartney’s inquiry (notably, the lack of direct questions in favour of an open forum for responses), as well as Landy and Weale’s project (an emphasis on the use of “real-world” sound materials in the compositional excerpts), it departs from the aforementioned studies in several respects.
One major methodological difference is reflected in the construction of excerpts of electroacoustic music specifically for the purposes of the study, as well as the length of said excerpts (20 seconds to 3-1/2 minutes). However, a larger departure is evident in terms of the study’s analytical aims and methodologies. While Landy and McCartney both address high-level interactions between composer, work and listener (such as narrative development, sound identification, enjoyment and accessibility), the conceptual underpinnings of the relationship between the listener and the liminal space afforded by the work are assumed, and thus largely unexplored. I would like to suggest an examination of responses by familiar and unfamiliar listeners in terms of the underlying conceptual structures from which these higher-level responses arise; that is, to uncover the processes through which listeners construct the relationship between themselves and the work.
Listening, like all perceptual activities, is not neutral, but rather is made possible and constrained by “conceptual understanding across a multitude of cognitive domains” (Varela). I will suggest that some of these underlying conceptual structuring processes can be gleaned from the language found in listener responses. Language, according to the experientialist approach, emerges from “the structured nature of bodily experience and… our capacity to imaginatively project [structured bodily experience] to abstract conceptual structures.” In turn, the syntax of language can be regarded as both providing and manifesting “semantic and functional motivations,” as well as “indicating… relationships based both on form and on meaning” (Lakoff). In this sense, the linguistic investigation of a listener’s reported experience — the use of personal pronouns, tenses, metaphors and sentence structure — can reveal the manner in which this experience has been cognitively structured, in turn shedding light on the processes which comprise the listener’s negotiation of meaning.
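As a purely invented toy illustration of the kind of surface-level linguistic tallying such an analysis might begin with (the study’s actual coding scheme is not described here), one could count personal-pronoun use in a written response as a crude first proxy for subject positioning:

```python
# Invented toy sketch: tallying first- vs. third-person pronouns in a
# listener response. Not the study's actual coding scheme.
import re
from collections import Counter

FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our"}
THIRD_PERSON = {"it", "its", "he", "she", "him", "her", "they", "them", "their"}

def pronoun_profile(response):
    """Tally first- vs. third-person pronouns in one written response."""
    words = re.findall(r"[a-z']+", response.lower())
    profile = Counter()
    for w in words:
        if w in FIRST_PERSON:
            profile["first person"] += 1
        elif w in THIRD_PERSON:
            profile["third person"] += 1
    return profile

profile = pronoun_profile("I felt as if it surrounded me; it kept moving.")
# first person: 2 ("I", "me"); third person: 2 ("it", "it")
```

A real analysis of tense, metaphor and sentence structure would of course require far richer linguistic annotation than a word list; the sketch only shows where such a pipeline might start.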
Within this approach, the emphasis remains on structural tendencies, rather than the exploration of specific responses. I will particularly concentrate on several repeating motifs in listener responses that emerged in the pilot study: subject positioning, place images, movement metaphors, cultural references, bodily affect and musical-analytical schemas. Using Davies and Harré’s concept of “subject positioning,” I will reflect on the most basic factor on which the interaction between listener and work is founded: the positioning of the listening-self in relation to the sound media, from which “a person inevitably sees the world… in terms [of] particular images, metaphors, storylines and concepts” (Davies and Harré). I will suggest three major types of subject positioning in pilot study responses: external, internal and double externalization.
Following subject positioning, I will examine the articulation of place images in listener responses, as well as the various motivating factors leading to this articulation. I will explore the use of various movement metaphors to account for changes in place images using Lakoff and Johnson’s “location-event structure” metaphor. Subsequently, the use of cultural references as an aid in the listener’s negotiation of meaning will be explored using an ecological perspective (following Windsor), extended to the realm of cultural production. Specific focus will be placed on the relationship between cultural references and the negotiation of role and meaning of pitch-based sound materials. Finally, I will contrast unfamiliar listeners’ use of bodily affect as a process of meaning negotiation with the familiar listener’s use of musical-analytical schemas.
The contemplation of these structural underpinnings of listener responses will provide a complementary view with which to reflect on existing listener response studies, as well as provide insight with regards to issues of listening behaviour, accessibility, narrative development and the sociology of electroacoustic culture.
Adam Basanta (b. 1985) is a multiple award-winning composer and media artist, whose work traverses electroacoustic, acoustic and mixed composition,
audiovisual installations, interactive laptop performance and innovative light design. His work often explores various modes of listening, cross-modal
perception, the re-animation of quotidian objects and the articulation of site and space. His concert works have been presented throughout the Americas,
Europe, Asia and the UK, and have been awarded multiple national and international prizes. In 2012, his electroacoustic work Three Myths of Liberalism was awarded the John Weinzweig Grand Prize for best composition in the SOCAN Foundation Awards for Young Composers. His
audiovisual installations have been presented in Canada, the USA and Spain. He holds a BFA from Simon Fraser University (Vancouver BC), where he studied
extensively with Barry Truax, and is currently completing an interdisciplinary MA at Concordia University (Montréal QC), supervised by Sandeep Bhagwati and
The project Scalable, Collective Traditions of Electronic Sound Performance aims to scale laptop orchestra practices, compositions and technologies across diverse sizes of ensemble, ranging from the chamber music of small quartets, to common 6–15 performer ensembles, to large-scale, globally distributed telematic happenings. Since 2010, with the support of an Image Sound and Text Technology grant from the Social Sciences and Humanities Research Council of Canada, the project has unfolded along multiple axes — ranging from the design and development of software tools, through the development of specific compositions and improvisational practices, to the use of surveys and focus groups to uncover the complex dynamic among participants and audiences. Methodological eclecticism is the project’s defining strength. We will give a progress report on the various strands of the project, with a special emphasis on demonstration and discussion of the espTools, software developed to streamline the sharing of information, communication, timing, audio and video among the members of a laptop orchestra.
The Cybernetic Orchestra, a continuously running laptop orchestra at McMaster University in Hamilton, Canada, has been both the primary site and beneficiary of this activity, and more recently, the Electroacoustics, Space and Performance (ESP) research group has begun working alongside the orchestra. The Cybernetic Orchestra quickly developed into an orchestra that emphasized two things in its everyday practice: live coding (primarily in the ChucK language) and network-synchronized beat cycles. Out of this activity, a set of software tools began to develop, foremost among them (in terms of frequency of use) a simple protocol for beat synchronization (espBeat), implemented as a “server” and “client” (or receiver) in several software environments. At the Toronto Electroacoustic Symposium 2012 we will present the next phase of this development: the emergence and public release of the espTools as a standalone application on various platforms. Performers run the espTools application alongside their chosen performance environment or interface and benefit from the ability to share various kinds of performance information, with as little configuration and interaction with the tools as possible.
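One plausible shape for a pulse-synchronization scheme like espBeat can be sketched in a few lines (the actual espTools message format is not documented here, so the fields below are assumptions): a server periodically broadcasts a tempo anchor, i.e. a shared reference time, the beat period and the beat number at that anchor, and each client derives the current beat locally, so an occasionally dropped message does not break the ensemble’s synchronization.

```python
# Hypothetical sketch of a beat-synchronization clock; the anchor fields
# are assumptions, not the documented espBeat protocol.
import math

class BeatClock:
    def __init__(self, anchor_time, period, anchor_beat=0):
        self.anchor_time = anchor_time   # shared reference time, in seconds
        self.period = period             # seconds per beat
        self.anchor_beat = anchor_beat   # beat number at the anchor

    def beat_at(self, now):
        """Continuous beat position at wall-clock time `now`."""
        return self.anchor_beat + (now - self.anchor_time) / self.period

    def next_beat_time(self, now):
        """Wall-clock time of the next whole beat strictly after `now`."""
        next_beat = math.floor(self.beat_at(now)) + 1
        return self.anchor_time + (next_beat - self.anchor_beat) * self.period

# 120 BPM (0.5 s per beat), anchored at shared time 100.0 s
clock = BeatClock(anchor_time=100.0, period=0.5)
```

A client would schedule its next event for `clock.next_beat_time(now)` and simply replace its anchor whenever a fresh server message arrives, keeping every laptop locked to the same beat cycle.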
David Ogborn is a creator, performer and producer who combines the traditional performing arts with electronic media — whether these be recordings of diverse outdoor environments around the world, laptop orchestra improvisations, video projections influenced by live musical gestures, massive synthesized sounds on immersive loudspeaker arrays, or spatial installations and sculptures built from sensors, microcontrollers and motors. Recent highlights have included Metropolis (2007, live electronics with silent film), Opera on the Rocks (2008, ambient opera with live electronics), Emergence (2009, live electronics + physical computing) and Waterfall (2010, collaborative video sculpture with physical computing at Summer Olympics). Ogborn is the president of the Canadian Electroacoustic Community (CEC) and teaches audio, sound+image, multimedia programming and physical computing at McMaster University, where he also directs the Cybernetic Orchestra. The Cybernetic Orchestra’s debut album esp.beat (2012) can be heard online at http://soundcloud.com/cyberneticOrchestra/sets/esp-beat.
The Electroacoustics, Space and Performance (ESP) Research group is housed within the Department of Communication Studies and Multimedia at McMaster University. The group explores and examines the possibilities of electroacoustic technologies and techniques, with a special focus on participatory and collaborative performance situations. Current group members include Nicolas Hesler, Aaron Hutchinson, Ian Jarvis, Alyssa Lai, David Ogborn, Kearon Roy Taylor and others.
There is a need for spatialization tools integrated into an audio sequencer, allowing composers to work with space throughout the compositional process, as opposed to autonomous spatialization tools, which force the composer to compose the timeline first and the space afterwards. Most standard audio sequencers (Mac or PC), with the exception of Reaper, provide no straightforward access to the full number of outputs available on the audio interface: an RME Fireface has 28 outputs, for example, but sequencers usually limit the number of outputs to the standards of the cinema industry (5.1, 6.1, 7.1 and 10.2). Nor is the physical placement of the loudspeakers taken into account. I will also discuss the need for a 3-D spatialization tool integrated in an audio sequencer (Digital Performer, Logic or Reaper), as opposed to a dedicated autonomous tool. The Zirkonium was originally developed in 2005 by the ZKM in Karlsruhe (Germany) to fulfil its need for 3-D software to control sound placement on a dome of loudspeakers: ZKM has a 47-speaker dome permanently installed in a concert hall, as well as a 24-speaker minidome in a production studio. The Zirkonium is a VBAP (Vector Base Amplitude Panning) application, and it needs a minimum of three speakers to position a sound. Unfortunately, there is no simple interface for controlling the movement of sound in the Zirkonium, short of building one yourself (for instance, a SuperCollider or Max patch controlling the Zirkonium via OSC), and again, that intervention comes at the end of the compositional process. Our project was to integrate the Zirkonium into a Mac audio sequencer in the form of an AU plugin, so that each track can be spatialized independently during the compositional process.
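The VBAP principle underlying the Zirkonium can be sketched for the simpler two-dimensional case: the source’s direction vector is expressed as a weighted sum of the two nearest loudspeakers’ direction vectors, and the weights, normalized for constant power, become the channel gains. This is an illustrative sketch only, not Zirkonium code (which works on a 3-D dome with speaker triplets rather than pairs).

```python
# Minimal 2-D VBAP sketch: solve for pairwise speaker gains, then
# normalize for constant power. Illustrative only, not Zirkonium code.
import math

def vbap_pair_gains(src_deg, spk1_deg, spk2_deg):
    """Gains for a source panned between two speakers (angles in degrees)."""
    p = (math.cos(math.radians(src_deg)), math.sin(math.radians(src_deg)))
    l1 = (math.cos(math.radians(spk1_deg)), math.sin(math.radians(spk1_deg)))
    l2 = (math.cos(math.radians(spk2_deg)), math.sin(math.radians(spk2_deg)))
    # Solve [l1 l2] * g = p, a 2x2 linear system, via Cramer's rule
    det = l1[0] * l2[1] - l2[0] * l1[1]
    g1 = (p[0] * l2[1] - l2[0] * p[1]) / det
    g2 = (l1[0] * p[1] - p[0] * l1[1]) / det
    # Normalize so that g1**2 + g2**2 == 1 (constant power)
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

# Source halfway between speakers at ±45°: equal gains of 1/sqrt(2)
g1, g2 = vbap_pair_gains(0, -45, 45)   # g1 ≈ g2 ≈ 0.707
```

Full 3-D VBAP generalizes this to a 3×3 system over the active speaker triplet, which is why the Zirkonium needs at least three speakers to position a sound.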
Robert Normandeau holds an MMus (1988) and DMus (1992) in Composition from Université de Montréal. His work figures on many compact discs, including seven solo discs published by empreintes DIGITALes, and Sonars (Rephlex, UK). He was awarded Opus Prizes from the Conseil québécois de la musique in 1999 for “Composer of the Year” and “Record of the Year in Contemporary Music.” He was awarded the Masque, given by the Académie québécoise du théâtre for the best music composed for a theatre play, in 2001 for Malina and in 2005 for La cloche de verre. Normandeau has won numerous international competitions, including Ars Electronica, Bourges, Luigi Russolo, Noroit-Léonce Petitot, Phonurgia-Nova and Giga-Hertz. His compositions employ æsthetic criteria whereby he creates a “cinema for the ear” in which “meaning” as well as “sound” become the elements that elaborate his works. Along with concert music he now writes incidental music, especially for the theatre. He has been Professor of Electroacoustic Composition at Université de Montréal since 1999.
09:30–11:00 • Keynote Address
Encounters in the Republic of Heaven
by Trevor Wishart
“Approaches to the Voice in Electroacoustic Music” will describe various approaches to working with vocal material, and especially human speech, in the creation of electroacoustic music, from both a technical and a musical perspective, with particular focus on my new work Encounters.
Trevor Wishart (*1946) is a composer / performer specialising in sound metamorphosis and in constructing the software that makes it possible (Sound Loom / Composers Desktop Project). He has lived and worked as composer-in-residence in Australia, Canada, Germany, Holland, Sweden and the USA. Currently residing in the North of England, Wishart creates music with his own voice, for professional groups, or in imaginary worlds conjured up in the studio. His æsthetic and technical ideas are described in the books On Sonic Art, Audible Design and Sound Composition (2012). Works include Red Bird, Tongues of Fire, Two Women, Imago and Globalalia. He has received commissions from the Paris Biennale, the DAAD in Berlin, the French Ministry of Culture and the BBC Proms. In 2008 he was awarded the Giga-Hertz Grand Prize for his life’s work. Between 2006 and 2010 he was composer-in-residence at Durham University (North East England) and during 2011, Artist-in-Residence at the University of Oxford.
14:00–15:00 • Paper Session #4: Theoretical Perspectives
Session Chair: Ian Jarvis. Click on linked titles to read full articles published in eContact! 15.2 — TES 2012.
The initial thoughts about this paper revolved around a wonderful flaw in my thinking, which ultimately brought me to its present direction. Initially, when reading Derrida’s discussion of what it would mean for a machine to think, I predicated machines’ thought on their capability to affect us, the humans. However, for the advent of thought to be fully articulated by a machine, the machine itself would have to be affected, thus demonstrating its ability to experience. With this paper I set out to lay the theoretical groundwork for the hypothesis that if, on some level, we perceive machines as creating the music they are reproducing, we are expanding our thought towards being able to conceptualize machines that can think.
My discussion of the futurity of thinking machines focuses on a gap in our perception whereby we think of the experience of listening to recorded music not as listening to a machine reproduce a human input, but as the machine itself producing the sound we hear. This lack of constant cognitive recognition allows for the formation of a gap that we can use to further our potential to think about the futurity of machines thinking. In other words, this gap allows us to expand the way we think towards machines thinking. It is only a tiny crack, opened by a lack of constant awareness, but through further thought and the development of technology we can begin to force this gap wider. I will explicate the idea of the futurity of machine thought by looking at the acousmatic nature of recorded sound, along with all disembodied sound, as identified by Pierre Schaeffer. Drawing on Cary Wolfe’s and Jacques Derrida’s work, I will show how the live diffusion of electronic music, in both public and private space, creates the perceptual gap necessary to be able to think about machines thinking. I will also look at how ideas of performativity and affect will have to be adapted towards machine thinking, and how, when machines fully achieve the advent of thought, it will be a truly posthuman event predicated on a whole new logic. The drastic changes to the modes of distribution of music in the Internet age are forcing the further expansion of this gap due to the coming immediacy of music. Along these lines, my paper will consider the hypothesis that there is a gap in our perception that can allow us to expand our understanding of the futurity of thinking machines. By engaging with music that relies on machines to produce sound, we are creating the potential for the perceptual gap that allows us to conceive of the futurity of thinking machines.
In this way, art is allowing for the creation of new avenues of thought and the posthuman machine at the end of these avenues will facilitate the creation of new art. I can only hope that this paper will catalyze the same motions towards creation.
Mitch Renaud was born in Puce, Ontario, where he grew up playing in rock bands, gradually moving towards contemporary music. He completed his undergraduate degree in composition at the University of Toronto, studying with Gary Kulesha and James Rolfe, among others. Active as a concert organizer, guitar teacher and writer, in the fall he will begin an interdisciplinary Master’s degree at the University of Victoria, where his research will look at approaches to music and the arts through Cultural Studies. Mitch’s practice explores the visceral and intellectual points of intersection between various art forms and ideas, often incorporating varying degrees of extra-musical elements. Most of his output reflects on issues surrounding the state of being: how we live with ourselves, among each other and the spaces that contain us.
Diegesis as a Semantic Paradigm for Electronic Music
by Anıl Çamcı
In the field of narratology, diegesis is known as the spatiotemporal universe of a story (Genette, 1969). Having its roots in Plato’s dichotomization of imitation (mimesis) and narration (diegesis) as modes of discourse, the concept is commonly used in explaining narrative structures in art and in situating components of an artwork (e.g. narration, actors) in relation to one another. On a meta level, this narratological perspective also provides insights into the fabric of the artistic experience by delineating the threads between the artist, the artistic material and the audience.
As an artistic expressive form of a temporal nature, music, too, prompts narratives. This, however, occurs in the abstract realm of “musical sound”, as narratives are conveyed to the listener through a culturally embedded musical language established over the course of centuries. The outcome of the musical experience, therefore, is the material’s unmediated transition to emotion. Electronic music, on the other hand, begets an entirely new vocabulary of sounds. No longer served by the well-ingrained structures of the aforementioned musical language, this new material engages the cognitive faculties of the listener, inducing a layer of meaning attribution amidst the continuum from material to affect.
Consequently, electronic music assumes a mimetic role; the listeners are presented with sounds that represent extra-musical events while the medium of the recounting remains the same as that of the recounted. However, as the material engages with the æsthetic capacity of the listener, the physical artifact is inevitably extended by the manifestation of a narrative and, therefore, diegesis emerges in the intellectual domain. The cognitive processing of the music constitutes the bond between the mimetic and the diegetic: the figure and ground relations between musical gestures extend beyond those of physical formations, and a narrative unfolds both in the spatial domain of the concert hall and in the semantic space superimposed onto this domain by the listener.
This article approaches the matter of “Inner and Outer Sound Places” by investigating semantic and spatial dimensions of electronic music and the contacts between these two dimensions. The explicit and the implicit sonic worlds are discussed on the axes of focus and proximity in an effort to elicit a new perspective towards the concepts of figure and ground in electronic music. Diegesis is utilized as a paradigm to explain the tension / interaction between near and far and to question the extents to which the listener is inside or outside the musical material. Rather than merely contrasting the mimetic and the diegetic aspects of electronic music in order to impel a dichotomization between the physical and the intellectual, the role of their coexistence in actively shaping our experience of the music is studied.
The inferences that lead to the formulation of the aforementioned concepts are extracted from both the artistic practice of the author over the years and the experimental data obtained from extensive subject group studies, which were conducted to investigate the cognitive foundations of electronic music. In the context of this article, the experimental data is used to substantiate the remarks on the experience of electronic music and how narratives are formed on the listener end. The theoretical framework of the diegesis approach is therefore motivated with real world examples.
Anıl Çamcı is an Istanbul-based electronic music composer and a new media artist whose works have been (dis)played around the world. A graduate of the Media Arts and Technology Department at UCSB, Çamcı is currently pursuing a PhD degree at Leiden University in the docARTES program. Concurrently, he teaches electronic music composition and history, multimedia design and audio programming at Istanbul Technical University’s Centre for Advanced Studies in Music, where he recently co-founded Istanbul’s first sonic arts graduate program. Çamcı’s work explores contacts between abstract digital art and the audiovisual objects in our daily surroundings. Inspired chiefly by environmental phenomena, his narratives accentuate the interactions between material, meaning and
16:00–17:30 • Paper Session #5: Approaches to Real-Time, Interactive and Intermedia Work
Session Chair: Emilie LeBel. Click on linked titles to read full articles published in eContact! 15.2 — TES 2012.
This presentation will discuss various approaches to working with a vocal interface called the eMic. The eMic design draws upon the stylised gestural language of pop singers. More recently, the process of generating work for the eMic has involved working with a choreographer to explore gestures and movement relative to the eMic interface. The compositional approach foregrounds movement and uses choreographic gesture as the basis for musical structures, inverting the traditional idea of an instrument, whereby the body must develop a command over the instrument: movement is the starting point for the generation of musical materials, rather than having the choreographer compose movement to finished music. The design of the vocal performance interface was initially drawn from the gestural language of singers who use a microphone and microphone stand. The interface raises many complexities around the relationship of sound, voice, body and gesture, and has inspired different approaches to composition and to developing works involving gesture, voice and technology. This paper will discuss the issues and the strategies employed to date with the eMic.
Donna Hewitt is a vocalist, electronic music composer and instrument designer. Her primary interest in recent years has been exploring gesture-mediated music performance and investigating new ways of interfacing the voice with electronic media. She is the inventor of the eMic, a sensor-enhanced microphone stand for electronic music performance, which she has been developing and performing with internationally since 2003. In 2010, she collaborated with dance artist Avril Huddy on an Australia Council for the Arts funded project for eMic performer and dancers. Donna has most recently been working with the collective Lady Electronica, who were awarded funding from the Australia Council for the Arts to work with artists including Gotye and Quan from Regurgitator. She is currently working on an Arts Queensland funded project to develop a performance showcase for Lady Electronica, to be held at the Judith Wright Centre in October 2012. Recent performances include ICMC 2011, Brisbane Festival (Under the Radar) and NIME 2010. She is a Senior Lecturer in Music and Sound at Queensland University of Technology, Australia.
This paper explores the technical and æsthetic considerations behind the author’s interactive sound installation PulseCubes, exhibited in NAISA’s Sound Travels Festival of Sound Art running concurrently with the Toronto Electroacoustic Symposium. In the work, visitors are invited to become part of an implicit feedback loop whose components include a set of small cubes on a flat surface, computer vision and digital signal processing. The cubes are tracked by a webcam positioned overhead and processed through a partially opaque system implemented in the programming environment Max/MSP/Jitter. Audience interaction is created through the placement, grouping and movements of these cubes acting as a control device, which in turn results in the production of audio and physical vibrations.
Interaction occurs on several different levels. In terms of modality, it is a complex sonification of visual data: a picture or a pattern may be drawn, which is then transferred to the audio domain in a cross-modal manner inspired by synæsthesia. In musical terms, users are able to: “play” the installation as an instrument with complex parameter mappings, controlling features such as frequency, amplitude, panning and reverb settings; “compose” with the cubes, their behaviour acting as a sequencer that alters the rhythmic pattern of the playback; and engineer the “notes”, i.e. create an instrument or timbre, via the calculation of the wavetable from the arrangement of the cubes, thus producing a self-similar structure. At a broader level, which nonetheless incorporates the previous two, the interaction becomes an heuristic process whereby the participants are able to discover and understand the procedures involved.
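The third mapping above, from cube arrangement to wavetable, might be sketched as follows. This is a hypothetical reconstruction in Python, with invented names and ranges, not the installation’s actual Max/MSP/Jitter patch:

```python
def cubes_to_wavetable(cube_positions, size=64):
    """Hypothetical sketch: treat each cube's (x, y) position on the table
    as a control point and linearly interpolate between them to fill a
    single-cycle wavetable, so rearranging the cubes reshapes the timbre.
    Expects at least two cubes, with x and y normalised to [0, 1]."""
    pts = sorted(cube_positions)                  # order cubes left to right
    xs = [p[0] for p in pts]
    ys = [p[1] * 2.0 - 1.0 for p in pts]          # map y in [0,1] to [-1,1]
    table = []
    for i in range(size):
        x = xs[0] + (xs[-1] - xs[0]) * i / (size - 1)
        j = 0                                     # find the segment holding x
        while j < len(xs) - 2 and x > xs[j + 1]:
            j += 1
        t = (x - xs[j]) / (xs[j + 1] - xs[j])     # interpolate linearly
        table.append(ys[j] + t * (ys[j + 1] - ys[j]))
    return table

# Three cubes: low at the left edge, high in the middle, low-inverted at right.
table = cubes_to_wavetable([(0.0, 0.5), (0.5, 1.0), (1.0, 0.0)], size=9)
```

Reading the table cyclically at audio rate yields the timbre, which is what makes the cube layout simultaneously a score and an instrument.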
Too often, interaction with innovative audio applications is encumbered by “interface metaphors” that restrict the development of the music they produce, an historical example being the keyboard or MIDI controller based on the piano. By reverting to older forms, tasks such as performance and composition with a new device or instrument become inadvertently constrained by cultural and historical behaviour and thought, in this case by piano repertoire and technique. Michael Hamman describes this mode of interaction as symbolic / denotative; its apparent advantages include the user not having to be conscious of the mediation of the interface, an attribute sought after by an industry preoccupied with features such as “usability” and “intuitiveness.”
Whilst this may be beneficial in the design of a tool such as the word processor, or the computer keyboard harking back to the earlier typewriter, this immediacy can hinder the creative process, e.g. in the case of experimental music and sound art. Instead, awareness of the interface through its unfamiliarity can be crucial in allowing the user to think beyond previously defined modes of behaviour and thought, in what Hamman describes as a semiotic / connotative mode of interaction, and an interactive installation is an area with much potential for its exploration.
These notions of the interface in PulseCubes will also be considered within the wider context of the new media object as described by Lev Manovich. One of its features made possible by the digital medium is variability: the ability to customise its output depending on the input, often at run-time, producing an ideal result for each individual user. For Manovich, however, this is also tempered by the interface; in the case of art, as opposed to the design of tools and equipment, the connection between content and form / interface is motivated, and the two cannot be considered separate entities.
Furthermore, Manovich claims the current ubiquity of physical interaction — e.g. pressing buttons and controlling an interface — to be part of a longer trend to externalise the workings of psychological interaction through, for example, any kind of interpretation of a text or an image. Consequently, interactive media is seen to imitate, replicate or even become human mental reasoning more successfully than traditional media, leading to the assumption that physical interaction with the interface can objectify the mind and control or at least influence psychological interaction. This occurs to an extent in PulseCubes in order to comment on its presence in more traditional interfaces such as a keyboard and its limitations.
The success of PulseCubes on these terms will be discussed.
Ryo Ikeshiro is a London-based electronic and acoustic musician working in the fields of audiovisual composition, improvisation, interactive installations, soundtrack and theory. He graduated from King’s College London and Cambridge and is currently studying for a PhD in studio composition at Goldsmiths College. Research interests include the use of chaotic systems in generative, emergent structures and non-standard synthesis, glitch / noise / punk æsthetics in electronic music, and new forms of interaction and presentation of works. He has presented work at New Resonances Festival, Noise vs. Culture (Kent), Redsonic (London), Deleuze Philosophy Transdisciplinarity (London), Seeing Sound 2 (Bath), Xenakis International Symposium 2011 (London), Contemporanea 2011 Festival di Nuova Musica (Udine), ICMC 2010 (New York) and re:new 2010 (Copenhagen). He is a member of ry-om, whose tracks have been featured on Resonance FM. His orchestral works have been performed by the Britten Sinfonia. As an events organiser, he runs a series entitled ABA. He is also a visiting tutor and runs workshops challenging preconceptions about music.
This paper-demonstration discusses the creation of a 75-minute mixed media performance for string quartet, live audio processing, surround sound diffusion, live motion capture video and audience participation. It is supported by the Center for Chemical Evolution, which is funded by the National Science Foundation and the NASA Astrobiology Program. Composition of the work draws upon stochastic modelling of chemical data provided by researchers in Martha Grover’s Research Group at the School of Chemical and Biomolecular Engineering at Georgia Institute of Technology. Each section of the work is constructed from contingent outcomes drawn from Grover’s biochemical research exploring the early Earth formations of organic compounds. The work also uses results from the 1953 Miller-Urey experiments, which demonstrated that some organic compounds essential to cellular life, such as amino acids, could be made easily under the conditions that scientists suspected were present on the early Earth.
“A biological organism has the ability to respond to its environment and learn from its past experiences, while human-designed systems are typically more
rigid and thus less ‘intelligent.’”
—Martha Grover, Design of an Intelligent Material, 2011.
Martha Grover and David Lynn from Emory University speak about aspects of their research in the Center for Chemical Evolution during the five Interludes of the work, presenting a scientific framework for each musical realization of the chemical data. This interactive composition attempts to model a biological organism’s ability to respond to the conditions of its environment through real-time computer programming systems generating live audio and video. Data representation types in this composition include discrete, continuous, stochastic and interactive forms. This project attempts to create auditory models, or sonifications, of many of the elemental and environmental conditions present on the early Earth, thus providing a new way to imagine the salient biochemical morphologies at play in the origins of evolution. Data values drawn from self-organizing chemical compounds were assigned to the sonic properties of frequency, amplitude, duration, timbre, tempo and spatial location. The stochastic processes also contain Hidden Markov Models to embed a degree of probabilistic input from both the computer-generated processing and the string quartet performers. Audio-visual programs used in the composition of the work and discussed in this presentation are Kyma, Open Music, Max, Jitter and Isadora. The live motion capture video system used is a further development of one created in 2008 for my chamber opera, Ophelia’s Gaze.
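To make the data-to-sound assignment above concrete, here is a minimal parameter-mapping sketch in Python. The function name, ranges and scaling choice are my own illustrative assumptions; the actual work implements its mappings in Kyma, Open Music, Max, Jitter and Isadora:

```python
def sonify_frequency(value, lo, hi, f_min=110.0, f_max=880.0):
    """Hypothetical parameter mapping: scale a data value from its range
    [lo, hi] into a frequency range on a logarithmic (pitch-linear) axis,
    one of the sonic properties the abstract lists (frequency, amplitude,
    duration, timbre, tempo, spatial location)."""
    t = (value - lo) / (hi - lo)          # normalise the data value to [0, 1]
    return f_min * (f_max / f_min) ** t   # exponential (equal-pitch) scaling

# The midpoint of the data range lands halfway in pitch between 110 and 880 Hz.
mid = sonify_frequency(0.5, 0.0, 1.0)
```

Exponential rather than linear interpolation is a common sonification choice because pitch perception is roughly logarithmic in frequency.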
This paper-demonstration will provide an overview of the project details and will play select live sonifications and visualizations of data using the Kyma and Max/MSP compositional systems.
Steve Everett is Professor of Music at Emory University, where he teaches composition and computer music, directs the Music-Audio Research Center and is Director of the Center for Faculty Development and Excellence. In addition, he has been a visiting professor of composition at Princeton University and a guest composer at the Conservatoire National Supérieur de Musique de Paris, the Rotterdam Conservatory of Music and the Utrecht School of the Arts. His doctoral degree in composition is from the University of Illinois, where he studied with Salvatore Martirano. He also studied composition with Peter Maxwell Davies and Witold Lutosławski at Dartington Hall in England. At Emory, he has served as Chair of the Department of Music and President of the University Senate. Many of his recent compositions involve performers with computer-controlled electronics and have been performed in twenty-five countries throughout Europe, Asia and North America. He has received composition awards from the Rockefeller Foundation, Chamber Music America and the International Trumpet Guild.
09:30–11:30 • Paper Session #6: Creative Practice
Session Chair: Eric Powell. Click on linked titles to read full articles published in eContact! 15.2 — TES 2012.
A Particle System for Musical Composition
by Bruno Degazio
This paper describes the development of a particle system for musical composition. It employs a generator as described in William Reeves’ seminal 1983 paper on the subject, but one in which the particles are musical themes rather than points of light. This is distinct from an audio-level particle system, such as might be employed effectively in conjunction with granular synthesis, because an audio-level process has no “musical intelligence” in the traditional sense, as the term is used in discussing rhythm, melody, harmony or other traditional musical qualities. The particle system uses the author’s software, The Transformation Engine, as the musical engine for rendering particles. This allows the particle system to control relatively high-level musical parameters such as melodic contour, metrical placement and harmonic colour, in addition to fundamental parameters such as pitch and loudness. The musical theme corresponding to an individual particle can therefore evolve musically over the lifetime of the particle as these high-level parameters change.
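The Reeves-style generate / evolve / cull loop with musical attributes can be sketched as a toy in Python. This is not The Transformation Engine; all names, attributes and parameter ranges here are invented for illustration:

```python
import random

def spawn_theme(rng):
    """A 'particle' is a short musical theme: a pitch contour plus a
    loudness, with a finite lifetime after which it is culled."""
    return {
        "pitches": [60 + rng.randint(-4, 4)],   # MIDI note numbers
        "velocity": rng.randint(40, 100),       # loudness of the theme
        "life": rng.randint(4, 12),             # remaining update steps
    }

def update(particles, rng, spawn_rate=2):
    """One generation step, as in Reeves (1983): spawn new theme-particles,
    evolve the contour of each living one, and cull expired particles."""
    for _ in range(spawn_rate):
        particles.append(spawn_theme(rng))
    for p in particles:
        # evolve the melodic contour by a small random step
        p["pitches"].append(p["pitches"][-1] + rng.choice([-2, -1, 1, 2]))
        p["life"] -= 1
    return [p for p in particles if p["life"] > 0]

rng = random.Random(1)
particles = []
for _ in range(20):          # run twenty generations of the system
    particles = update(particles, rng)
```

In the paper’s system the per-particle state would be higher-level still (melodic contour, metrical placement, harmonic colour), with the rendering delegated to the musical engine.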
Bruno Degazio is a composer, sound designer and educator. His film work includes the special-effects sound design for the Oscar-nominated documentary film The Fires of Kuwait and music for the all-digital, six-channel soundtracks of the IMAX films Titanica, Flight of the Aquanaut and CyberWorld 3D. His concert works for traditional, electronic and mixed media have been performed throughout North America and Europe. As a researcher in the field of algorithmic composition he has presented papers and musical works at leading international conferences. He is a founding member of the Canadian Electroacoustic Community and of the Toronto music ensemble Sound Pressure. He has written on his research into automated composition using fractals and genetic algorithms. Bruno Degazio is the designer of The Transformation Engine, a software musical composition system with application to algorithmic composition and sonification.
Toward “Compositional Reflection”
by Matthew Peters Warne
The electroacoustic community has long been good at sharing the technical aspects of electronic sound (see any program of any major computer music conference — ICMC, NIME, SMC, SEAMUS — for a schedule full of technical presentations) but has been less forthcoming when it comes to a discussion of the artistic strategies employed in the creation of works. The quantity of scholarship on electroacoustic music available in monographs and journals suggests that there is no shortage of theorization within the field. There also seems to be no shortage of music, as electroacoustic community listservs are replete with concert and recording announcements. The area where I see the least publicly visible activity is in composers and performers recounting and accounting for their experiences in creating and performing their work. So, while the music exists, as does the impulse to theorize it, composers have yet to develop a robust set of reflective practices that open the compositional work to more effective critical thought. This paper develops and proposes “compositional reflection” as a mechanism for sharing effectively from our compositional practice and as a way of linking inner and outer sound worlds.
From William Duckworth’s Talking Music to the approximately 50 interviews published in 34 years of the Computer Music Journal, the interview is the most popular published format for electroacoustic composers to reflect on their practice. However, detailed conversations about the paths taken to individual compositions are virtually non-existent, as questions about the advantages and disadvantages of various computational technologies often dominate the conversation. Still, I know composers to be reflective and interested in talking about their practice, and I see abundant evidence of reflective practice in teaching, where composers excel at describing how and what they do in both individual and group settings. My work leverages the abundant impulse among composers to reflect on where and how the ideas for their works emerge and channels it into a set of practices we can use to perform public conversation about compositional practice.
My work contributes to a growing chorus of voices calling for more visible and more detailed explications of compositional practice, which would lead to more engaged audience listening and enhanced theorization of our compositional work. Katharine Norman argues that, paradoxically, more time spent by the artist considering subjective experience can “[leave] a door open for the listener to participate [in the work], from his or her own experience”, in advocating for composers to share their phenomenal experiences as a way of creating work that is more open to listeners and, thus, more successful. George Lewis argues that effective theorization of improvisatory music requires a robust body of work reflecting on practice. Speaking in 2011 at Brown University, Lewis called for more auto-ethnography by practitioners to assist critical improvisation studies in developing new theories of improvisation. In his 2007 Parallax article, Lewis describes his careful and thoughtful exposition of his improvising musical environment Voyager as “auto-ethnography” necessary to “give the work a voice” and to “complement the ethnographies of technology that people such as [Lucy] Suchman and Bruno Latour have performed.” Joanna Demers’ approach in Listening through the Noise: The Aesthetics of Experimental Electronic Music is made possible in part by access to the thinking and writing of composers, and her most powerful conclusions are drawn when she compares how composers have worked with the music that results. The lesson is that such work is enriched by greater access to compositional practices, as exposed by composers describing their phenomenal experiences, intents and actions. As for composers, our practices can only be enhanced through intense theoretical engagement with our work.
This paper identifies a need for a technology with which composers can publicly reflect on and share the techniques they apply in their individual compositional practices. It looks to ethnography, and specifically to Marcus and Fischer’s seminal work Anthropology as Cultural Critique, with its renovation of ethnographic writing, as a font on which to draw in the development of a reflexive critical practice I call compositional reflection. I identify holism and microscale description as key features useful in compositional reflections: accounts should conceive of compositional activities broadly and examine them deeply. In the paper I describe my vision of compositional reflections as experimental (as defined by Marcus and Fischer and related to notions of experimentalism in music from Cage to Lewis). I also discuss some of the primary benefits of such experimentalism and offer examples from my own compositional reflection practices.
Matthew Peters Warne is a composer and installation artist who creates work to explore the role we play in our own perception. Matthew creates electronic instruments and software to manipulate recordings of everyday soundscapes in live performance. His recordings are drawn primarily from Angola, in southern Africa, as part of an effort to understand the intersection between emerging, resource-rich nations and changing global cultures. He is Part-Time Assistant Professor in the Departments of Music, Foundation and Transmedia at Syracuse University. His doctoral work at Brown University is in Multimedia and Electronic Music Composition; he holds an MS in Digital Media from the Georgia Institute of Technology and a BA from Grinnell College with majors in music
This paper presents a model of composition as a necessary engagement with both real and virtual spaces. Site-specific installation and sound art practices are addressed in terms of James Meyer’s distinction between literal site and functional site, as a means of teasing apart the role of the art/sound object. It is argued that the intervening (confrontational) object instigates the emergence of the listener-subject as the irreducible discontinuity between the two perspectives on “site.” The object’s ability to instigate the shift in perspective necessary to disclose the listener-subject hinges on discovery: it is asserted that a listener must discover the minimal difference between the object (its phenomenal presence) and itself (its noumenal notion). The author’s composition Windows Left Open is used as a representative example of how the model applies, and serves to frame an understanding of the listener as the carrier of a work’s meaning. Issues surrounding fundamental differences between concert hall and real-world spaces are discussed as a direction for future investigation concerning the model.
Sean Peuquet is a composer, installation artist and occasional audio hardware hacker. He is currently a Visiting Assistant Professor of Digital Arts at
Stetson University in DeLand, Florida. Sean’s compositions are performed regularly, both nationally and internationally, at events such as SEAMUS, ICMC,
the Boston CyberArts Festival, the New York City Electronic Music Festival, Electronic Music Midwest, the SCI National Conference and the Toronto
Electroacoustic Symposium. He is a PhD candidate in Music Composition at the University of Florida, where he is finishing a dissertation concerning
theoretical intersections of music and place. His current research focuses on listener phenomenology as framed by the dialectics of space and place,
convergent algorithms in generative musical systems and human-computer interaction using position-sensing technologies. He holds a master’s degree in
Electroacoustic Music from Dartmouth College and a BA in Music and Psychology (with an Astronomy minor) from the University of Virginia.
Creative artists often have difficulty moving their ideas from a subjective or inner place, where ideas develop, to an objective outer one where others can perceive their realization. It would be convenient if we could fully explain this problem as simple procrastination, or as Parkinson’s Law. But the roots of a chronic failure to make the leap between idea and action can be significantly more nuanced than that. This paper explores some of the hazards digital artists may face when making creative work in an era of near-infinite access to information. While our specific focus is on electroacoustic composers, the broader principles articulated are equally applicable to artists working in any digital medium. We informally examine the problem from perspectives that include: Blinded by the Spotlight — looking too hard in the right place; Déjà vu all over again — the difficulty of tracking excessive information; Insufficient redundancy — when too much is not enough; and No Free Lunch — the hazards of software trials. By considering how we accumulate, catalogue and process information — and the impact constant information flow has upon the creative process — we begin to understand how information asphyxiation affects our ability to cross the barrier between inner preparation and outer manifestation and inhibits the transformation from idea to reality.
Halifax-based composer/performer Steven Naylor composes for concert performance, and creates scores and sound designs for professional theatre,
television, film and radio. His personal work is presently centred on radiophonic and acousmatic works. He is also active as a pianist, performing music
that blends improvisation and through-composition. He completed his PhD in Musical Composition, supervised by Jonty Harrison, at the University of
Birmingham, UK. Naylor is a former President of the CEC.
16:30–17:30 • Closing Discussion and CEC Annual General Meeting
CEC members and anyone interested in learning more about the CEC’s activities are very welcome to join the meeting. Check your membership status — or become a member! — by contacting jef chippewa.