Etu{d,b}e

This is (and will always be) a Work in Progress.

In collaboration with the saxophonist Tommy Davis, we propose improvised performances involving autonomous musical agents and live instrumentalists. These agents’ behavior is directed by various machine listening functions, which may be summarized as “follow my step” and “follow that way” (Nika et al., 2017). We are designing a modular environment to test several performance settings, explore contrasting human-machine groupings, and develop various machine listening and agency functions. We consider these explorations as études, in the traditional sense of the term in music pedagogy. Our improvisations present situations where the human and their digital counterpart have to study (étudier) each other in real time. The autonomous musical agents can even learn over the course of multiple performances by building an offline memory bank. We want to assess whether this would improve the perceived quality of the human-machine interaction during an improvisation.

Performing DYCI2 for the North American Saxophone Alliance virtual conference

Credits: Marie-Chantal Leclair, Tommy Davis, and Vincent Cusson

Interaction and collaboration with musical agents

The different improvisation environments offer autonomous capabilities in their own ways, from simple pitch-detection categorization to more complex live machine listening. We are collaborating with each composer-programmer and have their permission to modify and perform the patches.

Adapting Existing Frameworks

We are adapting these three environments to our specific performance situation.

Two of them include audio effects that are modified either via agent input or manually. We have already started customizing CTIP2 and DYCI2 for our needs by adding or refining graphical user interface (GUI) features. We want to play with these tools and explore their finer functions and details; amazing things can occur when testing the limits of such systems.

Currently, real-time control over the multiple parameters of an agent is somewhat limited. It is possible to interact with the system through Max/MSP’s GUI and with typical computer input devices such as the keyboard, the mouse, or a MIDI controller; this is often the role of a dedicated performer playing the electronics. However, we argue that an instrumentalist should be able to control some parameters directly to enable more intricate interaction with a musical agent.
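As an illustration, a minimal mapping layer of the kind we have in mind could look like the sketch below, assuming the agent patch listens for OSC messages in Max/MSP (for example through a [udpreceive] object). The OSC addresses, the port, and the mido and python-osc libraries are illustrative choices on our part, not components of DYCI2 or CTIP2.

```python
# Minimal sketch: forward MIDI CC values from a controller to agent
# parameters exposed over OSC. The addresses, port, and parameter names
# are hypothetical; the real ones live in each Max/MSP patch.
import mido                                        # pip install mido python-rtmidi
from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

OSC_HOST, OSC_PORT = "127.0.0.1", 7400             # e.g. [udpreceive 7400] in the patch
client = SimpleUDPClient(OSC_HOST, OSC_PORT)

# Map MIDI CC numbers to (OSC address, output range) pairs.
CC_MAP = {
    1: ("/agent/continuity", (0.0, 1.0)),          # hypothetical parameter
    2: ("/agent/output_gain", (0.0, 1.0)),         # hypothetical parameter
}

def scale(value, lo, hi):
    """Scale a 0-127 MIDI value to the [lo, hi] range."""
    return lo + (value / 127.0) * (hi - lo)

with mido.open_input() as port:                    # first available MIDI input
    for msg in port:
        if msg.type == "control_change" and msg.control in CC_MAP:
            address, (lo, hi) = CC_MAP[msg.control]
            client.send_message(address, scale(msg.value, lo, hi))
```

The same OSC namespace could later receive data from the instrument-mounted controller described below, so that GUI, MIDI, and on-instrument sensors converge on the same set of agent parameters.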

It has been shown that two-way communication between a musician and an improvising musical agent (McCormack et al., 2019) helps reach the state of flow (Csikszentmihalyi, 1990). Multimodal feedback should inform the instrumentalist about the state of the patch during an improvisation. Besides the obvious sonic output from the agent, our first iteration integrates visual indicators via a screen. To ease the performer’s cognitive load, not all parameters can be shown at the same time. Further research would involve experimenting with haptic feedback as another communication channel. In any case, we have to build the system gradually to allow for more complex collaborative scenarios over time.
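To make this concrete, the sketch below shows one possible shape for that feedback channel, assuming the Max/MSP patch broadcasts a few state values over OSC; the addresses, the port, and the decision to surface only two values are assumptions for illustration rather than a description of the current implementation.

```python
# Minimal sketch: receive agent-state messages from the Max/MSP patch over
# OSC and surface only a small subset of them, to limit cognitive load.
# The addresses and port below are hypothetical.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Only these parameters are shown to the performer at any given time.
DISPLAYED = ("/agent/state/phase", "/agent/state/activity")
state = {}

def on_state(address, *args):
    """Cache the latest value and redraw a one-line status view."""
    state[address] = args[0] if args else None
    line = "  ".join(f"{k.rsplit('/', 1)[-1]}: {v}" for k, v in sorted(state.items()))
    print("\r" + line, end="", flush=True)         # stand-in for the on-screen indicator

dispatcher = Dispatcher()
for address in DISPLAYED:                          # ignore everything else the patch sends
    dispatcher.map(address, on_state)

server = BlockingOSCUDPServer(("0.0.0.0", 7402), dispatcher)
server.serve_forever()
```

The same reduced state could eventually be routed to a haptic actuator on the controller instead of a screen.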


Towards this goal, we are developing a digital interface fixed onto an instrument, which sends data to the computer for further processing. The design of similar digital musical instruments (DMIs) and various forms of meta- or augmented instruments is not new, and our work is inspired by colleagues at IDMIL (Sullivan et al., 2018) and elsewhere.

Controller design

For our first prototype, we model and 3D print the body of the controller. It is fixed to the acoustic instrument without hindering the performer. Custom electronics based on the ESP8266 (a Wi-Fi-enabled Arduino-like board) interface the sensors and actuators to relay information between the musician and the musical agent. An emphasis is put on the instrumentalist’s preferences regarding the inherent qualities of the sensors used for bimanual interaction. Human-centered design strategies have proven to be an efficient way to facilitate the learning of a new digital instrument (Sullivan, 2021).
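As a sketch of the sensor-relay side, the snippet below shows one way such a board could stream a single analog sensor to the laptop as OSC over Wi-Fi. The MicroPython firmware (rather than the Arduino toolchain), the network credentials, the port, and the OSC address are assumptions for illustration only.

```python
# Minimal sketch (MicroPython on an ESP8266, assumed toolchain): read one
# analog sensor and stream it to the laptop as OSC over Wi-Fi UDP.
import socket
import struct
import time

import network
from machine import ADC

WIFI_SSID, WIFI_KEY = "stage-router", "********"   # placeholders
HOST, PORT = "192.168.1.10", 7401                  # laptop running the agent patch

def osc_message(address, value):
    """Build a minimal OSC packet carrying a single float argument."""
    def pad(s):
        return s + b"\x00" * (4 - len(s) % 4)      # null-terminate and pad to 4 bytes
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Join the network.
sta = network.WLAN(network.STA_IF)
sta.active(True)
sta.connect(WIFI_SSID, WIFI_KEY)
while not sta.isconnected():
    time.sleep_ms(100)

sensor = ADC(0)                                    # the ESP8266's single ADC pin
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    reading = sensor.read() / 1023.0               # normalize 0-1023 to 0.0-1.0
    sock.sendto(osc_message("/tube/sensor1", reading), (HOST, PORT))
    time.sleep_ms(20)                              # roughly 50 Hz update rate
```

Max/MSP’s [udpreceive] object can decode OSC packets of this form, so the sensor stream can be mapped onto agent parameters alongside the MIDI layer sketched above.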

The Tube

In addition to traditional instruments, we are working with an unusual one: The Tube. It is a two-meter-long flexible plastic tube with a saxophone mouthpiece attached at one end. The inspiration comes from the concept of the infra-instrument, described as “restricted in its expressivity, broken or only a portion of a standard instrument” (Bowers & Archer, 2005). The mouthpiece allows the performer to maintain a certain level of control over sound production, articulations, and dynamics developed through years of musical training. The cylindrical tube reacts differently than the saxophone body, producing inharmonic sounds, overtones, and rich textures. Tommy has improvised with tubes in collaborative performances with Wild Space Dance in Brooklyn (2015), with Ensemble AKA for No Hay Banda (2017), and with Duo d’Entre-Deux for the co-improvised Reverberant House (2019) produced by Codes d’accès.

The simplicity of its design appeals to us since it offers more liberty, in the sense of fewer physical constraints, for augmenting it with electronics. We are now faced with an interesting question: what happens when you augment an infra-instrument?

Acknowledgment

This project is supported by a CIRMMT Student Award (2021-2022, 2022-2023, 2023-2024) and has received funding from IICSI (2023-2024).

Information on Tommy’s DMus recital can be found here.

References

Nika, J., Déguernel, K., Chemla-Romeu-Santos, A., Vincent, E., & Assayag, G. (2017). DYCI2 Agents: Merging the “Free”, “Reactive”, and “Scenario-Based” Music Generation Paradigms. Proceedings of the International Computer Music Conference.

Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: Harper & Row.

Bowers, J., & Archer, P. (2005). Not Hyper, Not Meta, Not Cyber but Infra-Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), 5–10. https://www.nime.org/proceedings/2005/nime2005_005.pdf

Sullivan, J., Tibbitts, A., Gatinet, B., & Wanderley, M. M. (2018). Gestural control of augmented instrumental performance: A case study of the concert harp. Proceedings of the 5th International Conference on Movement and Computing, 1–8. https://doi.org/10.1145/3212721.3212814

Sullivan, J. (2021). Built to perform: Designing digital musical instruments for professional use. [Ph.D. thesis, McGill University]. https://johnnyvenom.com/files/sullivan_phd_thesis.pdf

McCormack, J., Gifford, T., Hutchings, P., Llano, M., Yee-King, M., & d’Inverno, M. (2019). In a Silent Way: Communication Between AI and Improvising Musicians Beyond Sound. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–11. https://doi.org/10.1145/3290605.3300268

Magnusson, T. (2010). Designing Constraints: Composing and Performing with Digital Musical Systems. Computer Music Journal, 34(4), 62–73. https://doi.org/10.1162/COMJ_a_00026

Eigenfeldt, A., Bown, O., Pasquier, P., & Martin, A. (2013). Towards a Taxonomy of Musical Metacreation: Reflections on the First Musical Metacreation Weekend. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment. https://ojs.aaai.org/index.php/AIIDE/article/view/12647