Adaptation and Perceived Creative Autonomy in Gesture-Controlled Interactive Music
Jason Smith; Jason Freeman

- Format: oral
- Session: papers-5
- Presence: in person
- Duration: 15 minutes
- Type: long
Abstract:
With the variety and rapid pace of developments in Artificial Intelligence (AI), musicians can face difficulty working with AI-based interfaces for musical expression, as understanding and adapting to AI behaviors take time. In this paper, we explore the use of AI in an interactive music system designed to adapt to users as they learn to perform with it. We present GestAlt, an AI-based interactive music system that collaborates with a performer by analyzing their gestures and motion to generate changes in audio. It uses computer vision, online machine learning, and reinforcement learning to adapt to a user's hand motion patterns and to let the user communicate their musical goals to the system. It communicates its decision-making to the user through visualizations and its musical output. We conducted a study in which five musicians performed with this software over multiple sessions. Participants discussed how their preferences for the system's behavior were shaped by their experiences as musicians, how adaptive reinforcement learning affected their expectations of the system's autonomy, and how their perceptions of the system as a creatively autonomous, collaborative partner evolved as they learned to perform with it.