Interested in the applications of Generative AI to musical expression and composition? Join us this IAP for a 3-week workshop in which we will both discuss the most important tools and models in this ever-changing domain and work towards applying these technologies to a brand-new application. We will meet 4-6 hours a week (see schedule below) and work towards creating a Final Project (see ideas below). There are no prerequisites other than a willingness to learn and to participate in the group effort, so feel free to join us!
Schedule
The welcome session will run on Wednesday, January 10 at 2pm in E14-240.
Mondays 5pm-6pm: Reading Group (Optional). The AI+Music Reading Group meets every week at this time, so this is the perfect opportunity to discover new work in the area. We discuss a new paper every week and, when possible, attempt to apply the technology to make new music.
Wednesdays 2pm-3pm: Seminar. In this hour, we will go over the vast literature of GenAI + Music. We will cover various state-of-the-art models and discuss the evolution of these tasks as well as the different approaches that exist (audio vs. symbolic music, text-to-music, image-to-music, transformers vs. diffusion models, etc.).
Fridays 1pm-3pm (or later): Lab. We will use this time to meet all together and work towards an implementation of our Final Project. Participants are encouraged to engage and contribute their unique skills, whether in design, music, coding, or any other area. Everyone joining has the opportunity to actively participate and make a meaningful impact.
Note: All sessions take place in E14-240 (Media Lab).
Potential Ideas for a Final Project:
An automatic arranger that generates a fitting song for a given input video.
A voice-changing VST to embed voice models in DAWs.