Abstract: This talk presents the work of an undergraduate summer research project conducted by George Langroudi and supervised by Anna Jordanous and Caroline Li. The aim of the project is to generate music based on the emotions the user is currently experiencing. To achieve this, there are two tasks: (1) to detect emotions from current brain wave activity and (2) to generate appropriate music in real time, responsive to the user's current state. We use the Russell circumplex model of emotions alongside a music database tagged with metadata relevant to the Russell model to implement a music player in C# that works alongside someone wearing an Emotiv EEG headset. We hope to demonstrate our results during the talk.
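The pipeline described above — locate the detected emotion on the Russell circumplex (a valence/arousal plane), then match it against tracks tagged with Russell-model metadata — can be sketched roughly as follows. This is a minimal illustration in Python (the project's actual player is written in C#), and all mood labels, tags, and thresholds here are illustrative assumptions, not the project's real scheme:

```python
# Hypothetical sketch: classify a (valence, arousal) reading into a quadrant
# of Russell's circumplex model, then pick a track tagged with that mood.

def russell_quadrant(valence: float, arousal: float) -> str:
    """Label the four quadrants of Russell's circumplex model."""
    if valence >= 0:
        return "excited" if arousal >= 0 else "calm"
    return "angry" if arousal >= 0 else "sad"

def pick_track(library, mood):
    """Return the first track tagged with the given mood, or None."""
    for track, tag in library.items():
        if tag == mood:
            return track
    return None

if __name__ == "__main__":
    # Toy library: filenames tagged with a Russell-quadrant label.
    library = {"upbeat.mp3": "excited", "ambient.mp3": "calm", "ballad.mp3": "sad"}
    mood = russell_quadrant(valence=0.4, arousal=-0.2)  # e.g. derived from EEG features
    print(mood, pick_track(library, mood))  # calm ambient.mp3
```

In the real system the (valence, arousal) estimate would come from features extracted from the Emotiv EEG stream, and the selection would update continuously as the user's state changes.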
Future work could improve how this project performs emotion detection through EEG, and generate new music based on emotion-based characteristics of music. Potential applications of this work are in music therapy and creative music software. Broader applications centre around using brain-computer interfaces for non-verbal communication, e.g. computer-human communication during collaboration, better HCI customised to the user experience, or enabling people with locked-in syndrome to communicate what emotions they are feeling.
Details: Open to all.
Contact: Michael Kampouridis