Originally published at https://wimir.wordpress.com on February 19, 2021.
Believing in the importance of shedding light on the stories of successful women in the Music Information Retrieval (MIR) field, we are happy to share our interview with Dr. Dorien Herremans, the second in our Inspiring Women in Science interview series. Dr. Herremans is an Assistant Professor at Singapore University of Technology and Design and Director of the SUTD Game Lab. She holds a joint appointment at the Institute of High Performance Computing, A*STAR, and works as a certified instructor for the NVIDIA Deep Learning Institute. Her research interests include machine learning for automatic music generation, data mining for music classification (e.g., hit prediction), and novel applications at the intersection of machine learning/optimization and music.
Whereabouts did you study?
I completed a five-year master's degree in business engineering (in management information systems) at the University of Antwerp. I spent the next few years living in the Swiss Alps, where I was an IT lecturer at Les Roches, Bluche, and ran my own company as a web developer. I returned to the University of Antwerp to obtain my PhD in Applied Economics. My dissertation focused on the use of methods from operations research and data mining in music, more specifically for music generation and hit prediction. I then received a Marie Skłodowska-Curie postdoctoral fellowship and joined the Centre for Digital Music (C4DM) at Queen Mary University of London to develop MorpheuS, a music composition system with long-term structure based on tonal tension. After my postdoc I joined Singapore University of Technology and Design, where I am an assistant professor and teach data science and AI. My lab focuses on using AI for audio, music and affective computing (AMAAI). I am also Director of the SUTD Game Lab and hold a joint appointment at the Institute of High Performance Computing, A*STAR.
What are you currently working on?
Some of our current projects include an emotion-based music generation system (aiMuVi); music transcription; nnAudio, a GPU-based library for spectrogram extraction; and multi-modal predictive models (using video/audio/text) for emotion and sarcasm detection.
When did you first know you wanted to pursue a career in science?
It happened rather naturally. When I was about to graduate, I felt more of a pull towards staying in academia than going into industry, especially because at the time, with a degree in business engineering, industry would most probably have meant joining the big corporate world. As a 24-year-old, I instead wanted to keep exploring new things and stay in the dynamic environment of academia, not least since I could do so while living in a very quaint mountain village.
How did you first become interested in MIR?
During my last year as a student in business engineering, I was looking for a master's thesis topic and came across 'music and metaheuristics'. Having been passionate about music my whole life, I jumped at the opportunity to combine mathematics with music. This started an exciting journey in the field of MIR, a field I did not even know existed at that time (2004).
What advice would you give to women who are interested in joining the field of MIR but don't know where to begin?
We are fortunate to have a growing community of MIR researchers. Through groups such as WiMIR or ISMIR, you can join mentoring programs and get in touch with researchers who have more experience in the field. If you are just starting out as a researcher, you could also attend one of the conferences and start building a network.
How is life in Singapore? Is there a difference for your research between working in Europe and Asia?
My first impression when arriving in Singapore a few years ago was that it felt very much like living in the future. It's quite an amazing country: efficient, safe, warm (hot, really), and with amazingly futuristic architecture. As a researcher, Singapore offers great funding opportunities and a dynamic environment. We have been growing the AMAAI lab steadily, and we are excited to connect with other music researchers in Singapore (there are more than you might think!).
You are now working on AI and music, which is a fascinating field. What can it do, and what can it not yet do?
After almost a decade of science fiction movies that play around with the concept of AI, people seem to equate AI with machines obtaining self-awareness. That's not what we should think of as AI these days. I see (narrow) AI systems as models that learn from historical data, extract patterns, and use these to make predictions on new data. Narrow AI focuses on clearly defined problems, whereas general AI is more challenging and tries to cope with more generalised tasks. In MIR we typically develop narrow AI systems, and thanks to recent developments in neural network technologies and increasing GPU power, we are making great strides. The challenges we currently face are in large part related to the lack of labeled data in music, and to the cross-domain expertise required to leverage music knowledge in AI systems.
How can we make human musicians and AI musicians work together rather than compete with each other?
This will be the natural first step. Unless properly educated about AI, many people will not trust AI systems to take on tasks on their own. I believe this is why many of the personal AI assistants are given female names and voices (to exude trust?). For example, a composer might not want a system that generates music fully automatically, but they might appreciate a computer-aided composition system which, for instance, gives them an initial suggestion for how to harmonise a melody they have composed.
Compared with face and voice recognition, AI in music still seems some distance from being useful in daily life. What are your expectations for the field?
I actually think that AI in music is already being integrated into our daily lives, through companies such as Spotify, Amazon Music, etc., as well as through smaller startups such as AIVA. I expect the number of startups in the music tech area to grow strongly in the coming years.
You are also working on combining emotion and music. To what extent do you think a computer can understand human emotion?
The word 'understand' is tricky here. We can train models to predict our perceived or experienced emotion based on observations made in the past. The biggest challenge, however, seems to be: why do different people experience different emotions when listening to the same piece of music?
These days, more and more people in different fields are working with AI. Can you give students working on music and AI some guidance about their research strategy and career path?
As with any research topic, I would recommend that students tackle a problem they are fascinated by, then dive deep into the topic and explore how it can be advanced even further. To stick with a topic, it's essential that you are passionate about it.
Can you give a few tips for people working at home in the days of Covid-19?
Stay inside, get as much exercise as you can, and try to think of this as the perfect time to do undisturbed research…
To keep up to date with Dr. Herremans, you can visit https://dorienherremans.com/
Rui Guo graduated with a master's degree in computer science from Shenzhen University, China. He is currently a third-year PhD student in music at the University of Sussex, UK. His research focuses on AI music generation with better control and on emotional music generation.
The 22nd International Society for Music Information Retrieval (ISMIR) Conference registration is now open! Join the ISMIR2021 tutorials, conference, and satellite events by registering at https://ismir2021.ismir.net/registration. We also have several grants to cover registration and tutorial fees, as well as childcare expenses. Apply here: https://bit.ly/ismir2021grants