This year’s ISMIR Music Program could be titled “A festival of visual music.” We received an overwhelming number of works in the category of creative visuals. The unexpectedly high number of submissions (46) made it very hard to select the pieces to be presented. However, we believe we arrived at a great selection of works - not all of them directly related to MIR - that will constitute an excellent musical counterpart to the scientific program. Split into four short online concerts (two of less than 30 minutes each, another two of around 40 minutes each), the program features only musical works with a visual counterpart, be it creative visuals, networked ensembles, or (recordings of) live performances. We realized that works without video (e.g. acousmatic music) should not be presented in this online format for reasons that have nothing to do with the (sometimes) great quality of the music - the format would simply make it very hard to fully appreciate such works. Here is the list of music pieces that will be presented at ISMIR 2021:
Concert 1
- Symphony in Blue 2.0, Istanbul Coding Ensemble with Jerfi Aji, piano
- Xeno, Enrico Dorigatti
- Three Tunes from the Ai Frontiers, Bob L. T. Sturm
- toy_5, Eric Lemmon
- History Has Stopped at 2021, Vanissa Law
Concert 2
- Golden Cuttlefish, Timothy Moyers
- Topos, Giuseppe Desiato
- Things I Have Seen In My Dreams, João Pedro Oliveira
- coalesce;, Tamara E Ray
Concert 3
- Lullaby for Stepanakert, Joseph Bohigian & Ensemble Decipher
- Butterfly Garden, Donya Quick
- Forme Cangianti, Fabio Morreale
- Quartet, Ted Moore
- Inkblot, Serge Bulat
- Music for Virtual Togetherness, Poli∃tnico Choir
Concert 4
- Rooftops, Modality
- String Quartet, Hendrik Vincent Koops
- Apocalypse – Future, Oregon Electronic Device Orchestra
- The Seals: Networked ensemble of pre-recorded live performance with creative AI assisted visuals, The Seals
- Horizon, Mojtaba Heydari, Frank Cwitkowitz
Music is alive and kicking at virtual ISMIR 2021!
Symphony in Blue 2.0
The Istanbul Coding Ensemble
About the Work
The Istanbul Coding Ensemble meets Jerfi Aji for a live-coding take on Kamran İnce’s composition Symphony in Blue (2012). While certain parts of the work are improvised live using just-in-time programming techniques and networked music systems, an ongoing dialogue forms between the coders and the piano, leading to real-time symbiotic digital palimpsests of interactive sonic experimentation. This yields an ongoing re-imagination of the composition through live coding and dynamic programming, improvised by the ensemble and guests such as Scott Wilson (University of Birmingham, UK), who joined telematically and performed from a remote location. Resources for machine-listening functionality with the piano and the networked music system are based in part on PianoCode (2014) by the Birmingham Ensemble for Electroacoustic Research. For remote transmission of the piano’s audio signal we used SonoBus, which did not noticeably affect the quality of the performance and proved acceptable in terms of latency and stability.
About the Author(s)
Istanbul Coding Ensemble (https://konvas.netlify.app/ice/) is the home live coding ensemble of İTÜ’s Center for Advanced Studies in Music (MIAM). It was founded in 2018 by Konstantinos Vasilakos and postgraduate students of the department. Heralding the inauguration of Live Coders’ Assembly in the Algoterannean region, ICE’s mission is the search for the perfect tuning between collective networked sonic manifestations and improvised sound algorithms on the fly. For this piece, Konstantinos Vasilakos, Scott Wilson, Jerfi Aji, Serkan Sevilgen and Onur Dağdeviren are taking part in the performance.
Xeno
Enrico Dorigatti
About the Work
1) The idea behind Xeno
Xeno proposes a novel approach to the relationship between audio and video (and to the role of audio in general) in and through multimedia art. In everyday life, in the reality we continuously experience, sound is always the consequence of an action (think of the noise produced by a car, or of speech). The idea behind this work is therefore to subvert this constraint by placing sounds and actions (read: images) at the same level in the cause-effect relationship. The process is designed to make it impossible to understand which of the two media is the cause or the consequence of the other, and even whether a relationship exists at all or they are separate entities. Sounds and images proceed along together, but are they complementary? Are visual shapes generating sounds, or vice versa? Is there even a relationship between them? These are some of the questions the overall result aims to raise.
2) How was Xeno made?
The starting point was the B/W video. But how to implement the idea behind the work? The best solution was to translate into the audio domain the fast fragmentation occurring in the images. This was realised with a custom Max/Jitter patch that analyses the video in real time and controls, accordingly, Mute/Unmute automation lanes in the DAW (Reaper) via a virtual MIDI connection. The main task of the Max patch was to detect whether each frame of the video contained imagery (presence of white, Unmute) or not (black frame, Mute). This proved to be an efficient, effective and reliable system.
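The frame-gating logic described above can be reduced to a few lines. This is an illustrative sketch, not the actual Max/Jitter patch: a frame is modelled as a list of grayscale pixel values, and the brightness threshold is an assumption.

```python
# Illustrative sketch of Xeno's gating idea: a frame "with images"
# (any pixel brighter than a threshold) unmutes the audio lane,
# an all-black frame mutes it. Pixel values: 0 = black, 255 = white.

def frame_has_image(frame, threshold=16):
    """Return True if any pixel is brighter than the (assumed) threshold."""
    return any(pixel > threshold for pixel in frame)

def mute_automation(frames, threshold=16):
    """Map each video frame to a Mute/Unmute state for the DAW lane."""
    return ["Unmute" if frame_has_image(f, threshold) else "Mute"
            for f in frames]

black = [0] * 64          # an all-black frame
flash = [0] * 63 + [255]  # a frame with one bright pixel

print(mute_automation([black, flash, black]))  # ['Mute', 'Unmute', 'Mute']
```

In the actual work this decision is made per frame in real time and sent to Reaper as MIDI-driven automation rather than returned as a list.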
3) Sounds
Sounds come from a variety of sources: field recording, synthesis and circuit bending. They were all manipulated creatively (but maintaining a granular flavour) with chains of digital audio effects. While the character of the first part of the work is clearly experimental, the last part places the sounds in recognisable patterns, but preserves, in the background, the tight timing between audio and images.
4) Images
The content of the final footage is identical, in terms of images and rhythm, to the original B/W version. However, as a last step, some effects were applied to colourise it and to displace shapes at certain points.
About the Author(s)
Enrico Dorigatti is a sound designer and sound artist based in Italy. He is mainly interested in music, technology and the connection between them. After a diploma as an IT-specialised technician, he obtained a bachelor’s degree in electronic music. His thesis project, URALi, currently under development, is a C# library designed to add real-time audio synthesis and manipulation capabilities to the Unity Engine. The project, selected for the scientific program of the XVI SMC conference held in summer 2019 in Malaga, is the product of research started back in 2017 with Life, an auto-generative multimedia artwork performed at the XXII CIM and the Biennale d’Arte Contemporanea di Salerno. Other works of his have been selected for the programmes of national and international conferences and festivals. In winter 2020, he obtained a master’s degree in electronic music with a thesis on sound design and the electroacoustic piece Quantum, selected for NYCEMF 2021. During his academic studies, he attended several masterclasses held by, among others, Alvise Vidolin, Daniel Teruggi, James O’Callaghan, and Barry Truax. Since late 2019 he has been part of Movimento Creative Label, an artistic collective specialised in the creation of multimedia content and installations. With them, he has realised sound designs and compositions, and has designed and implemented interactive touchless multimedia installations for museums and exhibitions. In summer 2021, he was an artist in residence at the Art Stays Festival (Ptuj, SLO) as sound designer and live performer for a site-specific video mapping realised by the collective. Since fall 2021, he has been a PhD student at the University of Portsmouth in the School of Creative Technologies, Faculty of Creative and Cultural Industries.
Three Tunes from the Ai Frontiers
Bob L. T. Sturm
About the Work
These three tunes come from my time exploring the frontiers of Ai-generated folk music. The melodies in each case come from material generated by the folk-rnn system, and are performed by me on accordion. Where there is accompaniment, it comes from material generated by Music Transformer conditioned on the melody. Each tune is accompanied by visuals. For “The Irish Show”, the video is automatically generated by the “Audio to Video Clip” app at melobytes.com. The video material for “Heading back home” comes from found video material by peahix. The algorithmic dancers in “Djursvik Semester” were generated using the Transflower system conditioned on the audio recording. More information here: https://tunesfromtheaifrontiers.wordpress.com
About the Author(s)
Bob L. T. Sturm is an Associate Professor of Computer Science at the KTH Royal Institute of Technology, Stockholm, Sweden. He has degrees in physics, music, multimedia, and engineering, and specializes in signal processing and machine learning applied to music data. He currently leads the MUSAiC project funded by the European Research Council (https://musaiclab.wordpress.com), and is probably best known for his work on horses, the GTZAN dataset, and playing AI-generated folk music on his accordion.
toy_5
Eric Lemmon
About the Work
This networked piece, written for the new media ensemble Ensemble Decipher and titled toy_5, seeks to explore aspects of improvisational and generative algorithmic computer music. The work is designed to be immediately accessible to non-musicians through a participatory setting, if so desired, according to principles developed by Tina Blaine and Sidney Fels, while also allowing for more rehearsed performances like the one presented in the work sample above. The work’s visuals and sound are controlled by mouse cursor position, mouse clicks, and keystrokes via OSC. They are then shared, in this case through the consumer-grade teleconferencing software Zoom. The code for audio generation was written in SuperCollider, while the visuals were coded in hydra.
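To illustrate the OSC transport the piece relies on, a minimal OSC message for a cursor position can be built by hand. This is a sketch: the address pattern and values here are invented, not taken from toy_5’s code, which sends OSC from within SuperCollider and hydra.

```python
# Minimal Open Sound Control message encoding (per the OSC 1.0 spec):
# a NUL-terminated address padded to a 4-byte boundary, a type tag
# string (",f" per float), then big-endian 32-bit float arguments.
import struct

def osc_pad(data: bytes) -> bytes:
    """NUL-terminate and pad to a 4-byte boundary, as OSC requires."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Build a minimal OSC message carrying float arguments."""
    msg = osc_pad(address.encode())                      # address pattern
    msg += osc_pad(("," + "f" * len(floats)).encode())   # type tag string
    for value in floats:
        msg += struct.pack(">f", value)                  # big-endian float32
    return msg

# A hypothetical normalized cursor position, ready to send over UDP.
packet = osc_message("/cursor/xy", 0.25, 0.75)
print(len(packet))  # total length is a multiple of 4
```

A real deployment would hand `packet` to a UDP socket aimed at the synthesis process; libraries like python-osc wrap exactly this encoding.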
About the Author(s)
Composer Eric Lemmon’s artistic practice and academic research are preoccupied with the politics that circumscribe and are woven into our musical technologies and institutions. His music has been reviewed by the New York Times, featured on WQXR’s Q2, and performed in venues ranging from the underground bars (le) Poisson Rouge and SubCulture to the DiMenna Center for Classical Music and the FIGMENT arts festival on Governors Island. Eric’s work has been recognized locally and internationally with grants and residencies including MetLife’s Creative Connections Grant, UMEZ and LMCC Arts Engagement Grants, multiple Puffin Foundation Grants, a Tofte Lake Center Emerging Artist Residency, a Can Serrat International Artist Residency, a Westben Performer-Composer Residency, New Music for Strings, and ConEd’s Exploring the Metropolis Residency. Further, he has been awarded a Mancini Fellowship, a long-term fellowship from the German Academic Exchange Service (DAAD), Stony Brook University’s Presidential Dissertation Completion Fellowship, and a Fulbright Award for his artistic research and profile as a performer. Eric has written works for Yarn|Wire, Cadillac Moon Ensemble, Jacqueline LeClaire, and The Chelsea Symphony. He is a member of the experimental and technology-focused music collective Ensemble Decipher and is currently a Ph.D. candidate in Music Composition at Stony Brook University.
Ensemble Decipher is a modular, experimental music group that performs with vintage, contemporary, and emerging technologies. Founded in 2017 by Niloufar Nourbakhsh, Ensemble Decipher strives to redefine performer virtuosity by drawing on the technological advancements of our time in order to highlight new voices and ways of listening. By reexamining new music and integrating technology into their performance practice, Ensemble Decipher seeks to reflect on and challenge the power structures that lace the field of electroacoustic music. Recent works commissioned by the group have mobilized network technologies, accelerometers attached to rocks, boxes trained via machine learning to respond to touch, acoustic instruments, and laptops. This has led Ensemble Decipher to collaborate with notable composers and technologists including Mara Helmuth, Margaret Schedel, Hannah Davis, Yaz Lancaster, and Lainie Fefferman and premiere works by many others. Recent feature performances include concerts at the Society for Electro-Acoustic Music in the United States, International Computer Music Conference, New York City Electroacoustic Music Festival, Network Music Festival, and an ensemble residency at EarFest. Current members include Joseph Bohigian, Robert Cosgrove, Eric Lemmon, Chelsea Loew, Taylor Long, and Niloufar Nourbakhsh.
For the upcoming season Ensemble Decipher has been awarded a SUNY PACC Prize and a USArtists International Grant from the Mid-Atlantic Arts Council to commission composers Kamala Sankaram, Paul Leary, Jose Tomás Henriques, Jamie Leigh Sampson, and Mari Kimura for performances in Denmark and across New York State.
History Has Stopped at 2021
Vanissa Law
About the Work
History Has Stopped at 2021 is written for solo snare drum, with real-time generated OpenGL visuals created in Max. The three OpenGL objects move across the screen along random paths, and their sizes change with the amplitude of the live audio input.
The work was performed by Samuel Chan, a top prize winner in numerous prestigious competitions. Originally from Hong Kong, Samuel obtained his Artist Diploma from the Colburn School and his Master of Music from The Juilliard School, having previously studied at the New England Conservatory and HKAPA.
About the Author(s)
Vanissa was born in Hong Kong and began her studies at Hong Kong Baptist University in 2004, starting out as a piano major studying with Chinese composer Mr. Cui Shiguang. After graduating from HKBU, Vanissa turned her focus towards electroacoustic music composition during her stay at Ball State University, Indiana, majoring in voice and music composition. She studied voice with Ms Katusha Tsui-Fraser and Dr Mei Zhong, and won the regional (Indiana) audition of the National Association of Teachers of Singing in the US in 2008. Vanissa returned to Hong Kong in 2010 and obtained her PhD in 2016 under the supervision of Prof Christopher Keyes.
Vanissa’s pieces and installations have been premiered and exhibited internationally at various events and festivals, including the 13th International Conference on New Interfaces for Musical Expression (Seoul, Korea), Zvukové Dobrodružství (Brno, Czech Republic), the Society of Composers (SCI) Region VI Conference (Texas, US), the University of Central Missouri New Music Festival (Missouri, US), the soundSCAPE festival (Italy), the 2013 International Workshop on Computer Music and Audio Technology (Taipei) and the Hong Kong Arts Festival. Vanissa was granted a Fulbright Research Award, which sponsored 10 months of research at the Louisiana Digital Media Center in 2014-15.
Golden Cuttlefish
Timothy Moyers
About the Work
Golden Cuttlefish explores the relationship between the organic and the abstract. A digital ecosystem is created exploring this juxtaposition in both the sonic and visual worlds. Abstract imagery is controlled by organic motion. Organic sound environments coexist with abstract sonic events. The organic flow of musical form and time is complemented by the fluid motion of the video.
About the Author(s)
Timothy Moyers Jr. is a composer and audio-visual artist originally from Chicago. He is currently an Assistant Professor of Music Theory and Composition at the University of Kentucky and supervises the Electroacoustic Music Studio. Prior to joining the University of Kentucky, Timothy was an Assistant Professor in the Department of Human Centered Design at IIIT-D (Indraprastha Institute of Information Technology), Delhi, India where he was the Founder & Director of ILIAD, Interdisciplinary Lab for Interactive Audiovisual Development, and GDD Lab, Game Design and Development Lab. He completed his PhD in Electroacoustic Composition from the University of Birmingham (England), an MM in New Media Technology from Northern Illinois University (USA), a BA in Jazz Performance and a BA in Philosophy from North Central College (USA).
Topos
Giuseppe Desiato
About the Work
The audiovisual work Topos (from Greek τόπος, literally ‘place’) arises from the interpretation, the analysis, and the manipulation of topological and abstract surfaces. Earth-inspired environments, combined with generative processes, are used to give birth to new imaginative places.
What defines a place? Is it an object that identifies it? Can a light determine a place? Up to what point of subtraction or abstraction are we still able to perceive the idea of a place?
Through the use of 3D-generated visuals, I explore the possibilities of a non-existent place, the sensations produced by its presentation, and the spontaneous mental associations it creates.
About the Author(s)
Giuseppe Desiato is a multimedia composer and 3D artist currently living in Boston, Massachusetts. His work focuses on researching, analyzing, and enhancing the several degrees of interrelation among different media. His compositional approach has grown from the idea that distinct media types share deep-rooted mechanisms that make cross-media processes possible. In recent years, he has explored the relationship between computer graphics visuals and electroacoustic music.
His recent works have been performed in festivals such as SICMF 2021, ICMC 2021, NSEME 2021, Evimus 2020, NYCEMF 2020, Evimus 2018, ICMC 2018, Echofluxx 2018, Mise-en place Festival 2017, Gaudeamus Muziekweek 2016, Emufest 2016, Festival ArteScienza 2016, and Le Forme del Suono 2015.
He is now pursuing a Ph.D. in Composition and Music Theory at Brandeis University (Waltham/Boston).
Things I Have Seen In My Dreams
João Pedro Oliveira
About the Work
We dream… sometimes we have nightmares, or dreams that make us sad, anguished, or simply indifferent. But occasionally there are dreams that project in our mind images and sounds of great beauty. This piece is a recollection of, and set of variations on, some of the images and sounds I remember from my dreams. It is dedicated to Mario Mary.
About the Author(s)
Composer João Pedro Oliveira holds the Corwin Endowed Chair in Composition at the University of California, Santa Barbara. He studied organ performance, composition and architecture in Lisbon, and completed a PhD in Music at the State University of New York at Stony Brook. His music includes opera, orchestral compositions, chamber music, electroacoustic music and experimental video. He has received over 70 international prizes and awards for his works, including three prizes at the Bourges Electroacoustic Music Competition, the prestigious Magisterium Prize and Giga-Hertz Special Award, and 1st Prizes in the Metamorphoses, Yamaha-Visiones Sonoras, and Musica Nova competitions. He has taught at Aveiro University (Portugal) and the Federal University of Minas Gerais (Brazil). His publications include several journal articles and a book on 20th-century music theory. www.jpoliveira.com
coalesce;
Tamara E Ray
About the Work
Revolving around a brief video of a dead leaf fluttering in the wind, coalesce; is a visual and auditory piece inviting the audience to enter a world of evolution. The piece shifts like a living creature changing through life, moving from lo-fi visuals and ambient sounds to a gradually more structured presentation.
Ableton Live 10 was used to record and manipulate ambient sounds, vocals, and acoustic violin as performed by the composer. Max 8 code programmed for the project was then linked up to Live to generate and synchronize the manipulation of video effects and the auditory component.
About the Author(s)
Tammy Ray is a junior at Transylvania University on track to a music technology degree. She plays violin and is currently learning piano, both of which have helped her create music for the university’s theatre program and for music pieces with friends.
Lullaby for Stepanakert
Joseph Bohigian & Ensemble Decipher
About the Work
Stepanakert is the capital of Artsakh, an unrecognized republic between Armenia and Azerbaijan. Formerly an autonomous region of Soviet Azerbaijan, the native Armenian inhabitants fought a war for their independence in the early 1990s. Since the ceasefire established in 1994, the region has been governed by Armenians, with occasional fighting on the border. However, on September 27, 2020, Azerbaijan, with the aid of Turkey, launched an attack on the entire line of contact, leading to full-scale war. For weeks, the civilians of Stepanakert faced daily shelling from Azerbaijan, causing over half the population to flee and the rest to shelter in bunkers. By the end of the war, tens of thousands of Armenians were displaced. Lullaby for Stepanakert begins with a siren warning of incoming bombs and cluster munitions falling on a street in Stepanakert on the morning of October 4, 2020. From the explosions emerges a reinterpretation of an oror (lullaby) transcribed by Komitas. The piece is dedicated to the people of Artsakh and all people who have lost sleep because of this war.
About the Author(s)
Composer bio:
Joseph Bohigian is a composer and performer whose cross-cultural experience as an Armenian-American is a defining message in his music. His work explores the expression of exile, cultural reunification, and identity maintenance in diaspora. Joseph’s works have been heard at the Oregon Bach Festival, June in Buffalo, Walt Disney Concert Hall, New Music on the Point Festival, TENOR Conference (Melbourne), and Aram Khachaturian Museum Hall performed by the Mivos Quartet, Decibel New Music, Great Noise Ensemble, Argus Quartet, Fresno Summer Orchestra Academy, and Playground Ensemble and featured on NPR’s Here and Now and The California Report. He is also a founding member of Ensemble Decipher, a group dedicated to the performance of live electronic music. Bohigian has studied at Stony Brook University, California State University Fresno, and in Yerevan, Armenia with Artur Avanesov.
Performer bio:
Ensemble Decipher is a modular, experimental music group that performs with vintage, contemporary, and emerging technologies. Founded in 2017 by Niloufar Nourbakhsh, Ensemble Decipher strives to redefine performer virtuosity by drawing on the technological advancements of our time in order to highlight new voices and ways of listening. By reexamining new music and integrating technology into their performance practice, Ensemble Decipher seeks to reflect on and challenge the power structures that lace the field of electroacoustic music. Recent works commissioned by the group have mobilized network technologies, accelerometers attached to rocks, boxes trained via machine learning to respond to touch, acoustic instruments, and laptops. Current members include Joseph Bohigian, Robert Cosgrove, Eric Lemmon, Chelsea Loew, Taylor Long, and Niloufar Nourbakhsh.
Butterfly Garden
Donya Quick
About the Work
Butterfly Garden is an audio-visual piece created with Haskell and Processing. The reactive visualization implemented in Processing responds to the musical score by creating colorful butterflies, flowers, and other shapes in real time and in sync with the music. The generative music algorithms use stochastic but constraint-based exploration of sets of pitches and durations. All of the algorithmic musical material was created from less than 200 lines of code. Generated sections for each instrument were arranged in a digital audio workstation and rendered to audio with a combination of analog and digital synthesizers.
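The “stochastic but constraint-based exploration of sets of pitches and durations” can be sketched briefly. The piece itself is written in Haskell; this Python reduction, with an invented maximum-leap constraint, is only meant to show the idea of random choice filtered through a rule.

```python
# Illustrative sketch (not Donya Quick's actual code): pitches are drawn
# at random from a fixed pitch set, but each candidate must satisfy a
# constraint - here, a hypothetical limit on the leap from the previous note.
import random

PITCH_SET = [60, 62, 64, 67, 69, 72]   # a pentatonic pitch set, MIDI numbers
DURATIONS = [0.25, 0.5, 1.0]           # durations in beats

def next_pitch(previous, max_leap=7):
    """Pick a random pitch within max_leap semitones of the previous one."""
    allowed = [p for p in PITCH_SET if abs(p - previous) <= max_leap]
    return random.choice(allowed)

def generate(length=8, seed=0):
    """Generate a list of (pitch, duration) pairs under the constraint."""
    random.seed(seed)
    pitch = random.choice(PITCH_SET)
    notes = []
    for _ in range(length):
        pitch = next_pitch(pitch)
        notes.append((pitch, random.choice(DURATIONS)))
    return notes

print(generate())
```

In the actual work, material generated this way per instrument was arranged in a DAW and rendered with analog and digital synthesizers.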
About the Author(s)
Donya Quick is a composer and independent researcher whose work involves programming languages, artificial intelligence, natural language, and music.
Forme Cangianti
Fabio Morreale
About the Work
Forme Cangianti is an experimental live-coded audiovisual composition. This piece is part of my ongoing artistic research aimed at developing a visual identity for computer-generated music. My aim is to flatten the usual ranking of the visual and auditory components by placing the two on the same hierarchical level: every (major) sonic event must result in a perceivable visual change, and vice versa. To level this hierarchy, I grounded the composition in a third paradigm that drives both components: mathematical functions. The performance is structured in two stages. In the first stage, I pre-program a number of mathematical functions that inform the behaviours of both the auditory and visual components. The auditory component is programmed in SuperCollider, with the functions’ variables mapped to acoustic features of a dozen SynthDefs. The visual component is programmed using GLSL shaders, with the functions’ variables determining the shapes’ presence, properties, and the relations among them. The second stage occurs at performance time: by live-coding in SuperCollider, I progressively intervene in the underlying mathematical functions, changing some of their variables in real time to enact changes in both the music and the visuals (via OSC messages).
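The shared-function idea might be sketched as follows. The parameter names and mappings here are assumptions for illustration; in the actual work the audio side is SuperCollider SynthDefs and the visual side GLSL shaders, linked via OSC.

```python
# Illustrative sketch: one mathematical function drives both sound and
# visuals, so changing its variables is heard and seen simultaneously.
import math

def shared_function(t, a=1.0, b=2.0):
    """The common driver: a parameterised oscillation over time t."""
    return a * math.sin(b * t)

def audio_params(t, a, b):
    """Map the function's value to hypothetical synth controls."""
    v = shared_function(t, a, b)
    return {"freq": 220 + 220 * (v + 1) / 2, "amp": abs(v)}

def visual_params(t, a, b):
    """Map the same value to hypothetical shape properties."""
    v = shared_function(t, a, b)
    return {"radius": 0.5 + 0.5 * abs(v), "sides": 3 + int(4 * (v + 1) / 2)}

# Live-coding stage: tweaking a or b changes both domains together.
print(audio_params(0.5, 1.0, 2.0), visual_params(0.5, 1.0, 2.0))
```

The design point is that neither medium is derived from the other; both are projections of the same underlying function.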
About the Author(s)
Fabio Morreale (IT) is a musician and music scholar from Italy. He is a lecturer in composition and computer music at the University of Auckland. He has a PhD in computer science and has developed numerous musical instruments, interfaces, and installations. His musical practice is mostly centered on experimentation with generative audiovisual and algorithmic composition. His scholarly research explores the ethical and political implications of technological innovations on music listening and creation.
Quartet
Ted Moore
About the Work
The sonic source material of Quartet is about two minutes of eurorack synthesizer recordings, transcribed for the [Switch~ Ensemble] to record. These recordings were then subjected to data analysis using audio descriptors and machine learning algorithms (PCA, UMAP, k-nearest neighbors, neural networks). Approaching these acoustic and electronic sounds as data for comparison and manipulation offered me new strategies for combining the material in expressive ways, finding form, and creating sonic relationships and audio-visual objects that I wouldn’t otherwise have considered. Quartet is a remote collaboration between myself and the [Switch~ Ensemble], designed to engage with the added technological mediation at play during the pandemic.
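The descriptor-space comparison underlying this approach can be illustrated with a toy nearest-neighbour query. The descriptor values and sound names below are invented, and the real pipeline additionally uses PCA, UMAP and neural networks; this sketch only shows the core idea of treating sounds as points to be compared.

```python
# Illustrative sketch: sounds as points in a descriptor space, queried
# for nearest neighbours. Descriptors here are hypothetical
# (loudness, brightness, noisiness) triples per sound.
import math

sounds = {
    "eurorack_blip":  (0.8, 0.9, 0.7),
    "violin_tremolo": (0.6, 0.7, 0.6),
    "cello_pizz":     (0.5, 0.3, 0.2),
    "white_noise":    (0.9, 0.95, 0.99),
}

def nearest(query, k=2):
    """Return the k sound names closest to the query in descriptor space."""
    return sorted(sounds, key=lambda name: math.dist(sounds[name], query))[:k]

# Which sounds most resemble the eurorack blip (including itself)?
print(nearest(sounds["eurorack_blip"]))
```

This kind of query is what lets acoustic and electronic material be juxtaposed by measured similarity rather than by ear alone.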
About the Author(s)
Ted Moore is a composer, improviser, intermedia artist, and educator. He holds a PhD in Music Composition from the University of Chicago and is currently serving as a post-doctoral researcher at the University of Huddersfield, investigating the creative affordances of machine learning and data science algorithms as part of the FluCoMa project. His creative work focuses on fusing the sonic, visual, physical, and acoustic aspects of performance and sound, often through the integration of technology. Ted’s work has been described as “frankly unsafe” (icareifyoulisten.com), “an impressive achievement both artistically and technically” (VitaMN), and “epic” (Pioneer Press) and has been performed by ensembles including the JACK Quartet, Talea, Spektral Quartet, Yarn/Wire, [Switch~], HOCKET, and Line Upon Line. His art has been featured around the world, including at South by Southwest, National Sawdust, The Walker Art Center, STEIM, Internationales Musikinstitut Darmstadt, City University London, Hochschule für Musik (Freiburg), Experimental Sound Studio, World Saxophone Congress (Croatia), and MASS MoCA, among others.
Ted also frequently performs as an electronic musician using his laptop and modular synthesizer, as well as resonant physical objects, lighting instruments, and video projection. As an improviser, he is one half of Binary Canary, a woodwinds-laptop improvisation duo with saxophonist Kyle Hutchins. As an installation artist, Ted has been featured at The American Academy in Rome, New York University, and Studio 300 Festival of Digital Art and Music, among others. As a theater artist, Ted has worked with many independent companies, notably with Skewed Visions and Umbrella Collective.
Inkblot
Serge Bulat
About the Work
INKBLOT is a sound experience/audiovisual presentation designed to trigger the listener’s imagination and demonstrate our unique ability to process data and create a “personal reality” construct. Much as in the psychological evaluation, the audiovisual inkblots are expected to produce an association or feeling, which each audience member interprets uniquely. The project takes the standard test one step further by including an additional sense, hearing, and aims to reveal more information about ourselves. Stimulated by both visuals and sound, the subjects are invited to a “self-diagnosis” formed through the sensory experience. The success of the test depends solely on the testee, based on the idea that the participant is both the experiment and the experimenter. Described by Bulat as “listening parties for the thinkers”, INKBLOT aims to bring back the wonder of sound, interactivity and conceptualism in music. Visuals by Michael Rfdshir.
About the Author(s)
Serge Bulat is a Moldovan-American multidisciplinary artist, composer and sound designer, who has been contributing to both European and American art scenes, exploring various mediums: from music and visuals to video games, radio and theater productions.
The artist’s most notable works are “Queuelbum” (a concept project that won an IMA award for Best Electronic Album Of The Year), the immersive video game “Wurroom” (created in collaboration with Michael Rfdshir), and the audiovisual installations “Third World Walker” and “INKBLOT” (presented at festivals and conferences around the world, including NYCEMF / USA, Convergence / UK, Technarte / Spain, New Music Gathering / USA, and Video Art Forum / Saudi Arabia).
Bulat’s artistic approach is often perceived as a meditation on arts, philosophy, science, and society; and deals with such diverse subjects as creativity, technology, reality, culture, and identity.
Recent artistic activities include the release of the multi-format albums “Wurmenai” and “Similarities Between Fish And A Chair” (a collaboration with artists from 10 countries), and the score for the experimental video game “Isolomus”.
Music for Virtual Togetherness
Poli∃tnico Choir
About the Work
Evening Rise, Spirit Come is a traditional evening song by an unknown author. The tune is very easy to learn, yet its humble simplicity builds a calm and intimate atmosphere of haunting beauty. The text appeals to universal feelings dedicated to Nature. Far from seeing the past day off with gloom and despair, its fading represents an opportunity to bring one’s attention to the present moment, renewing the connection to the Earth.
The interpretation we present here begins with one voice only. Then a collective meditative “ohm” joins in, commented on by an instrumental accompaniment. Alongside the Poli∃tnico choir plays a jazz quartet composed of Gianni Virone (sax, bass clarinet and flute), Guido Canavese (piano), Stefano Risso (double bass), and Donato Stolfi (drums).
The performance was recorded live in November 2019 during the musical event Linguaggi – il Poli∃tnico sfida il jazz (Languages – Poli∃tnico challenges jazz), under the banner of improvisation and experimentation. Our concert inaugurated the Festival della Tecnologia[8], the first edition of a public event organized by the Polytechnic University of Turin to inspire broad reflection on technology and its role in society, and to explain people and society through technology, using multiple languages.
During the almost two years of crisis, Poli∃tnico kept rehearsing through the creative use of multiple web communication systems such as Zoom and TeamSpeak. The former was used by the conductor to direct the choir with visual cues, the latter to manage sound – with people spread across multiple communication channels – with an almost real-time feel. The experience was frustrating at first, but the choristers’ skills quickly improved, allowing them to learn and get to know each other in a new and enriching way. The choristers reported their satisfaction with these pleasant moments in the midst of days full of detached and inexpressive contacts.
In 2020, the Italian University Network for Sustainable Development (RUS)[1] and Poli∃tnico promoted a contest for the collaborative creation of a multilingual choral piece – We are the Earth – themed on the UN 2030 Agenda for Sustainable Development. The competition was open to Italian universities and required the creation of new stanzas using The river is flowing – a traditional Native American tune celebrating Mother Earth – as a starting point.
The song we present here is the winner of the contest. The harmonization is by Giorgio Guiot, while Giuseppe Sanfratello (University of Catania) and Augusta Sammarini (University of Urbino “Carlo Bo”) composed the second and third stanzas, respectively.
The official score of “We are the Earth” was shared on the 51st World Earth Day (April 22, 2021)[6]. The first stanza compares humans to rivers that flow to the sea, a metaphor for oneness with humanity and with Nature. The second, in Greek, asks for justice and equal rights for all, living in peace with nature. The last stanza reminds us that we already share a common home, a place to cherish and to protect from our own selves.
About the Author(s)
Born in December 2013 from an idea of two professors and two students of the Department of Mathematical Sciences of the Polytechnic University of Turin, the Poli∃tnico choir today counts about 70 members – mostly young engineering and architecture students, but also professors, researchers, and employees of the University.
Poli∃tnico’s artistic direction is entrusted to two professional choir directors, Giorgio Guiot and Dario Ribechi. In addition to guaranteeing a quality standard of artistic production, they are a reference for the choristers in terms of individual motivation, educational effectiveness, and group leadership. They are assisted by students and collaborating teachers who contribute to the organization of rehearsals, concerts, and trips, and who manage the communication and evaluation of the choral activity.
Rooftops
Modality
About the Work
Modality telematically composed, rehearsed, and recorded Rooftops, a structured improvisation for violin, computer, guitar, keyboards, electronics, and video synthesis, streaming audio and video between their four studios in Missoula and Butte, MT, and Pulaski, VA.
About the Author(s)
Modality is Charles Nichols on violin, bass, and computer; Clark Grant on guitars; Ben Weiss on keyboards; and Jay Bruns on electronics and video synthesis. This Montana- and Virginia-based collective swims through oceans of sound, conjuring immersive, psychedelic, beautifully strange worlds – sonic excursions for fans of drone, ambient, krautrock, and contemporary music. Their practice is to co-compose by recording free improvisation, harvesting material, and collaboratively arranging and rerecording. Since 2013, they have rehearsed, recorded, produced, and performed telematically between their four studios in Missoula and Butte, Montana, and Blacksburg, Virginia. In 2016, the band toured Montana, Illinois, Indiana, Ohio, and Virginia, starting with a performance at the DAT Music Conference in Missoula and ending with a performance at Cube Fest at Virginia Tech. In 2020, they performed telematically, streamed live to Newcastle upon Tyne, England, for the Network Music Festival, and recorded a live set streamed from St. Petersburg, Russia, for the Theremin Fest; in 2021, they streamed live to Stanford, CA, for the Society for Electro-Acoustic Music in the United States Virtual National Conference. Modality have co-composed and recorded four albums: Particle City, their debut; Under the Shadow of this Red Rock, a double LP; The Moruvians, a split with the band Lazertüth; and Megacycles, their latest. These and other recordings can be found at https://modality.bandcamp.com.
String Quartet
Hendrik Vincent Koops
About the Work
Hendrik Vincent Koops’ String Quartet is written for the standard string quartet ensemble of two violins, viola, and cello, and consists of three movements. The quartet is the result of a co-creative process between the composer and generative models. First, the composer wrote the majority of the themes and harmonisations. Then, using various combinations of generative models, variations, continuations, and in-painting variations were generated. These were cherry-picked by the composer, recomposed, and worked into the composition. Next, the recomposed parts were again used as input to the generative models to generate output that was again recomposed. Finally, all the parts were combined with the goal of creating an organic-sounding piece with a strong narrative. As much of the composing and recording happened during the start and peak of the COVID-19 pandemic and lockdowns, all of the tryouts, rehearsals, recordings, and interactions with the musicians had to be done virtually. The quartet was first performed by the Brusk Quartet in Stockholm in 2020 and has received international airplay. The visuals for the string quartet were created by the composer using OpenAI’s CLIP, a neural network trained on a variety of (image, text) pairs. Using descriptions, lyrics, and poems aligned to the music, visuals were generated that morph into each other.
About the Author(s)
Hendrik Vincent Koops is a composer and Senior Data Scientist at RTL Netherlands. He received a B.A. degree with Honors in Audio and Sound Design and an M.A. degree in Music Composition in 2008, both at the HKU University of the Arts Utrecht. In 2012 he received a B.S. degree in Cognitive Artificial Intelligence and in 2014 an M.S. degree in Artificial Intelligence at Utrecht University. After completing research at the Department of Electrical and Computer Engineering at Carnegie Mellon University, he received a Ph.D. degree in Computer Science from Utrecht University in 2019, where he studied the computational modelling of variance in musical harmony. At RTL Netherlands, he is responsible for developing scalable audiovisual machine learning solutions to make video content more discoverable, searchable, and valuable. Hendrik Vincent Koops is a guest editor for a special issue on AI and Musical Creativity of the Transactions of the International Society for Music Information Retrieval, which focuses on new research developments in the domain of artificial intelligence applied to modelling and creating music in a variety of styles. In addition to his industry and academic work, Hendrik Vincent Koops is active as a composer; his music has received airplay on numerous local and international platforms, and his works have been selected and nominated at international film festivals. He is also a co-organizer of the AI Song Contest, an international competition exploring the use of AI in the songwriting process.
Apocalypse – Future
Oregon Electronic Device Orchestra (OEDO)
About the Work
Apocalypse – Future is a networked ensemble piece created and performed by the Oregon Electronic Device Orchestra (OEDO) at the University of Oregon in winter 2021. During the pandemic, the ensemble connected via Cleanfeed, a real-time networked audio system, playing music in real time from remote locations over the internet. All performers created sounds and textures based on scores with guiding words and contours written by the director, keenly using their ears to co-produce the electroacoustic ensemble music. A strong element of improvisation – listening and responding to ensemble members’ music – is present in the composition. The music was mixed in postproduction for fine-tuning, and OEDO created videos to emphasize the theme of each piece. Apocalypse incorporates video footage shot by the performers, and Future generates visuals based on the music by using a generative visual system.
Apocalypse – Future comprises two sections from a longer suite of six pieces representing the pandemic’s impact on us, in line with the idea of a video game that has multiple stages.
About the Author(s)
Oregon Electronic Device Orchestra (OEDO) is an electroacoustic music ensemble at the University of Oregon. In winter 2021, the ensemble consisted of seven undergraduate students – Josh Dykzeul, Jacob Gibbs, Emma Kuhn, John Nikkhah, Alaric San Jose, Tom Slaff, and Eden Star, and one graduate student – Sunhuimei Xia. The ensemble was directed by Akiko Hatakeyama, an Assistant Professor of Music Technology.
The Seals: Networked ensemble of pre-recorded live performance with creative AI assisted visuals
The Seals
About the Work
The Seals’ networked ensemble performance for ISMIR is a composite view of our global collective’s live performance with creative AI-assisted visuals. We regard telematic music technologies not only as a means by which our live online networked performances can be accessed, but also as a chance-agent providing a completely individualized, ephemeral experience. During our ISMIR performance we improvised over AI-created content compiled and arranged into pieces using our respective audio/visual instruments. These include: 1) custom-made theremins built and played by Sofy Yuditskaya and other band mates; 2) Peeps Music Box s8jfou played by Margaret Schedel; 3) harp and guitar+pedals played by Sophia Sun; 4) voice + gesture-controlled granulator, kalimba + spectral processing, and electronics sung/played by Susie Green; and 5) preprocessed visuals by Sofy Yuditskaya using live aquarium camera feeds collected and mixed by Ria Rajan, and “Seal Vision” (MOG2 and U^2-Net to simulate the multi-focal attention seals may give to moving objects, as well as motionmask.fs in VDMX). During the performance we each recorded our view over Zoom through our distinct tech stacks and systems, spanning New York, NY; Miami, FL; San Diego, CA; and Pune, India. This edited version stitches together a strange loop – a simultaneous observation of both individuated experiences and collective concurrence.
About the Author(s)
The Seals, an AI-inspired electronic noise band, is the collective effort of Ria Rajan, Sophia Sun, Susie Green, Sofy Yuditskaya, and Margaret Schedel, augmented by the S.E.A.L. (Synthetic Erudition Assist Lattice), our AI counterparts that assist us in creating usable content with which to mold and shape our music and visuals. A detailed paper on our process is available online. Our music came about during the height of the COVID-19 lockdowns across the globe. We individually created content with which to compile songs as a networked ensemble. This created a bed track for our live performances online using various tech stacks, through which we improvise while incorporating creative AI-assisted visuals.
Ria Rajan (Pune, India) is a visual artist exploring both analog and digital mediums of image-making, with a focus on intangible and ephemeral experiences of spaces and objects, both natural and constructed. Through the lens of locative media, performance, sculptural explorations and interventions in spaces, she creates immersive experiences and invites interaction with her work that looks at placemaking both online and offline, and centers around our constantly changing relationship with technology. Ria has been invited to participate in multiple public art projects such as Urban Avant Garde (Bangalore, IN), Investment Zone (Bangalore, IN), Figment (NY, USA), Prakalp Pune (Pune, IN), Cyberia (Pune, IN). Along with these, her work - largely developed through residencies, has been included in festivals, showcases and exhibitions, both locally and internationally.
Sophia Sun (San Diego, CA) is a machine learning researcher and musician interested in machine creativity and digital humanities. She makes music, robots, and research concerning the cyborg nature of contemporary lives, and the relationship between machines and humans. She is currently pursuing a Ph.D. in Computer Science at University of California, San Diego.
Susie Green (Miami, FL) is a neuro-divergent, Cuban-American composer, sound-designer, audio/visual artist & vocalist working at the intersection of music, art & science.
Following a sound engineering internship at Crescent Moon Studios, she became a signed/published singer/songwriter with Sony/FIPP. She continued work in sound-design for immersive theater, short films & studio production while undertaking postgraduate research in composition & music technology at the University of Huddersfield, UK. Exploring means by which to harness the body’s movements to shape sound, based on Rudolph Laban’s Movement Theory, she expanded theoretically incorporating concepts from quantum mechanics, human dynamics, to cosmology. She’s held lectures & workshops for Electronic Music Composition, Production & Human Computer Interaction while also supporting STEAM efforts in her community. Additionally, she’s engaged in music tech tests, research & development both independently & with/for various teams, fellow artists and academic collaborators/composers/technicians.
Sofy Yuditskaya (Brooklyn, NY) is a site-specific media artist and educator working with sound, video, interactivity, projections, code, paper, and salvaged material. Her work focuses on techno-occult rituals, street performance, and participatory art. Sofy’s performances enact and reframe hegemonies; she works with materials that exemplify our deep entanglement with petro-culture and technology’s effect on consciousness. She has worked on projects at Eyebeam, 3LD, the Netherlands Institute voor Media Kunst, Steim, ARS Electronica, the Games for Learning Institute, the Guggenheim (NYC), and the National Mall, and has taught at GAFFTA, MoMA, NYU, Srishti, and the Rubin Museum. She is a Ph.D. Candidate in Audio-Visual Composition at NYU GSAS.
Margaret Schedel (Stony Brook, NY): With an interdisciplinary career blending classical training, sound/audio data research, and innovative computational arts education, Margaret Anne Schedel transcends the boundaries of disparate fields to produce integrated work at the nexus of computation and the arts. She has a diverse creative output, with works spanning interactive multimedia operas, virtual reality experiences, sound art, video games, and compositions for a wide variety of classical instruments and custom controllers. She is internationally recognized for the creation and performance of ferociously interactive media and for her work in the sonification of data. As an Associate Professor of Music with a core appointment in the Institute for Advanced Computational Science at Stony Brook University, she serves as the co-director of computer music and as Chair of Art, while also teaching computer music composition at the Peabody Institute of the Johns Hopkins University. She is the co-founder of www.arts.codes, a platform and artist collective celebrating art with computational underpinnings.
Horizon
Mojtaba Heydari, Frank Cwitkowitz
About the Work
“Horizon” is one of the songs from “Space Monkey”, the first album of the newly formed rock and roll band “Space Monkeys”, consisting of Mojtaba Heydari and Frank Cwitkowitz, who are both members of the ISMIR community and Ph.D. students in ECE at the Audio Information Retrieval (AIR) Lab, University of Rochester, NY.
The album’s storyline is about a group of monkeys traveling into space to find a better place to live, but during their journey many things happen. The name Space Monkeys is a metaphor for modern humans who try to climb up and discover the universe. The song “Horizon” covers the part of the storyline in which they encounter the event horizon of a black hole and …
About the Author(s)
Mojtaba (Moji) Heydari is a third-year Ph.D. student in ECE at the University of Rochester, NY. He received a Master’s in ECE from the same university and another Master’s in audio engineering from the Iran Broadcasting University. He loves music, composes and produces music, and plays a variety of musical instruments. He is also a gym enthusiast who trains often. His research interests include human–computer interaction and musician robots, AR/VR, music information retrieval, and deep learning.
Frank Cwitkowitz is a third-year Ph.D. student in ECE at the University of Rochester, NY. He received his Bachelor’s and Master’s degrees in Computer Engineering from the Rochester Institute of Technology (RIT) in 2019. His interests span music information retrieval and machine learning, with a main research focus on the problem of automatic music transcription. He plans to explore musical AR/VR applications, musical source separation, and music generation in the future. In his spare time, he practices guitar, composes and records music, trains in Kyokushin Karate and Brazilian Jiu-Jitsu, exercises, and reads a lot.