In a previous post, I briefly explained that AI is incapable of being creative: its entire functioning is driven by outside input, and it lacks the emotion, flaws, and drives of living creatures, which are what truly allow one to be considered creative. The framework of AI neural networks – gathering references to cross-analyse, storing the themes that recur across those references, generating results based on that analysis – is not suited to creativity. It does, however, allow both the system and its users to create, so long as the creation stays within the rules of its algorithm. And because that algorithm depends on outside input in order to create anything, AI can be considered the ideal collaborator within the art space.
But what exactly would it look like if humans and AI created collaboratively? How could this happen in a creative field, where AI's lack of creativity should be a weakness? And how, more specifically, in musical spaces?
In a sense, it can be rather difficult. While AI is not a living entity, it isn't truly a tool in the traditional sense either, because of its generative nature and inherent biases. Treating it as a tool – interacting with it and expecting the same reaction every time – can only lead to disappointment. Yet because it lives in digital spaces, musical users will treat it in the ways they have already learned: as though it were MIDI, an instrument, or a DAW, where they are completely in control and every creation is in their hands. This makes it hard to imagine how humans and AI can work together, since AI won't behave in the ways users expect.
So rather than treating AI like the tool its medium implies it to be, it seems better to consider AI its own independent performer or creator. This way, creators will expect AI to produce results that carry biases and randomness separate from their own, and we can start to imagine the possible ways humans and AI can work together in the musical space.
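To make that distinction concrete, here is a minimal, hypothetical Python sketch (the function names and the note-variation rule are illustrative assumptions, not any real AI system): a tool maps the same input to the same output every time, while a generative "performer" returns a different result on each call.

```python
import random

# A traditional tool: the same input always yields the same output.
def transpose(notes, semitones):
    return [n + semitones for n in notes]

# A generative "performer" (hypothetical): the same prompt yields a
# different result each call, because sampling injects randomness
# the user does not fully control.
def generative_performer(seed_notes, variation=2):
    return [n + random.randint(-variation, variation) for n in seed_notes]

motif = [60, 62, 64, 65]            # C, D, E, F as MIDI note numbers
print(transpose(motif, 7))          # always [67, 69, 71, 72]
print(generative_performer(motif))  # a new variation every run
print(generative_performer(motif))  # and another one
```

A user who approaches the second function expecting the first will be frustrated; a user who approaches it as a collaborator with its own tendencies can work with the variation instead of against it.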
With this in mind, we can move on to the next step and answer the question of how human–AI collaboration might work in a musical sense – one example being the gamification of the live music performance/composition relationship in video games like PANORAMICAL.
As video games are user-based experiences that rely on playing with learned behaviours, the genre allows for a more immersive experience through functions and game mechanics that react to the user. Such features matter because the more feedback an object gives users, the more rewarded they feel for interacting with it; and the more rewarded users feel, the more they want to 'play' in order to earn the same reaction or discover new ones. Combine this with generative art concepts – where a set of rules (an algorithm) determines how art can be created – and apply it to music, and generative sound-based video games are the wonderful experience that can emerge.
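As a toy illustration of what "a set of rules applied to music" can mean (a generic sketch of the generative-art concept, not how PANORAMICAL itself works), a tiny rule set can turn random choices into melodies:

```python
import random

# A toy generative-music rule set: a melody is built by a constrained
# random walk over the C major scale. The rules (the "algorithm") bound
# what can be created; chance decides what actually is.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, as MIDI note numbers

def generate_melody(length=8, max_step=2):
    position = random.randrange(len(SCALE))
    melody = [SCALE[position]]
    for _ in range(length - 1):
        # Rule: each note may move at most max_step scale degrees,
        # and must stay within the scale.
        step = random.randint(-max_step, max_step)
        position = min(max(position + step, 0), len(SCALE) - 1)
        melody.append(SCALE[position])
    return melody

print(generate_melody())  # a different rule-abiding melody every run
```

The player's 'play' amounts to nudging parameters like `length` and `max_step` and listening to how the output responds: the feedback loop described above, expressed in code.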
Although the songs are temporary unless recorded, because PANORAMICAL makes play and exploration its main mode of composition and performance, the entire musical process arguably becomes more accessible to users without music theory or playing knowledge – knowledge that has contributed to music becoming a gatekept community. The play and experimentation that video games afford can help remove these barriers and, overall, make for a wonderful time – allowing a collaboration between conductor/composer and performer to create a temporal musical experience.