A Way Out (2018) – Powered by the Elias Music Engine
Adaptive Music Engines
Designing music and sound for film and video games has always been a difficult task, and it is only growing harder. In video games in particular, composers and sound designers find it increasingly challenging to predict the outcomes players might experience while playing. Many argue that to push things forward, ‘game music should be dynamic, that is, not designed with a fixed timeline in mind but aimed towards a large set of possible player actions, reactions and in-game situations’ (López Ibáñez, Álvarez and Peinado, 2018, p. 360).
Logically, then, it makes sense for music engines to be designed to be what could be described as “automatically adaptive”, with the music coexisting more closely with the player’s world and the sound resonating with his or her circumstances. As touched on in my previous blog post, this would shift the composer’s task towards implementation, and towards designing more sophisticated software systems to achieve these kinds of critical, immersive results.
Towards an Emotion-Driven Adaptive System for Video Game Music.
Continuing on from my brief but much-enjoyed research into music implementation through the mind of Jonathan Meyer, and to prepare myself for the possibility of exploring this in my major project, I have chosen to look into a paper by Manuel López Ibáñez, Nahum Álvarez and Federico Peinado on interactive digital storytelling and affective computing in the player experience, called Towards an Emotion-Driven Adaptive System for Video Game Music.
Here, they take influence from Paul Ekman’s basic emotions (anger, disgust, fear, joy and sadness) and propose a new ‘Emotion-Driven Adaptive System’ that could be considered revolutionary in the world of game creation. It aims to use ‘pre-designed fragments of music (in the form of short sound tracks)’ in conjunction ‘with in-game dialogues that allow the player to confront different characters and emotional situations, choosing from a variety of possible emotional responses (…) to create audio atmospheres which adapt to the emotions arisen during a gaming session’ (2018, pp. 363–364).
Below is a simple visual representation showing the internal structure for how this ‘Emotion Driven Adaptive Audio Engine’ would work:
Towards an Emotion-Driven Adaptive System for Video Game Music – Diagram
From what I could gather from their pitch, this is a step-by-step breakdown of their proposed adaptive game audio engine (colour-coded to the diagram above):
Dialogue system:
Receives messages from a non-player character (NPC) (text output).
Chooses answers to those messages (input choice) from a series of options.
Takes information from a database of tree-form short dialogues.
Allows the player to choose from 3 different answers each time a text output is shown on screen, representing the expression of a feeling.
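The dialogue component above can be sketched as a small tree structure. This is only my own illustrative reading of the paper; the class and field names (DialogueNode, Answer) are hypothetical, not taken from the authors’ implementation:

```python
# A minimal sketch of the tree-form dialogue system described above.
# All names here are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str                                 # what the player can say
    next_node: "DialogueNode | None" = None   # tree-form: each answer branches

@dataclass
class DialogueNode:
    npc_message: str                          # text output shown on screen
    answers: list = field(default_factory=list)  # always 3 choices

def present(node: DialogueNode) -> None:
    """Show the NPC message and the three possible emotional responses."""
    print(node.npc_message)
    for i, ans in enumerate(node.answers, 1):
        print(f"  {i}. {ans.text}")

root = DialogueNode(
    "You dare return here after what you did?",
    [Answer("I had no choice."), Answer("And I'd do it again."), Answer("I'm sorry.")],
)
present(root)
```

Each answer would carry its own emotional weighting (described next), so choosing between the three responses is what feeds the music system.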
Emotional Weighting (& Storage):
Every possible answer has an “emotional weighting”.
Each weighting contains a real value in the range 0 to 1 for each of Ekman’s basic emotions (e.g. receiving a declaration of war will have an anger value of 1, while watching a butcher work will have a disgust value of 0.5).
The “emotional weights” of characters’ messages and player choices are stored, so the system can remember the emotional context of a playing session.
The stored basic emotion variables with a higher value in the last seconds of gameplay are selected.
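A hedged sketch of how this store-and-select step might look in code. The names, the 5-second window and the peak-based selection are my own assumptions; the paper only specifies that the highest-valued recent emotions are selected:

```python
# Illustrative sketch of the "emotional weighting" store: each message or
# player choice carries per-emotion weights in [0, 1]; the system keeps a
# timestamped history and picks the dominant emotions of the last seconds.
import time
from collections import defaultdict
from typing import Optional

EMOTIONS = ("anger", "disgust", "fear", "joy", "sadness")

class EmotionStore:
    def __init__(self, window: float = 5.0):
        self.window = window          # seconds of gameplay remembered (assumed)
        self.events = []              # (timestamp, {emotion: weight}) pairs

    def record(self, weights: dict, now: Optional[float] = None) -> None:
        """Store the emotional weights of a message or player choice."""
        self.events.append((time.time() if now is None else now, weights))

    def dominant(self, n: int = 3, now: Optional[float] = None) -> list:
        """Select the highest-valued emotions over the last `window` seconds."""
        now = time.time() if now is None else now
        peaks = defaultdict(float)
        for ts, weights in self.events:
            if now - ts <= self.window:
                for emotion, w in weights.items():
                    peaks[emotion] = max(peaks[emotion], w)
        return sorted(peaks, key=peaks.get, reverse=True)[:n]

store = EmotionStore()
store.record({"anger": 1.0}, now=0.0)    # e.g. receiving a declaration of war
store.record({"disgust": 0.5}, now=2.0)  # e.g. watching a butcher work
print(store.dominant(now=3.0))           # ['anger', 'disgust']
```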
Three Track Audio System:
Sometimes, emotions can overlap (e.g. anger and disgust generate loathing), which means at least 2 simultaneous tracks are needed to create such a complex atmosphere.
The audio system manages fragments of music every delta second.
Emotion variables are read and a new piece of music related to those feelings starts playing if necessary.
Resulting music should represent the selected basic emotions.
If no emotional answers are given, the previous loop will keep playing indefinitely.
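The update loop above might be sketched as follows. The track filenames, the cap of three simultaneous tracks and the update structure are assumptions on my part, not the paper’s code:

```python
# Sketch of the three-track update: every delta second the audio system reads
# the dominant emotion variables and (re)starts tracks when the mix changes,
# keeping the previous loop playing when no emotional answers were given.
TRACKS = {"anger": "anger_loop.ogg", "disgust": "disgust_loop.ogg",
          "fear": "fear_loop.ogg", "joy": "joy_loop.ogg",
          "sadness": "sadness_loop.ogg"}   # hypothetical filenames
MAX_SIMULTANEOUS = 3

class ThreeTrackSystem:
    def __init__(self):
        self.playing = []              # currently looping track files

    def update(self, emotions: list) -> None:
        """Called once per delta second with the selected dominant emotions."""
        if not emotions:               # no emotional answers: keep the loop
            return
        wanted = [TRACKS[e] for e in emotions[:MAX_SIMULTANEOUS]]
        if wanted != self.playing:     # only restart when the mix changes
            self.playing = wanted

system = ThreeTrackSystem()
system.update(["anger", "disgust"])    # overlapping emotions, e.g. loathing
print(system.playing)                  # ['anger_loop.ogg', 'disgust_loop.ogg']
```

The overlap rule is what motivates multiple tracks: anger plus disgust plays both loops at once to suggest loathing.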
Pre-designed Tracks:
All tracks are chosen from a database of pre-designed/composed fragments of music.
Sharing a tempo of 110 beats per minute.
All of them will be loops, with lengths ranging from 5 s to 15 s.
System is not designed to allow a sequence of quick changes in ambient music (less than 5 s each), due to the minimum duration of the loops used.
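The database constraints above are simple enough to express as a validation check; this is just my restatement of the paper’s numbers, with a hypothetical function name:

```python
# Constraints on the pre-designed fragment database: every fragment loops,
# shares a tempo of 110 BPM, and lasts between 5 and 15 seconds, which is
# why the system cannot switch ambience faster than every 5 seconds.
TEMPO_BPM = 110
MIN_LOOP_S, MAX_LOOP_S = 5.0, 15.0

def validate_fragment(tempo_bpm: float, length_s: float) -> bool:
    """Accept a fragment only if it fits the shared-tempo, loop-length rules."""
    return tempo_bpm == TEMPO_BPM and MIN_LOOP_S <= length_s <= MAX_LOOP_S

print(validate_fragment(110, 8.0))   # True: a valid 8-second loop
print(validate_fragment(110, 3.0))   # False: shorter than the 5 s minimum
```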
Audio Engine:
When a new track is selected and played, it adjusts to a fixed tempo grid, skipping up to a beat if necessary.
When a combination of two or more tracks is playing, they are mixed in the audio engine.
Intensity (volume) is established depending on the values of emotional weight, for a normalized intensity of 1. Primary emotions have a normalized (non-logarithmic) intensity of 0.5, followed by secondary (0.3) and tertiary (0.2) emotions.
The resulting ambient music plays through the game engine’s audio system. This makes it possible to spatialise the mix or to add an extra layer of effects.
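The mixing and tempo-grid rules can be sketched directly from the numbers given. The intensity values (0.5/0.3/0.2) and the 110 BPM grid come from the paper; the function names and the beat-snapping formula are my own illustration:

```python
# Sketch of the mixing rules: volumes come from fixed normalized intensities
# (primary 0.5, secondary 0.3, tertiary 0.2, summing to 1), and a newly
# started track snaps to the 110 BPM tempo grid, skipping up to one beat.
import math

TEMPO_BPM = 110
BEAT_S = 60.0 / TEMPO_BPM            # one beat at 110 BPM, roughly 0.545 s
INTENSITY = [0.5, 0.3, 0.2]          # primary, secondary, tertiary emotions

def mix_volumes(emotions: list) -> dict:
    """Map each selected emotion to its rank-based normalized volume."""
    return {e: INTENSITY[i] for i, e in enumerate(emotions[:3])}

def next_grid_start(now_s: float) -> float:
    """Delay a new track to the next beat boundary (skipping up to a beat)."""
    return math.ceil(now_s / BEAT_S) * BEAT_S

print(mix_volumes(["anger", "disgust"]))   # {'anger': 0.5, 'disgust': 0.3}
```

The output of this mix would then be handed to the game engine’s audio system for spatialisation and effects, as the paper describes.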
In reflection, I believe this system to be an effective, original and intriguing way of looking at how music can be used adaptively. The combination of emotions and music at the heart of this project attracted me immediately, and I will look to explore these ideas in my university FMP and in future projects with clients when we come to create a coding system of our own.
Elias Adaptive Music Engine
In addition to this research, and following my keen interest in this sophisticated type of adaptive music technology that can adjust audio parameters to a player’s actions or reactions, I searched for software that could achieve these results on a basic level in practice…
The Elias Adaptive Music Engine does just that. It explores the idea of adaptive music through easy, intuitive and interactive software design. Unsurprisingly, the ‘Elias Music Engine’ and its true adaptive game music principle have been used in several impressive video game titles over the last two years, including the upcoming console tactical adventure game Mutant Zero: Road to Eden (2019) and A Way Out (2018), the Hazelight Studios collaboration with Electronic Arts (EA), which was praised earlier this year for its original storyline, co-op gameplay and immersive audio experience.
Elias Logo – Bridging the gaps between game developers and music composers.
Although the Elias Studio engine for game composers isn’t specifically designed to react to something like player emotion, it contains the key characteristics needed to explore the functionality and interactivity of adaptive music in games.
It allows for horizontal and vertical slicing of audio, so you can weave layers upon layers of instruments and sounds into each track and bring them in and out at your leisure; this is a huge step in the right direction for adaptivity in music for linear gameplay.
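To make the idea of vertical slicing concrete, here is a generic sketch of layered playback. To be clear, this is not the Elias API, just an illustration of the principle, with hypothetical names and layers:

```python
# Not the Elias API: a generic sketch of "vertical" layering, where per-track
# instrument layers come in and out with a game intensity level while the
# underlying loop keeps playing.
class LayeredTrack:
    def __init__(self, layers: list):
        self.layers = layers          # ordered quietest-to-fullest

    def active_layers(self, level: int) -> list:
        """Vertical slicing: intensity level n plays the first n layers."""
        return self.layers[:max(0, min(level, len(self.layers)))]

track = LayeredTrack(["pads", "strings", "percussion", "brass"])
print(track.active_layers(2))         # a calm moment: ['pads', 'strings']
print(track.active_layers(4))         # full mix: all four instrument layers
```

Horizontal slicing would then be the complement: switching between whole sections of the track at musically sensible boundaries rather than stacking layers.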
Best of all, Elias Studio is free! For a while, at least: until you need to upgrade your parameters because you require more than 4 loop audio tracks and 8 layer levels at once. The full ‘Pro’ version is priced at a subscription rate of £107.29 per year:
Elias Adaptive Music – Elias Studio
There also seems to be quite a gap in the market for well-produced adaptive music packs in the ‘Elias Assets’ store; with tracks of low production value selling at fairly high rates, it would be a good start for me to get stuck in there, write some cinematic loops for adaptive gameplay AND get paid for doing so.
True Adaptive Music with Elias Music Engine – Video
Sources:
Alessandro Fillari, (2017), E3 2017: EA Reveals A Way Out, New Co-Op Action Game From Brothers Dev [ONLINE]. Available at: <https://www.gamespot.com/articles/e3-2017-ea-reveals-a-way-out-new-co-op-action-game/1100-6450697/> [Accessed 11 October 2018].
Asutay, E., Västfjäll, D., Tajadura-Jiménez, A., Genell, A., Bergman, P., Kleiner, M.: Emoacoustics: a study of the psychoacoustical and psychological dimensions of emotional sound design. AES J. Audio Eng. Soc. 60(1–2), 21–28 (2012)
Bottcher, N., Martinez, H.P., Serafin, S.: Procedural audio in computer games using motion controllers: an evaluation on the effect and perception. Int. J. Comput. Games Technol. 2013, article no. 6 (2013)
Collins, K.: An introduction to procedural music in video games. Contemp. Music Rev. 28, 5–15 (2009)
Ekman, I.: Psychologically motivated techniques for emotional sound in computer games. In: AudioMostly 2008, pp. 20–26 (2008)
Ekman, P.: An argument for basic emotions. Cogn. Emotion 6(3), 169–200 (1992)
Eladhari, M., Nieuwdorp, R., Fridenfalk, R.: The soundtrack of your mind: mind music – adaptive audio for game characters. In: Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology, New York, p. 54 (2006)
Elias. (2018). True Adaptive Game Music with Elias Music Engine. [Online Video]. 28 February 2018. Available from: https://www.youtube.com/watch?time_continue=110&v=6sUumzdAIRY. [Accessed: 11 October 2018].
Jewell, M.O., Nixon, M.S., Prugel-Bennett, A.: CBS: a concept-based sequencer for soundtrack composition. In: Proceedings – 3rd International Conference on WEB Delivering of Music, WEDELMUSIC 2003, pp. 105–108 (2003)
Jørgensen, K.: What are these grunts and growls over there? Computer game audio and player action. Ph.D. thesis (2007)
López Ibáñez, M., Álvarez, N., Peinado, F.: A study on an efficient spatialisation technique for near-field sound in video games. In: Proceedings of the 4th Congreso de la Sociedad Española para las Ciencias del Videojuego, Barcelona (2017)
López Ibáñez, M., Álvarez, N., Peinado, F., 2018. Towards an Emotion-Driven Adaptive System for Video Game Music, in: Cheok, A.D., Inami, M., Romão, T. (Eds.), Advances in Computer Entertainment Technology. Springer International Publishing, Cham, pp. 360–367. <https://doi.org/10.1007/978-3-319-76270-8_25>
Luhtala, M., Turunen, M., Hakulinen, J., Keskinen, T.: ‘AIE-studio’ – a pragmatist aesthetic approach for procedural sound design. In: Proceedings of the 8th Audio Mostly Conference, AM 2013, pp. 1–6 (2013).
Yanagisawa, H., Murakami, T., Noguchi, S., Ohtomi, K., Hosaka, R.: Quantification method of diverse kansei quality for emotional design: application of product sound design. In: ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 461–470 (2007).