This module builds on Audio for Games I by introducing students to procedural and algorithmic methods for sound and music generation. Good practice in editing and tagging pre-recorded sound assets, so that they respond seamlessly to player input alongside procedural audio systems, will also be explored. Students will work with audio middleware, as well as game engines, with a focus on their integration through a sample project provided at the start of the semester. Finally, students will explore further possibilities for game event and parameter control through scripting.
The aim of this module is to provide the student with an applied knowledge and understanding of how audio middleware is integrated into a game engine, and of how, through scripting, advanced parameter control and procedural audio can be implemented.
By the end of this module the student should be able to:
1. Describe, discuss and analyse a range of procedural and algorithmic approaches that support the real-time adaptive playback of both sound effects and music in video games.
2. Apply suitable methods of analysis to critically analyse the technical and creative implementation of sound, music, dialogue and associated processes within the context of a moving picture soundtrack.
3. Integrate audio middleware into a game engine.
4. Apply advanced event and parameter management through scripting.
5. Reflect critically on their approach to work and on the quality of the work itself.
We will compare and contrast a number of different middleware solutions, including UDK, Wwise and FMOD, exploring the functionality and options they provide for creating immersive and adaptive game audio.
We will explore integration possibilities between the selected middleware and some of the most popular game engines, such as Unity and Unreal. Some of this work will require a basic understanding of coding, so time will be spent familiarising ourselves with event management and parameter control through scripting.
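The kind of event management and parameter control described above can be sketched in a few lines of engine-agnostic code. The example below is a minimal, hypothetical sketch (all class and parameter names are invented for illustration); real middleware such as Wwise or FMOD exposes far richer event and parameter APIs, but the underlying pattern of posting named events and reading game-driven parameters is similar.

```python
class AudioEventSystem:
    """Toy sketch of middleware-style events and parameters (hypothetical names)."""

    def __init__(self):
        self._events = {}      # event name -> list of callbacks
        self._parameters = {}  # parameter name -> float value

    def register(self, event_name, callback):
        self._events.setdefault(event_name, []).append(callback)

    def set_parameter(self, name, value):
        self._parameters[name] = value

    def get_parameter(self, name, default=0.0):
        return self._parameters.get(name, default)

    def post(self, event_name):
        # Fire every callback registered against this event
        for cb in self._events.get(event_name, []):
            cb(self)


# Example: a footstep event whose playback varies with a "surface_wetness"
# parameter driven by the game state
played = []

def play_footstep(system):
    wetness = system.get_parameter("surface_wetness")
    variant = "splash" if wetness > 0.5 else "thud"
    played.append(variant)

system = AudioEventSystem()
system.register("player_footstep", play_footstep)
system.set_parameter("surface_wetness", 0.8)
system.post("player_footstep")
```

In game scripting, the `post` call would typically be wired to a gameplay callback (a collision, an animation notify), while parameters are updated each frame from game state.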
We will explore the different ways that sound and music can be triggered and/or driven by game events and the different ways that sound and music can either lead or react to player input.
Sound localisation cues can be set up and handled directly within most middleware engines. We will discuss how spatial placement and reverberation help to create and characterise a sense of believable game space, and how this can be achieved.
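To make the idea of localisation cues concrete, the sketch below computes two of the simplest cues, distance attenuation and stereo pan, from 2D listener and source positions. This is a conceptual illustration only (the function name and the inverse-distance rolloff model are assumptions for the example); middleware engines provide configurable attenuation curves, cone angles and full 3D spatialisation.

```python
import math

def localisation_cues(listener_pos, source_pos, min_dist=1.0, max_dist=50.0):
    """Return (gain, pan) for a source relative to a listener.

    gain: inverse-distance rolloff, clamped between min_dist and max_dist.
    pan:  -1.0 is full left, +1.0 is full right, derived from the
          horizontal offset between source and listener.
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dy)

    # Inverse-distance rolloff: full volume inside min_dist,
    # no further attenuation change beyond max_dist
    clamped = max(min_dist, min(dist, max_dist))
    gain = min_dist / clamped

    # Pan from the left/right offset, clamped to the stereo field
    pan = 0.0 if dist == 0 else max(-1.0, min(1.0, dx / clamped))
    return gain, pan
```

A source two units to the listener's right, for instance, would come back at half gain and fully panned right under this simple model.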
Analysing the use of sound and music in a computer game requires a particular set of analytical tools and an appreciation of context. Here, we explore these notions and develop a framework for analysing interactive game-based audio.
We will explore project set-up and good practice in managing the implementation and testing of sound assets.
7 Professional practice
We will explore the full implementation cycle of sound assets, and look at what’s involved in getting multiple layers of sound to function correctly in response to player input in a game.
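One common way of getting multiple layers of sound to respond to player input is vertical layering, where stacked music or ambience layers fade in and out against a single game-driven intensity parameter. The sketch below is a minimal, hypothetical illustration of that idea (the function name and the 0–1 intensity bands are invented for the example); middleware typically lets designers draw these fade curves directly.

```python
def layer_gains(intensity, layers):
    """Map a 0..1 game 'intensity' value to a gain per layer.

    layers: list of (fade_start, fade_end) tuples, one per stacked layer.
    Each layer is silent below fade_start, at full gain above fade_end,
    and fades linearly in between.
    """
    gains = []
    for start, end in layers:
        if intensity <= start:
            g = 0.0
        elif intensity >= end:
            g = 1.0
        else:
            g = (intensity - start) / (end - start)
        gains.append(round(g, 3))
    return gains


# Three layers: base ambience, percussion, full combat stem
bands = [(0.0, 0.2), (0.3, 0.7), (0.8, 1.0)]
mid_combat = layer_gains(0.5, bands)
```

At mid intensity the base layer is fully up, the percussion layer is halfway through its fade, and the combat stem is still silent.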
8 Professional practice
File handling is often overlooked, yet it is fundamental to the efficient use of sound in a game. Understanding differences in file formats and data encoding, and the artefacts they can introduce, can mean the difference between a soundtrack that behaves as it should and hours of troubleshooting.
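As a small illustration of the format details that matter here, the sketch below writes a minimal WAV file and reads back its sample rate, bit depth and channel count using only Python's standard-library `wave` module. The file name and helper functions are invented for the example; the point is that these three properties are exactly the ones whose mismatches most often cause playback problems in an engine.

```python
import struct
import wave

def describe_wav(path):
    """Report the format details that most often cause trouble in-engine."""
    with wave.open(path, "rb") as w:
        return {
            "channels": w.getnchannels(),
            "sample_rate": w.getframerate(),
            "bit_depth": w.getsampwidth() * 8,
            "frames": w.getnframes(),
        }

def write_silent_wav(path, sample_rate=44100, n_frames=100):
    """Write a short silent 16-bit mono file so the example is runnable."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 2 bytes per sample = 16-bit
        w.setframerate(sample_rate)
        w.writeframes(struct.pack("<" + "h" * n_frames, *([0] * n_frames)))

write_silent_wav("example.wav")
info = describe_wav("example.wav")
```

A pre-import check of this kind across a whole asset folder catches, say, a stray 48 kHz file in a 44.1 kHz project before it becomes an audible pitch or timing artefact.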
9 Professional practice
Testing is a vital part of implementation. Here we explore what methods are commonly used to test audio implementation, and investigate some of the common problems that affect game audio, and what solutions and/or workarounds exist.
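Some of this testing can be automated. The sketch below shows one hypothetical shape such a check might take (the function name, thresholds and fault labels are all assumptions for the example): given a decoded float sample buffer, it flags three faults that commonly affect game audio — a sample-rate mismatch, clipping, and an accidentally silent file.

```python
def check_asset(samples, sample_rate, expected_rate=44100,
                clip_threshold=0.999, silence_threshold=1e-4):
    """Run basic automated checks on a decoded float sample buffer.

    Returns a list of fault labels; an empty list means the asset passed.
    """
    problems = []
    if sample_rate != expected_rate:
        problems.append("sample_rate_mismatch")
    if any(abs(s) >= clip_threshold for s in samples):
        problems.append("clipping")
    if all(abs(s) < silence_threshold for s in samples):
        problems.append("silent")
    return problems
```

Run over every asset at build time, a check like this turns "the explosion sounds wrong" into a named fault on a named file before a tester ever hears it.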
10 Reflection and evaluation
At the end of the process, we will explore what it means for a game soundtrack to be ‘good’, and will think about the critical frameworks that we might use to make statements about the quality of sounds and music and their application in games.
Statement on Teaching, Learning and Assessment
This module explores the technical implementation of sound and music in computer games. A significant part of this will be integrating middleware into a game engine, as well as becoming familiar with scripting events and parameters. We begin by looking at different approaches to implementing procedural, interactive and adaptive soundtracks and the middleware that supports these approaches. We continue by examining the relationship between the game mechanics and interface and the soundtrack, using this as a framework around which to implement the set of sound assets created in AUD301. We conclude by considering the criteria by which we might judge a game soundtrack, and use these to draw conclusions about the effectiveness of our own implementations.

Delivery will be focused around one 1-hour lecture and one 2-hour tutorial each week, in which students will gain experience of sound design and content creation. Students will be set additional development exercises for completion outside of scheduled class time. For assessment, students will be set the task of implementing, testing and evaluating a complete set of sound assets for a supplied computer game demo.
Teaching and Learning Work Loads
|Supervised Practical Activity|0|
|Unsupervised Practical Activity|20|
Credit Value – The total value of SCQF credits for the module. 20 credits are the equivalent of 10 ECTS credits. A full-time student should normally register for 60 SCQF credits per semester.
We make every effort to ensure that the information on our website is accurate, but it is possible that some changes may occur prior to the academic year of entry. The modules listed in this catalogue are offered subject to availability during academic year 2017/18, and may be subject to change for future years.