Right then, a ‘We Are Alive’ update! This is not going to be a short post, but then again - it has been a lengthy wait. In the past several months I have not only changed jobs and made the new website you are currently enjoying, but also researched different possibilities for implementing audio in ‘We Are Alive’. I could have gone with the default option that comes with LibGDX, but I decided it would make sense to invest some time in understanding what other options are out there and how well what they offer matches what I need. Well, and also: how can I know exactly what I need if I don’t know what’s possible, right? So these are the main tools I looked at:
FMOD seems to be one of the go-to sound engines for video game developers. Here are some video games developed with FMOD. It is a pretty powerful tool for playing and mixing sounds on a wide range of platforms. It also helps that it’s free for non-commercial and educational use, and also free if you’re an indie studio with a budget of less than $100k working on your first title. If it’s not your first title, it’s still laughably cheap ($500).
The main issue for me was that it’s written in C++, while ‘We Are Alive’ is being written in Java. Naturally that’s not the end of the world - there are a few options available that have been around for a while.
There’s NativeFmod by Jouvieje, which I think is distributed under the LGPL license. My main issue with NativeFmod was that it was last updated five years ago, and I wasn’t convinced I could easily get the support I might need for it.
There’s also fmod-jni, a cross-platform Java JNI wrapper distributed under the Apache 2 License. It is ‘experimental and incomplete’, but it is still getting updated. The project has a pretty good readme and is being developed for a game. I’ve tried setting it up and playing around with it, but in the end I felt that using a project that only has one contributor might not be the best decision.
After a bit more searching around I decided to try something other than FMOD.
PureData is something altogether different. It is a visual programming language made specifically to create interactive computer music. It’s an open-source project released under the BSD license and runs on a wide range of OSs. PureData is what you use if you want to have generative music.
The two main Java implementations of PureData are pdj and libpd. There are lots of written tutorials, YouTube tutorials and introductions, and interactive tutorials (these come with the download) for PureData. There is also a sizeable number of examples of generative music made with it kicking around.
It’s definitely something that I would love to dive into as a separate project, but making it a part of ‘We Are Alive’ felt like massive overkill.
OpenAL (Open Audio Library), OpenGL’s cousin. It’s an audio API and is really good at working with 3D audio. Years ago it used to be open source; these days it is not. It supports a wide range of OSs and has been used in a sizable list of games. It is written in C, but when did that ever stop anyone?
JOAL is a Java implementation of OpenAL, which I attempted to use. It comes with a handy set of tutorials, originally written for OpenAL and adapted to work with JOAL. OpenAL also comes with a pretty good, and also massive, Programmer’s Guide. It is a powerful tool, even though its method names are a nightmare to understand without a manual - but I guess that’s to be expected from a piece of software that lets you represent fancy stuff like the Doppler effect relatively painlessly.
After spending some time going through the tutorials and the manual, I was convinced that this was the option I was going to settle on - until something caught my eye. I was reading about OpenAL one day and came across a phrase that went something like: ‘LWJGL uses OpenAL’. LWJGL, by the way, stands for ‘Lightweight Java Game Library’, an open-source Java game development library. Notably, Minecraft was apparently written using LWJGL. But that’s not the point.
The point is - I’m using LibGDX to develop my game. LibGDX is cool because it provides a bunch of neat extensions (an AI framework, a physics library, etc.), and also because it can compile your code to run on Android, on iOS, in an HTML5 page, or on the desktop. For the desktop it uses LWJGL. At some point in development I decided that I’ll be making ‘We Are Alive’ for the desktop only. So by creating my own wrappers for the few bits and pieces of JOAL that I would actually need, I would literally be reinventing a wheel that already kinda came in the package.
And that is how I came back to LibGDX. By that time I knew what sort of functionality I wanted and, turns out, LibGDX could offer it to me. Here is how the audio in ‘We Are Alive’ works:
Sounds are split into two categories: environment sounds and character sounds.
Unimaginatively, environment sounds live in the environment. These are the sounds that enemies, elevators, travellators and other such things make. I decided to work with the assumption that the player hears everything the main character hears, from the position of the main character, so sounds coming from the environment are played relative to the player. Environment sounds have pan and volume, and also a hearing distance. As the source of a sound gets closer to the character, the sound gets both louder and more centered.
Here’s how it’s done. The sounds for each non-player character originally live in the respective character class. When a level gets created, all the NPCs pass their sounds to the EnvironmentSoundController. On each run of the game loop the player character passes its position to the controller, and the controller decides which sounds to play and what volume, pitch and pan values to use.
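As a rough sketch, the registration and per-frame filtering could look something like the following plain-Java snippet. The class and method names (EnvironmentSound, EnvironmentSoundController, audibleFrom) are my own illustration of the flow described above, not the game's actual code or LibGDX API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a sound an NPC hands over to the controller.
class EnvironmentSound {
    final float x, y;            // world position of the source
    final float hearingDistance; // beyond this the sound is inaudible

    EnvironmentSound(float x, float y, float hearingDistance) {
        this.x = x;
        this.y = y;
        this.hearingDistance = hearingDistance;
    }
}

class EnvironmentSoundController {
    private final List<EnvironmentSound> sounds = new ArrayList<>();

    // NPCs pass their sounds over when the level gets created.
    void register(EnvironmentSound sound) {
        sounds.add(sound);
    }

    // Called once per game-loop iteration with the player's position;
    // returns the sounds close enough to be heard this frame.
    List<EnvironmentSound> audibleFrom(float playerX, float playerY) {
        List<EnvironmentSound> audible = new ArrayList<>();
        for (EnvironmentSound s : sounds) {
            float dx = s.x - playerX;
            float dy = s.y - playerY;
            if (Math.sqrt(dx * dx + dy * dy) <= s.hearingDistance) {
                audible.add(s);
            }
        }
        return audible;
    }
}
```

In a real game loop the controller would then set volume, pitch and pan on each audible sound before playing it; here it just does the distance cut-off.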
The volume is calculated from the length of the vector from the character to the sound source, measured as a percentage of the overall hearing distance. That way the loudness falls off equally in every direction from the source. Pan, however, is calculated only from the horizontal distance to the source, as a percentage of the overall pan distance, because with the current setup a sound can only be panned left, right or center - not up or down.
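The two formulas above could be sketched like this. Note the linear falloff is my reading of the 'percentage' description, and hearingDistance and panDistance are per-sound tuning values I'm assuming, not anything from LibGDX:

```java
// Pure-math sketch of the volume and pan calculations described above.
class SpatialAudioMath {
    // Volume falls off linearly with straight-line distance, so loudness
    // is distributed equally in every direction from the source.
    static float volume(float dx, float dy, float hearingDistance) {
        float distance = (float) Math.sqrt(dx * dx + dy * dy);
        return Math.max(0f, 1f - distance / hearingDistance);
    }

    // Pan uses only the horizontal offset:
    // -1 = fully left, 0 = centered, 1 = fully right.
    static float pan(float dx, float panDistance) {
        return Math.max(-1f, Math.min(1f, dx / panDistance));
    }
}
```

For example, a source 100 units away with a hearing distance of 200 would play at half volume, regardless of direction, while its pan depends only on how far left or right of the character it sits.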
Character sounds are the sounds a character makes (what did you expect?). Programmatically, they live inside the character class. They include opening doors, collecting items, jumping, walking and falling. These sounds always happen in close proximity to the character, so they are always centered and, currently with a single exception, always played at 100% volume. The exception is the sound of collision with the ground: it felt more natural that the shorter the character’s fall, the quieter the sound. It seems to work well.
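The landing-sound exception might be implemented along these lines - a minimal sketch assuming a linear scale capped at full volume, with maxFallDistance as a hypothetical tuning value:

```java
// Illustrative sketch: the landing sound's volume scales with how far
// the character fell, capped at 100% past some tuned maximum distance.
class FallSound {
    static float landingVolume(float fallDistance, float maxFallDistance) {
        return Math.min(1f, fallDistance / maxFallDistance);
    }
}
```

So a short hop off a ledge lands quietly, while anything at or beyond the maximum distance plays at full volume.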
‘We Are Alive’ has three types of music: threshold-agnostic music, music without transitions, and music with transitions.
Threshold-agnostic is a fancy way of saying that the music keeps playing at the same volume regardless of what the player is doing, as long as they are in the same room. When the player leaves the room, the music stops. I honestly tried coming up with a better name, but I had some red wine in me and this one kind of stuck.
Music without transitions is different. A room can have several tracks defined. When the player reaches a certain threshold, the music assigned to that specific bit of the room stops and the track assigned to the bit of the room beyond the threshold starts. Immediately.
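A minimal sketch of the hard cut, assuming each threshold is an x-coordinate inside the room and tracks are identified by name - all of these names are illustrative, not the game's actual code:

```java
// Illustrative sketch: a single threshold splitting a room into two
// bits, each with its own assigned track. The switch is immediate.
class ThresholdMusic {
    final float threshold;     // x-coordinate where the music changes
    final String trackBefore;  // track for the bit of the room before it
    final String trackAfter;   // track for the bit beyond it

    ThresholdMusic(float threshold, String trackBefore, String trackAfter) {
        this.threshold = threshold;
        this.trackBefore = trackBefore;
        this.trackAfter = trackAfter;
    }

    // The track that should be playing for the player's current position;
    // the old one stops and the new one starts the frame this answer changes.
    String trackFor(float playerX) {
        return playerX < threshold ? trackBefore : trackAfter;
    }
}
```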
Music with transitions is a variation of the above. The only difference is that the previous track does not end abruptly, but instead slowly fades out while the new track slowly fades in.
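The crossfade could be driven by elapsed time since the threshold was crossed. A sketch, assuming a linear fade curve (the duration and the curve shape are my assumptions, not details from the game):

```java
// Illustrative sketch of a linear crossfade: as t runs from 0 to
// duration, the outgoing track fades from 1 to 0 while the incoming
// track fades from 0 to 1.
class Crossfade {
    // Returns {outgoingVolume, incomingVolume} at time t into the fade.
    static float[] volumes(float t, float duration) {
        float progress = Math.max(0f, Math.min(1f, t / duration));
        return new float[] { 1f - progress, progress };
    }
}
```

Each game-loop iteration would call this with the time elapsed since the threshold was crossed and apply the two volumes to the two playing tracks; once progress hits 1, the outgoing track can be stopped entirely.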
Note that all the sounds and tracks you’re hearing in the video are temporary placeholders.
That about sums it up. That’s the audio side of the preparations done. Next step - code manipulation!