VR Sound Design in Unity (Getting Started Guide)


Not long ago, I finished working on the sound design and music for Flood, a virtual reality performance project in development by Megaverse.

Working on the project was a fantastic opportunity to experiment with audio in Virtual Reality, which has its own unique challenges and quirks.

Realistic, believable audio is hugely important in VR. Or, at the very least, there’s more of an opportunity to create realistic audio in VR, as the benefits of Ambisonic and Spatialized Audio technologies are more noticeable in VR games than in games where you can’t freely move and rotate your head.

So in this article I’m going to show you step by step how to get started with Ambisonic and Spatial audio in Unity VR, as well as some of the more common pitfalls to avoid, from my experience working on the Flood VR project.


Ambisonic Audio

Ambisonic Audio, although not actually a new technology, has become much more important since the emergence of VR and 360° video.

If you’re not familiar with Ambisonic Audio, the best way to think of it is as kind of an audio Skybox.

In a game, a Skybox provides the illusion of a 3D sky and distant horizon that rotates with the player’s head movements, but that you can never get closer to or further away from, sitting behind the ‘real’ world geometry.

Ambisonic Audio works in a similar way.

Moving around the world won’t change the volume of the audio, but rotating your head does, or at least it changes the relative volume of the four (sometimes more) channels in the Ambisonic file, depending on which direction you’re looking.

The result is a much more realistic representation of surrounding sound, which works great for ambient soundscapes.

Anatomy of an Ambisonic File

One of the most common Ambisonic file formats, B-Format, sometimes called First Order Ambisonics, is made up of four channels.

In simple terms, the four channels represent three directional channels along the X, Y and Z axes, and an omnidirectional channel, W, which contains a recording of all of the sounds in the Ambisonic sphere.
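As a rough illustration, a single mono sound S arriving from a horizontal angle (azimuth, A) and a vertical angle (elevation, E) is encoded into the four channels roughly like this, using the traditional B-Format convention (newer conventions, such as AmbiX, order and scale the channels slightly differently):

    W = S × (1 / √2)
    X = S × cos(A) × cos(E)
    Y = S × sin(A) × cos(E)
    Z = S × sin(E)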

For a great description of Ambisonic Audio from an audio engineer’s perspective, there’s a fantastic post on Waves’ blog that explains it much better than I can.

How to get Ambisonic Audio Working in Unity

Actually using Ambisonic audio in Unity is fairly straightforward. Getting it working in the first place, however, can be a little unintuitive.

This is because using an Ambisonic Audio Clip in a Unity scene will cause it to completely bypass the standard audio pipeline. So, if you’re like me, and you didn’t check the documentation first, you may have tried to set up Ambisonic Audio, only to get silence.

Here’s what you need to do to get it working…

1. Install an Ambisonic Decoder Plugin

To use Ambisonic Audio you first need an Ambisonic decoder plugin installed. For this project I used Google’s Resonance Audio plugin, which is already included in Unity 2018 or, if you’re using Unity 2019, can be installed using the Package Manager.

How to set an Ambisonic Decoder Plugin in Unity

Once Resonance Audio is installed, remember to select it as your Decoder Plugin.

Once it’s installed, select Resonance Audio as the Ambisonic Decoder Plugin in Project Settings.

2. Mark which Audio Clips are Ambisonic in the Audio Import Settings

In the project window, select any Ambisonic audio clips and, in the import settings in the Inspector, check the Ambisonic checkbox and apply the settings.

Unity Ambisonic Import Settings

Mark which Audio Clips are Ambisonic in the Import Settings.
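This is easy enough to do by hand but, if you have a lot of clips to convert, the same setting can also be toggled from an Editor script via the Audio Importer. Here’s a minimal sketch (the menu path is just an example I’ve made up, and the script needs to live in an Editor folder):

    using UnityEngine;
    using UnityEditor;

    public static class MarkAsAmbisonic
    {
        // Marks every selected Audio Clip as Ambisonic, the same as
        // ticking the Ambisonic checkbox in the import settings.
        [MenuItem("Assets/Audio/Mark Selected Clips as Ambisonic")]
        private static void MarkSelectedClips()
        {
            foreach (Object clip in Selection.GetFiltered(typeof(AudioClip), SelectionMode.Assets))
            {
                string path = AssetDatabase.GetAssetPath(clip);
                AudioImporter importer = AssetImporter.GetAtPath(path) as AudioImporter;

                if (importer != null && !importer.ambisonic)
                {
                    importer.ambisonic = true;
                    importer.SaveAndReimport();
                }
            }
        }
    }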

Where to Find Free Ambisonic Audio Files

If you don’t have any Ambisonic audio, try Sonniss’ free GDC giveaway. As well as being a great resource for source audio, it also includes a B-Format Ambisonic audio file.

To save you the trouble of searching through, you’ll find the Ambisonic file in part 1 of the 2019 giveaway.

3. Create a Mixer Group with Resonance Audio Renderer Effect

To use Resonance Audio in Unity you must add an Audio Mixer Group with a Resonance Audio Renderer plugin enabled on it.

You’ll then need to route any Audio Sources playing Ambisonic sound to that Mixer Group, otherwise you won’t hear anything.

How to add a Resonance Audio Renderer Plugin in Unity

If you miss this step (like I did) you’ll get nothing but silence.

This is because Resonance Ambisonic audio (and Spatialized audio too) is processed differently from other audio in the scene, separately from the standard Unity audio pipeline.

Adding the Resonance Audio Renderer to an Audio Mixer Group reintroduces the Ambisonic Audio into that audio pipeline so that you can a) hear it, and b) apply effects to it.

Using Audio Mixers with Ambisonic Audio

Plugin effects and volume changes made on the same Mixer Group that the Resonance Audio Renderer is on won’t have any effect. To use Ambisonic Audio with a Mixer, you will need to route the Group with the Renderer on it to another Group, or Audio Mixer, and then apply any effects or volume changes on that Group, or later on in the chain.

4. Add an Audio Source to the Scene

Finally, with everything set up, add an Audio Source to the scene and set the Audio Clip field to your Ambisonic Clip. Remember to route the output of the Audio Source to the Mixer Group with the Resonance Audio Renderer set up on it.

To control the perceived direction of sounds in the Ambisonic audio file (the Azimuth), you can simply rotate the Audio Source object until sounds line up where you want them to be.

When working on Flood, we used a coastline Ambisonic Loop, with the sound of waves lapping against a shore, which I needed to line up with the direction of the sea in the Unity Scene.

Ambisonic Audio Source Settings

Many of the other Audio Source settings will still apply. For example, you can use controls like volume and pitch in the normal way, and even Spatial Blend to add a roll-off to the sound, however, you’ll generally get better results by leaving the Spatial Blend at 2D.
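All of this can be done in the Inspector but, if you’d rather set the Audio Source up from a script, a minimal sketch might look something like this (the clip, Mixer Group and azimuth values are placeholder examples, not taken from the Flood project):

    using UnityEngine;
    using UnityEngine.Audio;

    public class AmbisonicAmbience : MonoBehaviour
    {
        // Assumed references, assigned in the Inspector:
        public AudioClip ambisonicClip;        // a clip marked as Ambisonic in its import settings
        public AudioMixerGroup rendererGroup;  // the Mixer Group with the Resonance Audio Renderer on it
        public float azimuthOffset = 90f;      // rotate until the sounds line up with the scene

        void Start()
        {
            AudioSource source = gameObject.AddComponent<AudioSource>();

            source.clip = ambisonicClip;
            source.outputAudioMixerGroup = rendererGroup;  // without this you'll get silence
            source.loop = true;
            source.spatialBlend = 0f;                      // leave the blend at 2D for Ambisonic clips

            // Rotating the object changes the perceived direction (azimuth) of the Ambisonic image.
            transform.rotation = Quaternion.Euler(0f, azimuthOffset, 0f);

            source.Play();
        }
    }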

Editing Ambisonic Audio Files

Chances are, if you’re working with Ambisonic Audio, the files you have may not be ‘game-ready’.

It’s unlikely that the file you get will be loopable out of the box and, with at least four channels of audio, the file size is going to be much larger than normal, especially as commercial ambient recordings are also often longer than you will need them to be for a game.

This means that, just as I did, you’re probably going to need to do some editing.

I used Cubase to trim each file down to size and added a smooth crossfade to turn it into a seamless loop.

I then re-exported the project as a new Ambisonic Loop.

Making Ambisonics in Cubase

Cubase includes built-in plugins to work with and decode Ambisonic Audio.

If you don’t have Cubase, or a similar program, don’t worry, as you can also edit and re-export Ambisonic Audio files using Audacity, which is completely free.

To do so, simply open the Import / Export options in Audacity’s preferences and make sure Custom Mix is selected.

Audacity Export Settings for Ambisonic Audio

In Audacity, select Custom Mix in the Export Settings to save multi-channel files.

Dragging an Ambisonic file into Audacity will create four mono tracks. Enabling the Custom Mix setting will allow you to export a four-channel file out again.

Exporting Ambisonic Audio in Audacity

You can export four mono channels to create an Ambisonic audio file in Audacity.

For more on how to actually get the file looping, try my guide on how to loop any audio file in Audacity.

Spatial Audio

What is Spatial audio?

In Unity, Spatial audio more realistically replicates how we hear sound in an environment.

This is especially useful in Virtual Reality, where being able to freely rotate your head requires more accurate modelling of sounds in 3D space.

Whereas standard 3D audio in Unity uses attenuation, panning and doppler to create an impression of a sound’s location, Spatial Audio also uses early and late reflections, head-related transfer functions and occlusion to simulate not only where a sound is coming from, but also the shape of the space that it’s in.

To hear what this actually sounds like, the video below shows the effect of changing a room’s shape, in real time, to affect how the sound is heard.

As you can hear, the result is an accurately modelled audio space, great for creating a believable VR experience.

And while Spatial audio is not limited to Virtual Reality applications, the benefits are more noticeable in VR, given that it’s an almost exclusively first-person medium in which you have a full range of movement of your head, and the ears attached to it.

For more information, see Google’s detailed summary of the technologies being used for Resonance Audio.

How to Use Spatial Audio in Unity

1. Install a Spatial Audio Plugin

Just like with Ambisonic Audio, I used Google Resonance as the Unity Spatializer plugin, which is already installed in Unity 2018.

Unlike before, however, I also installed the Resonance Audio SDK, which adds extra functionality, such as audio directionality, which is useful for Audio Sources that sound different on one side than they do on the other (such as a guitar, or a vehicle), and the Resonance Audio Rooms feature (more on that later).

2. Set Which Audio Sources will be Spatialized

Unlike Ambisonic audio, which is decided on a per-Audio-Clip basis, Spatial audio is selected on the Audio Source itself, using a checkbox beneath the 3D settings.

Where to find the Spatial Audio Source Setting

Spatial Audio Source Settings

Most of the other settings on the Audio Source, such as volume and pitch, will continue to work as normal, with the exception of Spatial Blend, which you will want to set to fully 3D in most cases.

Setting the blend to 2D on a Spatial Audio Source works a little differently than it does on a normal Audio Source. It will render the sound in 2D, in that it won’t be affected by the 3D settings, but it will continue to appear to come from the direction of the Audio Source object. This means that the sound will appear to come from its 3D location, but won’t get quieter as you move away from it, just like with an Ambisonic Audio Clip.

The reason for this is to do with how Resonance Audio calculates Spatial audio in a scalable way…

It works by combining multiple Spatial Audio Sources into one Ambisonic audio image, which allows Resonance Audio to support large numbers of Spatialized Audio Sources without a significant processing overhead.
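Incidentally, the Spatialize checkbox maps to the spatialize property of the Audio Source, so the same settings can also be applied from a script. A minimal sketch, assuming a clip has already been assigned and the source routed correctly (routing is covered in the next step):

    using UnityEngine;

    public class SpatializedSource : MonoBehaviour
    {
        void Start()
        {
            AudioSource source = GetComponent<AudioSource>();

            source.spatialize = true;   // the same as ticking the Spatialize checkbox
            source.spatialBlend = 1f;   // fully 3D, which is what you'll want in most cases
            source.Play();
        }
    }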

3. Route the Audio Source to an Audio Mixer Group with a Resonance Audio Renderer

Just like with Ambisonic Audio, you must route Spatialized Audio Sources to an Audio Mixer Group with a Resonance Audio Renderer on it. If you don’t, you’ll still hear the audio, but it won’t be Spatialized in the scene.

If you already have one of these set up for Ambisonic Audio, I’d recommend using the same Mixer Group for Spatial Audio Sources as well. Although your results may vary, I experienced some unusual audio glitching when trying to use multiple Resonance Audio Renderers in Unity 2018.
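As a rough sketch, assuming a single shared Mixer Group reference (sharedRendererGroup here is just a placeholder name for your own reference to that Group), routing every spatialized Audio Source in the scene through it might look like this:

    using UnityEngine;
    using UnityEngine.Audio;

    public class RouteSpatialSources : MonoBehaviour
    {
        public AudioMixerGroup sharedRendererGroup;   // the single group with the Resonance Audio Renderer

        void Awake()
        {
            // Send every spatialized Audio Source through the same Renderer group,
            // rather than creating multiple Renderers.
            foreach (AudioSource source in FindObjectsOfType<AudioSource>())
            {
                if (source.spatialize)
                {
                    source.outputAudioMixerGroup = sharedRendererGroup;
                }
            }
        }
    }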

4. Use Resonance Audio Room objects to create spaces

Resonance Audio Rooms allow you to add realistic reverb effects to an area by adding a Resonance Audio Room Component to a Game Object.

You can adjust the size of the room, set which sides of the room have reflective surfaces and even choose what materials they’re made of.

Resonance Audio Room being Used in Unity

Using the Resonance Audio Room to create reflections on two sides.

When to use Resonance Audio Rooms

Normally, it’s good practice to only use room modelling indoors, as open areas don’t usually have many surfaces that are close enough for quiet and medium-volume sounds to bounce off of.

On Flood, however, the environment, a large open area surrounded by tall buildings on two sides, would generate reflections. The Resonance Audio Room Component allowed me to set which walls were reflective and which were not, so I simply matched the approximate size of the area, marked the rear and right walls as brick, the floor as water, and everything else as transparent.

This generated a convincing audio space with sounds that echo from far away but are dry when up close.
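This kind of setup is normally done in the Inspector, but the same configuration can also be applied from a script using the Resonance Audio Room Component from the SDK. Here’s a rough sketch of a similar setup; the field and enum names below are from the version of the Resonance Audio SDK I used, so they may differ in yours, and the size values are only placeholders:

    using UnityEngine;

    public class ShorelineRoomSetup : MonoBehaviour
    {
        void Start()
        {
            ResonanceAudioRoom room = gameObject.AddComponent<ResonanceAudioRoom>();

            // Match the approximate size of the open area (placeholder values).
            room.size = new Vector3(40f, 15f, 40f);

            // Only two walls and the floor are reflective; everything else is open.
            room.backWall = ResonanceAudioRoomManager.SurfaceMaterial.BrickBare;
            room.rightWall = ResonanceAudioRoomManager.SurfaceMaterial.BrickBare;
            room.floor = ResonanceAudioRoomManager.SurfaceMaterial.WaterOrIceSurface;
            room.frontWall = ResonanceAudioRoomManager.SurfaceMaterial.Transparent;
            room.leftWall = ResonanceAudioRoomManager.SurfaceMaterial.Transparent;
            room.ceiling = ResonanceAudioRoomManager.SurfaceMaterial.Transparent;
        }
    }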

Troubleshooting Resonance Audio Rooms

To get the most out of Resonance Audio Rooms, remember these tips:

  • Resonance Audio Room effects are applied depending on the position of the Audio Listener, not the Audio Source. This means that the reverb effect will be applied to all spatial sounds (even if they’re outside the bounds of the Audio Room) so long as the Audio Listener is inside the room.
  • In some circumstances, having multiple Audio Listeners in the scene (even if they’re disabled!) can affect the Audio Room Component. Whilst working on the Flood project we spent a significant amount of time trying to work out why reverb effects weren’t being applied inside the room, only to find that an inactive Audio Listener, outside of the bounds of the Audio Room, was to blame (the snippet after this list shows one way to track down stray listeners).
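If you run into the same issue, a quick way to audit the scene is to log every Audio Listener Unity knows about, including inactive ones. A rough sketch (note that Resources.FindObjectsOfTypeAll can also return objects from prefabs and assets, so expect a few extra entries):

    using UnityEngine;

    public class ListenerAudit : MonoBehaviour
    {
        void Start()
        {
            // Unlike FindObjectsOfType, this also includes listeners on inactive objects.
            foreach (AudioListener listener in Resources.FindObjectsOfTypeAll<AudioListener>())
            {
                Debug.Log("Audio Listener found on: " + listener.gameObject.name
                          + " (active: " + listener.gameObject.activeInHierarchy + ")", listener);
            }
        }
    }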

Just How Many Spatial Audio Sources can Unity handle?

Turns out, quite a lot.

I ran a basic test where I repeatedly duplicated a Spatial Audio Source playing the same Audio Clip, each copy at a new random position, so that I could measure the impact of adding more and more Spatial Audio Sources.

To avoid Unity optimising the audio playback, and therefore spoiling the test, each Audio Source was within audible range and, importantly, I increased the max real voice count to its highest setting, 255, essentially turning off virtualisation.
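For reference, the test setup was something along these lines. This is only a rough sketch of the idea, with placeholder values, and it assumes the template Audio Source is already spatialized and routed to the Renderer group:

    using UnityEngine;

    public class SpatialAudioStressTest : MonoBehaviour
    {
        public AudioSource template;   // a spatialized Audio Source, already routed to the Renderer group
        public int copies = 64;        // how many extra sources to spawn
        public float range = 10f;      // keep everything within audible range of the listener

        void Start()
        {
            // Raise the real voice limit so that Unity doesn't virtualise the extra sources.
            AudioConfiguration config = AudioSettings.GetConfiguration();
            config.numRealVoices = 255;
            AudioSettings.Reset(config);

            for (int i = 0; i < copies; i++)
            {
                Vector3 position = Random.insideUnitSphere * range;
                AudioSource copy = Instantiate(template, position, Quaternion.identity);
                copy.Play();
            }
        }
    }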

Chart Showing the Number of Spatial Audio Sources Unity Can Handle

Resonance Audio can handle a lot of Spatial Audio Sources.

As you can see from the results, just as with standard 3D audio, there’s little difference between running one Spatial Audio Source and many. In fact, things only really get out of hand after 32 voices, at which point Unity will start to virtualise Audio Sources by default anyway.

While this is a highly unscientific test, it does highlight the scalability of Resonance Audio. Regular 3D audio does take less processing power, as you might expect.

However…

Once you’re using Spatial Audio, the difference between using a single Spatial Audio Source and sixteen is not as great as you might have thought.

Finally, it’s worth mentioning that the minimum processing overhead for the two technologies does appear to stack up, meaning that a single Spatial Audio Source and a single Standard Audio Source combined will actually use more processing power than two Spatial Audio Sources would.

This suggests that, generally speaking, it’s more efficient to use one technology throughout. However, I’d recommend not taking this too literally and, instead, doing whatever sounds best for the project.

For example, when working on Flood, we used a mix of standard 3D and Spatialized Audio to control how sounds were heard. Some sound effects needed to sound as if they were in the same space as the player, while others needed to sound as if they were coming from elsewhere (in this case from inside a building) and finally, there were sound effects that needed to be heard from everywhere, such as thunder and lightning, for which we used a 2D/3D blend.
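For those ‘heard from everywhere’ sounds, the 2D/3D blend is simply the Spatial Blend setting placed somewhere between the two extremes. As a rough sketch (the 0.3 value is only an illustration, not the value we used on Flood):

    using UnityEngine;

    public class ThunderSource : MonoBehaviour
    {
        void Start()
        {
            AudioSource thunder = GetComponent<AudioSource>();

            // Mostly 2D, so the thunder is heard everywhere,
            // with just enough 3D to hint at a direction.
            thunder.spatialBlend = 0.3f;
            thunder.Play();
        }
    }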

More Resources

Hopefully this article will help you to get started with audio in Unity VR. However, if you’d like to find out more about Resonance Audio, the official documentation and guides are a good place to start.

Now I’d Like to Hear From You

Are you using Resonance Audio in your project?

Or perhaps you’re working with another plugin, such as the Oculus Spatializer, or with audio middleware, like Wwise.

Let me know by leaving a comment.


by John Leonard French

Game Composer & Sound Designer

Comments


  1. Hi John,

    Started in June 2018 with VR Audio…
    Working on a VR Audio demo reel right now using the Oculus spatializer… I experience some glitches sometimes and some strange behaviours in the audio mixer of Unity. I’m using my own recorded ambisonic files, but sometimes it looks like a phasing effect is audible, especially on the low end… Maybe I should try Resonance instead, or at least do a comparison between the two.
    Anyway maybe nice to connect on the socials. You can find my details on http://www.sphereofsound.com
    cheers, Jelmer

    1. Hi Jelmer, I experienced similar issues when trying to use two Resonance Audio Renderers at the same time, maybe Oculus has a similar issue? Hope you find the solution!
