How to fade audio in Unity: I tested every method, this one’s the best

Fading audio in Unity is a simple thing to do. However, because there are several different ways to do it, it often gets overcomplicated.

Even as someone who works in game audio, I didn’t know for sure what the absolute best method was. Often I would resort to whatever method was the most convenient at the time.

While I do think that the most convenient method often is the best method, I wanted to know if there was a noticeable difference in quality between doing it one way and doing it another way.

So I actually tested the different methods to find out, once and for all, which one is the best, and why.

Here’s what I found out…

What’s the best method for fading audio in Unity? Overall, the best way to fade audio in Unity is to Lerp an Audio Mixer Group’s volume using a coroutine over a set duration with a logarithmic conversion calculation. Doing it in this way produces a very smooth, linear fade, even at lower frame rates. 

What are the other options then, and why is this method the best?

Generally, there are three different ways to fade audio in Unity:

  • The first method, which I’m calling the easy method, is to fade the Audio Source directly. In this case using a coroutine to Lerp the volume from one value to another over a set duration for an even, linear fade. This method is fine, and very convenient, it’s just not the best way to do it.
  • The second method, and technically the best method, is to still use Lerp in a coroutine but, instead of fading the Audio Source directly, fade an Audio Mixer Group. This method requires an Audio Mixer to work but produces a much smoother fade. Like the first method, the fade is even and linear, but only if you use logarithmic conversion (I’ll explain why later on). This is the best overall method for fading audio in Unity.
  • The third method is to use an Audio Mixer Snapshot. The Snapshot Method produces a smooth fade, just like with the second method, and it’s quite simple, since it doesn’t require coroutines. The drawback with this method, however, is that the fade is not linear (it will sound as if it fades out too fast or fades in too slowly) which means the fade probably won’t sound how you expect, particularly for longer fades.

Below I’ll explain the difference between these methods, and show you how they performed when I tested them, so that you can choose the best one for your project.

Method 1: How to fade an Audio Source (the easy method)

This is the easy method, and the simplest way to fade any single Audio Source.

It works by Lerping the Audio Source volume over a set duration and produces an even fade, just as you’d expect.

This method is very convenient. Once the class is in your project all you have to do is use one line of code to call it from anywhere you like. There’s no separate function for fading in and out. You can fade audio in by setting the target volume to 1. Set it to 0 to fade it out, or smoothly change to any volume in between, all with one simple function. Just use whatever target volume you want and the script will do the rest.

Just copy the script below into a new class called FadeAudioSource (you don’t even have to add it to an object):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public static class FadeAudioSource {
    public static IEnumerator StartFade(AudioSource audioSource, float duration, float targetVolume)
    {
        float currentTime = 0;
        float start = audioSource.volume;
        while (currentTime < duration)
        {
            currentTime += Time.deltaTime;
            // Lerp clamps its third argument to 0-1, so the volume lands exactly
            // on targetVolume during the final pass through the loop.
            audioSource.volume = Mathf.Lerp(start, targetVolume, currentTime / duration);
            yield return null;
        }
    }
}

Then every time you want to trigger a fade, use this line of code, passing in the parameters for the Audio Source, the duration and the volume you want to fade to.

For example:

StartCoroutine(FadeAudioSource.StartFade(audioSource, duration, targetVolume));
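
If it helps, here’s a minimal sketch of where that call might live, assuming an Audio Source assigned in the Inspector (the class and field names are just examples). Note that, because coroutines need a MonoBehaviour to run on, the StartCoroutine call has to come from a script in your scene:

using UnityEngine;

public class MusicFader : MonoBehaviour {

    public AudioSource musicSource; // assign in the Inspector

    // Fade the music out over 2 seconds, e.g. when the level ends.
    public void FadeOutMusic()
    {
        StartCoroutine(FadeAudioSource.StartFade(musicSource, 2f, 0f));
    }
}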

What are the drawbacks of using this method?

First off, it’s limited to a single Audio Source, so it really only works for fading individual sounds or music tracks.

Secondly, because the changes to the volume are frame rate dependent, you’ll get a stepped volume change while fading out.

What does that mean?

It means that the script can only change the volume as frequently as it is updated. Unlike the audio itself which is made up of tens of thousands of samples being processed every second, the fade is limited to whatever frame rate the game is running at.

At high frame rates, such as 60fps, this isn’t a problem, as the stepping effect isn’t audible, although it is visible when inspecting a recording of the audio waveform:

Fading Audio Source with a Coroutine at 60fps

The audio is sampled 44,100 times each second, but the fade changes volume only 60 times each second, causing this step effect.
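
Put another way: 44,100 samples per second ÷ 60 volume updates per second = 735 samples held at each volume level before the next step. At 10fps, that becomes 4,410 samples, or a tenth of a second, per step.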

Whilst this isn’t really a problem at high frame rates, if the frame rate drops then the stepping effect starts to become audible.

Here’s what the same fade looks like when it’s limited to 10fps:

Low Framerate Audio Fade in Unity

The stepping effect gets much worse at lower frame rates.

The easy method: Pros & cons

Pros:

  • Super convenient. Once the class is in your project all you have to do is use one line of code to call it from anywhere you like.
  • Good for crossfades, just start two fades on two Audio Sources at the same time to crossfade audio (see the sketch after this list).

Cons:

  • Only works on single Audio Sources.
  • Sounds worse at low frame rates.
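
As a sketch of the crossfade idea from the pros list, assuming two Audio Sources assigned in the Inspector (the names here are just placeholders), both fades simply run at the same time:

using UnityEngine;

public class CrossfadeExample : MonoBehaviour {

    public AudioSource oldTrack; // fading out
    public AudioSource newTrack; // fading in

    public void Crossfade(float duration)
    {
        // Starting both coroutines together produces the crossfade.
        StartCoroutine(FadeAudioSource.StartFade(oldTrack, duration, 0f));
        StartCoroutine(FadeAudioSource.StartFade(newTrack, duration, 1f));
    }
}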

When to use this method

The main benefit of this method is that it’s very convenient. One class, one line of code and you’re done. It’s quick, easy and you don’t need an Audio Mixer.

So if you want a quick fade for a single Audio Source then this is an easy way to do it.

However…

If you’re concerned about the quality or the smoothness of the audio fade, or if you want to easily fade out multiple Audio Sources then you might be better off using the next method which, although very similar, produces dramatically better results.

Method 2: How to fade an Audio Mixer Group (the best method)

Of all the methods I tried, this one is the best. It’s not necessarily the easiest but, if you want to fade multiple Audio Sources all at once, it’s the one I’d recommend using.

This method is basically the same as the first except that it fades an Audio Mixer Group instead of an Audio Source.

This is ideal for when you want to fade all your audio, for example when switching between Scenes.

There’s another benefit though…

Despite nothing else being different (this method and the one above use exactly the same code, Lerp in a coroutine) the fade appears to be much, much smoother.

This is due to an apparent smoothing of the value changes by the audio engine that takes place in between frames, which is visible when the output is recorded.

Audio Fade Waveform Examples

Fading an Audio Mixer, instead of an Audio Source produces a much smoother fade.

At first I assumed that the difference in quality was to do with the fade being processed completely independently of frame rate, like PlayScheduled does.

What appears to be happening, however, is a smoothing of the value change.

That means that the stepping effect still occurs, as the value is still only being changed every frame, but the audio output is kind of interpolated between the changes.

Reducing the frame rate reveals how it works:

Visual example of different audio fades

The smoothing effect is less effective at lower frame rates.

At lower frame rates, fading with an Audio Mixer produces the same stepping issues as fading with an Audio Source. They’re just less noticeable, thanks to the smoothing.

In reality though, your game would have to drop below 30fps before this is noticeable at all since, at that level and above, the smoothing of the steps (just about) covers the full frame duration.

How to fade an Audio Mixer Group?

You might be tempted to use an Audio Mixer Snapshot, but despite being easier, it’s not the best method (see below). The main issue is to do with the curve of the fade which, when using a Snapshot, is too sensitive.

This is because Audio Mixer Snapshots do not use a logarithmic scale for changing volume (even though the fader control is logarithmic).

This method doesn’t have that problem, as it takes a simple 0-1 float value which is converted in the script to a fader value. This results in an even, linear fade.
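
To make the conversion concrete, here’s the calculation the script relies on (the numbers follow from the standard decibel formula, 20 × log10 of the linear value):

// Linear (0-1) to decibels: 0.5 converts to roughly -6.02dB, i.e. half volume.
float dB = Mathf.Log10(0.5f) * 20f;

// And back again: -6.02dB converts to roughly 0.5.
float linear = Mathf.Pow(10f, dB / 20f);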

It takes a little extra work, but the results can be dramatically better than both of the other options.

Here’s how to set it up:

  1. Create an Audio Mixer
  2. Route any audio you want to fade to a Group on that Mixer
  3. Click the Audio Mixer Group and right click on the Volume Component label in the Inspector (pictured below)
  4. Select Expose ‘Volume (of Mixer)’ to script
  5. Rename the Exposed Parameter in the Audio Mixer panel (also pictured below)
Where to find the option for exposing parameters in the Unity inspector

Right click the volume label in the Inspector to expose a Mixer Group’s fader to scripting.

How to change the names of exposed parameters

Rename the Exposed Parameter in the Audio Mixer window; you’ll need to reference the name exactly later on.

Create a script called FadeMixerGroup and copy this into it (in full). It doesn’t need to be added to a game object to work.

using System.Collections;
using System.Collections.Generic;
using UnityEngine.Audio;
using UnityEngine;
public static class FadeMixerGroup {
    public static IEnumerator StartFade(AudioMixer audioMixer, string exposedParam, float duration, float targetVolume)
    {
        float currentTime = 0;
        float currentVol;
        // Read the current fader value (in decibels) and convert it to a linear 0-1 value.
        audioMixer.GetFloat(exposedParam, out currentVol);
        currentVol = Mathf.Pow(10, currentVol / 20);
        // Clamp the target just above zero, since Log10(0) is undefined
        // (0.0001 converts to -80dB, the bottom of the fader anyway).
        float targetValue = Mathf.Clamp(targetVolume, 0.0001f, 1);
        while (currentTime < duration)
        {
            currentTime += Time.deltaTime;
            // Lerp in linear space, then convert the result back to decibels for the fader.
            float newVol = Mathf.Lerp(currentVol, targetValue, currentTime / duration);
            audioMixer.SetFloat(exposedParam, Mathf.Log10(newVol) * 20);
            yield return null;
        }
    }
}

Then, when you want to trigger a fade, use this line.

StartCoroutine(FadeMixerGroup.StartFade(audioMixer, exposedParameter, duration, targetVolume));

You’ll need to specify the Audio Mixer and the Exposed Parameter name for it to work.

Just like the previous method, this is static and can be called from anywhere, without needing an instance of the script in the scene.

Unlike the previous method however, you’ll need to reference the Audio Mixer and the Exposed Parameter Name when calling the script.

You will also have to add the using UnityEngine.Audio; directive to any script that this is called from.
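
For reference, here’s a minimal sketch of a calling script, assuming the Mixer is assigned in the Inspector and the Exposed Parameter was named “MusicVolume” (both are just example names, use whatever you set up in the previous steps):

using UnityEngine;
using UnityEngine.Audio;

public class SceneFadeOut : MonoBehaviour {

    public AudioMixer mixer; // assign in the Inspector

    public void FadeOutMusic()
    {
        // Fades the exposed "MusicVolume" parameter to silence over 2 seconds.
        StartCoroutine(FadeMixerGroup.StartFade(mixer, "MusicVolume", 2f, 0f));
    }
}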

The Audio Mixer Group method: Pros & cons

Pros:

  • This is, by far, the smoothest way to fade one or more Audio Sources in Unity, even at low frame rates.
  • This also works great for crossfades, just use two Mixer Groups, two Exposed Parameters and trigger both fades at the same time (see the sketch below).

Cons:

  • It takes a little extra effort to set up and requires an Audio Mixer to work.

When you’d use this:

If you want to fade multiple Audio Sources then this is the best way to do it. It’s also much more convenient than fading multiple Audio Sources individually and you get the added benefit of a much smoother fade.
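
And here’s the crossfade version mentioned in the pros list, as a sketch, assuming a reference to your Audio Mixer called mixer and two Exposed Parameters named “MusicAVolume” and “MusicBVolume” (placeholder names):

// Fading one group out while the other fades in produces the crossfade.
StartCoroutine(FadeMixerGroup.StartFade(mixer, "MusicAVolume", 3f, 0f));
StartCoroutine(FadeMixerGroup.StartFade(mixer, "MusicBVolume", 3f, 1f));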

Method 3: Fading with Audio Mixer Snapshots (the Snapshot method)

The last method, the Snapshot method, is probably the first one that many people will try. I certainly did.

This involves using an Audio Mixer and two Snapshots, with on and off states to fade audio in and out.

Here’s how to do it:

In this example, I’m preparing the audio to fade out.

  1. Add an Audio Mixer
  2. Create a second Snapshot called Off, rename the default Snapshot to On
  3. Select the Off Snapshot and turn the volume of the Mixer Group you want to fade all the way down to -80dB (i.e. silence)
  4. Select the On (default) Snapshot and set the volume to -0.05dB (this is a little trick to avoid a bug that causes a pop in the editor)
  5. Route any Audio Sources you want to fade to the Mixer Group

Then when you want to trigger the fade, use this line of code:

audioMixerSnapshot.TransitionTo(duration);

You’ll need to add using UnityEngine.Audio; to the script that this is called from, otherwise this won’t work. All together, it looks like this:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Audio;
public class TriggerFade : MonoBehaviour {
    // Assign the Off Snapshot in the Inspector.
    public AudioMixerSnapshot fadeOut;
    void Start () {
        // Transitions to the Off Snapshot over 4 seconds.
        fadeOut.TransitionTo(4f);
    }
}

This is quite an easy way to set up audio fading, as it requires the least code of all of these examples.

However, there are drawbacks.

The big problem with Snapshot fades

The main issue with this method is the curve of the fade, which is overly sensitive, and which is caused by a lack of logarithmic conversion. This is a problem because, while the volume fader works on a logarithmic scale, the weighting of the Snapshot mix is linear.

For more information about exactly why this happens, try my article on Audio Volume Sliders in Unity (using logarithmic conversion), which explains it in more detail.

In real terms, what this means is that you’ll get a fade that ends too soon, starts too late or, if you’re trying to crossfade with Snapshots, you’ll get a dip in volume in the middle. What’s more, unlike in the previous method, this can’t easily be fixed with a conversion calculation.
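
To see why in numbers, here’s a quick worked example (it follows from the decibel formula, so treat it as an illustration rather than a description of Unity’s internals):

// Interpolating the fader linearly, halfway between 0dB and -80dB:
float midDb = Mathf.Lerp(0f, -80f, 0.5f);    // -40dB
float midLin = Mathf.Pow(10f, midDb / 20f);  // ~0.01, i.e. 1% of full volume

// Perceptually, half volume is only around -6dB, so a fade like this
// spends most of its duration sounding almost silent.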

What effect does this have?

Here’s what a 4 second Snapshot fade out looks like side by side with a 4 second fade that uses logarithmic conversion:

Logarithmic Conversion Example

Audio fades with and without logarithmic conversion. Can you spot the problem?

Fortunately, there are some settings that can alleviate this issue, although they won’t fix it. These are the Snapshot Transition settings, which change how the curve is applied for each Snapshot transition.

To change the Snapshot Transition setting, select a Mixer Group and, just like when you expose the parameter, right click on the volume label to view the transition options.

Snapshot Transition options

Right click on a Mixer Group’s volume label to set the Transition Curve.

Which Audio Snapshot Transition should you use?

While none of the options produce a linear fade, the Squared Snapshot Transition appears to get the closest to it. As can be seen here:

Snapshot Transition Curves

If you need to use a Snapshot to fade out audio, the Squared transition produces the best results.

However…

This only applies if you’re fading the audio out.

If you’re fading in then the best Snapshot Transition to use is actually Square Root. As can be seen here:

Snapshot Transition Curves for Fading In

If you’re fading audio in, suddenly Squared becomes the worst.

Luckily, it’s possible to select different Transitions for different Snapshots.

For best results, set the Off Snapshot to Squared and the On Snapshot to Square Root.

The Snapshot method: Pros & cons

Pros:

  • Smooth fade.
  • Requires the least amount of code to implement.
  • Works fine for smaller volume changes (as opposed to full fades)
  • State system is already built in.

Cons:

  • Non-linear fade curve (no matter what transition you pick)
  • Terrible for crossfades

When you’d use this:

Although not the best method to use, there are times when this method may be useful. For example, fading multiple effects, not just volume, all at once is really only possible with Audio Mixer Snapshots. It may also sometimes be more convenient than setting up an Exposed Parameter and starting a Coroutine, and the overly-sensitive fade is less noticeable if the fade is quicker, or when partially fading in or out (e.g. just changing the volume).

Which method is the best?

Of the three methods, one is clearly better in terms of quality and, in some ways, convenience too. The best method for fading audio in Unity is to Lerp an Audio Mixer Group, using logarithmic conversion.

However…

If you just want to quickly and easily fade out a single Audio Source then scroll back up to the first method, copy the script and use that.

Even though the first method produced a lower quality fade, it took recording and examining the waveform to actually find that out. In most use cases, especially if the frame rate is above 30fps, it’s difficult to hear the difference.

And even Snapshot fades, although definitely not a great way to do full fades, have their own benefits and are easy to use.

Which leads me to my recommendation.

Which method do I think you should use?

The most convenient one…

Because you now know the different options that are available to you, and how they perform, you can easily use a different method if you hear something you don’t like. You’ll be able to switch methods to improve quality, if you have to, or to make fades easier to implement, if that’s what’s needed most.

So if you end up throwing a quick fade on an Audio Source because it’s easier, or use a cheeky Snapshot because that’s what’s going to get the job done, then go for it.

I won’t judge 😉

And since you made it all the way to the bottom, let me know if this helped you by leaving me a comment.

by John Leonard French

Game Composer & Sound Designer

Comments

62 Comments on “How to fade audio in Unity: I tested every method, this one’s the best”

  1. Wow! I’m a noob to unity and I have always wondered how to fade sounds in and out, and now I have three methods to do so. Thanks, man!

  2. It is a great article, thanks. I prefer the snapshot version, it’s easier for me, but sometimes the transitions are a bottleneck.

    1. I haven’t tested this yet, so I can’t say for sure, but I would expect that it’s only worth taking the extra time & effort to do it with a timeline if that’s right for the situation (e.g. a cutscene) even if it is butter-smooth. I’ll look into it though, so thanks for the tip.

      1. I’m not big fan of timelines for simple things, but Playables are awesome!

        Here are two scripts which perform a cross-fade using playables:
        https://www.paste.org/103561
        https://www.paste.org/103560

        Playable graph is built like this:
        ClipPlayables -> AudioMixerPlayable -> CrossfadePlayable -> AudioOutput

        CrossfadePlayable is a custom playable which is just a ‘pass-through’, but controls input weights for the mixer (uses Sqrt fade-in and Sq fade-out)

        Typical usage is like this:
        https://www.paste.org/103562

    1. Generally, if you want to be able to change the volume and fade in/out at the same time, it’s going to be easier to keep those actions separate. You can only really do this by also using an Audio Mixer. Alternatively, if you’re only playing One-Shot Sounds through the Audio Source you can set a volume scale when triggering the sounds and then fade the Audio Source volume, but, the best option is probably going to be to use a mixer.

  3. I’m trying to use the AudioMixer method.

    What do you mean by “Route any Audio Sources you want to fade to the Mixer Group”?

    1. To fade using Audio Mixers you’ll need to have any Audio Source that you want to fade in or out pass through the Mixer Group that you’re fading. You do this by creating an Audio Mixer and setting the Output value of each Audio Source to one of the groups (the individual faders) on that mixer. You can find out more about Audio Mixers in Unity here.

      1. Thanks for the fast response!

        Hm, I thought I’d set the AudioSource Output field, must be doing something else wrong (I’m new to AudioMixers :P).

        Anyway, just got your first method (lerp coroutine) working, so might just stick to that. Thanks for such a detailed article!

    1. If you’re using the Static Fade Audio Source script example, start the Coroutine from a script in your scene. You won’t be able to start a Coroutine from a static script, as far as I understand it.

  4. Thank you so much! This is exactly what I was looking for. Have Method Two working to fade out when changing scenes and have it set up to handle background music and SFX differently, if needed. I am pretty new to Unity, and this is my first time ever even using the mixer.

  5. Fantastic write up. It was detailed, precise, and simply works. Thank you for contributing to the growing developer community!! Content like this is lowering the barrier to creative Unity development.

  6. This is a great article. I used Method 2, and it works well, but I don’t know how to fade back in to the volume setting stored in PlayerPrefs, since it could be a different value for every player.

      1. Thank you! I had used the other article to set up my volume sliders before. I just didn’t know the syntax calling the exposed parameter. I’m still new to C#, but I’m learning a lot. This worked perfectly.

      2. Thank you! I had used that article before to set up my volume faders. I just wasn’t sure what the syntax was for accessing exposed parameters. I’m still new to C#, but I’m learning a lot.

  7. Hi, thank you for this in depth article.

    I’m a total idiot though and need a little more info on method two.

    how to I “Then when you want to trigger the fade, use this line of code:

    audioMixerSnapshot.TransitionTo(float duration);”

    How do I do that? its in another script I assume and is it a void { } ?

    All I want to do is fade a sound that will appear in multiple instances across the level (a fire sound as people walk in and out of houses)

    Thanks. <3

    1. Also, what did you rename the Exposed Parameter to? The picture doesn’t show that so I’m not sure what name/part it is of the script code – yes I don’t know coding at all at the moment 🙁

      1. So in the 2nd example, it’s a static method in a static class, meaning that the script that the method is in sits in the project (you don’t need to add it to an object) and you can call it from another script without having a reference to it first. The name of the exposed parameter should be whatever you called it in the preceding steps. This is done in the editor, it’s not defined in the script.

  8. Thank you for your post. Can I use your code for my commercial game?
    If possible, which license do you use?

  9. Fascinating – didn’t even know this capability was in there in this respect. I used your queueing clip code to fade stuff in and out, but I think I cheated – I used Ableton Live 11 to create fade in/out versions of my tracks, with it setting the durations I wanted, and use that to do my audio transitions.

  10. Thank you for an (as always) straight to the point article.

    I tried your code and it works great. Having an issue with implementation I don’t seem to be able to solve though.

    I put the “StartCoroutine” of your code into a script that iterates over a list of prefabs containing audioSources. The goal is to fade between them in a looping fashion. I double and triple checked all my code and it orders the coroutines correctly (as far as I can see).

    However. The fade only executes the first time. Is there something in the coroutine that prohibits them from running a second time? I tried to explicitly “StopCoroutine” after the first use in an attempt to “reset” the coroutine to no avail.

    I’d be happy to send you the entire code via e-mail but asking here is a shot in the dark.

    In any case – Big thanks to you for your work!

  11. Hi John, thank you for sharing your experience on this! It is very descriptive, and call me an old man but I prefer blogs / forums / manuals instead of long videos. It is way easier to look for the right properties and functions.
    As a hobbyist programmer and musician, this subject is quite relevant for me.
    Just to share a thought, I’m working on a project with multiple transitions on BG music, like different stances (pre-battle, battling, victory-end, defeat-end, etc). Do you have any tips about the best approach for this scenario? The fade in/out between stances makes the most sense to me, but I would like to hear if you will follow a different path, based on your experience.
    Thank you again for this great tutorial!

    1. Thank you! To answer your question, it depends, but I’d imagine the best system for that type of progression, with the timing of each part and the final outcome being the only variables that could change, would be one piece of music, split into parts that are beat-matched. So when you move into the battle, the battle music comes in straight away on the next beat. It might not be right for your project, but off the top of my head, that’s what I’d do.

      1. I think you nailed it… I was missing a way to match parts, which of course must run on the same beat. I will give it a try!

  12. Hey, thanks for the article.
    However, there is something I don’t get. Why would you consider the non-linear curve to be a drawback. Isn’t that a poor way to make a transition in general?
    I’m asking because I’m not so sure about audio, but I know linear curves are very unnatural for movements.

    1. Thanks! So in this case the non-linear curve is to do with the logarithmic scale of the volume fader in the mixer. If you apply a linear value to it (i.e. 0dB down to -80dB) half volume would end up being -40dB, when really it should be -6. I did a whole article on it here. This happens with snapshot transitions and with sliders (on the mixer only, audio source volumes are fine) where the music will fade out far too quickly.

  13. Hi,

    I was trying to implement version 2 and carefully recreated each step, but whenever I want to insert the StartCoroutine part in another script, I get red underlining for all the values inside the brackets (AudioMixer audioMixer, String exposedParameter, float duration, float targetVolume)

    I get the error message ” ‘AudioMixer’ is a type, which is not valid in the given context ”

    (Yes I inserted using UnityEngine.Audio; at the beginning of both scripts and used both my mixer and exposed parameter names)
    Any ideas what could be the problem here? Maybe a problem with Unity 2021?
    Any help would be appreciated.

    AudioMixer audioMixer, String exposedParameter, float duration, float targetVolume));

    1. When you start the Coroutine, you’ll need to pass in a reference to an audio source, the string value of the exposed parameter, a float value for the duration and a float for the target volume, without the type declarations before them. When I originally wrote the examples in this article, I left the Type declarations in (i.e. AudioSource audioSource) to show what each value was but, to use it, you’ll need to refer to an actual instance of an audio source and pass that in, along with the data for the rest of the function (string and floats etc.). Hope that helps.

  14. Hello, and thank you for this Sweet code!

    I’ve set up Method 2, although I keep getting this Null Reference Exception, somewhere right at the beginning of FadeMixerGroup.cs. I copy-pasted everything, and I finally figured it out so this is the only error left, lol.

    I’ve declared no variables in this script, and I’ve read that Coroutines cannot run without MonoBehaviour – I’m VERY new to C# coding, and something must be wrong, something simple. I wonder if you can help?

    – Braven

    1. My guess would be that you don’t have a reference to the AudioMixer that the group is on. If you copied it exactly, but changed nothing else, that would happen. SetFloat is a function of the Audio Mixer class but you need an instance of an Audio Mixer for it to work.

      If that’s not it, if you’d be willing to send the script to [email protected] I could take a look for you.

  15. Hi John, I’m a game developer and have just picked up Minimal Orchestral, perfect for the game I’m currently working on. I wondered if I could get a bit of advice from you. Most of the game is procedural and built at runtime. Because of this I’m using the snapshot method of transitioning. How do you deal with playing tracks that have transitioned to 0 volume? Do you leave them playing (using CPU and memory) or stop them? If the best method is to stop them, how do you know when to stop them? There don’t seem to be OnTransitionEnd events. Any thoughts welcome. Nice job with the music.
    Cheers
    chris

    1. Thanks Chris, to answer your question, yes, if you’re using multiple long tracks you’re probably going to want to free up their resources in some way. If you’re using the streaming load type, then pausing them should work, in which case you could pause the audio source at the end of the fading coroutine if it’s been faded to 0. You’ll need a reference to the audio source for that to work though, or you could raise an event there. Alternatively, if you’re keeping the music in memory, there’s going to be much less CPU use, but you may want to unload the audio data, which you can do on a clip by clip basis. I wrote an audio optimisation article on my other blog if that’s any help: https://gamedevbeginner.com/unity-audio-optimisation-tip. Personally, I’d probably do it with an event that music audio sources subscribe themselves to.

  16. Im currently working through many of the (complex) subsystems of Unity. Your article is EXACTLY the type of explanation which warms my hearth and tickles my brain. Thank you, this is EXACTLY the way I love to learn stuff. I know how much effort is required to write articles like this and really appreciate it. Let me know if I can buy you a coffee or something.

  17. Great tutorial, but in my experience you don’t get a smooth transition with Mathf.log(v) * 20. If v is 0.1, for example, the audio mixer volume goes to -20, which is actually very loud. If you multiply by 80 instead you get the smooth transition. Although I don’t know if it’s very different from using Mathf.lerp(-80, 0, v), at least to my ears hehe.

    1. The issue is the logarithmic scale. For example, where 0dB is the full signal, to achieve half volume, the attenuation needs to be at -6.02dB. So Mathf.Log10(0.5f) * 20 will convert to -6.02 correctly. If you used * 80 instead, you’d get -24.08dB at the halfway point on the scale, which is actually closer to around 6% of the full volume. Hope that helps.

  18. I’ve been banging my head against the wall with this one for a long time. I have some projects that involve fast and continuous volume changes. This script allows me to use your ‘easy method’ while completely avoiding the stepping effects you describe by lerping inside Unity’s OnAudioFilterRead() and IMHO this is much simpler than wiring up the mixer.

    public class SmoothVolume : MonoBehaviour {

        public float volume;
        float volumeCounter = 0;
        float prevVolume;

        void OnAudioFilterRead (float[] data, int channels) {
            for (int i = 0; i < data.Length; i += channels) {
                float fraction = (float)i / data.Length;
                float val = Mathf.Lerp(prevVolume, volume, fraction);
                data[i] = data[i] * val;
                if (channels == 2)
                    data[i + 1] = data[i + 1] * val;
            }
            prevVolume = volume;
        }
    }

  19. Cool tricks you are showing us!
    I just don’t get one thing: Why would anybody want to use Time.deltaTime for fading audio?
    Audio is supposed to glue everything together in a game or in a movie, so it needs to be as consistent as possible. By making it frame-dependent one achieves the complete opposite.
    So wouldn’t it be far more appropriate to use FixedUpdate and Time.fixedDeltaTime?
    That way the faded audio is independent of the current frame-rate and should fade smoothly in any case.
    I mean maybe I’m missing something, it just doesn’t make sense to me =)
    Thanks again for the inspiration!

    1. Thanks. So the entire purpose of Time.deltaTime is to do something evenly over frames of a different length. Fixed Update is tied to the physics engine update schedule and is usually consistently timed, but that’s not what it’s for, while fixedDeltaTime does the same thing as delta time but for the physics step interval instead. Although delta time will report the same value as fixed delta time if you call it from fixed update.

      Hope that helps, let me know if you have questions.
