How to Queue Audio Clips in Unity (the Ultimate Guide to PlayScheduled)

John · Unity · 56 Comments

There are many methods for triggering audio in Unity. Which one you choose depends on what you’re trying to achieve.

For example, you can do a lot with only the basic Play function, which plays an Audio Source as soon as the method is called. Want to delay playback by a few seconds? That’s easy too, using PlayDelayed.

With these functions, and some conditional logic, it’s easy to create simple music and audio events that will serve many purposes.

But what if you want to do more?

For example, what if you want to queue different Audio Clips to play back to back seamlessly? Or what if you want to build a dynamic music system? Maybe you want to trigger events in musical time, beat matching sounds to a precise musical structure.

To do any of these requires very accurate audio timing that can’t be achieved with Play or PlayDelayed.

Luckily, however, Unity provides a very reliable and accurate method for scheduling audio precisely, using PlayScheduled and Audio DSP Time.

In this article I explain everything you need to know to get the most out of PlayScheduled: when you should use it over other methods, how to start, stop and pause scheduled audio, as well as some best-practice techniques that I’ve learned from implementing audio scheduling in my clients’ games.

What you’ll find in this article

  1. A bit about me and why I wrote this article
  2. When you should use PlayScheduled (and when you don’t need to)
  3. How to queue Audio Clips in Unity (How to start and stop scheduled clips)
  4. What happens to scheduled Audio Clips when you pause the game
  5. What you can do with PlayScheduled
  6. How to queue Audio Clips to play back to back seamlessly
  7. How to beat match audio events to a musical structure

A bit about me and why I wrote this article

I’m a game composer and sound designer. I do a lot of work in Unity and, when I’m working with developers, I often help create custom music and audio systems.

It might be that I build a prototype system to demonstrate an idea or that I’ll use Unity to test if my own work is going to sound as I imagined it would when it’s triggered dynamically.

I’ve had the opportunity to build music and audio systems for all kinds of uses, each of them different in their own way.

All of them, however, required a reliable method of scheduling clips accurately.

And it seems that I’m not alone…

While researching this article I found numerous forum posts from people asking how to do the same things: how to queue clips back to back seamlessly, and how to prevent the pops, clicks and timing errors created by other methods.

But here’s the thing…

While everything in this guide is either available somewhere online, detailed in the Unity documentation or can be easily tested in the Unity editor, the majority of solutions and answers I found in forum posts were…

Just plain wrong.

Answers were either inaccurate, out of context or were links to an Asset Store extension that maybe, kind-of, solved the issue but definitely isn’t necessary.

But don’t worry.

I’ve had a lot of practice putting together music systems, and in the past I’ve worked through a lot of problems, tried a lot of ideas and experimented with all kinds of audio systems. Some turned out great and are, right now, powering the audio in released games. Others, like some of my early attempts at dynamic audio, were not so great and were a valuable learning experience.

I’ve used my experience to develop the, now tried and tested, methods that I use every day with my clients. I wrote this article to show you how you can do the same, so that you can make better audio for your game more easily, with less frustration, using proven, best-practice techniques.

I hope you enjoy it.

When should you use PlayScheduled?

The main reason for using PlayScheduled over other methods of triggering audio is so that you can schedule audio events using the Audio System’s DSP Time, which is not just accurate but also operates independently of frame rate.

The DSP Time value, which you can access in a script with ‘AudioSettings.dspTime’, is based on the actual number of samples processed by the audio system and returns a double value, which is more accurate than the float values that are used by PlayDelayed, Time.time and WaitForSeconds.

Unity double variable compared to float

This is what makes it so suitable for precision audio work.

Put simply, if you want to queue audio clips to play back to back seamlessly (without audible gaps) or if you want to create audio events that trigger in a musical structure (and not sound out of time) then you should be using PlayScheduled for the extra accuracy that it provides.

If however, all you want to do is delay a sound, or a music loop, by a few seconds then, in many cases, PlayDelayed is enough and you should use that instead.
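For a simple delay like that, a minimal sketch is all you need (assuming an Audio Source reference has already been set on the script):

```csharp
public AudioSource audioSource;

void Start()
{
    // Float precision is fine for a simple, one-off delay
    audioSource.PlayDelayed(2f);
}
```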

How to schedule Audio Clips in Unity

So how does PlayScheduled work, and what can you do with it?

Setting a Clip to start

Calling PlayScheduled on an Audio Source allows you to schedule it to play at an exact time in the future. It takes one double value parameter, which is the time that the Audio Source will play.

It’s important to remember that PlayScheduled takes a time value, not a delay.

This is different to the PlayDelayed function and the older, now deprecated, Delay parameter of Play, which both specify a delay before the Audio Source will trigger.

For example, if you wanted to use PlayScheduled to trigger an Audio Clip with a two-second delay, you would need to pass the current Audio DSP Time plus two seconds:

// Schedule an Audio Source to play in 2 seconds
audioSource.PlayScheduled(AudioSettings.dspTime + 2);

Diagram: How to play Audio with a 2 second delay in Unity using PlayScheduled

Keeping track of when audio events started is important when calculating the trigger times for other, related, audio events.

Because of this, it’s often a good idea to store the scheduled time in a double variable when using PlayScheduled, so that you can reference the time a clip started in later calculations.

// Saves the start time for later use
double startTime = AudioSettings.dspTime + 2;
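Putting the two together, a minimal sketch might look like this (the method name is just for illustration):

```csharp
public AudioSource audioSource;

// Stored so that later, related events can be calculated from it
double startTime;

void ScheduleInTwoSeconds()
{
    startTime = AudioSettings.dspTime + 2;
    audioSource.PlayScheduled(startTime);
}
```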

Scheduling a clip to play immediately (and why you probably shouldn’t do it)

Although it is possible to call PlayScheduled at the current DSP Time, which will cause the audio to start as soon as possible, doing so gives the audio system no time to prepare the clip for playback, which defeats one of the main benefits of using PlayScheduled in the first place.

For this reason, to avoid timing errors when scheduling the first Clip, I add a slight delay.

// Start the first Clip as soon as possible
audioSource.PlayScheduled(AudioSettings.dspTime + 0.5);

What if you’ve changed your mind? How to reschedule PlayScheduled

It’s possible to change the scheduled time of an Audio Source by calling SetScheduledStartTime in the exact same way as PlayScheduled.

Doing so will reschedule when the Audio Source will trigger, interrupting it if it’s already playing.

// Change when a scheduled Audio Source will play
audioSource.SetScheduledStartTime(AudioSettings.dspTime + 2);

It’s worth noting however that this will only work if PlayScheduled has already been called and, in my experience, you can achieve the same result by simply calling PlayScheduled on the same Audio Source a second time.

How to cancel PlayScheduled: Stop an Audio Clip that’s already scheduled to play

To cancel PlayScheduled, and stop an Audio Source from playing in the future, simply call Stop on that Audio Source, just like you would with any Audio Source that’s already playing.

// Stop a scheduled Audio Source
audioSource.Stop();

This works because, even if a clip on a scheduled Audio Source hasn’t started yet, the Audio Source itself is considered to be playing. In fact AudioSource.isPlaying will return true on any Audio Source that’s scheduled, whether it’s outputting audio or not.

This is important to remember if you’re using any logic that checks against the isPlaying parameter of the Audio Source as it will return true even before the scheduled AudioSource begins to play.
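For example, a guard like this hypothetical one will treat a scheduled-but-silent source as if it were already audible:

```csharp
// Caution: isPlaying can't tell a scheduled source from an audible one,
// so this will skip scheduling even if nothing can be heard yet
if (!audioSource.isPlaying)
{
    audioSource.PlayScheduled(AudioSettings.dspTime + 0.5);
}
```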

Stopping a clip at a precise time

Just as you may want to start a clip at a precise time, you may also wish to end a clip at an exact time too. For example: if you want to transition a piece of music into a new Clip at the next bar (more on how to do that later).

To stop a clip at an exact time call SetScheduledEndTime on the Audio Source in the same way as PlayScheduled, passing in the time value of when you want the clip to end.

// Stop a scheduled Audio Source at an exact time
audioSource.SetScheduledEndTime(AudioSettings.dspTime + 10);

What happens to scheduled audio when you pause the game?

Using the common method of setting the timescale to zero (Time.timeScale=0;) to pause the game won’t affect DSP Time, or stop any Audio Sources from playing regardless of when they were scheduled. Just like any other Audio Source, they will continue to play while the game is paused.

This is great if, for example, you want to continue the same music into your pause menu, but is less useful if you’re using an audio system that’s being driven by events that are dependent on scaled time.

For this reason, it’s a good idea to avoid logic calculations that mix scaled and unscaled time, for example comparing Time.time to dspTime in an if statement.

How to pause audio events in Unity with AudioListener.pause

Alternatively, you can freeze the entire audio system when the game is paused, including freezing DSP Time, by setting AudioListener.pause to true whenever you pause the game.

// Pause all Audio Sources
AudioListener.pause = true;

This pauses every Audio Source, and prevents scheduled Audio Sources from playing whenever the game is stopped. They will resume from their exact position when AudioListener.pause is set back to false and the game is restarted.

In this scenario, you’re probably still going to want some sounds to play when the game is paused, such as user interface sound effects or menu background music for your pause menu, and because of this it’s possible to allow selected Audio Sources to continue playing even when AudioListener.pause is set.

To do this, set ignoreListenerPause to true on any Audio Source that you want to be able to use when the game is paused and it will continue to play as normal.

// Use an Audio Source while the Listener is paused
audioSource.ignoreListenerPause = true;

Even scheduled Audio Sources will continue to work as normal, despite the fact that the DSP Time value has been stopped.

Please note, however, that in this specific scenario, while PlayScheduled will work, DSP Time based logic conditions won’t (because the visible DSP Time is stopped). So although you can use PlayScheduled while DSP Time is paused, you won’t be able to reference DSP Time in if statements.
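As a rough sketch of how these pieces fit together (the field and method names are just examples), a pause toggle might look like this:

```csharp
public AudioSource menuMusicSource;

void Start()
{
    // Let the pause menu music play even while the Listener is paused
    menuMusicSource.ignoreListenerPause = true;
}

public void SetPaused(bool paused)
{
    // Freeze gameplay and the audio system together
    Time.timeScale = paused ? 0 : 1;
    AudioListener.pause = paused;

    if (paused)
    {
        menuMusicSource.Play();
    }
    else
    {
        menuMusicSource.Stop();
    }
}
```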

What can you do with PlayScheduled?

Now that you understand the fundamentals of how to schedule audio in Unity, it’s time to put them to creative use.

How to queue Audio Clips to play back to back seamlessly

One of the main benefits of using PlayScheduled is the ability to stitch any two, or more, Audio Clips together to play back to back seamlessly. As soon as one Clip ends, the next one starts, with no gap in between.

This is especially useful for building music systems that use sequential parts that you can swap out on the fly, or for adding an intro to a looping track.

It’s done by recording the start time of the first Audio Source, calculating the length of the Audio Clip that’s going to play and scheduling a second Audio Source to play at the exact moment the first ends:

Diagram showing how to queue Audio Clips back to back in Unity

Here’s how to do it:

First, set up two Audio Sources

It’s not possible to do this with a single Audio Source so you will need to use two.

This is because you can’t schedule an Audio Source that’s already been scheduled to play in the future without resetting it.

Using two Audio Sources allows you to schedule one while the other is playing. You can then switch between the two as needed. If you want to play an endless number of Clips back to back this too can be achieved with two Audio Sources. Just toggle whichever Audio Source is used next.

A method I often use to switch between Audio Sources is to create an Audio Source Array with an integer toggle that switches every time a new Clip is scheduled. The toggle is simply an integer variable that alternates between 0 and 1: to flip it, set the value to 1 minus itself. I then use the current value of the toggle as the Array index to select which Audio Source to play next.

// Use two Audio Sources in an Array
public AudioSource[] audioSourceArray;
int toggle;
// Whenever you schedule a clip
toggle = 1 - toggle;

How to accurately calculate the length of an Audio Clip (without using AudioClip.length)

In order to schedule the next Clip when the current one ends, you will need to calculate an accurate clip duration.

The AudioClip object includes a value for length but it’s a float value, so it won’t be accurate enough to use.

Instead, take the Samples Value of the Clip (AudioClip.samples), which is the total number of samples in the audio file, and divide it by the Clip’s frequency (AudioClip.frequency), which is the sample rate (the number of samples in each second of audio).

The result is a highly accurate duration value which you can then use to calculate the next Clip’s start time:

// Calculate a Clip’s exact duration
double duration = (double)audioClip.samples / audioClip.frequency;

Note that, in order for this to work, you have to cast samples, which is an integer, to a double when calculating the duration.

Now, to calculate when the next Clip should play, simply add the calculated duration to the Start Time of the last clip and pass it into the PlayScheduled function.

// Queue the next Clip to play when the current one ends
audioSourceArray[toggle].PlayScheduled(startTime + duration);

Creating endless playlists: When to schedule the next Clip

Exactly when you run the code that queues up the next clip to play depends on your use case. For example, if you want to play a music intro, immediately followed by a looping Clip, then both of these events can be queued up at the same time.

// Play an intro Clip followed by a loop
AudioSource introAudioSource;
AudioSource loopAudioSource;
void Start () {
    double introDuration = (double)introAudioSource.clip.samples / introAudioSource.clip.frequency;
    double startTime = AudioSettings.dspTime + 0.2;
    introAudioSource.PlayScheduled(startTime);
    loopAudioSource.PlayScheduled(startTime + introDuration);
}

If, however, you want to endlessly feed in clips that play back to back, perhaps for a dynamic music system or a never ending playlist, you will need to define a point in time at which to evaluate which Clip is going to play next and exactly when it’s going to play.

Typically, when I do this with dynamic music systems, I will have the system look one second ahead, so that the next clip is queued just before it’s needed.

To do this, use Update to check when the next Clip is due to finish:

public AudioSource[] audioSourceArray;
public AudioClip[] audioClipArray;
int toggle;
int nextClip;
double nextStartTime;
void Update () {
    if (AudioSettings.dspTime > nextStartTime - 1) {
        // Loads the next Clip to play and schedules when it will start
        AudioClip clipToPlay = audioClipArray[nextClip];
        audioSourceArray[toggle].clip = clipToPlay;
        audioSourceArray[toggle].PlayScheduled(nextStartTime);
        // Checks how long the Clip will last and updates the Next Start Time with a new value
        double duration = (double)clipToPlay.samples / clipToPlay.frequency;
        nextStartTime = nextStartTime + duration;
        // Switches the toggle to use the other Audio Source next
        toggle = 1 - toggle;
        // Increase the clip index number, reset if it runs out of clips
        nextClip = nextClip < audioClipArray.Length - 1 ? nextClip + 1 : 0;
    }
}

Beatmatching audio events (how to trigger audio to play on the next bar, beat or musical note)

It’s possible to use PlayScheduled and DSP Time to schedule events in precise musical time. This is great for swapping out audio clips at an exact bar or beat, or triggering audio events to play over another track in musical time.

In the example below, I’ll show you how you can use beat matching to end a looping track on the next bar.

How to calculate the length of a note, beat or bar

To calculate musical time, you first need to know exactly how long a beat is.

To do this, you will need to know the tempo and, if you’re calculating bar length, you will need to know the time signature too.

Find the track’s tempo

You may already know the tempo of your track but, if you don’t, try a beat finder tool to work it out.

Once you know what it is, divide a double value of 60 seconds by the tempo to get the duration of one beat:

// To calculate the beat length of a 95 bpm track
double beatLength = 60d / 95;

Note the ‘d’ after 60, which specifies it as a double. Without it, it would be treated as an integer, breaking the calculation.

You can use the beat length to calculate other musical unit lengths.

For example: Multiply the beat length by the number of beats in a bar to get a bar length (Not sure how many beats are in each bar? See the next section for more about time signatures).

Divide the beat length by four to get the length of a 16th note (a semiquaver):

// To calculate the semiquaver note length of a 110 bpm track in 4/4: 
double noteLength = (60d / 110) / 4;

 …and so on.
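As a quick sketch, the common unit lengths for an assumed 120 bpm track in 4/4 can all be derived from the beat length:

```csharp
double beatLength = 60d / 120;            // 0.5 seconds per beat
double barLength = beatLength * 4;        // 2 seconds per bar (4 beats to a bar)
double quaverLength = beatLength / 2;     // 0.25 seconds per 8th note (quaver)
double semiquaverLength = beatLength / 4; // 0.125 seconds per 16th note (semiquaver)
```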

Finding the length of a bar

A time signature consists of two numbers. It will tell you first how many beats are in a bar and, second, how long each beat is.

For example, a typical time signature is 4/4 meaning four beats to a bar that are each a quarter note (crotchet) in length.

The first number of the time signature shows how many beats are in a bar

You’ll usually be able to count the number of beats in each bar just by listening to it. Very often this value will be either 4 beats in a bar, or 3 (e.g. 4/4 or 3/4).

// To calculate the bar length of a 100bpm track in 4/4
double barLength = 60d / 100 * 4;
// To calculate the bar length of a 70bpm track in 3/4 
double barLength = 60d / 70 * 3;

The second number in a time signature shows you how long the beat is.

The second number in the time signature is, very often, four, which means a quarter note length for each beat (or a crotchet in the UK).

Sometimes other note lengths are used and, in some pieces of music, the time signature will change throughout the track.


Unless you already know what it is, it’s not possible to work out the second value of the time signature just by listening to the music.

This is because a difference in tempo can also be interpreted as a difference in time signature. For example: 4/4 at 100bpm will technically sound the same as 4/8 at 50bpm.
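You can check this with the bar length formula and the time signature modifier described below; both interpretations give the same bar duration:

```csharp
// 4/4 at 100 bpm
double barA = 60d / 100 * 4;             // 2.4 seconds
// 4/8 at 50 bpm, adjusted by the 4/8 modifier (0.5)
double barB = (60d / 50 * 4) * (4d / 8); // 2.4 seconds
```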

With that in mind, the following only applies if you already know what the time signature is (for example, because you wrote the music) and, even then, you probably won’t need to modify your calculation.

If you do know what it is, for the purpose of the calculation explained here, if the second number is a four, then there’s nothing more to do. If, however, it’s any other value, for example: 4/8, then you can modify the bar length using this method:

Divide the first number in the time signature by the second to give you a modifier value. For example, if the time signature is 4/8:

4 / 8 = a modifier of 0.5

Multiply the bar length by this modifier to adjust for a time signature other than 4/4.

// To calculate the bar length of a 100bpm track in 4/8:
double barLength = (60d / 100 * 4) * (4d / 8);

Calculating when the next bar will occur

Once you know the duration of a note, beat or in this case a bar, you can use that value to calculate when the next musical unit will occur.

Returning to our example, say that you’re playing back a combat music loop and you want to transition the track to an ending cue at the next bar because all of the enemies are dead.

You can use the current position in the Audio Clip, and the bar duration that was just calculated, to work out when the next bar will occur.

This is done using the Modulo Operation, which is a computing function that returns the remainder, after division, of one number by another.

In C#, it’s represented by the percent symbol %, and it works like this:

5 % 2 = 1

Why does it return 1? Because 2 goes into 5 twice (a total of 4), leaving 1 left over.

Some more examples:

25 % 7 = 4

18 % 5 = 3

10 % 2 = 0

How to use the Modulo Operation to calculate musical time:

Using the Modulo Operation to divide the time elapsed so far by the duration of the musical unit you want to snap to will return a remainder value.

That value represents the progress through the current, incomplete, bar, beat or note at the time of the calculation.

Subtracting the remainder value from a full bar (or note, or beat – whatever you’re using) will return the time left in seconds until the next one.

This allows you to schedule an audio event at that moment, using PlayScheduled.

Here’s what it looks like:

Beat matching audio to music in Unity using Modulo

And here’s how it works:

Get the time elapsed in the Audio Clip so far

To calculate when the next bar will occur, first you need to find the elapsed time of the current Clip. There are two ways to do this:

  • Subtract the DSP Time at which the clip started from the current DSP Time. 
  • Divide the AudioSource sample position by the clip’s sample rate.
// Get the current Time Elapsed
double timeElapsed = AudioSettings.dspTime - startTime;
// Or
double timeElapsed = (double)audioSource.timeSamples / audioSource.clip.frequency;

Note that timeSamples is an integer, so it needs to be cast to a double before division.

In this example I’m going to use the second method, using the Clip’s sample position to calculate the time elapsed.

Get the time elapsed through the current bar

Next, divide the time elapsed by the bar duration using the Modulo operation to return a remainder:

// Use the Modulo Operation to get the time Elapsed in the current bar
double remainder = timeElapsed % barDuration;

This will leave a time value that represents the time elapsed of only the current bar.

Find out how long before the next bar

Subtract this value from the length of a full bar to get the time remaining in the current bar.

// Calculate time remaining in the current bar
double timeToNextBar = barLength - remainder;

Calculate a value you can use in PlayScheduled

Add this to the current DSP time to get the absolute time value of the next bar.

// Get the time value of the next bar
double nextTime = AudioSettings.dspTime + timeToNextBar;

When you put it all together, here’s how it looks in scripting:

// Calculate the duration of a bar that's 80bpm in 4/4
double barDuration = 60d / 80 * 4;
// This line works out how far you are through the current bar
double remainder = ((double)audioSource.timeSamples / audioSource.clip.frequency) % (barDuration);
// This line works out when the next bar will occur
double nextBarTime = AudioSettings.dspTime + barDuration - remainder;
// Set the current Clip to end on the next bar
audioSource.SetScheduledEndTime(nextBarTime);
// Schedule an ending clip (loaded into a second Audio Source) to start on the next bar
endingAudioSource.PlayScheduled(nextBarTime);

Other uses for beat matching: Musically timed hits

Another use for this method is to create audio triggers, tied to game events, that snap to the next musical beat. For example, a drum hit that sounds every time you kill an enemy could be triggered on the next beat, so that it sounds like part of the music.
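As a hedged sketch of that idea, assuming a drumSource holding the hit and a beatLength calculated as shown earlier:

```csharp
public AudioSource musicSource; // The track the hit should sync to
public AudioSource drumSource;  // A one-shot drum hit

double beatLength = 60d / 120;  // Assuming a 120 bpm track

// Call this from a gameplay event, e.g. when an enemy is killed
public void TriggerHitOnNextBeat()
{
    double timeElapsed = (double)musicSource.timeSamples / musicSource.clip.frequency;
    double timeToNextBeat = beatLength - (timeElapsed % beatLength);
    drumSource.PlayScheduled(AudioSettings.dspTime + timeToNextBeat);
}
```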


by John Leonard French

Game Composer & Sound Designer


56 Comments on “How to Queue Audio Clips in Unity (the Ultimate Guide to PlayScheduled)”

  1. Hi John,

    thank you so much for this guide, which helped me achieve what I wanted after a lot of searching!
    I needed this not for music, but for a rather short sound effect like a thruster with a distinct start sound followed by a loop that needed to be at least 2 seconds to sound good. In order to stop looping at any point I used the volume property – please let me know if that is not a good idea.

    And by the way – your encryption loop sounds great, so thanks for that as well! : )

    Here is the code if anyone needs something like that – please feel free to cut / change it as you see fit. I made everything public to see in the inspector what is going on.

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public enum AudioState { Off, Going1, Going2 };

    public class ThrusterSoundTest : MonoBehaviour
    {
        public AudioState state = AudioState.Off;

        public AudioSource thruster_Start;
        public AudioSource thruster_Going_1;
        public AudioSource thruster_Going_2;

        public double startDuration = 0;
        public double goingDuration = 0;
        public double nextStartTime = 0;

        public bool thrusterOn = false;
        public bool shuttingDown = false;

        public float thrusterVolume = 1.0f;

        void Start ()
        {
            startDuration = (double)thruster_Start.clip.samples / thruster_Start.clip.frequency;
            goingDuration = (double)thruster_Going_1.clip.samples / thruster_Going_1.clip.frequency;
        }

        // Update is called once per frame
        void Update ()
        {
            thrusterOn = Input.GetKey("space");

            bool prepareNextAudio = (AudioSettings.dspTime > nextStartTime - 0.5);

            switch (state)
            {
                case AudioState.Off:
                    if (thrusterOn)
                    {
                        thrusterVolume = 1.0f;
                        SetAudioSourceVolume(thrusterVolume);

                        double startTime = AudioSettings.dspTime + 0.05;
                        thruster_Start.PlayScheduled(startTime);

                        nextStartTime = startTime + startDuration;
                        thruster_Going_1.PlayScheduled(nextStartTime);
                        state = AudioState.Going1;
                    }
                    break;
                case AudioState.Going1:
                    if (prepareNextAudio)
                    {
                        nextStartTime += goingDuration;
                        thruster_Going_2.PlayScheduled(nextStartTime);
                        state = AudioState.Going2;
                    }
                    break;
                case AudioState.Going2:
                    if (prepareNextAudio)
                    {
                        nextStartTime += goingDuration;
                        thruster_Going_1.PlayScheduled(nextStartTime);
                        state = AudioState.Going1;
                    }
                    break;
            }

            if (state != AudioState.Off && !thrusterOn)
            {
                shuttingDown = true;
            }

            if (shuttingDown)
            {
                thrusterVolume -= 0.1f;
                SetAudioSourceVolume(thrusterVolume);

                if (thrusterVolume < 0)
                {
                    thruster_Start.Stop();
                    thruster_Going_1.Stop();
                    thruster_Going_2.Stop();
                    state = AudioState.Off;
                    shuttingDown = false;
                }
            }
        }

        void SetAudioSourceVolume(float volume)
        {
            thruster_Start.volume = volume;
            thruster_Going_1.volume = volume;
            thruster_Going_2.volume = volume;
        }
    }

    1. That’s great, really glad that this article helped you out. Thank you for sharing your work!
      All the best,

  2. Just wanted to add that if you are having issues with starting with an offset with PlayScheduled you should use AudioClip.timeSamples not AudioClip.time.

    1. Thanks for your comment, please note however that I’m not recommending using AudioSource.time, for exactly the reason you suggest. It’s a float value and isn’t accurate enough. (double)AudioSource.timeSamples / AudioClip.frequency returns an accurate playback position.

      Let me know though if I’ve misunderstood your comment.

      All the best,

  3. It’s more than just floating point error: .time is in discrete chunks and it’s not accurate. Something like “.length” is actually accurate, it just isn’t as precise as double.
    To that point, unless you’re dealing with ridiculously huge sample sizes you probably won’t see any difference using float over double, or just casting “.length” to a double. Anything that represents the dspTime has to be a double because it deals with huge numbers. Sample time is tiny compared to clock time.

    I’m doing lots of sample-accurate stuff and I found this article helpful.
    I think I’m just adding that if you do ever want to start a scheduled clip with an offset, you have to use .timeSamples.

    1. A VERY helpful article!! Finding in depth information about unity audio is usually so challenging, so this was wonderful!

      I’ve been trying to create a music sequencer using Unity and have been using PlayOneShot, with the poor timing you’d expect.

      I’ve implemented John’s suggestion of two audio sources per track. Each “click”, it schedules the specific sound to play exactly one click later. My trouble now is figuring out how to silence the already playing audio source exactly when the other audio source begins to play. There’s no Stop Scheduled method.

      Any suggestions on how to smoothly mute the already playing audio source right as the other audio source begins playing?


      1. Okay, now I’m noticing the SetScheduledEndTime method. I think that should do what I need. I’m hoping it won’t pop or click!

        1. So SetScheduledEndTime is probably the right way to go, and it seems to be working, but I’m hearing an audible click between notes. I suspect it’s the instantaneous silencing of one of the audio sources, going from a non-zero point of the waveform to zero.

          Any ideas how to eliminate / reduce this kind of clicking?

          1. Hi there, yes that will most likely be because you’re cutting the audio instantly. The best way to deal with this would be to add a short fade, which you could probably do with the Stop function and a coroutine. This would be off of the audio timeline, so it wouldn’t be as precise, but because you’re fading and stopping it shouldn’t be too noticeable.

          2. Thanks for the advice, John. I’ve tried that and it helps but, like you said, it’s a bit imprecise, and I really wish the penny whistle notes flowed more smoothly into each other. I realize what I really need is the ability to schedule Audio Source parameter value changes to precise DSP clock times. Then I could just use a looping single note and change the pitch at precise times, and could also squash the volume precisely before the next note sounds (for non-looping sounds). Maybe I should submit that as a request to the Unity audio gods.

  4. Thank you so much! This guide helped me a lot.
    I was beginning to think that it was impossible to play a piece of music in tempo with another one in Unity after reading so many wrong ways to do it. I wondered if that was the reason for people using audio middleware like FMOD. Now I see it can be done without FMOD. So… (I’m a big noob so don’t kill me for asking) why use FMOD? Is there something you can’t do with Unity alone?
    Thank you again and again

    1. You’re welcome! There are a lot of things that you can do with FMOD, and with Wwise too. Probably more than can be done in Unity alone, or at least more easily in some cases. However, Unity’s audio engine is (for now) built on top of FMOD, so in some ways it’s just a different way of interfacing with the same engine. I’d like to do an article in the future on working with the different integrations, so keep an eye out for that.

  5. Thank you so much for this.

    What I wanted to do was to play some BGM from other videogames, where there is an intro and then a loop that plays indefinitely.

    I tried to check the timestamps where the loop occurs in my Audio Clip, a beginning and an end. My idea was to play, wait until the end, then play again from the start point (a bit later than 0). The problem in the implementation was the use of a coroutine for waiting, so by the time the code was setting up the replay it was already too late and it sounded inaccurate.

    I had given up on the idea, keeping just the audio for the loop part, but I was really missing the intro. Then I found this article, which gave me the solution: using two AudioSources with PlayScheduled and SetScheduledEndTime, which I didn’t know about before.

    Here is the relevant part of the code, in case anyone finds it interesting.

    Note that currentData.iniLoop and currentData.endLoop represent the timestamps for the loop of the audio. I obtained them by carefully looking at the corresponding clip with Audacity and annotating those values.

    void Start()
    {
        sourceArray[0].clip = audio;
        sourceArray[1].clip = audio;

        // Start in a second
        nextStartTime = AudioSettings.dspTime + 1;

        clipIndex = 0;
        sourceArray[clipIndex].time = 0;
        sourceArray[clipIndex].PlayScheduled(nextStartTime);

        // Until what we consider the end
        nextStartTime += currentData.endLoop;
        sourceArray[clipIndex].SetScheduledEndTime(nextStartTime);
    }

    void Update()
    {
        if (AudioSettings.dspTime > nextStartTime - 1)
        {
            clipIndex = 1 - clipIndex; // Swap the clip

            // We'll play the source when the other finishes,
            // starting a little way into the clip
            sourceArray[clipIndex].time = currentData.iniLoop;
            sourceArray[clipIndex].PlayScheduled(nextStartTime);

            // Wait time is the interval established for the loop
            nextStartTime += currentData.endLoop - currentData.iniLoop;
            sourceArray[clipIndex].SetScheduledEndTime(nextStartTime);
        }
    }

  6. “Although it is possible to call PlayScheduled at the current DSP Time, which will cause the audio to start as soon as possible, one of the main benefits of using PlayScheduled is to avoid instantly preparing the sound for playback.

    For this reason, to avoid timing errors when scheduling the first Clip, I add a slight delay.”

    In the event that you do want an audio clip to play right away and then loop infinitely, like turning on a metronome click, would you suggest using audioSource.Play() for the initial sound and then using your scheduling code loop listed above? Or would it be wiser to start calculating your beat right from Start() and then, when the user turns on the metronome, find the next global beat and use PlayScheduled to schedule the first tick?

    1. Hi Dustin, it’s probably going to be better to keep it all synced using DSP time wherever possible. It is possible to schedule a clip immediately; I believe the risk is that the clip may not play at all, or that playback will be slightly delayed, although I haven’t actually encountered either issue myself. With that in mind, maybe start with no delay and introduce one if you run into issues.
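
      To sketch that second option (a rough illustration only; the names tickSource, bpm and dspStartTime are hypothetical, not from the article): keep a beat grid running on the DSP clock from Start(), then when the metronome is switched on, schedule the first tick on the next beat boundary.

      ```csharp
      // Sketch only: schedule the first metronome tick on the next global beat.
      // "tickSource", "bpm" and "dspStartTime" are assumed names.
      double beatDuration = 60d / bpm;

      // How far we are into the current beat, measured from when the clock started
      double remainder = (AudioSettings.dspTime - dspStartTime) % beatDuration;

      // The next beat boundary on the DSP timeline
      double nextBeatTime = AudioSettings.dspTime + (beatDuration - remainder);

      tickSource.PlayScheduled(nextBeatTime);
      ```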

  7. Great guide. I found it easy to understand, with all the details that I needed, and a nice flow through the guide.

  8. It finally worked 🙂 Thank you!!!
    I ran into a small problem tho’
    I was trying to play an intro Clip followed by a loop

    However, I was unable to assign the Audio Sources correctly,
    the option was missing in the editor…

    I tried to figure out what was different between that code and the other examples here that use arrays, so I tried just making the AudioSources public, and that solved everything for me.

    I’m a complete noob at the moment, so I don’t know if it was the right call…
    But it works 🙂


    1. Yeah, that would likely have been it. A public Audio Source reference is accessible to other scripts AND it means you can access it in the Inspector. You also have the option of keeping a variable private but showing it in the Inspector with SerializeField, but setting it as public will work just fine. Glad it helped.

  9. Hello John,

    thanks for this post. It’s been very helpful. I wanted to take it one step further, so I synced the playback of the current loop to the next one and crossfaded the transition, but when I trigger the “play on next bar”, the crossfade starts to act weird and I’m not sure what to do. Here’s my code. Cheers!

    void Start()
    {
        toggle = 0;
        nextClip = 0;

        startTime = AudioSettings.dspTime + 0.2;
        barDuration = 60d / 136 * 4;

        // Set the first clip and play it
        baseToPlay = baseClipArray[nextClip];
        baseSourceArray[toggle].clip = baseToPlay;
        baseSourceArray[toggle].PlayScheduled(startTime);
    }


    // Update is called once per frame
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Alpha1))
        {
            PlayClipAtNextBar();
        }
    }

    public void PlayClipAtNextBar()
    {
        // Increase the clip index number, reset if it runs out of clips
        nextClip = nextClip < baseClipArray.Length - 1 ? nextClip + 1 : 0;
        baseToPlay = baseClipArray[nextClip];

        // This line works out how far you are through the current bar
        remainder = ((double)baseSourceArray[toggle].timeSamples / baseSourceArray[toggle].clip.frequency) % barDuration;

        // This line works out when the next bar will occur
        nextBarTime = AudioSettings.dspTime + barDuration - remainder;

        // Set the current Clip to end on the next bar
        baseSourceArray[toggle].SetScheduledEndTime(nextBarTime);

        // Start the fade out
        StartCoroutine(StartFade(baseSourceArray[toggle], crossfade, 0f));

        // Get the current playback position
        currentPosition = baseSourceArray[toggle].timeSamples;

        // Switch AudioSource
        toggle = 1 - toggle;

        // Load the clip, set volume and playback position (so it matches the current clip's position)
        baseSourceArray[toggle].clip = baseToPlay;
        baseSourceArray[toggle].volume = 0f;
        baseSourceArray[toggle].timeSamples = currentPosition;

        // Start the fade in of the next clip and play it
        baseSourceArray[toggle].PlayScheduled(nextBarTime);
        StartCoroutine(StartFade(baseSourceArray[toggle], crossfade, 1f));
    }

    private IEnumerator StartFade(AudioSource audioSource, float duration, float targetVolume)
    {
        float currentTime = 0;
        float start = audioSource.volume;

        while (currentTime < duration)
        {
            currentTime += Time.deltaTime;
            audioSource.volume = Mathf.Lerp(start, targetVolume, currentTime / duration);
            yield return null;
        }

        yield break;
    }

    1. Hi, so there are 3 issues here that I can see:

      1. When you start the crossfade, you start it immediately, while the music transition waits for the next bar. Pass the remainder into the coroutine (cast it to a float) and use WaitForSeconds to delay the start of the fade.
      2. The scheduled end time of the first clip does not account for the fade duration (i.e. it stops when the transition starts). To fix it, pass in nextBarTime plus the crossfade duration.
      3. When you calculate the start time of the 2nd clip you’re using the time position at the point of calculation, not when the clip is scheduled to start. Either have both layers playing constantly, start them at the same time and fade between them (this is the more reliable method), or account for the offset by using the remainder value (e.g. something like remainder * audioSource.clip.frequency should give you a sample value which you can add to the one that you’re using).

      Hope that helps!
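
      For the first fix, a delayed-fade coroutine might look something like this (a sketch only; StartFadeDelayed is a hypothetical variant of the StartFade coroutine above, and remainder is the value calculated before the fade begins):

      ```csharp
      // Sketch: wait until the next bar before starting the fade.
      private IEnumerator StartFadeDelayed(AudioSource audioSource, float delay, float duration, float targetVolume)
      {
          // Delay so the fade lines up with the bar change
          yield return new WaitForSeconds(delay);

          float currentTime = 0;
          float start = audioSource.volume;
          while (currentTime < duration)
          {
              currentTime += Time.deltaTime;
              audioSource.volume = Mathf.Lerp(start, targetVolume, currentTime / duration);
              yield return null;
          }
      }

      // Called with the time remaining until the next bar:
      // StartCoroutine(StartFadeDelayed(baseSourceArray[toggle], (float)remainder, crossfade, 0f));
      ```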

  10. Hi,

    In the end I managed to make it work, but I forgot to reply here. Here’s what I did:

    // Prepare the clip
    baseToPlay = baseClipArray[clip];

    // This line works out how far you are through the current bar
    remainder = ((double)baseSourceArray[toggleBase].timeSamples / baseSourceArray[toggleBase].clip.frequency) % barDuration;

    // This line works out when the next bar will occur
    nextBarTime = AudioSettings.dspTime + barDuration - remainder;

    // This is for delaying the start of the fade in/out (so it starts right on the next bar)
    delay = (float)(barDuration - remainder);

    // Stop the current Clip at the end of 2 bars
    baseSourceArray[toggleBase].SetScheduledEndTime(nextBarTime + barDuration);

    // Start the fade out
    StartCoroutine(StartFadeOnNextBar(baseSourceArray[toggleBase], (float)barDuration, 0f));

    // Get the current playback position
    currentPosition = baseSourceArray[toggleBase].timeSamples;

    // Switch AudioSource
    toggleBase = 1 - toggleBase;

    // Load the clip, set volume and playback position (so it matches the current clip's position)
    baseSourceArray[toggleBase].timeSamples = currentPosition;
    baseSourceArray[toggleBase].clip = baseToPlay;
    baseSourceArray[toggleBase].volume = 0f;

    // Start the clip and fade it in
    baseSourceArray[toggleBase].PlayScheduled(nextBarTime - barDuration);
    StartCoroutine(StartFadeOnNextBar(baseSourceArray[toggleBase], crossfadeDuration, 1f));

  11. Hi John,

    If you want to .Pause a scheduled source but also pause its delay, so that when you .UnPause the scheduled source it resumes the delay from where it paused, how would you do that?

    For example :
    You call .PlayScheduled(AudioSettings.dspTime + 2); Let’s say the value of AudioSettings.dspTime on this frame is 98, so the sound will play at 100 (98 + 2).
    Then you wait 1 second and .Pause it. So the dspTime on this frame is 99. The dspTime will continue and go beyond 100. So when you .UnPause it, it will not wait another second before it starts playing; it will play immediately, because we are beyond 100.

    Would the only solution be to record a (double) value of the elapsed delay in Update() and reschedule the sound when you .UnPause it: .PlayScheduled(AudioSettings.dspTime + (2 - passedDelay))?

    Thanks a lot for help and advices on this !

    1. If you pause the audio listener, the timing of scheduled clips is also paused. Even though the DSP time value will appear to continue, any scheduled clips (and, as far as I remember, any delay you’ve added on) are paused as well and will operate on the adjusted timeline accordingly. Put simply, so long as you’re not using an out-of-time point of reference to schedule clips, when you use AudioListener.pause it should just work. Let me know if you’re still having issues with it.
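
      Put into code, the pause handling can be as simple as toggling the listener (a minimal sketch):

      ```csharp
      // Sketch: pausing the whole audio timeline, including scheduled clips.
      // Scheduled playback resumes in sync when unpaused.
      public void SetPaused(bool paused)
      {
          // Pauses/unpauses all Audio Sources and the scheduling timeline together
          AudioListener.pause = paused;
      }
      ```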

  12. Hi John, great article! It helped me a lot to get a grasp on what can be done with music in Unity. Now, I’m still unsure whether I can do what I was looking for. Maybe you have a clue:
    I have a music loop whose bpm and time signature I know, so I know exactly when a second music piece will enter. The problem is that even if the second AudioClip starts at the right time, I also need to stop playing the original loop at that same time :/
    I can schedule precisely with PlayScheduled, but if I stop the first clip when the second is scheduled, I get a gap that can last up to 1 bar of silence.. Any ideas? I have been looking at Unity’s docs but couldn’t find anything useful.. thanks beforehand! 🙂

    1. Use SetScheduledEndTime and pass in the start time of the next clip; then you will be able to stop one clip at the same time as starting the other. What I usually do, however, is not set the Audio Sources to loop; instead I just endlessly queue up clips to play once, either the same clip, to create a loop, or a new clip to change to a different part.
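
      As a minimal sketch of that first suggestion (the currentSource, nextSource, nextClip and timeToNextEntry names are assumed, not from the article):

      ```csharp
      // Sketch: end one clip at the exact moment the next one starts.
      double switchTime = AudioSettings.dspTime + timeToNextEntry;

      // Schedule the incoming clip
      nextSource.clip = nextClip;
      nextSource.PlayScheduled(switchTime);

      // Stop the current loop at the same DSP time, sample-accurately
      currentSource.SetScheduledEndTime(switchTime);
      ```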

  13. “I can precisely PlayScheduled, but if I stop playing when it’s scheduled, I have a gap that can last up to 1 bar of silence..” (sorry for the typos)

  14. Very helpful. I’ve used what you’ve got here to build a DJ for my game. Each clip I’ve tripled up: one as a fade in, one as a fade out, and one to loop. Asking the DJ to swap tracks (such as when a boss battle is about to start) waits for the bar to end, then starts fading the current track out and the new one in. Then the main loop runs. Thanks!

  15. Thanks for the article. Can I use a PlayScheduled-type method to call a method that isn’t audio related? If I do a planned dspTime check in Update it can be missed, because Update doesn’t fire on every dspTime tick. If I want a method to fire at the exact time an audio clip is started, do I have to do a check like if (AudioSettings.dspTime > 22.00)? That creates a limit in the precision.


    1. You’re welcome! As far as I know, there isn’t a way to do that (to fire other functions using the audio DSP time), so you may be stuck with Update.
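
      For completeness, the Update workaround might look like this (a sketch; eventTime, eventFired and OnBeatEvent are hypothetical names). It fires on the first frame at or after the target time, so it is frame-accurate at best, not sample-accurate:

      ```csharp
      // Sketch: trigger game logic as close as possible to a scheduled DSP time.
      void Update()
      {
          if (!eventFired && AudioSettings.dspTime >= eventTime)
          {
              eventFired = true;
              OnBeatEvent(); // hypothetical callback
          }
      }
      ```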

  16. Hi John, thanks for this article, it helped me a lot. Although I’m struggling to join several audio clips together; I’m using the code you provided but with no luck. The result I see is everything playing at the same time and it’s a mess.

    Here is the Code:

    private double startTime;
    private int toggle;
    private int nextClip;
    public AudioSource[] guitar1AudioSource;
    // currentSongToPlay is a Scriptable Object that contains all the audioClips in a List

    void Update()
    {
        if (AudioSettings.dspTime > startTime - 1)
        {
            // Check if there is an empty audio Clip
            if (currentSongToPlay.leadGuitarTrackList[nextClip].audioClip != null)
            {
                AudioClip clipToPlay = currentSongToPlay.leadGuitarTrackList[nextClip].audioClip;

                // Loads the next Clip to play and schedules when it will start
                guitar1AudioSource[toggle].clip = clipToPlay;
                guitar1AudioSource[toggle].PlayScheduled(startTime);

                // Checks how long the Clip will last and updates the Next Start Time with a new value
                double duration = (double)clipToPlay.samples / clipToPlay.frequency;
                startTime = startTime + duration;

                // Switches the toggle to use the other Audio Source next
                toggle = 1 - toggle;

                // Increase the clip index number, reset if it runs out of clips
                nextClip = nextClip < currentSongToPlay.leadGuitarTrackList.Count - 1 ? nextClip + 1 : 0;
            }
        }
    }



    1. If you haven’t already, try turning on debug mode in the Inspector, or set the time variables to public, including duration etc., just to make sure everything is working as it should.

      1. Hi John, thank you for the fast reply. It was my mistake with the “startTime” variable; it was set to “0” at the start instead of to AudioSettings.dspTime. Now it is working perfectly. This is my bible now; I read it every time I work on my music game. I’m making a game where you can make your own music by joining together different instrument loops of different durations and then playing that customized song like a Guitar Hero game. Your blog just saved my life and tons of hours of coding and research, plus headaches.

        1. this was super helpful to me:

          my mistake with the “startTime” variable; it was set to “0” at the start instead of to AudioSettings.dspTime.


  17. Hi John, I’m a beginner and just started to learn audio in Unity. The guide is very easy to understand and also all of the gamedevbeginner articles are also very easy to understand.

    I’m making an audio game right now with no visuals for my university project, and I want to make a script that continues the audio when the screen is clicked (mobile). Is PlayScheduled the correct method to use to achieve the result that I want?

    I’m searching for similar question/project but I can’t seem to find anything that talked about this. Really hope for the fast response. Thanks a lot!

    1. Thanks Patrick, glad my articles have been helpful. To answer your question, PlayScheduled is good for queueing up clips, but if all you want to do is not stop the audio, then you could probably do that with volume. I’m not 100% sure what you’re trying to make, though, so if you’d be happy to send more information to [email protected] I can try to help.

  18. Hi John,
    I was using a similar approach to queue clips. I was using PlayDelayed instead, and was just using intro.clip.length as the delay.

    I stumbled into some problems in some of my levels, where the loop would start “prematurely” and you’d hear the intro and loop play at the same time for a bit, like 0.5 seconds.
    Adding some delay to the start kind of fixed the issue, so I adapted to your approach, and I still had to add over 1 second of delay (AudioSettings.dspTime + 1) to the startTime. In some of my levels that use different intro -> loop sequences there are no problems. They all use Streaming as the loading type.

    Any ideas why this delay is necessary for queuing to work?

    1. This could be a number of things, but the streaming loading type could be one of them. First I’d change that to check whether it’s causing the issue; otherwise there might be a mistake elsewhere. Email me at [email protected] with more details and I can try to help.

  19. Hi John!

    I’ve been using the PlayScheduled the way this guide suggests.
    However, I’ve noticed that sometimes, in some of the levels of the game I’m developing, the transition from a song’s intro to its loop section is not perfect. Depending on the size of the scene and the length of the intro (I think), the loop starts 0.5-1 seconds too early. I had to add over 1.5s of start delay to the scheduled play of the intro to make the bug disappear. Any ideas why this is happening?

  20. Hi John, this is the best article I could find on PlayScheduled, so thank you for that!

    I’m surprised, though, that you don’t say anything about how to make it fit within the gameplay.

    For example, let’s say you want to change the color of an object on each beat; we’re still going to have to use something like:

    yield return new WaitForSeconds(beatDuration);
    //turn red
    yield return new WaitForSeconds(beatDuration);
    //turn green

    Or is there a better way to do it?

    1. Thank you! So regarding beat matching, you’re right. The general idea, as far as I understand it, is to use the DSP timeline as a master clock, against which you can check which beat you’re on, but anything you do as a result of being on one beat or another will still have to happen in Update, a coroutine, or one of the other frame-rate-dependent events.
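
      As a rough sketch of that master-clock idea (assuming hypothetical musicStartTime, bpm and lastBeat fields, set when playback was scheduled):

      ```csharp
      // Sketch: use the DSP timeline to work out the current beat each frame.
      void Update()
      {
          double beatDuration = 60d / bpm;
          int currentBeat = (int)((AudioSettings.dspTime - musicStartTime) / beatDuration);

          if (currentBeat != lastBeat)
          {
              lastBeat = currentBeat;
              // React to the new beat here (frame-rate dependent, as noted above)
          }
      }
      ```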

  21. Hi, thank you for the tutorial. I wanted to ask a question:

    I’m completely unable to get the audio source components for the audio source array,

    So I have both in the inspector:

    public AudioSource[] _aSourceMusic;
    int toggle;

    Then I get the audio Source component (from another GameObj whose reference I stored already as _musicManager):

    _aSourceMusic = _musicManager.GetComponent<AudioSource>();

    And this is giving me the following error:
    “Cannot implicitly convert type ‘UnityEngine.AudioSource’ to ‘UnityEngine.AudioSource[]’”

    If I try using the following for the array (the 2 audio sources we need for playing audio clips seamlessly)

    _aSourceMusic[0] = _musicManager.GetComponent<AudioSource>();
    _aSourceMusic[1] = _musicManager.GetComponent<AudioSource>();

    I get the same audio source for the 2 items of the array: There’s no way for me to distinguish between AudioSource1 and AudioSource2.

    So I would really appreciate if you could elaborate on how to get the 2 audio sources we need from code

    PS: I cannot set them in the Inspector because every time I reload a scene the references become null; that’s why I need to set them from code.

    1. Thanks! So to answer your question, you need to use GetComponents, which will get all of the components of a particular type and put them in an array, instead of GetComponent, which will only, always, get the first. So _aSourceMusic = _musicManager.GetComponents<AudioSource>(); should do what you need.

      Hope that helps!

  22. Hello,

    first of all, thanks for the article, I definitely learned something new today!

    I’m trying to schedule the audio clips one after another using AudioSource.PlayScheduled as you suggested but I’d like to start the next clip from a point somewhere in the middle, not from the beginning. To achieve that I tried to set the AudioSource.time or AudioSource.timeSamples field to the desired start point.

    It works correctly on Windows and on Android, but I am getting very weird behaviour in a WebGL build. The clip starts playing earlier than scheduled, so that it reaches the requested starting point at the scheduled time. Have you encountered anything like that, and do you have any idea how to work around it?

    The simplest code to reproduce the problem is the following:

    public void StartPlaying() {
    double scheduleTime = AudioSettings.dspTime + delay; // the exact time of start (e.g. after 5 s)
    audioSource.clip = clip;
    audioSource.time = fromTime; // setting the clip somewhere in the middle (e.g. 4 s)
    audioSource.PlayScheduled(scheduleTime); // setting the AudioSource to start playing after some delay
    }
    If I have the delay set to 5 and fromTime set to 4, then it starts playing after 1 s and from the beginning of the clip (so that after 5 s it is at the 4 s point).

    Unfortunately I couldn’t find anything related to that on the internet and I am completely out of ideas what to try next. Do you have any suggestion what could be the problem and how to solve it? I would be very grateful for any piece of advice.

    Thank you very much in advance for any help provided.

    1. Hi, so as I understand it, WebGL doesn’t use FMOD internally, whereas every other Unity platform does, which is probably why you’re getting inconsistent behaviour. WebGL also doesn’t support all of the audio APIs; PlayScheduled technically should be supported but, from my limited experience with WebGL, the results can be unpredictable. More information on WebGL audio can be found in Unity’s documentation. My best suggestion for fixing your problem is to try to isolate which calculation isn’t working properly on WebGL and work around it from there. Hope that’s some help.

  23. Hi John,

    great article, thanks for it!

    I want to be able to create “melodies” by playing clips containing a single note of the scale.
    I planned to use Play in the Update method according to deltaTime, but maybe PlayScheduled is a better option.
    What do you think ?

    Do you have any more tips for what I want to achieve?

    Thanks in advance for your response!

Leave a Reply

Your email address will not be published. Required fields are marked *