The Web Audio API handles audio operations inside an audio context and has been designed to allow modular routing. The goal of this API is to include capabilities found in modern game audio engines and some of the mixing, processing, and filtering tasks found in modern desktop audio production applications. It can be used to add effects and filters to an audio source on the web; typical use cases include many sound effects playing nearly simultaneously. The Web Audio API does not replace the <audio> media element; rather, it complements it. A plain <audio loop> should work without any gaps, but it doesn't: there is a 50-200 ms gap on every loop, and the size of the gap varies by browser.

The API is built around a graph that routes one or more input sources to a destination. An audio context controls the creation of the nodes it contains and the execution of the audio processing or decoding. The AudioNode interface represents an audio-processing module such as an audio source (e.g. an <audio> element), the destination, or an intermediate processing module. Some of the node types used below:

- The AudioBufferSourceNode interface represents an audio source consisting of in-memory audio data stored in an AudioBuffer; the Web Audio API uses an AudioBuffer for short- to medium-length sounds.
- The ChannelSplitterNode interface separates the different channels of an audio source into a set of mono outputs.
- When creating a node with the createMediaStreamTrackSource() method, you specify which track to use.
- The ScriptProcessorNode is an AudioNode audio-processing module linked to two buffers, one containing the current input and one containing the output. Because its code runs on the main thread, it has poor performance. A sample that shows the ScriptProcessorNode in action appears at the end of this section.
- Using audio worklets, you can define custom audio nodes written in JavaScript or WebAssembly.

Using the Web Audio API, we can route our source to its destination through an AudioGainNode in order to manipulate the volume (an audio graph with a gain node). We can disconnect AudioNodes from the graph by calling node.disconnect(outputNumber).

What's implemented in the web-audio-api package:

- AudioContext (partially)
- AudioParam (almost there)
- AudioBufferSourceNode
- ScriptProcessorNode
- GainNode
- OscillatorNode
- DelayNode

Installation: npm install --save web-audio-api. A demo is included; get ready, this is going to blow up your mind.

The basic approach is to use XMLHttpRequest for fetching sound files. Depending on the use case there is a myriad of options, but we'll provide functionality to play/pause the sound, alter the track's volume, and pan it from left to right. Check out the final demo on CodePen, or see the source code on GitHub. To be able to do anything with the Web Audio API, we need to create an instance of the audio context, so let's start by creating one and then taking a look at our play and pause functionality. The following snippet creates an AudioContext; for older WebKit-based browsers, use the webkit prefix, as with webkitAudioContext.
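A minimal sketch (variable names are illustrative):

```js
// Fall back to the prefixed constructor on older WebKit-based browsers.
const AudioContextClass = window.AudioContext || window.webkitAudioContext;
const audioCtx = new AudioContextClass();
```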
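For the play/pause functionality, one possible sketch wraps an existing <audio> element in a MediaElementAudioSourceNode and toggles playback from a button; the element ids "track" and "play" are hypothetical, not from the original text:

```js
const audioCtx = new AudioContext();
const audioElement = document.getElementById('track'); // hypothetical id
const playButton = document.getElementById('play');    // hypothetical id

// Route the media element through the audio graph to the speakers.
const track = audioCtx.createMediaElementSource(audioElement);
track.connect(audioCtx.destination);

playButton.addEventListener('click', () => {
  // Autoplay policies leave a new AudioContext "suspended" until a user gesture.
  if (audioCtx.state === 'suspended') {
    audioCtx.resume();
  }
  if (audioElement.paused) {
    audioElement.play();
  } else {
    audioElement.pause();
  }
});
```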
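The "basic approach" of fetching sound files with XMLHttpRequest could look roughly like this: the response is decoded into an AudioBuffer with decodeAudioData and played through an AudioBufferSourceNode. The URL is a placeholder.

```js
const audioCtx = new AudioContext();

// Fetch a sound file, decode it into an AudioBuffer, and play it once.
function loadAndPlay(url) {
  const request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  request.onload = () => {
    audioCtx.decodeAudioData(request.response, (buffer) => {
      const source = audioCtx.createBufferSource(); // in-memory AudioBuffer source
      source.buffer = buffer;
      source.connect(audioCtx.destination);
      source.start();
    });
  };
  request.send();
}

loadAndPlay('sounds/effect.mp3'); // placeholder path
```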
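Routing a source through a gain node to manipulate the volume, plus a StereoPannerNode for the left/right panning mentioned above, might be sketched as follows; an oscillator stands in for the real source, and AudioGainNode is simply the older spec name for today's GainNode:

```js
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();       // stand-in source
const gainNode = audioCtx.createGain();
const panner = audioCtx.createStereoPanner();

// osc -> gain -> panner -> destination
osc.connect(gainNode);
gainNode.connect(panner);
panner.connect(audioCtx.destination);

gainNode.gain.value = 0.5;  // halve the volume
panner.pan.value = -1;      // -1 is full left, +1 is full right
osc.start();

// Nodes can be detached from the graph again later:
gainNode.disconnect();      // with no argument, disconnects every output
// node.disconnect(outputNumber) removes a single output, as noted above.
```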
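As a small sample of the ScriptProcessorNode in action (the interface is deprecated, and its main-thread callback is exactly why it performs poorly), this sketch copies the input buffer to the output buffer while adding faint noise; the effect itself is only illustrative:

```js
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();
// Buffer size 4096, one input channel, one output channel.
const scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);

scriptNode.onaudioprocess = (event) => {
  // Two linked buffers: one holds the current input, one the output to fill.
  const input = event.inputBuffer.getChannelData(0);
  const output = event.outputBuffer.getChannelData(0);
  for (let i = 0; i < input.length; i++) {
    output[i] = input[i] + (Math.random() * 2 - 1) * 0.02; // add a little noise
  }
};

osc.connect(scriptNode);
scriptNode.connect(audioCtx.destination);
osc.start();
```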
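For the audio worklet alternative, a custom node is usually split into a processor module and main-thread code that registers it. The file name white-noise-processor.js and the processor name are hypothetical:

```js
// white-noise-processor.js (hypothetical file): runs on the audio rendering thread.
class WhiteNoiseProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    for (const channel of outputs[0]) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1; // fill the output with white noise
      }
    }
    return true; // keep the processor alive
  }
}
registerProcessor('white-noise-processor', WhiteNoiseProcessor);
```

On the main thread, the module is loaded and the custom node is inserted into the graph like any other AudioNode:

```js
// Run inside a module script or an async function (addModule returns a promise).
const audioCtx = new AudioContext();
await audioCtx.audioWorklet.addModule('white-noise-processor.js');
const noiseNode = new AudioWorkletNode(audioCtx, 'white-noise-processor');
noiseNode.connect(audioCtx.destination);
```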