Tutorial on MIDI: Building Web-Based Audio Apps Controlled by MIDI Devices

Although the Web Audio API is gaining traction, particularly among developers creating HTML5 games, the Web MIDI API remains relatively unknown in the frontend development community. This obscurity likely stems from its limited browser support and the scarcity of accessible documentation. Currently, only Google Chrome supports the Web MIDI API, and only with a specific flag enabled. For browser vendors the API remains a low priority, as its integration into the ES7 standard is still only anticipated.

MIDI, an abbreviation for Musical Instrument Digital Interface, emerged in the early 1980s as a standardized communication protocol for electronic music devices, thanks to the collaborative efforts of music industry stakeholders. Despite the subsequent emergence of alternative protocols like OSC, MIDI has maintained its position as the dominant communication standard in the audio hardware industry for over three decades. It’s challenging to find a contemporary music producer without at least one MIDI device in their studio setup.

With the Web Audio API experiencing rapid development and widespread adoption, we are entering an era where browser-based applications can seamlessly bridge the gap between the digital realm and the tangible world. The Web MIDI API empowers us to not only craft synthesizers and audio effects but also to embark on the development of browser-based DAWs (Digital Audio Workstations). These DAWs possess the potential to rival their current Flash-based counterparts in both functionality and performance, as exemplified by Audiotool.

This MIDI tutorial aims to provide a foundational understanding of the Web MIDI API. Together, we’ll construct a straightforward monosynth that you can control using your preferred MIDI device. The complete source code for this project is accessible here, and a live demonstration is available for you to explore*. For those without a MIDI device, this tutorial remains accessible. By utilizing the ‘keyboard’ branch within the GitHub repository, you can simulate basic MIDI functionality using your computer keyboard to play notes and adjust octaves. This keyboard-controlled version is also featured in the live demo. It’s worth noting that due to hardware constraints, velocity and detune functionalities are deactivated when using your computer keyboard to interact with the synthesizer. A comprehensive key/note mapping is provided in the readme file on GitHub for your reference.

* Please note: The demo is no longer functional, as Heroku discontinued its free hosting services following the publication of this tutorial.


Essential Preparations for the MIDI Tutorial

Before delving into the tutorial, ensure you have the following:

  • Google Chrome (version 38 or higher) with the #enable-web-midi flag activated.
  • (Optional) A MIDI device, capable of triggering notes, connected to your computer.

To provide our application with a degree of organization, we’ll be employing Angular.js. Therefore, a basic familiarity with this framework is assumed.

Embarking on Our Journey

Our approach to building this MIDI application will be modular. We’ll divide it into three distinct modules:

  • WebMIDI: Responsible for managing the various MIDI devices linked to your computer.
  • WebAudio: Serving as the sound generation engine for our synthesizer.
  • WebSynth: Bridging the gap between the web interface and the audio engine.

User interaction with the web interface will be handled by an ‘App’ module. A visual representation of our application’s structure is shown below:

|- app
|-- js
|--- midi.js
|--- audio.js
|--- synth.js
|--- app.js
|- index.html
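
As a rough sketch, the modules could be declared and pulled together in app.js along these lines (the full dependency list is an assumption; the tutorial only shows the WebMIDI dependency explicitly):

// app.js – assumed wiring: WebAudio and WebSynth are listed here so the
// app's controller can inject services from all three modules
angular.module('DemoApp', ['WebMIDI', 'WebAudio', 'WebSynth']);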

Furthermore, the following libraries should be installed to aid in the construction of your application: Angular.js, Bootstrap, and jQuery. Utilizing Bower is likely the most efficient method for installation.

The WebMIDI Module: Forging a Connection with the Physical Realm

Let’s begin our exploration of MIDI by establishing a connection between our MIDI devices and the application. This will involve creating a basic factory that returns a single method. The navigator.requestMIDIAccess method, part of the Web MIDI API, is instrumental in establishing communication with our MIDI devices:

angular
    .module('WebMIDI', [])
    .factory('Devices', ['$window', function($window) {
        function _connect() {
            if($window.navigator && 'function' === typeof $window.navigator.requestMIDIAccess) {
                return $window.navigator.requestMIDIAccess();
            } else {
                throw 'No Web MIDI support';
            }
        }

        return {
            connect: _connect
        };
    }]);

And with that, the connection is established!

The requestMIDIAccess method returns a promise. We can directly return this promise and then manage its resolution within our app’s controller:

angular
    .module('DemoApp', ['WebMIDI'])
    .controller('AppCtrl', ['$scope', 'Devices', function($scope, devices) {
        $scope.devices = [];

        devices
            .connect()
            .then(function(access) {
                if('function' === typeof access.inputs) {
                    // deprecated
                    $scope.devices = access.inputs();
                    console.error('Update your Chrome version!');
                } else {
                    if(access.inputs && access.inputs.size > 0) {
                        var inputs = access.inputs.values(),
                            input = null;

                        // iterate through the devices
                        for (input = inputs.next(); input && !input.done; input = inputs.next()) {
                            $scope.devices.push(input.value);
                        }
                    } else {
                        console.error('No devices detected!');
                    }

                }
            })
            .catch(function(e) {
                console.error(e);
            });
    }]);

As previously mentioned, the requestMIDIAccess method returns a promise, which, upon fulfillment, passes an object containing two properties to the then method: inputs and outputs.

In earlier versions of Chrome, these properties were methods that directly returned arrays of input and output devices. In recent builds they are map-like objects instead, so we must call the values method on inputs or outputs to obtain the device list. The values method returns an iterator over the devices. Given that this API is intended for inclusion in ES7, iterator-based access is a logical choice, even if it adds a little complexity compared to the original approach.

The size property of the inputs map gives us the device count. If at least one device is detected, we step through the results by repeatedly calling the iterator's next method, pushing each device onto an array defined on the $scope. On the frontend, we can create a basic select box listing all available input devices, letting us designate the active device for controlling our web synth:

<select ng-model="activeDevice" class="form-control" ng-options="device.manufacturer + ' ' + device.name for device in devices">
    <option value="" disabled>Choose a MIDI device...</option>
</select>

This select box is bound to a $scope variable named activeDevice, which we’ll later employ to link the selected device to our synthesizer.

(Diagram: connecting the active device to the synth.)

The WebAudio Module: Generating Sound

The WebAudio API empowers us to not only work with audio files but also to synthesize sounds by replicating fundamental components of synthesizers, including oscillators, filters, and gain nodes amongst others.

Creating an Oscillator

Oscillators are tasked with producing waveforms. While numerous waveform types exist, the WebAudio API supports four primary ones: sine, square, triangle, and sawtooth. These waveforms “oscillate” at specific frequencies. When within the audible range of human hearing, these oscillations are perceived as sound. It’s also possible to define custom wavetables for specialized needs. Oscillators oscillating at low frequencies can be used to create LFOs (“low-frequency oscillators”), enabling sound modulation, a topic we won’t delve into in this particular tutorial.
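
To make the idea concrete before wrapping anything in Angular services, here is a minimal, framework-free sketch that plays a one-second 440 Hz sine tone; it uses only standard Web Audio calls and is not part of the tutorial's codebase:

// a throwaway example: one oscillator wired straight to the speakers
var ctx = new (window.AudioContext || window.webkitAudioContext)();
var osc = ctx.createOscillator();
osc.type = 'sine';             // also try 'square', 'triangle' or 'sawtooth'
osc.frequency.value = 440;     // concert A
osc.connect(ctx.destination);
osc.start(0);
osc.stop(ctx.currentTime + 1); // stop after one second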

To initiate sound creation, we begin by instantiating a new AudioContext:

function _createContext() {
    self.ctx = new $window.AudioContext();
}

With our AudioContext, we can instantiate any of the components the WebAudio API offers. Since we might need multiple instances of these components, it’s wise to establish services to generate new, unique instances as needed. Let’s start by crafting a service dedicated to creating new oscillators:

angular
    .module('WebAudio', [])
    .service('OSC', function() {
        var self;

        function Oscillator(ctx) {
            self = this;
            self.osc = ctx.createOscillator();

            return self;
        }
    });

Now we can freely instantiate oscillators, providing the previously created AudioContext instance as an argument. For convenience, we’ll add some wrapper methods—purely for syntactic sugar—and return the Oscillator function:

Oscillator.prototype.setOscType = function(type) {
    if(type) {
        self.osc.type = type;
    }
};

Oscillator.prototype.setFrequency = function(freq, time) {
    self.osc.frequency.setTargetAtTime(freq, 0, time);
};

Oscillator.prototype.start = function(pos) {
    self.osc.start(pos);
};

Oscillator.prototype.stop = function(pos) {
    self.osc.stop(pos);
};

Oscillator.prototype.connect = function(i) {
    self.osc.connect(i);
};

Oscillator.prototype.cancel = function() {
    self.osc.frequency.cancelScheduledValues(0);
};

return Oscillator;

Creating a Multipass Filter and a Volume Control

Our rudimentary audio engine requires two additional components: a multipass filter, allowing us to shape our sound, and a gain node, enabling volume control and muting capabilities. We can create these components using the same approach employed for the oscillator: crafting services that return functions equipped with wrapper methods. The AudioContext instance is supplied, and the appropriate method is invoked.

Invoking the createBiquadFilter method of the AudioContext instance allows us to create a filter:

ctx.createBiquadFilter();

Similarly, for a gain node, the createGain method is used:

ctx.createGain();
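
To give a fuller picture, here is a hedged sketch of what the gain node's wrapper service might look like, mirroring the Oscillator pattern; the method names follow how the engine uses them later (setVolume, connect, disconnect, cancel), but the exact bodies are assumptions rather than the tutorial's verbatim code:

angular
    .module('WebAudio')
    .service('AMP', function() {
        var self;

        function Amp(ctx) {
            self = this;
            self.gain = ctx.createGain();

            return self;
        }

        Amp.prototype.setVolume = function(volume, time) {
            // ease towards the target volume, with `time` acting as the time constant
            self.gain.gain.setTargetAtTime(volume, 0, time);
        };

        Amp.prototype.connect = function(i) {
            self.gain.connect(i);
        };

        Amp.prototype.disconnect = function() {
            self.gain.disconnect();
        };

        Amp.prototype.cancel = function() {
            self.gain.gain.cancelScheduledValues(0);
        };

        return Amp;
    });

The filter wrapper follows the same shape; its setFilterType, setFilterFrequency, and setFilterResonance methods appear later in this tutorial.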

The WebSynth Module: Connecting the Dots

With our components in place, we’re nearing the point where we can construct our synth interface and link MIDI devices to our audio source. The initial step involves interconnecting our audio engine and preparing it to receive MIDI notes. Connecting the audio engine is a matter of instantiating the necessary components and then chaining them together using the connect method available to instances of each component. The connect method takes a single argument: the component to which you wish to connect the current instance. This method facilitates the creation of more elaborate component chains, allowing for scenarios like cross-fading and beyond.

self.osc1 = new Oscillator(self.ctx);
self.osc1.setOscType('sine');
self.amp = new Amp(self.ctx);

self.osc1.connect(self.amp.gain);

self.amp.connect(self.ctx.destination);
self.amp.setVolume(0.0, 0); //mute the sound
// later, the engine's _disconnectFilter() method bypasses the filter
// and wires the amp straight to the destination:
function _disconnectFilter() {
    self.filter1.disconnect();
    self.amp.disconnect();
    self.amp.connect(self.ctx.destination);
}

We’ve successfully established the internal connections of our audio engine. Feel free to experiment with different wiring configurations; however, exercise caution with the volume to protect your hearing. Now, let’s integrate the MIDI interface into our application and enable the transmission of MIDI messages to our audio engine. We’ll establish a watcher on the device select box to simulate the “plugging in” of our MIDI device to the synth. This watcher will listen for incoming MIDI messages from the device and relay this information to the audio engine:

// in the app's controller
$scope.$watch('activeDevice', DSP.plug);

// in the synth module
function _onmidimessage(e) {
    /**
    * e.data is an array
    * e.data[0] = on (144) / off (128) / detune (224)
    * e.data[1] = midi note
    * e.data[2] = velocity || detune
    */
    switch(e.data[0]) {
        case 144:
            Engine.noteOn(e.data[1], e.data[2]);
            break;
        case 128:
            Engine.noteOff(e.data[1]);
            break;
    }

}

function _plug(device) {
    self.device = device;
    self.device.onmidimessage = _onmidimessage;
}

In this code snippet, we are actively listening for MIDI events originating from the device. The data embedded within the MidiEvent Object is analyzed, and this information is subsequently passed to the appropriate method, either noteOn or noteOff. This decision is made based on the event code (144 for noteOn, 128 for noteOff). We can now add the necessary logic to the corresponding methods within the audio module to trigger sound generation:

function _noteOn(note, velocity) {
    self.activeNotes.push(note);

    self.osc1.cancel();
    self.currentFreq = _mtof(note);
    self.osc1.setFrequency(self.currentFreq, self.settings.portamento);

    self.amp.cancel();

    self.amp.setVolume(1.0, self.settings.attack);
}

function _noteOff(note) {
    var position = self.activeNotes.indexOf(note);
    if (position !== -1) {
        self.activeNotes.splice(position, 1);
    }

    if (self.activeNotes.length === 0) {
        // shut off the envelope
        self.amp.cancel();
        self.currentFreq = null;
        self.amp.setVolume(0.0, self.settings.release);
    } else {
        // in case another note is pressed, we set that one as the new active note
        self.osc1.cancel();
        self.currentFreq = _mtof(self.activeNotes[self.activeNotes.length - 1]);
        self.osc1.setFrequency(self.currentFreq, self.settings.portamento);
    }
}

Let’s dissect what happens in this code. The noteOn method starts by adding the current note to an array dedicated to storing notes. Although our focus is on building a monosynth, capable of playing a single note at any given time, it’s possible to press multiple keys simultaneously. Therefore, these notes need to be queued, ensuring that when a key is released, the subsequent note in the queue is played. Next, we cancel any scheduled frequency changes on the oscillator and assign the new frequency, converting the MIDI note number (ranging from 0 to 127) into an actual frequency in hertz. This conversion is accomplished with a bit of math:

function _mtof(note) {
    return 440 * Math.pow(2, (note - 69) / 12);
}
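
A couple of reference points for this conversion: MIDI note 69 is concert A, and moving twelve notes up or down doubles or halves the frequency:

_mtof(69); // 440      – A4, the reference pitch
_mtof(81); // 880      – one octave higher
_mtof(60); // ≈ 261.63 – middle C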

Shifting our attention to the noteOff method, we first locate the corresponding note within the array of active notes and remove it. If this note was the sole entry in the array, we simply mute the volume. Otherwise, the most recently pressed note still held down becomes the new active note and the oscillator glides to its frequency.

The setVolume method’s second argument governs the transition time. In musical terms, this equates to the attack time when a note is triggered and the release time when a note is released.
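
To illustrate with made-up numbers (the real values come from the attack and release sliders we add later, divided by 1000 to convert milliseconds into seconds):

// illustrative values only – not taken verbatim from the tutorial code
self.settings = { attack: 0.05, portamento: 0.05, release: 0.2 };

self.amp.setVolume(1.0, self.settings.attack);  // key pressed: ramp up with a 50 ms time constant
self.amp.setVolume(0.0, self.settings.release); // last key released: fade out with a 200 ms time constant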

The WebAnalyser Module: Visualizing Sound

To enhance our synth’s capabilities further, we can incorporate an analyzer node. This node grants us the ability to visually represent our sound’s waveform using canvas for rendering. The creation of an analyzer node is slightly more involved compared to other AudioContext objects, as it requires the instantiation of a scriptProcessor node to perform the analysis. We begin by selecting the canvas element from the DOM:

function Analyser(canvas) {
    self = this;

    self.canvas = angular.element(canvas) || null;
    self.view = self.canvas[0].getContext('2d') || null;
    self.javascriptNode = null;
    self.analyser = null;

    return self;
}

Next, we introduce a connect method, within which we’ll create both the analyzer and the script processor:

Analyser.prototype.connect = function(ctx, output) {
    // setup a javascript node
    self.javascriptNode = ctx.createScriptProcessor(2048, 1, 1);
    // connect to destination, else it isn't called
    self.javascriptNode.connect(ctx.destination);

    // setup an analyzer
    self.analyser = ctx.createAnalyser();
    self.analyser.smoothingTimeConstant = 0.3;
    self.analyser.fftSize = 512;

    // connect the output to the destination for sound
    output.connect(ctx.destination);
    // connect the output to the analyser for processing
    output.connect(self.analyser);

    self.analyser.connect(self.javascriptNode);

    // define the colors for the graph
    var gradient = self.view.createLinearGradient(0, 0, 0, 200);
    gradient.addColorStop(1, '#000000');
    gradient.addColorStop(0.75, '#ff0000');
    gradient.addColorStop(0.25, '#ffff00');
    gradient.addColorStop(0, '#ffffff');

    // when the audio process event is fired on the script processor
    // we get the frequency data into an array
    // and pass it to the drawSpectrum method to render it in the canvas
    self.javascriptNode.onaudioprocess = function() {
        // copy the analyser's current frequency data into a byte array
        var array = new Uint8Array(self.analyser.frequencyBinCount);
        self.analyser.getByteFrequencyData(array);

        // clear the current state
        self.view.clearRect(0, 0, 1000, 325);

        // set the fill style
        self.view.fillStyle = gradient;
        drawSpectrum(array);
    }
};

Our first step is to create a scriptProcessor object and link it to the destination. Subsequently, we create the analyzer itself, feeding it with the audio output from either the oscillator or filter. It’s crucial to note that we must still connect the audio output to the destination to ensure audibility. Additionally, we define the gradient colors for our graph by invoking the createLinearGradient method associated with the canvas element.

The scriptProcessor fires an ‘audioprocess’ event at regular intervals. On each event, we read the frequency data captured by the analyser into a byte array, clear the canvas, and redraw the frequency graph by calling the drawSpectrum method:

function drawSpectrum(array) {
    for (var i = 0; i < (array.length); i++) {
        var v = array[i],
        h = self.canvas.height();

        self.view.fillRect(i * 2, h - (v - (h / 4)), 1, v + (h / 4));
    }
}

Lastly, we need to make adjustments to our audio engine’s wiring to accommodate this new component:

// in the _connectFilter() method
if(self.analyser) {
    self.analyser.connect(self.ctx, self.filter1);
} else {
    self.filter1.connect(self.ctx.destination);
}

// in the _disconnectFilter() method
if(self.analyser) {
    self.analyser.connect(self.ctx, self.amp);
} else {
    self.amp.connect(self.ctx.destination);
}

We now have a functional visualiser that dynamically displays the waveform of our synth! Although the setup required a bit of effort, the results are insightful, particularly when experimenting with filters.
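
For completeness, here is a hedged sketch of how the analyser might be instantiated inside the synth module, assuming the canvas element with id ‘analyser’ from the interface we build below; the actual connect calls are the ones shown in the wiring above:

// assumed initialisation code – the Analyser wraps the canvas element,
// and gets connected whenever the filter is switched on or off
self.analyser = new Analyser(document.getElementById('analyser'));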

Expanding Our Synth’s Horizons: Adding Velocity & Detune

Our synth is quite impressive, but it currently plays all notes at a uniform volume. This stems from our use of a fixed volume value of 1.0 instead of incorporating velocity data. Let’s rectify this and then explore how to enable the detune wheel commonly found on MIDI keyboards.

Enabling Velocity

For those unfamiliar with the concept of velocity, it essentially refers to how forcefully a key is pressed on a keyboard. The velocity value influences the perceived loudness or softness of the generated sound.

Within our MIDI tutorial synth, we can simulate this behavior by manipulating the gain node’s volume. This requires some mathematical calculation to convert the MIDI data into a float value between 0.0 and 1.0, suitable for the gain node:

function _vtov (velocity) {
    return parseFloat((velocity / 127).toFixed(2));
}

MIDI devices report velocity in the range 0 to 127. Dividing by 127 gives us a value between 0.0 and 1.0, rounded to two decimal places (toFixed returns a string, so we parse it back into a number). Now, let’s modify the _noteOn method to pass this calculated value to the gain node:

self.amp.setVolume(_vtov(velocity), self.settings.attack);

And there you have it! Our synth now responds to key pressure, with volume variations reflecting how hard or soft keys are pressed.
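
As a quick sanity check of the conversion, here is what the _vtov helper above returns for a few velocities:

_vtov(127); // 1    – a full-force strike
_vtov(64);  // 0.5  – a medium strike
_vtov(1);   // 0.01 – barely audible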

Enabling the Detune Wheel on your MIDI Keyboard

The detune wheel, a feature found on most MIDI keyboards, provides the ability to subtly adjust the frequency of a note, resulting in an effect known as ‘detune’. Implementing this functionality is fairly straightforward in our context, as the detune wheel also emits a MidiMessage event with its unique event code (224). We can listen for this event and respond by recalculating the frequency value and updating the oscillator accordingly.

First, let’s equip our synth to capture this event. We’ll add an additional case to the switch statement we established within the _onmidimessage callback:

case 224:
    // the detune value is the third argument of the MidiEvent.data array
    Engine.detune(e.data[2]);
    break;

Next, we define the detune method within our audio engine:

function _detune(d) {
    if(self.currentFreq) {
        //64 = no detune
        if(64 === d) {
            self.osc1.setFrequency(self.currentFreq, self.settings.portamento);
            self.detuneAmount = 0;
        } else {
            var detuneFreq = Math.pow(2, 1 / 12) * (d - 64);
            self.osc1.setFrequency(self.currentFreq + detuneFreq, self.settings.portamento);
            self.detuneAmount = detuneFreq;
        }
    }
}

The default detune value is set at 64, signifying no detune. In this scenario, we pass the current frequency directly to the oscillator.

Finally, we need to update the _noteOff method to factor in the detune value in case other notes are queued:

self.osc1.setFrequency(self.currentFreq + self.detuneAmount, self.settings.portamento);

Creating the Interface

Thus far, our interface consists solely of a select box for choosing the MIDI device and a waveform visualiser. We lack the ability to directly manipulate the sound through interaction with the web page. Let’s construct a basic interface using standard form elements and link them to our audio engine.

Creating a Layout for the Interface

Our interface will incorporate various form elements for controlling our synth’s sound:

  • A radio button group to select the oscillator type.
  • A checkbox to enable or disable the filter.
  • A radio button group for filter type selection.
  • Two range sliders to adjust the filter’s frequency and resonance.
  • Two range sliders to fine-tune the attack and release characteristics of the gain node.

Here’s an example of an HTML document for our interface:

<div class="synth container" ng-controller="WebSynthCtrl">
    <h1>webaudio synth</h1>
    <div class="form-group">
        <select ng-model="activeDevice" class="form-control" ng-options="device.manufacturer + ' ' + device.name for device in devices">
            <option value="" disabled>Choose a MIDI device...</option>
        </select>
    </div>
    <div class="col-lg-6 col-md-6 col-sm-6">
        <h2>Oscillator</h2>
        <div class="form-group">
            <h3>Oscillator Type</h3>
            <label ng-repeat="t in oscTypes">
                <input type="radio" name="oscType" ng-model="synth.oscType" value="{{t}}" ng-checked="'{{t}}' === synth.oscType" />
                {{t}} 
            </label>
        </div>
        <h2>Filter</h2>
        <div class="form-group">
            <label>
                <input type="checkbox" ng-model="synth.filterOn" />
                enable filter
            </label>
        </div>
        <div class="form-group">
            <h3>Filter Type</h3>
            <label ng-repeat="t in filterTypes">
                <input type="radio" name="filterType" ng-model="synth.filterType" value="{{t}}" ng-disabled="!synth.filterOn" ng-checked="synth.filterOn && '{{t}}' === synth.filterType" />
                {{t}} 
            </label>
        </div>
        <div class="form-group">
            <!-- frequency -->
            <label>filter frequency:</label>
            <input type="range" class="form-control" min="50" max="10000" ng-model="synth.filterFreq" ng-disabled="!synth.filterOn" />
        </div>
        <div class="form-group">
            <!-- resonance -->
            <label>filter resonance:</label>
            <input type="range" class="form-control" min="0" max="150" ng-model="synth.filterRes" ng-disabled="!synth.filterOn" />
        </div>
    </div>
    <div class="col-lg-6 col-md-6 col-sm-6">
        <div class="panel panel-default">
            <div class="panel-heading">Analyser</div>
            <div class="panel-body">
                <!-- frequency analyser -->
                <canvas id="analyser"></canvas>
            </div>
        </div>
        <div class="form-group">
            <!-- attack -->
            <label>attack:</label>
            <input type="range" class="form-control" min="50" max="2500" ng-model="synth.attack" />
        </div>
        <div class="form-group">
            <!-- release -->
            <label>release:</label>
            <input type="range" class="form-control" min="50" max="1000" ng-model="synth.release" />
        </div>
    </div>
</div>

While aesthetic embellishment is beyond the scope of this basic MIDI tutorial, feel free to enhance the user interface further. Here’s an example of a more polished look:

(Image: a more polished MIDI user interface.)

Binding the Interface to the Audio Engine

We need to define a set of methods to establish the connection between these controls and our audio engine.

Controlling the Oscillator

A single method is all that’s needed to manage the oscillator type:

Oscillator.prototype.setOscType = function(type) {
    if(type) {
        self.osc.type = type;
    }
}

Controlling the Filter

Three controls are required for the filter: one for type selection, one for frequency adjustment, and one for resonance control. We can also link the _connectFilter and _disconnectFilter methods to the state of the checkbox.

Filter.prototype.setFilterType = function(type) {
    if(type) {
        self.filter.type = type;
    }
}
Filter.prototype.setFilterFrequency = function(freq) {
    if(freq) {
        self.filter.frequency.value = freq;
    }
}
Filter.prototype.setFilterResonance = function(res) {
    if(res) {
        self.filter.Q.value = res;
    }
}
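
The checkbox itself can be tied to a small toggle in the audio engine; the following is a hedged sketch, with the method name chosen to match the DSP.enableFilter watcher we set up below:

function _enableFilter(enabled) {
    if(enabled) {
        _connectFilter();    // route the signal through the filter
    } else {
        _disconnectFilter(); // bypass the filter entirely
    }
}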

Controlling the Attack and Release

To add depth to our sound, we can manipulate the attack and release parameters of the gain node. This necessitates two methods:

function _setAttack(a) {
    if(a) {
        self.settings.attack = a / 1000;
    }
}

function _setRelease(r) {
    if(r) {
        self.settings.release = r / 1000;
    }
}

Setting Up Watchers

The final step involves setting up a few watchers within our app’s controller and binding them to the methods we’ve defined. Angular passes the watched expression’s new value as the first argument to each listener, which is exactly what our setter methods expect:

$scope.$watch('synth.oscType', DSP.setOscType);
$scope.$watch('synth.filterOn', DSP.enableFilter);
$scope.$watch('synth.filterType', DSP.setFilterType);
$scope.$watch('synth.filterFreq', DSP.setFilterFrequency);
$scope.$watch('synth.filterRes', DSP.setFilterResonance);
$scope.$watch('synth.attack', DSP.setAttack);
$scope.$watch('synth.release', DSP.setRelease);

Conclusion

This MIDI tutorial has covered a substantial amount of ground, primarily focusing on demystifying the WebMIDI API—an API with limited documentation beyond the official W3C specification. Google Chrome’s implementation is relatively straightforward, though the transition to an iterator object for handling input and output devices might require some refactoring of legacy code.

The WebAudio API, in contrast, is very well-documented, especially on resources like the Mozilla Developer Network. It’s a rich API, and we’ve only scratched the surface of its capabilities. The Mozilla Developer Network is an invaluable resource, offering a wealth of code examples and detailed breakdowns of arguments and events for each component, aiding you in building custom, browser-based audio applications.

The continued evolution of both APIs promises to unlock exciting possibilities for JavaScript developers. We’re on the cusp of an era where fully-fledged, browser-based DAWs can rival their Flash-based predecessors. For desktop developers, tools like node-webkit (https://github.com/rogerwang/node-webkit) open doors to creating cross-platform applications. This progress has the potential to foster a new generation of music tools, empowering audiophiles by bridging the gap between the physical and digital realms.

Licensed under CC BY-NC-SA 4.0