Implementing a MIDI player in Kotlin from scratch
In this series I’ll try to show you how to implement a tracker-like environment in pure Kotlin. The goal is to divide this into 3 parts:
- Getting familiar with MIDI protocol and its abstractions in the JVM standard library to implement a simple MIDI player using coroutines (this post)
- Introduce Open Sound Control (OSC), its advantages over MIDI, and use SuperCollider for precise timing, synth design and sample playback
- Discover interactivity possibilities with Kotlin Scripting.
Built-in Java MIDI support
Recently I discovered that the standard JVM library contains a feature-rich implementation of the MIDI protocol. We can grab a MIDI file from the web, and play it using the JVM with the following piece of code:
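The embedded snippet is not reproduced here, but a minimal sketch of the approach could look like this (the URL is a placeholder; the real code is linked below):

```kotlin
import java.net.URL
import javax.sound.midi.MidiSystem

// A minimal sketch: load a MIDI file from the web (placeholder URL) and play
// it with the JVM's built-in sequencer, which is wired to the default synthesizer.
fun main() {
    val sequencer = MidiSystem.getSequencer()
    sequencer.open()
    URL("https://example.com/song.mid").openStream().use { stream ->
        sequencer.sequence = MidiSystem.getSequence(stream)
    }
    sequencer.start()     // playback runs asynchronously
    Thread.sleep(30_000)  // keep the JVM alive while it plays
    sequencer.stop()
    sequencer.close()
}
```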
You should hear something like this:
You can find this code here. The goal of this post is to reimplement this player with pure Kotlin code using coroutines.
Reverse engineering the MIDI events
First of all, let’s try to reverse engineer the contents of a MIDI file. Starting with `sequencer.sequence` we can easily discover that:
- a MIDI file contains a `Sequence` of `Track`s
- each `Track` contains a sequence of `MidiEvent`s
- each `MidiEvent` has a `tick` (a timestamp of the event) and a `MidiMessage`
So let’s print it out:
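A sketch of such a dump, walking the `Sequence` obtained from `sequencer.sequence`:

```kotlin
import javax.sound.midi.Sequence

// A sketch: walk every track of the Sequence and print each event's tick and
// raw MidiMessage (whose default toString is not very helpful, as we'll see).
fun dump(sequence: Sequence) {
    sequence.tracks.forEachIndexed { i, track ->
        println("Track $i (${track.size()} events)")
        for (e in 0 until track.size()) {
            val event = track.get(e)
            println("  tick=${event.tick} message=${event.message}")
        }
    }
}
```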
Ok, so apart from discovering that `MidiMessage` doesn’t have a proper `toString` implementation, we can see something that is specified in the `MidiMessage` javadocs - that the events

> include not only the standard MIDI messages that a synthesizer can respond to, but also meta-events that can be used by sequencer programs

These must be `MetaMessage`s, so for now let’s focus on the standard MIDI messages - `ShortMessage`s:
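A sketch of the filtered dump, keeping only `ShortMessage`s and printing their command and data bytes:

```kotlin
import javax.sound.midi.Sequence
import javax.sound.midi.ShortMessage

// A sketch: keep only the ShortMessages and show their command/data bytes.
fun dumpShortMessages(sequence: Sequence) {
    sequence.tracks.forEach { track ->
        for (e in 0 until track.size()) {
            val event = track.get(e)
            val message = event.message
            if (message is ShortMessage) {
                println("tick=${event.tick} command=${message.command} " +
                        "data1=${message.data1} data2=${message.data2}")
            }
        }
    }
}
```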
OK, that’s better! We can now see that each event has a `command` and two `data` fields. This could be a good time to look at the MIDI specification - a nice brief is in this article. From this table we can discover that for playing notes the `NOTE ON` and `NOTE OFF` events are used, and their `data1` is the key number (MIDI note) and `data2` is the velocity of the sound:
Digging into the `ShortMessage` class, we can also find command codes for both `NOTE ON` and `NOTE OFF` messages:
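These command codes are exposed as constants on `ShortMessage`:

```kotlin
import javax.sound.midi.ShortMessage

// The command codes from the MIDI spec, as exposed by javax.sound.midi:
// NOTE_ON = 0x90 (144), NOTE_OFF = 0x80 (128)
fun main() {
    println(ShortMessage.NOTE_ON)
    println(ShortMessage.NOTE_OFF)
}
```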
Given this knowledge we can now interpret these events:
as:
- at tick 0 playing the key with note 57 (`A`) with velocity 64
- at tick 240 releasing the key with note 57
- at tick 240 playing the key with note 45 (lower `A`) with velocity 64
- at tick 480 releasing the key with note 45
Modeling the melody
OK, at this point we know how MIDI files are constructed, so it’s a good time to think about our own representation of a melody. I think it would be a good idea to “resolve” two issues with MIDI events:
- Each `NOTE ON` event must be “terminated” with a corresponding `NOTE OFF` event; this could cause problems when the `NOTE OFF` event is missing. A better idea would be to just have a note duration, as in sheet music notation
- Dealing with `tick`s might be good for machines, but it would be more readable if we just represent the time of the notes using `beat`s and calculate the song tempo in beats per minute (BPM).
Using these assumptions we can introduce the `Note` class as:
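The exact class lives in the linked repository; a sketch of the shape described above might be (field names here are my assumptions):

```kotlin
// A sketch of the Note model: time and duration in beats, pitch as a MIDI
// note number, plus the velocity. Field names are assumptions.
data class Note(
    val beat: Double,      // position in the song, in beats
    val midinote: Int,     // MIDI key number, e.g. 57 = A
    val duration: Double,  // how long the key is held, in beats
    val velocity: Int = 64
)
```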
To translate `tick`s to `beat`s we can just use the sequence `resolution` field, since most MIDI files are modeled using the `PPQ` division type:
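Under `PPQ`, `resolution` is the number of ticks per quarter note (one beat), so the conversion is a plain division - a sketch:

```kotlin
// With PPQ, resolution = ticks per quarter note (one beat),
// so beat = tick / resolution.
fun ticksToBeats(tick: Long, resolution: Int): Double =
    tick.toDouble() / resolution
```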
So, to translate MIDI events to a `List` of our `Note` objects, we can write this extension function:
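A sketch of such an extension function (the `Note` class is repeated so the snippet is self-contained). Note that a `NOTE ON` with velocity 0 conventionally means “note off”, so it is skipped as well:

```kotlin
import javax.sound.midi.ShortMessage
import javax.sound.midi.Track

// Note as sketched above, repeated here so this snippet is self-contained.
data class Note(val beat: Double, val midinote: Int, val duration: Double, val velocity: Int = 64)

// A sketch: pick the NOTE ON events (velocity 0 means "note off" by
// convention, so those are skipped) and emit Notes with a fixed 0.25 duration.
fun Track.toNotes(resolution: Int): List<Note> =
    (0 until size()).mapNotNull { i ->
        val event = get(i)
        val message = event.message
        if (message is ShortMessage &&
            message.command == ShortMessage.NOTE_ON && message.data2 > 0
        ) {
            Note(
                beat = event.tick.toDouble() / resolution,
                midinote = message.data1,
                duration = 0.25,
                velocity = message.data2
            )
        } else null
    }
```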
To simplify, I just set each `Note` duration to a constant `0.25` instead of calculating it by finding the corresponding `NOTE OFF` event.
Implementing the Player
We are now ready for a final part of implementation - a Player
. The most important thing for now it the timing - so let’s introduce helper Metronome
class to correctly transpose BPM
to milliseconds:
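A sketch of such a class (names are assumptions): one beat lasts `60000 / BPM` milliseconds.

```kotlin
// A sketch of the Metronome: for a given tempo, one beat lasts 60000 / BPM ms.
class Metronome(val bpm: Double = 120.0) {
    val millisPerBeat: Double get() = 60_000.0 / bpm
    fun beatToMillis(beat: Double): Long = (beat * millisPerBeat).toLong()
}
```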
So for example, for a common 120 `BPM` we should have 0.5 seconds per beat.
To implement a `Player` we’ll use coroutines, which allow us to write really simple code by just using the `delay` function to wait until the timestamp of the next note to play. This is really neat, as opposed to traditional multithreaded code, where you don’t want to block the running thread with `Thread.sleep` calls.
Then, to play all the notes, we just need to `schedule` each of them at its `beat` translated to millis from some starting time:
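A sketch of the scheduling idea (function and parameter names are assumptions): launch a coroutine that suspends, without blocking a thread, until the target wall-clock time.

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

// A sketch: suspend until startTime + millis (wall clock), then run the
// action. delay() with a non-positive argument returns immediately.
fun CoroutineScope.schedule(startTime: Long, millis: Long, action: () -> Unit) =
    launch {
        delay(startTime + millis - System.currentTimeMillis())
        action()
    }
```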
To play MIDI notes, we need to pass a `javax.sound.midi.Receiver` instance, which allows us to send `MidiMessage`s. We send `NOTE ON` immediately, and schedule `NOTE OFF` to play after the note’s `duration`:
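A sketch of that step, assuming the `Note` and `Metronome` shapes from the earlier sketches; the `-1` timestamp means “deliver immediately”:

```kotlin
import javax.sound.midi.Receiver
import javax.sound.midi.ShortMessage
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

// A sketch: send NOTE ON right away, then schedule the matching NOTE OFF
// after the note's duration (converted to millis by the Metronome).
fun CoroutineScope.play(receiver: Receiver, metronome: Metronome, note: Note) {
    receiver.send(ShortMessage(ShortMessage.NOTE_ON, 0, note.midinote, note.velocity), -1)
    launch {
        delay(metronome.beatToMillis(note.duration))
        receiver.send(ShortMessage(ShortMessage.NOTE_OFF, 0, note.midinote, 0), -1)
    }
}
```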
Summing it up, the code to play the first 16 beats of Giorgio by Moroder looks like this:
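The actual snippet lives in the linked repository; a hypothetical reconstruction of the idea, reusing only the alternating `A` (57) / lower `A` (45) pattern decoded earlier, could look like:

```kotlin
import javax.sound.midi.MidiSystem
import javax.sound.midi.ShortMessage
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// Hypothetical reconstruction: 16 beats alternating A (57) and lower A (45)
// at 120 BPM through the default synthesizer. The real note data is in the repo.
fun main() = runBlocking {
    val synthesizer = MidiSystem.getSynthesizer().apply { open() }
    val receiver = synthesizer.receiver
    val millisPerBeat = 500L                     // 120 BPM -> 500 ms per beat
    repeat(16) { beat ->
        val key = if (beat % 2 == 0) 57 else 45
        receiver.send(ShortMessage(ShortMessage.NOTE_ON, 0, key, 64), -1)
        launch {                                 // release after a quarter beat
            delay(millisPerBeat / 4)
            receiver.send(ShortMessage(ShortMessage.NOTE_OFF, 0, key, 0), -1)
        }
        delay(millisPerBeat)
    }
    synthesizer.close()
}
```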
We can also now very simply convert it into a looper, by just making these 16 notes a `bar` and playing it one after another. To make it possible, let’s pass the loop length to the `Metronome`:
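A sketch of the extended `Metronome` (names are assumptions): it additionally knows the bar length in beats, so the player can compute when the next repetition starts.

```kotlin
// A sketch: the Metronome now also knows the bar length in beats,
// so millisPerBar tells us when the next repetition of the bar starts.
class Metronome(val bpm: Double = 120.0, val beatsPerBar: Int = 16) {
    val millisPerBeat: Double get() = 60_000.0 / bpm
    fun beatToMillis(beat: Double): Long = (beat * millisPerBeat).toLong()
    val millisPerBar: Long get() = (beatsPerBar * millisPerBeat).toLong()
}
```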
And then let’s add a `playBar` function to our generic `Player`:
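A sketch of the looper, assuming the `Metronome` (with `beatToMillis` and `millisPerBar`) and the per-note `play` helper sketched earlier: play every note of the bar at its offset from the bar start, then re-invoke `playBar` one bar later.

```kotlin
import javax.sound.midi.Receiver
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

// A sketch: schedule every note of the bar relative to barStart, then
// schedule the next repetition of the same bar one bar length later.
fun CoroutineScope.playBar(
    receiver: Receiver,
    metronome: Metronome,
    notes: List<Note>,
    barStart: Long
) {
    notes.forEach { note ->
        launch {
            delay(barStart + metronome.beatToMillis(note.beat) - System.currentTimeMillis())
            play(receiver, metronome, note)  // NOTE ON now, NOTE OFF after duration
        }
    }
    launch {                                 // recurse into the next bar
        val nextBarStart = barStart + metronome.millisPerBar
        delay(nextBarStart - System.currentTimeMillis())
        playBar(receiver, metronome, notes, nextBarStart)
    }
}
```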
Then we just need to start the looper by playing the first bar:
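With the hypothetical names from the sketches above, kicking it off is just a fragment like:

```kotlin
// Start the looper: the first bar begins "now"; every later bar is
// scheduled relative to this starting timestamp.
runBlocking {
    playBar(receiver, metronome, notes, System.currentTimeMillis())
}
```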
Now we’re ready to play the final result, with the additional feature of adjusting the tempo. Here is an example with 90 BPM:
The complete code for `MidiSequencer` is here.
Connecting to a real synthesiser
Finally, using MIDI as an interface gives us an opportunity to connect to various software and hardware devices. If you don’t have a hardware synth, you can use, for example, the open-source Surge XT, which sounds pretty good.
ℹ️ For Linux users: you should install the Virtual MIDI kernel driver to trigger software synth events; see this link for detailed instructions.
To connect to a given MIDI device, we have to filter the `MidiSystem.getMidiDeviceInfo` list by description. And don’t forget to `open` the device.
Then we just need to pass this `receiver` to the `Player` instead of `synthesizer.receiver`. In this example I’m looking for `VirMIDI`, which was created by the Linux kernel driver:
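A sketch of that lookup (the helper name is an assumption): find the first device whose description mentions the given text, open it, and return its receiver.

```kotlin
import javax.sound.midi.MidiSystem
import javax.sound.midi.Receiver

// A sketch: find a MIDI device by a fragment of its description,
// open it (mandatory!) and return its receiver.
fun midiReceiver(description: String): Receiver {
    val info = MidiSystem.getMidiDeviceInfo()
        .first { it.description.contains(description) }
    val device = MidiSystem.getMidiDevice(info)
    device.open()
    return device.receiver
}
```

Then something like `midiReceiver("VirMIDI")` can be handed to the `Player`.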
Here is a sample session with SurgeXT:
It’s Kotlin sequencer playing a real synth, enjoy! 😍