
What is MIDI? The Essential Guide

Friday, June 7, 2019, 13:54, by Sweetwater inSync
MIDI is more important than ever — and you need to know what it can do for you.
Computers are totally integrated into our lives, from desktops to smartphones to musical instruments. Personal computers have always had a language for working with numbers and the alphabet so you could do spreadsheets and write letters. But they didn’t have a language for music — and that’s what MIDI is all about.
Here’s an analogy. To print a letter with a computer, you type on its QWERTY keyboard. This sends data to your computer that corresponds to the letters you type, using a computer language called ASCII (American Standard Code for Information Interchange). This standardized set of codes represents letters, numerals, and symbols. Because the computer speaks ASCII, a word processor can display those letters on your screen. And because your printer speaks ASCII, the computer can send data from the word processor to the printer, which will print out your letter.
MIDI (Musical Instrument Digital Interface) is also a language — a set of standardized codes that represent musical parameters, like pitch, dynamics, tempo, and much more. It works similarly to the ASCII example given above: you play notes on your MIDI-compatible keyboard, which sends data to the computer that corresponds to what you play. A recording or notation program recognizes what those notes are and displays them on the screen. You can then send MIDI data from the computer to a MIDI-compatible tone generator (think of it as a printer for music), which reproduces what you originally played on your keyboard. You can also use MIDI in real time, by connecting your keyboard directly to a MIDI-compatible tone generator.
It’s important to remember that MIDI isn’t audio; it’s data. MIDI creates no sounds; it’s a computer language that triggers sounds. It’s like the piano roll in a player piano — the piano roll itself doesn’t create sound. It only triggers notes on the piano, which, like a tone generator, plays the actual sounds.
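To make the "data, not audio" point concrete, here's a sketch (in Python, purely for illustration — the helper names are made up) of what pressing and releasing a key actually sends down a MIDI cable: three bytes per event, nothing more.

```python
# A MIDI note-on message is just three bytes of data, not audio.
# channel is 0-15 on the wire, note and velocity are 0-127
# (middle C is note number 60).

def note_on(channel, note, velocity):
    """Build the 3-byte MIDI note-on message for one key press."""
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Releasing the key sends a note-on with velocity 0 (a common
    note-off variant)."""
    return bytes([0x90 | channel, note, 0])

# Pressing middle C fairly hard on channel 1 ("channel 1" is 0 on the wire):
msg = note_on(0, 60, 100)
print(msg.hex())  # -> 903c64
```

A tone generator receiving those three bytes decides what the note actually sounds like — the message itself carries no sound at all.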
The MIDI Origin Story
Every superhero needs an origin story, and MIDI is no different. Back in the ’80s, synthesizers were becoming increasingly affordable and popular. But every time you bought a synthesizer, a keyboard came along with it. While this led to some visually impressive stage setups, let’s face it — you have only two hands. To avoid this kind of wasteful (and expensive) redundancy, Dave Smith and Chet Wood presented a paper to the Audio Engineering Society in 1981 on a “Universal Synthesizer Interface,” which later became the basis for MIDI as we know it today. The concept was simple: you could have a single, master keyboard that generated data corresponding to what you played. This data could then feed a tone generator that understood the data it was receiving and could therefore produce sounds that corresponded to what you played. What’s more, you could trigger several tone generators simultaneously to layer sounds or buy some great new tone generator without having to change keyboard controllers. This streamlined stage setups dramatically, brought down the price of synthesizers, and opened up new possibilities for musicians.
In addition to defining a language, MIDI also needed hardware that could send and receive MIDI data, including a specification for a cable that could patch a MIDI controller (like a keyboard) to a MIDI tone generator. Fortunately, the entire music industry saw the virtues of MIDI and agreed on a simple hardware interface that cost only about $2 to add to gear like keyboards. Manufacturers figured that if this “new MIDI thing” was successful, it was $2 well spent — and if not, well, it wasn’t much of a loss. To make hooking up systems even easier, MIDI was designed to “daisy-chain” — in other words, with multiple MIDI-compatible units, the MIDI out from one device could feed another device’s MIDI in, and its MIDI out (or MIDI thru, which just passed the data along without any changes) could go to another device’s MIDI in, and so on.
In 1983, the initial MIDI spec was finalized. Sequential Circuits and Roland demo’d two synths exchanging MIDI data at a NAMM (National Association of Music Merchants) trade show, and MIDI was off and running. It gained traction almost immediately and never let up. In a world weary of format wars like Beta vs. VHS, Mac vs. Windows, and FireWire vs. USB, MIDI stands alone — not only as a technical achievement, but as an example of just how cool the music industry is, and the tremendous results that happen when companies work together for the good of their customers. Since then, it has stood the test of time by adapting to new technologies, like MIDI over USB, and gaining the ability to control a wide range of devices (fig. 1).
Figure 1: A representative MIDI setup. A keyboard controller is sending MIDI data to a MIDI-controlled effects device, a tabletop synthesizer, and even a lighting/fog machine controller, over standard MIDI cables. It’s also sending MIDI data over USB to a computer that’s running MIDI-compatible software.
But That Was 36 Years Ago — MIDI Must Be Obsolete!
Actually, no. For three very good reasons.

The MIDI language expresses musical parameters, and those haven’t changed. People still play notes, notes still have pitches, songs still have tempos, bending notes and vibrato still exist, and dynamic control remains an important emotional component of music. Until people stop playing music, that aspect of MIDI will never be obsolete.
Because it’s a language, it doesn’t care what technology you use. MIDI data can go over a hardware cable, USB, Thunderbolt, data streams on the web, or even Apple’s Lightning connector — it doesn’t matter (fig. 2). MIDI data doesn’t care what operating system you use, as long as you’re running a program that speaks MIDI. It’s like any language: we didn’t have to stop speaking our native language just because telephones were invented, or when video conferencing came along.

Figure 2: The rear panel of Nektar’s Panorama P6 keyboard controller can send MIDI data over a 5-pin DIN connector (the circular jack on the right outlined in orange) or USB (left).

Perhaps most importantly, the specification has continued to evolve, again thanks to a climate of industry-wide cooperation fostered by the MMA (MIDI Manufacturers Association) and Japan’s AMEI (Association of Music Electronics Industry), who work closely together. MIDI has expanded to control lighting, trigger pyrotechnics, provide automation in the recording studio, and more. Hardware MIDI instruments have been joined by virtual, software-based instruments that live inside your computer and speak MIDI. Controllers that generate MIDI data are no longer limited to keyboards, and they now include MIDI drum controllers, guitar controllers, wind controllers, audio-to-MIDI converters (both hardware and software; see fig. 3), and more.

Figure 3: Even Melodyne Essential, the introductory version of Celemony’s Melodyne software family, can convert audio signals into MIDI data. In PreSonus Studio One, Melodyne has converted a bass line played on guitar (the track labeled 1) to MIDI data, and the data has been dragged into an instrument track (2) and opened in a MIDI editor window (3) so it can be transposed down an octave and drive a bass synthesizer.
With the recent announcement of MIDI 2.0, MIDI is ready to grow again. But MIDI 2.0 doesn’t obsolete MIDI 1.0 — it just expands it. Existing MIDI 1.0 gear will continue working in a MIDI 2.0 environment and may possibly acquire a few new features.
MIDI’s continued relevance is remarkable. Seriously, do you still use anything else computer-related that dates back to 1983? If so, I have some SCSI drives, NuBus cards, and RS-232 cables I’d like to sell you.
The Language Itself
A lot of MIDI articles talk about bits and bytes, but you don’t need to know that — any more than you need to know the code that makes up the letter “A” when you type on your QWERTY keyboard. The MIDI language addresses two broad areas: musical expression and timing/synchronization. Let’s cover musical expression now and leave timing/synchronization for a future article.
The two main types of musically expressive MIDI data involve notes and controllers. This can be confusing, because the word controller has two different meanings: A) devices (like keyboards) that control sound generators, and B) a particular type of MIDI message. To keep things clear, we’ll refer to controller messages or controller numbers when talking about MIDI data. Now back to our regularly scheduled program…
Note data expresses when you play a note, its pitch, when you release the note, and how hard you hit the key (called velocity, which corresponds to dynamics — namely, how loud or soft the note should play). Velocity uses a clever way to measure dynamics: when you hit a keyboard key harder, it takes less time for the key to travel from the up position to the keybed; hit a key more softly, and that trip takes longer. By measuring the key’s travel time (i.e., the velocity with which you hit the key), MIDI derives a value that corresponds to that dynamic.
A few keyboards also have release velocity, which indicates how rapidly you released the key.
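As an illustration of the travel-time idea, here's a sketch in Python. The 2 ms and 100 ms travel times are assumed values for the example, not figures from any real keyboard's firmware, and real instruments apply their own (usually non-linear) velocity curves:

```python
# Sketch: derive a MIDI velocity (1-127) from key travel time.
# Faster travel -> higher velocity. The time range is an assumption
# chosen for illustration only.

FASTEST_MS = 2.0    # hardest realistic strike (assumed)
SLOWEST_MS = 100.0  # softest realistic strike (assumed)

def velocity_from_travel(travel_ms):
    # Clamp to the measurable range, then map inversely onto 1..127.
    travel_ms = min(max(travel_ms, FASTEST_MS), SLOWEST_MS)
    fraction = (SLOWEST_MS - travel_ms) / (SLOWEST_MS - FASTEST_MS)
    return 1 + round(fraction * 126)

print(velocity_from_travel(2.0))    # hard hit -> 127
print(velocity_from_travel(100.0))  # soft hit -> 1
```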
Controller messages modify the sound of what you’re playing, based on some performance-oriented gesture. The following are some of the most common hardware devices that generate controller messages.

Pitch bend. Most keyboard controllers have wheels or levers you can move to change pitch, like the way a guitarist bends a string or a violinist slides between notes (fig. 4).

Figure 4: Arturia’s KeyLab mkII keyboard controller includes wheels (outlined in orange) you can rotate to change pitch (left) and modulation (right).

Modulation. This will also be some kind of wheel (also shown in fig. 4) or lever. Typically it adds vibrato, but it can just as easily open or close a filter, change a signal processor’s effect (like the amount of echo), or affect some other parameter.
Pressure (also called aftertouch). Some keyboards send data that corresponds to pressure applied to a key after it’s down. For example, you might press on the key to bend pitch or add vibrato. Standard (channel) pressure sends a single value that applies to all keys being held, while the far rarer polyphonic aftertouch (fig. 5) generates individual pressure data for each note that’s held down.

Figure 5: CME’s Xkey 37-key mobile keyboard controller is compact and affordable yet is among the relatively small number of keyboards that offer polyphonic aftertouch.

Footpedal. Most controllers have a footpedal jack, and you can use the pedal to control a parameter (typically volume, but it could be something else).
Sustain pedal. Similar to the footpedal, a sustain pedal uses a footswitch to control sustain, like a piano’s sustain pedal.

Breath controller. You blow into this, much as if you were playing a wind instrument, to create a MIDI data stream of controller messages.
Ribbon controller. This is a strip; running your finger along it sends out controller messages (fig. 6).

Figure 6: Native Instruments’ Komplete Kontrol S-series keyboards include a ribbon controller below the pitch and modulation wheels.
However, realize that because the MIDI spec is so deep, not all MIDI gear necessarily implements all aspects of the MIDI spec. For example, a keyboard might not implement polyphonic aftertouch, and a home piano might not have a mod wheel. Most gear includes an associated MIDI implementation chart that lists its MIDI capabilities.
Anyway, to get back to the nuts and bolts of MIDI, we’re dealing with data — so let’s look at how MIDI organizes this data.
Channel Number
When you play a note, you can choose to send it over any of 16 MIDI channels. This is useful for many reasons. Suppose you have a tone generator that makes a great piano sound, and a second tone generator that does fantastic orchestral sounds. Set your controller to transmit over channel 1, set both the piano and string modules to receive over channel 1, and you’ll trigger both of them at once. But perhaps you want piano on some songs, and strings on other songs. Set the piano to channel 1, the strings to channel 2, and then choose to transmit over either channel 1 or 2 on your keyboard, depending on which sound you want to hear.
Channels are also a crucial part of MIDI sequencing. This is the process of recording MIDI data into a computer and is the MIDI equivalent of multitrack audio recording. Suppose you want to record data to trigger MIDI-controlled drum sounds, then other data to trigger a MIDI-controlled bass sound, and finally, data to trigger a MIDI-controlled piano tone module. Without channels, all the instruments would play back all the same notes, at the same time. But if you record the drum notes on channel 1, the bass notes on channel 2, and the piano on channel 3, then each instrument will play back only the notes intended for it. Note there’s no particular meaning to the different numbers — those notes could also be recorded on channels 4, 11, and 16. The only common channel assignment is that the default for drum sounds is channel 10, but that’s not a rule.
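The channel matching described above works because every channel message carries its channel number in the low four bits of its first (status) byte, so a receiver can simply ignore messages meant for another instrument. A sketch, with the channel-to-instrument assignments taken from the sequencing example (they're illustrative, not a standard):

```python
# A receiver reads the channel from the low 4 bits of the status byte
# and only responds to messages on its own channel.

def channel_of(message):
    """Return the 1-based MIDI channel of a channel message."""
    return (message[0] & 0x0F) + 1

# The sequencing example from the text: drums on 1, bass on 2, piano on 3.
instruments = {1: "drums", 2: "bass", 3: "piano"}

def route(message):
    return instruments.get(channel_of(message), "ignored")

note_on_ch2 = bytes([0x91, 40, 90])   # status 0x91 = note-on, channel 2
print(route(note_on_ch2))             # -> bass
```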
Some controllers transmit over multiple channels. For example, a MIDI guitar controller may operate in a mode that allows it to send data for each string over its own channel. So the bottom two strings could trigger a bass sound, while the upper four strings trigger an organ sound.
Channels are also important for multitimbral hardware and virtual instruments, which can play back many different sounds simultaneously — for example, to provide a complete backing track, with multiple instrument sounds, for a singer/songwriter (fig. 7). These are popular instrument choices to use along with MIDI sequencers, because you can record different channels of data into the MIDI sequencer, assign the sounds in the multitimbral instrument to their corresponding channels, and play back a complete composition.
Figure 7: IK Multimedia’s SampleTank 4 can play 16 different sounds simultaneously. The eight sounds shown here respond to MIDI data coming in over their individual channels.
When the MIDI spec was created, 16 channels seemed like enough — after all, how many people could afford 16 hardware synthesizers? Over time, though, people wanted to be able to sequence more sounds, use channels for triggering lights as well as music, and so on. The solution was hardware interfaces with multiple MIDI ports, with each port handling 16 channels (fig. 8).
Figure 8: The iConnectMIDI4+ from iConnectivity is a 4-port MIDI interface (the other port is on the front panel) for Mac, Windows, and iOS. It provides 64 MIDI channels, is expandable for more ports, and can even be part of a computer network.
For example, a MIDI hardware interface with four ports could send data over 16 x 4 = 64 channels. MIDI also spawned accessories like MIDI Mergers (so multiple players with different controllers could “jam” with a single sound generator — see fig. 9), MIDI Splitters to send a single MIDI input to multiple MIDI outputs, and the like.
Figure 9: The Quadra Merge, from MIDI Solutions, can merge four individual MIDI streams and distribute the merged streams over two MIDI outputs.
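A merger, by the way, has to combine complete messages rather than raw bytes — if two players' data streams were simply mixed together, their bytes would interleave mid-message and corrupt each other. Here's a simplified sketch of that idea (3-byte channel messages only, alternating between two inputs; a real merger also has to handle running status and longer messages):

```python
# Sketch: why a MIDI merger is a real device and not just a Y-cable.
# It buffers complete messages from each input and forwards them one
# at a time, so messages never interleave mid-byte.

from collections import deque

def merge(stream_a, stream_b):
    """Alternate complete messages from two input streams onto one output."""
    a, b = deque(stream_a), deque(stream_b)
    merged = []
    while a or b:
        if a:
            merged.append(a.popleft())
        if b:
            merged.append(b.popleft())
    return merged

player1 = [bytes([0x90, 60, 100]), bytes([0x80, 60, 0])]   # note on, note off
player2 = [bytes([0x91, 64, 80])]                          # note on, channel 2
print(merge(player1, player2))
```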
Matching channels is often the first step when working with MIDI — you want to make sure that the controller is transmitting on the same channel as the device you want to hear. If the channels aren’t matched, you won’t hear anything.
Controller Message Numbers
Controller messages are also associated with MIDI channels, but because you have the option to control multiple parameters, these messages are also associated with one of 128 unique controller message numbers — after all, you wouldn’t want your tone generator to get confused and control volume when you want to control vibrato. Although controller number assignments aren’t set in stone, some have acquired standard defaults over the years: 1 for modulation like vibrato, 7 for volume, 4 for footpedal, 64 for sustain pedal, etc. Pitch bend is considered important enough that it has its own dedicated pitch-bend messages.
As with channels, controller number assignments for what’s transmitting the data and what’s receiving the data need to match to obtain the expected results. MIDI instruments and signal processors associate controller numbers with specific parameters. Some of these assignments are fixed; for example, a virtual synthesizer may have the filter cutoff fixed at controller number 74. So if you want to control filter cutoff with a footpedal, you need to assign the footpedal to transmit controller 74 messages. If the assignments are fixed, there will be documentation as to which controller messages affect which parameters (fig. 10).
Figure 10: This is an excerpt from the controller assignment chart for Propellerhead Software’s Reason. For example, when controlling the Subtractor virtual synthesizer, MIDI controller #14 controls the Filter Envelope Attack.
Alternately, the synthesizer may let you assign a parameter to any controller number. So if you wanted to use a footpedal that defaults to controller number 4 to control the filter cutoff, you could assign the filter cutoff parameter to 4 — now the footpedal will control the filter cutoff.
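Putting the last few paragraphs together: a controller message is three bytes (status byte with the channel, then the controller number, then the value, 0–127), and reassignment just means changing the receiver's map from controller numbers to parameters. A sketch — the parameter map mirrors the defaults mentioned above, and the names are illustrative:

```python
# A control change message: status 0xB0 + channel, controller number, value.

def control_change(channel, controller, value):
    return bytes([0xB0 | channel, controller, value])

# Common defaults from the text: 1 = modulation, 7 = volume, 64 = sustain,
# 4 = footpedal. Reassigning the footpedal's controller number (4) to the
# filter cutoff is just a map change on the receiving synth:
assignments = {1: "modulation", 7: "volume", 64: "sustain"}
assignments[4] = "filter cutoff"

msg = control_change(0, 4, 127)        # footpedal fully down, channel 1
controller, value = msg[1], msg[2]
print(assignments[controller], value)  # -> filter cutoff 127
```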
Hold On — Let’s Make Life Easier!
If your eyes are glazing over at this point — keep calm and carry on. Manufacturers recognize that this whole process of identifying and assigning controllers can be daunting, so they’ve taken three approaches to simplify the process.
MIDI Learn. This involves selecting the parameter you want to control (e.g., filter resonance), and instructing it to “MIDI Learn.” Often, you invoke this with virtual instruments by right-clicking on the software control you want to associate with a hardware control (although this isn’t a standard — you may need to shift-click, call up a menu, or something else). Once you’ve selected MIDI Learn, the parameter waits until you move the hardware controller you want to use, like a mod wheel, footpedal, etc. That’s all there is to it — the assignment is done (fig. 11). You can similarly select “MIDI Forget” to de-assign the parameter.
Figure 11: In MOTU’s MX4 virtual instrument (which is also included with Digital Performer), the LFO 1 Delay parameter is being set to MIDI Learn. The display (outlined in orange) shows that the parameter is learning. As soon as you touch the desired hardware control, the assignment will have been learned.
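Under the hood, MIDI Learn can be as simple as "bind to whichever controller number arrives next." Here's a sketch of that mechanism — the class and method names are made up for illustration, not any real plug-in's API:

```python
# Minimal MIDI Learn: an armed parameter binds itself to the first
# controller number it sees, then responds only to that number.

class Parameter:
    def __init__(self, name):
        self.name = name
        self.learned_cc = None   # no hardware control assigned yet
        self.learning = False
        self.value = 0

    def midi_learn(self):
        self.learning = True     # wait for the next controller message

    def handle_cc(self, cc_number, cc_value):
        if self.learning:                 # first message to arrive wins
            self.learned_cc = cc_number
            self.learning = False
        if cc_number == self.learned_cc:
            self.value = cc_value

cutoff = Parameter("filter cutoff")
cutoff.midi_learn()
cutoff.handle_cc(1, 64)   # you wiggle the mod wheel (CC 1)...
cutoff.handle_cc(1, 90)   # ...and now CC 1 moves the parameter
print(cutoff.learned_cc, cutoff.value)   # -> 1 90
```

"MIDI Forget" would simply set `learned_cc` back to `None`.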
MIDI Mapping. Many keyboard controllers include faders, rotary controls, switches, and other physical controllers that you can assign to parameters in a DAW, synthesizer, or effects processor. Although you can assign these yourself, to simplify matters some keyboard controllers include templates that map (assign) the hardware controllers to specific software parameters in selected programs. This kind of integration is a moving target as products evolve, so you’ll need to ask a Sweetwater Sales Engineer whether templates are available for the particular DAW or effects you use.
The most advanced version of MIDI mapping to date is Native Instruments’ NKS 2.0 (Native Kontrol Standard) protocol, available in their Komplete Kontrol keyboards. For an overview, refer to the article What NKS 2.0 Will Mean to You. Its genesis was providing hands-on control of parameters in Native Instruments’ synthesizers (particularly those in their Komplete package), primarily through banks of eight touch-sensitive rotary controls and eight buttons. These came preassigned, with the keyboard’s display showing which knobs controlled which parameters, so you didn’t need to know about MIDI or assignments in order to tweak controls with knobs and buttons. Later, the spec was opened up to other instrument and effects developers, like Waves, Arturia, and Applied Acoustic Systems (fig. 12).
Figure 12: The NKS-compatible Chromaphone, from Applied Acoustics Systems, is loaded into the Komplete Kontrol host software (along with Waves’ Abbey Road Plates effect, and IK Multimedia’s TR5 Tape Echo plug-in). You can then edit parameters for any of these plug-ins from the Komplete Kontrol keyboard’s knobs and buttons.
With the Komplete Kontrol system, you load the Komplete Kontrol plug-in host, which shows all available NKS-compatible plug-ins, into your computer’s recording software. You can load plug-ins into Komplete Kontrol using either its on-screen user interface or the hardware Komplete Kontrol keyboard. Once loaded, you can then control parameters from the keyboard.
Note that the Komplete Kontrol keyboards are also general-purpose MIDI controllers, but they really come into their own when used with NKS-compatible plug-ins.
Make MIDI transparent. PreSonus Studio One uses its Control Link system, which does assignments with drag and drop. You define the hardware controller you’re using (i.e., its knobs and switches) in a graphic. Then, when you want to assign a software parameter to a hardware control, you click on the parameter, its name shows up in a window, and you drag the name onto the hardware knob that you want to control the parameter.
Wrapping Up
We’ve covered some of the fundamental aspects of the MIDI language and how you can use it to exchange information between MIDI devices, whether hardware or software. However, there’s much more to the MIDI story than this — which we’ll explore in future inSync articles. Meanwhile, to keep up to date with the latest MIDI developments, you can join the MIDI Association for free.

To learn more about MPE, check out this Daniel Fisher article about MIDI Polyphonic Expression.

https://www.sweetwater.com/insync/midi-essential-guide/