A Grid-Based Lifestyle: Sound Experiments 003
Sunday, February 17, 2019, 20:54, by Analog Industries
Yeah, it's been a minute since an AI video, but we're gonna get back to that now. Readers may remember a series of experiments I did back in 2011/2012 with touchscreen-based control paradigms (here, here, here, and here, with some absolutely stellar discussions about usability in the comments.) Those were admittedly somewhat early days for the entire concept; the iPad had only been out a couple of months when I started those experiments, and the idea of an app-based control paradigm was a fairly new thing.

Fast forward to 2018, and shit has progressed a bit; people are generally used to using touchscreens for control. The reason for the video above isn't really about experimenting with the control paradigm, since that's pretty well-trod territory by now. I'm coming at things from a different angle. I've used a monome for years now, and I have a Max4Live step sequencer I've written for that platform that is pretty much only useful to me, and that I'm very happy with.

However, I was using it last weekend, and I got to thinking that it would be dope if I could record control gestures along with the beats. Obviously, the monome itself is kind of shit for that sort of thing, so I first 'ported' the control logic for the monome to a JUCE app, so I could run it full screen on a touchscreen monitor. When I did this, I was able to break out all the unlabeled control buttons to dedicated buttons, and improve the pattern memory and such.

After that, I gave each lane a four-bar gesture recorder; there are three gestures in all, and the X, Y, and Z planes can be assigned to any parameter in Live. (In the quick demo above, I generally have them going to effects sends and suchlike.) The sequence memory and control is hosted in the M4L patch, but the gesture recording and playback is hosted in the JUCE app. Note that this is running on an entirely separate computer from the one hosting the Live session. (It is, in point of fact, that little Intel NUC, stuck to the back of the monitor with double-stick tape. It is communicating with M4L on the host computer via OSC over my home wi-fi network.)

There are actually 10 lanes of gesture recording; in the video above, if you look closely, you can see them labeled D1 - D6 (the drum lanes), Bass, and S1 - S3. I don't actually use the non-drum ones in the video, but they're there and working. There are 8 banks of 8 patterns, and each pattern has its own gesture memory.

I could easily add more buttons to the point where I could control the session entirely from the app, and not have a Push2 there, but there's no sense re-inventing the wheel. The purpose of the experiment was to prove out fast, intuitive real-time control of a Live session from a separate computer's touchscreen, and I'm pretty happy with things so far. The next step is to try to put together a whole song (or several songs) to perform; this isn't great for writing, as it requires a lot of prior preparation, but for performing, I think there's a lot of potential to be explored.

Side note: the two synth pads I play towards the end are both Quanta. That fairly major undertaking is reaching its final stages, and the synth is perfectly usable in a session now. So yay for that!
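
For anyone wondering what the app-to-M4L plumbing could look like in code, here's a minimal sketch of the JUCE side sending gesture and pattern data over OSC. The address patterns, port, and class names are illustrative guesses, not the app's actual code; on the Max side you'd catch messages like these with [udpreceive] and [route] and map them onto Live parameters.

```cpp
// Minimal sketch of the JUCE side of the OSC link (addresses/port are assumptions).
#include <juce_osc/juce_osc.h>

class GestureSender
{
public:
    // Host computer running the Live session + M4L patch (IP/port are placeholders).
    bool connectToHost (const juce::String& hostIp = "192.168.1.10", int port = 9000)
    {
        return sender.connect (hostIp, port);
    }

    // Send one frame of gesture data for a lane: X/Y/Z in the 0..1 range.
    void sendGestureFrame (int lane, float x, float y, float z)
    {
        // e.g. "/gesture/3 0.42 0.18 0.91" -- the M4L patch would route this
        // to whatever Live parameters the lane's planes are mapped to.
        sender.send ("/gesture/" + juce::String (lane), x, y, z);
    }

    // Tell the M4L patch which pattern slot to switch to (bank 0-7, pattern 0-7).
    void sendPatternChange (int bank, int pattern)
    {
        sender.send ("/pattern/select", bank, pattern);
    }

private:
    juce::OSCSender sender;
};
```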
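The gesture recorder itself doesn't have to be anything exotic: conceptually it's a buffer of X/Y/Z values, one slot per clock tick, looping over four bars. A rough sketch of that idea follows; the resolution and naming are my own assumptions, not anything pulled from the actual app.

```cpp
// Rough sketch of a four-bar X/Y/Z gesture recorder, quantized to a tick clock.
// Tick resolution and names are illustrative assumptions.
#include <array>
#include <optional>

struct GestureFrame { float x, y, z; };

class GestureLane
{
public:
    // Four bars of 4/4 at 96 PPQN = 1536 ticks of resolution.
    static constexpr int numTicks = 4 * 4 * 96;

    void startRecording()  { recording = true;  }
    void stopRecording()   { recording = false; }

    // Called from the touch handler while the clock runs.
    void touchMoved (int tick, float x, float y, float z)
    {
        if (recording)
            frames[tick % numTicks] = GestureFrame { x, y, z };
    }

    // Called once per tick on playback; empty means no touch was recorded
    // at that tick, so the last value sent out can simply be held.
    std::optional<GestureFrame> playback (int tick) const
    {
        return frames[tick % numTicks];
    }

    void clear() { frames.fill (std::nullopt); }

private:
    bool recording = false;
    std::array<std::optional<GestureFrame>, numTicks> frames {};
};
```

On playback, the app would walk this buffer in sync with the sequencer clock and turn each frame back into OSC messages like the ones in the previous sketch.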
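Finally, the "8 banks of 8 patterns, each with its own gesture memory" arrangement maps pretty directly onto a nested data structure. Again, this is only a sketch under assumptions: the per-lane step count is a guess, and it reuses the GestureFrame idea from above.

```cpp
// Sketch of the pattern memory implied above: 8 banks x 8 patterns, each
// pattern carrying per-lane step data and its own gesture memory.
// Step count and tick resolution are assumptions, not taken from the app.
#include <array>
#include <bitset>
#include <optional>

struct GestureFrame { float x, y, z; };

constexpr int numLanes     = 10;          // D1-D6, Bass, S1-S3
constexpr int numSteps     = 16;          // steps per lane (a guess)
constexpr int gestureTicks = 4 * 4 * 96;  // four bars at 96 PPQN, as above
constexpr int numBanks     = 8;
constexpr int numPatterns  = 8;

struct Pattern
{
    std::array<std::bitset<numSteps>, numLanes> steps;  // which steps fire, per lane

    // One recorded X/Y/Z frame per tick, per lane; empty = no touch at that tick.
    std::array<std::array<std::optional<GestureFrame>, gestureTicks>, numLanes> gestures;
};

// 8 banks of 8 patterns; in practice this would live on the heap (it's a few MB).
using PatternMemory = std::array<std::array<Pattern, numPatterns>, numBanks>;
```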
feedproxy.google.com/~r/AnalogIndustries/~3/Ymr16jaqrt0/