Technische Universität Berlin WFS residency (2014)


The WFS Elektronisches Studio of TU Berlin: http://www.ak.tu-berlin.de/

At the invitation of Prof. Stefan Weinzierl, I came for a residency at the studio from 4 to 23 August 2014.

Monday 4 August 2014

First trials of WFS in the Elektronisches Studio, Fachgebiet Audiokommunikation, Technische Universität Berlin.
Met Andreas Pisiewicz, Marc Voigt, David and Doris.

Tested controlling source positions from my laptop via OSC with wacomtouch.
It works easily. Some strange artefacts appear, maybe saturation.
Note: the interpolation time is in seconds, not in milliseconds!
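For reference, here is a minimal sketch of this kind of OSC control in Python, assuming the python-osc package. The /WONDER/source/position address and argument layout are my assumption from the WONDER documentation, and the host and port are placeholders, not the studio's actual settings:

```python
# Minimal sketch: sending a source position over OSC from the laptop.
# Assumed message layout: /WONDER/source/position id x y duration.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.10", 58100)  # hypothetical host/port

def move_source(source_id: int, x: float, y: float, seconds: float) -> None:
    """Ask the renderer to glide a source to (x, y) over `seconds`.
    Remember: the interpolation time is in seconds, not milliseconds!"""
    client.send_message("/WONDER/source/position",
                        [source_id, float(x), float(y), float(seconds)])

move_source(1, 2.5, -1.0, 0.25)  # a 250 ms glide, i.e. 0.25 s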

At the moment I am using Max or IanniX on my laptop, connected to the main Mac Pro through OSC. Another Mac Pro is dedicated to matrixing audio to the WFS. A group of powerful PCs handles the WFS processing itself.

I produce the sound on the main Mac Pro. I will be able to record the sequences and edit some of them on the same machine.

 

Issues:
the complexity of synchronizing all the data: motion, sound, polyphony, routing

How can I record the whole experience and allow some editing?
Maybe in Pro Tools or any other sequencer, using as many tracks as voices and MIDI controllers (two per track for the xy position, or four for better precision, i.e. 14 bits per position axis from an MSB/LSB controller pair; see the sketch after this paragraph).
That would imply using a Max patch to play the sound and send it to PT (via ReWire and internal MIDI).
When performing, the same chain should work from PT to the WFS system.
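A minimal sketch of that 14-bit encoding, using the mido library; the CC numbers and the normalized input range are arbitrary choices of mine, not anything the studio setup prescribes:

```python
# Sketch: encoding one normalized position axis as a 14-bit MIDI CC pair
# (MSB/LSB), using the mido library. CC numbers here are arbitrary.
import mido

def position_to_cc14(value: float, cc_msb: int = 16, cc_lsb: int = 48):
    """Map value in [0.0, 1.0] to two 7-bit control-change messages."""
    v14 = max(0, min(16383, int(round(value * 16383))))  # 14-bit range
    return [
        mido.Message('control_change', control=cc_msb, value=v14 >> 7),
        mido.Message('control_change', control=cc_lsb, value=v14 & 0x7F),
    ]

# x = 0.5 of the room width -> MSB 64, LSB 0 (8192 out of 16383)
for msg in position_to_cc14(0.5):
    print(msg)
```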

All my tests are interesting, but they all sound more or less the same. I should do very simple patterns in order to differentiate them by ear and not through the visualization.

Patterns should definitely be associated with sound parameters. I am not quite sure it is possible to differentiate figures with continuous sounds.

I should try more pulses and noise.

Pro Tools is a closed system, and MIDI controllers are not precise enough.

*************

Tuesday 5 August

I could also use the FTM editor or MuBu to record audio + spatialization data in the same tool and edit them there, but I am afraid it would be a long learning process and I would not manage it easily, as it is not intuitive and not properly documented for beginners.

I built a configuration with 16 point sources and 8 plane-wave sources, in order to create a cluster of sources and a background.

In the afternoon, Marc, David, Andreas and I tried in vain to connect Max audio outputs to PT audio inputs in various ways: the analog matrix, the MADI matrix, Soundflower…

The main idea is to be able to record the sound, typically 16 tracks, in sync with the source movements, in order to perform simple editing such as trim, cut and paste to compose a piece.

I wanted to record everything into Pro Tools: 16 audio tracks + 16 MIDI tracks for one sound. “A hammer to crush a fly”? Certainly, but it is difficult to find the right solution.

Other options to test quickly:

– A PT plugin created at the TUB Elektronisches Studio by a PhD student, which allows position commands to be sent directly from PT 10

– All in Max/MSP: 16 tracks of .aiff or .wav files + a data file such as mtr text/XML…

=> create a playlist editor to compose and replay audio+data files with several voices

– Using the FTM editor and SDIF, or MuBu (no answer from the FTM mailing list)
(a good solution, especially with spatSDIF, but risky and limited for editing and long-term preservation)

Another approach could be chosen: using PT as master, sending MMC to Max.

This would mean designing and performing the sound for the shapes in PT while controlling Max, on which the movement would run. IanniX sequences could then be recorded in Max.

IanniX is a very good tool for creating trajectories and communicating them to another machine, though it is not possible to synchronize it with Pro Tools.

There would be a way to do so by calling a series of different files on the fly, but it is not meant for that.

 

Saturday 9 August
I have finally succeeded in creating a plugin in Max for Live and optimising it to be quite efficient. Andreas had the idea of using Live, which I am not familiar with, but I have long wanted to improve my command of this software. The plugin is called IanniXWonder. It allows IanniX movements to be recorded in Live, edited, and replayed on xWonder, the WFS tool of the TU. It uses two components: a hub for setting the OSC parameters (addresses, port and data protocol) and settings such as the size of the room (virtual and real), the smoothing, etc., and a voice plug for each controlled source.
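To give an idea of what the smoothing parameter does, here is a minimal one-pole sketch of the general idea in Python; this is my own illustration, not the actual Max for Live code:

```python
# Sketch of one-pole (exponential) smoothing of incoming source positions,
# the general idea behind the hub's smoothing parameter (illustration only).
class PositionSmoother:
    def __init__(self, smoothing: float = 0.9):
        # smoothing in [0, 1): 0 = no smoothing, closer to 1 = heavier lag
        self.a = smoothing
        self.x = None
        self.y = None

    def update(self, x: float, y: float) -> tuple[float, float]:
        if self.x is None:            # first message: jump to the position
            self.x, self.y = x, y
        else:
            self.x = self.a * self.x + (1.0 - self.a) * x
            self.y = self.a * self.y + (1.0 - self.a) * y
        return self.x, self.y

s = PositionSmoother(0.8)
for px, py in [(0, 0), (1, 0), (1, 1)]:
    print(s.update(px, py))   # the output trails the raw input
```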

This morning I cycled to Grunewald. It is incredible, because it is a real forest inside the city. I hardly met anyone for kilometres in the farthest areas.

I thought of the word plastic to define the sound work I am trying to do. It is really a question of sound and music plasticity.

I should really improve the system in order to use it as a generic OSC record control. Basically, I would like to control the movements of the cursors generated by IanniX with the Wacom tablet and record the result, but at the moment I can’t, unless I build the whole control inside Live, and that would be too heavy a process, certainly producing deadly lags.

The solution would be to accept message inputs other than IanniX /cursor messages, in order to reinterpret them or to produce OSC controls from any other software (TouchOSC, Max/MSP, etc.), as in the sketch below.
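A sketch of such a relay in Python with the python-osc package. The /cursor layout is simplified here (id, x, y; the real IanniX message carries more fields), and the ports and incoming address scheme are placeholders:

```python
# Sketch of an OSC relay: accept position messages from any software
# (TouchOSC, Max/MSP…) and re-emit them in an IanniX-like /cursor format
# so the plug can record them.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

out = SimpleUDPClient("127.0.0.1", 57120)  # hypothetical plug input port

def relay(address, *args):
    # Expect any "/<source_id>/xy x y" style message; adapt as needed.
    try:
        source_id = int(address.strip("/").split("/")[0])
        x, y = float(args[0]), float(args[1])
    except (ValueError, IndexError):
        return  # ignore messages that do not carry a position
    out.send_message("/cursor", [source_id, x, y])

dispatcher = Dispatcher()
dispatcher.set_default_handler(relay)  # catch every incoming address

BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```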

The working chain should be:
Computer 1: IanniX -> OSC -> Max/MSP 6.1.8 (Wacom control…)
-> OSC ->
Computer 2: Max for Live (IanniXWonder plugin) + Ableton Live Suite 9 -> xWonder + WFS routing

[-> MADI ->
Computer 3: WFS matrix
-> DANTE ->
Computer 4: WFS driver
-> EtherSound -> amplifiers and speakers]

 

Sunday 10 August
I have nearly abandoned the idea of filtering IanniX OSC messages through a Wacom-controlled Max patch. (I could try to send IanniX to localhost only, then reformat the data in IanniX mode.) Nevertheless I added a “Custom” input to the plug, which must be tested properly. I can control the output scale.
One solution would be to add a new plug for any kind of OSC input using Max for Live.
The lags I got on the visualization are probably due to the deferlow object I added to put all non-active visualisation under low priority.

There should be a solution to debug the nodes visualization:
1 – add a click to activate or deactivate them
2 – use a peak object to count the maximum voice allocation and deactivate the higher numbers

For music creation, I should either use IanniX or use Max and the Wacom.

In order to obtain a direct correspondence between the IanniX screen and the sound scene, check “mapped on global score bounding rectangle” in the INFOS/MESSAGES tab.
Bug: the Room limits menu has to be reset; the loadbang does not seem to work.

Tuesday 12 August
I thought that different geometries could make a great difference, but it seems they don’t, unless the sound itself reinforces the geometry. Nevertheless, it is not the geometry itself that you listen to, but the movement. And the movement is not really movement; it is rather a kind of reduced sense of motion.
I tried to put the voice plugs directly on the audio tracks, but it did not work: it produced heavy crackles. I did not understand why.

Improvements to make:
Allow choosing the source controlled by each control voice (adding a numbox called source number). This would allow a source number different from the default voice number.
I am trying to have 2 × 8 tracks with automation and sound, but at the moment I still have many technical issues.
Ableton Live does not like having too much automation data, so when I try to delete some parts, it never finishes the job.

 

Friday 15 August 

Visited Hörsaal H 104. A very impressive sound system: something like 300 sound sources all around the venue at ear height. I performed some of my sound trials.

 


Listening to the results of what I recorded in the studio, I have noticed the following:

Good points in my work:
– very slow-motion objects
– mechanical movements, even fast ones, with sound elements marking the movement’s main points
– synchronization and desynchronization of movements and sound patterns

Bad points in my work:
– imperfectly mastered sound-source figures
– fast-moving sources
– narrow sound-source points (too concentrated on a single spot)
– transitions “in the room” <-> “out of the room” (avoid the speakers’ position)

Notes:
Sometimes I could hear a kind of noise, very nice, but it should be treated as a sound in the composition, and I should add more noisy sounds.
Birds: OK in the middle of the SpatSeqs (they become machines at the end, OK).
Composition of wings: slow individual passages, then figures, then adding cries (one can use a Wacom touch with a long line for a group of 8 sources). After some time they become more aggressive: faster, nearer, and their sound changes.

 

Other sequences:
– Mechanical machines
– Diversity variation/modulation
– Unfolded piano
– Insects
– Single-curve figures: simple one-shot curves repeated with variation
– Stop motion: step-by-step walking movements


We listened to three other pieces:

A recording in Cologne Cathedral of an organist playing Messiaen.
The sound is really great. It gives the impression of being in the place and a feeling for the body of the organ. The basses are ample and well distributed, the pipes are spread out, and the treble clusters sound diversified in space.

The second work was by Hans Tutschku, a beautiful electroacoustic piece. Hans has also created a spatial source manager in Max that records and replays sound sources. He makes great use of diversity, and you can hear the sound sources moving around. But the impression is not very different from diffusion on a traditional speaker ensemble. There are either too many source movements to identify them, or static WFS sources combined with other kinds of spatialization, probably Ambisonics, producing more or less the same impression but less precisely.

The third work was by Robert Henke: a very nice, effective ambient composition throwing us into the middle of a storm. Here again, beautiful spatial sound, magnificent bass noises all around; you are there. But once more, no audible sound positions and no movement, just a blurred impression of agitated, immersive spatial sound. I think I was already doing this kind of thing in 1986 with Espace Musical in my octophonic piece “Intermezzo”, creating an 8-track immersive sound ambiance of feasting crowds in the street on New Year’s Eve, natural sounds and others.

The specificity of WFS allows much more interesting kinds of sound: playing with the absolute positions of a small number of objects, changing these positions according to audible patterns, synchronizing these positions and changes with the sound content, creating motion patterns, repeated movements and variations of the movement rhythms, etc. That is the kind of choreography or kinetics I am working on, the aim being to give music its body.

The studio should be turned on and off in a certain order to reduce the risk of crashes and malfunctions:
1 – General power on
2 – Turn on the two computers (Taunus & Matrix) in the control room
3 – Mac Taunus:
Launch WFS in the launcher app
Open WFS status and wait 5 min. until the window only shows two lines after n101 & n102
Then:
Launch xWonder
Open (cmd-O) the project “roland”, or create a new one with 32 point sources, using colour groups of 8 corresponding to the colours of the Live track groups
Launch Ableton Live
Open the project
4 – Mac Matrix:
On the blue button on the left, select 44,100 Hz.
In the WFS menu, choose the source “Mac-RME 64ch.”
5 – Turn on all WFS amplifiers and speakers
6 – Mac Matrix:
Launch the Dante Controller (old) (Audinate) app
Check all panels/nodes. If one is not working, open it and double-click on the extension icon, or restart the module

When finishing:
– Turn off the WFS app on Taunus.
– Then turn off the two computers.
– Don’t forget to turn off all amplifiers and speakers
– Shut the window
– Turn general power off
– Tidy up and clean
– Lights off
– Doors closed

 

Friday 22 August
Last day

In the end I managed to do lots of things, but I met lots of technical issues.
I had to learn two new pieces of software, IanniX and Ableton Live; both are very clever, but neither is totally adapted to what I am doing.
I had much more trouble with Live, of which I was of course demanding more.
The most important issue was the OSC automation speed.

I had to reduce the frequency of the automation recording drastically, to as little as possible, and sometimes to reduce the number of points on each clip and each track by hand, which took terribly long. The kind of reduction I was doing by hand amounts to the thinning pass sketched below.
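A minimal illustration in Python (my own sketch, not a Live or IanniXWonder feature): drop an automation breakpoint when it is close in time to the last kept point and its value has barely changed.

```python
# Sketch of automation thinning: keep a breakpoint only if it is far
# enough in time from the previous kept point OR changes the value.
def thin_points(points, min_dt=0.05, min_dv=0.01):
    """points: sorted list of (time_sec, value) automation breakpoints."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for t, v in points[1:-1]:
        last_t, last_v = kept[-1]
        if (t - last_t) >= min_dt or abs(v - last_v) >= min_dv:
            kept.append((t, v))
    kept.append(points[-1])  # always keep the last point of the clip
    return kept

dense = [(i * 0.01, 0.5) for i in range(101)]   # 101 nearly static points
print(len(thin_points(dense)))                  # -> 21: same shape, far fewer points
```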

My final session has 32 tracks and about 20 clips per track, with automation of x, y, volume and some other parameters.

The hub, which controls all the voices on a MIDI track, allows choosing the OSC ins and outs, but also filtering the input, smoothing the output, and other important features. One of the most interesting is resampling and modifying the general scale and the x/y offsets, which makes it possible to create entrances into and exits from the room.
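A minimal sketch of that transform stage, assuming a simple linear scale-and-offset; this is my illustration of the idea, not the patch itself:

```python
# Sketch of the hub's output transform: a recorded trajectory is rescaled
# and offset before being sent to the WFS renderer, so a figure drawn
# inside the room can be pushed outside it (an entrance or an exit).
def transform(x, y, scale=1.0, x_offset=0.0, y_offset=0.0):
    return scale * x + x_offset, scale * y + y_offset

# Automating scale from 1.0 to 3.0 slides the same point out of the room:
for scale in (1.0, 2.0, 3.0):
    print(transform(1.0, 0.0, scale=scale))   # (1,0) -> (2,0) -> (3,0)
```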

I should actually make this possible for groups of tracks, but at the moment I have not really mastered grouping in Live, and I find it really stupid compared to the grouping features of Pro Tools and other DAWs.

 

There are other very stupid things about Live:
– If you move an audio clip from one track to another, the automation does not follow the audio, and copying it is a hell of a business: you have to create automation sub-tracks in exactly the same order on the source and destination tracks and copy the automation in sync, track by track (if everything is well organised you can copy several sub-tracks at a time, but you have to repeat this tedious operation for each track and each clip). It took me days and days.
– It is impossible to snap automation to the grid; the only correction you can make is to reduce the number of points when adjacent ones have close values. To do this you must select all the points within a sub-track automation selection, making them all blue, then move them up or down. First, this necessarily changes their values, which is odd; second, the proximity threshold is the track’s vertical size, which means that if you want to filter well, you cannot see anything and you necessarily modify your automation values. This is really, really dumb.
In addition to all that, when Live has too many automation points it lags and often crashes. For example, moving a group of clips with their automation on the same tracks, or worse, deleting or copy/pasting them, can take 5 minutes or never finish, even on a big Mac Pro.
It seems a little better under Mavericks on my laptop.

 

 

Let’s now write about music:
I have been working on several musical subjects, not all of the ones I wanted but only three of them:
The first part of the composition is an attempt to model bird flocks. It works quite well, but there is still some work to do on the composition: at the moment the birds fly in and out, slower and faster, separately and together, and they become more intense and threatening, but I would like to improve the scenario and the sounds. The flight is mostly a wing beat made with filtered noise and an envelope. I control 8 voices at a time and can modify the speed, the filtering and, though it must not change too much, the envelope according to the speed. I would like to add the noise of flight through the air, a bass whoosh when a bird passes closer, but this would require a more intelligent generator. If I manage it, I may be able to use this bass sound to control the threat and develop a counterpoint with the wing sound.
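For the record, a rough sketch of the wing-beat idea: band-passed noise shaped by an envelope repeating at the wing-beat rate. This is my own illustration in Python; the frequency band and decay values are guesses, not the parameters of the actual Max patch:

```python
# Rough sketch of the wing-beat sound: band-passed noise shaped by a
# percussive envelope that restarts on every beat. All values are guesses.
import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

sr = 44100
beat_hz = 6.0          # wing beats per second (the speed control)
dur = 2.0

t = np.arange(int(sr * dur)) / sr
noise = np.random.uniform(-1, 1, t.size)

# Band-pass the noise around an assumed "feathery" band (300-1500 Hz).
b, a = butter(2, [300, 1500], btype='bandpass', fs=sr)
filtered = lfilter(b, a, noise)

# Envelope restarting on every beat: fast attack, exponential decay.
phase = (t * beat_hz) % 1.0
env = np.exp(-8.0 * phase)

wing = filtered * env
wavfile.write("wingbeat.wav", sr, (0.8 * wing / np.abs(wing).max()
                                   * 32767).astype(np.int16))
```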
I found a good title for the second part: “la patrouille en vadrouille” (roughly, “the patrol on the prowl”). The idea is to use punctual sounds in rows and rhythms, at a position that changes from one occurrence to the next. Here as well, I use sets of 8 voices and create decaying rhythms, repeating the sound of each voice at its own rate. I use an old algorithm created at Montbéliard with the help of Arnaud Gendre, which lets me define the number of cycles of the slowest pace before all the cycles resynchronise with one another (a reconstruction is sketched below). The sounds are tonic or noise. I have also used simple clicks, which are quite violent and very interesting, because they can create interesting rhythms and become clouds of clicks.
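My reconstruction of the idea, not the original Montbéliard code: the slowest voice fits a chosen number of cycles into a common period T, each following voice fits one more cycle into the same T, and all the voices realign at t = T.

```python
# Sketch of the resynchronising decay rhythm (a reconstruction).
def resync_onsets(n_voices=8, base_cycles=4, T=8.0):
    """Return, per voice, the list of onset times within one period T."""
    onsets = []
    for voice in range(n_voices):
        cycles = base_cycles + voice          # voice 0 is the slowest
        period = T / cycles                   # seconds between pulses
        onsets.append([k * period for k in range(cycles)])
    return onsets

for voice, times in enumerate(resync_onsets(n_voices=4, base_cycles=3, T=6.0)):
    print(f"voice {voice}: " + " ".join(f"{t:.2f}" for t in times))
# All voices hit together at t = 0 and again at t = T = 6.0 s.
```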
Ableton Live offers a very interesting control, the tempo, because you can actually change it dynamically in the master track and create accelerations, bounce effects, rallentandi and global rate changes without altering the sounds too badly. It adds a lot to the expressiveness of the final result.
The last part is a kind of infernal machine the size of the room, with mechanical elements moving across it. It is a sort of printer, but also a war machine: hitting, sliding, banging, rolling… It should be very impressive and expressive, but at the moment I only have some elements layered on top of one another. A serious job of selection, editing and reconstruction is still to be done.
I feel frustrated. I have to go back to Paris without finishing the job, because I was stuck with technical issues all these days.
The big picture is at the same time good and bad. The good thing is that my idea of kinetic, spatial, plastic music seems to work. But I did much too complicated things, with too many voices and overly complex movements, usually incomprehensible even if quite expressive. I wish I could be more economical, diligent and methodical; then I would work with much simpler musical patterns and sounds. I always want to play 8 roles at the same time instead of one.

Will I one day be able to work simply, like a good musician, and not like a corner-cutting freak?

Roland Cahen

To download IanniXWonder: http://www.rolandcahen.eu/DEVELOPPEMENT/IanniXWonder_012.zip
The archive contains the plug-ins, explanations and an example (help patches within the plug-ins).

 

 
