Marleen Wynants talks to Marc Leman, Godfried-Willem Raes and Moniek Darge.


What do prehistoric fish and acoustic processors have in common?
A sensitivity for tones and sound...
In recent years, biology and musicology, as well as musical practice and theory, have yielded fundamental new insights into the human perception of music, the musical ear, and the desire for physical contact with an object that produces sound. The field of research on audio and sound has recently been completely transformed by the shift from information processing to the processing of content. Two central aspects of this are the techniques of modelling by means of computer programs and the human desire for musical expression.
On the artistic stage, Godfried-Willem Raes and Moniek Darge remain pioneers in the search for new interfaces between humans and music.
Meanwhile, scientific researchers such as Marc Leman are trying to understand the mechanisms that effectively turn information into content. In other words, they wish to unravel the problem of musical universals, i.e., the laws governing the way we perceive structure in the constant stream of audio that surrounds us and interpret it as a meaningful whole.

Marleen Wynants interviewed Marc Leman, director of IPEM and audio researcher, and Godfried-Willem Raes and Moniek Darge, founders of LOGOS, the pioneering experimental contemporary music ensemble.

Leman: In the past, we concentrated largely on the syntax, on the formal structures underlying audio, but now we are increasingly moving on to semantics, to the associations of meaning that lie in the emotional, affective, and expressive domain. And that will probably be broadened to include locomotion. This work is supported by computer modelling, with three areas of application: audio mining, interactive multimedia, and brain research.
We are very much into musical notation now, in which we are attempting to identify the meaningful elements. You can do that symbolically, but I strongly believe that models of emotionality and affect will have to focus on individuals, because the intersubjective component is so very strong. Secondly, the visual arts, the performing arts, and the whole world of multimedia need to be more closely involved.

FROM SYNTHESIZING SOUND TO MODELLING THE PHYSICS OF A MUSICAL INSTRUMENT

Has the development of technology been of decisive importance?
Leman: I would not put it that strongly.
Raes: But it has certainly played a part. However, theoretically speaking, I could have made the machines I am making now thirty years ago.

Why didn’t you?
Raes: At the time, it was the in thing to use electronics. We made electronic music. We all built synthesizers and acoustic music was considered passé. But even today, there are still - and Marc may differ with me here - no models of synthesis that do not immediately betray that they are electronic. From granular synthesis and FM right down to acoustic modelling... Except if you fake it, of course! I call using samples faking it. But taking a sample of a real sound does not solve the problem of synthesis.
Leman: In these last few years, though, physical modelling has been at the forefront, that is, the modelling of the physics of a musical instrument. It is not the sounds that are synthesized and brought together, but instead, the musical instrument itself is directly simulated on a computer, and with the simulation, you automatically produce the sound.
Raes: It calls for empirical research, to investigate whether or not these sounds are recognised as synthetic. I may be deceived with short sounds and percussion, but not with longer sounds.
Leman: It’s easy these days to model isolated sounds quite well, the sound of a guitar string, for instance, or of a saxophone. But it’s true that when you incorporate things in a musical context, you also have to incorporate that whole context of expressiveness.
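A concrete illustration of what Leman describes: in the Karplus-Strong algorithm, probably the simplest published physical model of a plucked string, a delay line plus an averaging filter stands in for the string itself, and the sound falls out of the simulation. A minimal sketch in Python (the parameter values are illustrative, not taken from any particular system):

```python
import numpy as np

def pluck(freq=220.0, sr=44100, dur=2.0, damping=0.996):
    """Karplus-Strong plucked string: simulate the string, get the sound."""
    n = int(sr / freq)                  # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, n)   # a noise burst models the pluck
    out = np.empty(int(sr * dur))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # averaging two neighbouring samples models the string's energy loss
        buf[i % n] = damping * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

samples = pluck(110.0)   # a low A; scale and write to a WAV file to listen
```

Because the model is the instrument, changing `damping` alters the 'string' itself rather than any recorded sample, which is precisely the distinction Raes draws between synthesis and faking it.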

Does it make a difference to the brain whether you hear the sound of a real saxophone or a simulation of the sound of a saxophone?
Leman: If your perception does not distinguish between the two, it won’t make any difference at the level of brain activity either. But when you use it in a musical context, the difference will be audible and visible.

Raes: Or in a performance context. When you see musicians play, I am certain that you activate different brain centres than when you can only hear them through a loudspeaker. Even if only because you anticipate: you can see the musician blowing or pushing a valve and the effect is a sound. You don’t get that with a loudspeaker. You are completely dissociated from the cause-effect relation in sound. And that’s why I have been turning back, these last ten years, to acoustic music, where you can see the causal connection, and nothing of the virtualisation of the sound source.

MUSIC AND LOCOMOTION

Darge: Both in the spectator and in the musician who produces the sound, there is an enormous difference in perception and anticipation. We have been performing with Godfried’s invisible instrument, the holosound, for years. The holosound uses sonar technology. We’ve used it both for A Book of Moves and for Songbook. It’s played by moving about: the performer’s movements cause sounds to come out of the speakers. That is a different kind of reality, almost a ‘lesser’ reality than when you work with a visible and tangible machine, like we are doing now with the <M&M> robot orchestra. As a performer, there is initially a lot of emotional tension to deal with; it’s like having to do battle, because you have this real physical object that you think you can control through your movements, but then suddenly you find you are engaged in a dialogue and the thing is no longer doing what you want it to do! That never happened to me with electronic sounds...
Raes: The reason I build these automatons is rooted in my belief that there is absolutely no future for manual, reproductive musical rendition. Take a violinist, who has to practise for eight years before he can play his instrument acceptably well. In fifty years’ time, that will have died out.

Come on!
Raes: All right, with the exception of amateurs, of course. The cost-benefit analysis is a complete anachronism. I don’t enjoy the prospect any more than you do, but I’m convinced that it is an inescapable sociological development, a cultural evolution. What I’m interested in is automating the directing sources of those sounds. That way, we will be able to gain expressive control without having to acquire that mechanical mastery of an instrument. My automatic piano plays better than Jan Michiels, you know! Machines keep on getting better; people don’t.
Darge: Mind you, it will be replaced by a new craft, as I’ve said before. Not everybody is able to build an instrument like that, unless they can muster endless patience, perhaps even more than it takes to learn to play the violin, and that’s not even considering the software you need to write for it...
Raes: It’s only a shift of level. Machines have become better than people at performing routines that require fine motor skills. So if you can be rid of the rigours of acquiring those fine skills, you can focus on developing your expressive competence! A good modern artist is more than a craftsman. Art reflects a search process, and the form in which that process manifests itself is only secondary. You learn to play music in order to be capable of rendering a particular text for an audience. In classical music, musicians play only in order to be compared to others.



Marc Leman, Godfried-Willem Raes and Moniek Darge
Photo by Marleen Wynants

Leman: Still, there will always be people who will want physical contact with an object that produces sound.
Darge: The one does not necessarily exclude the other. People who do not feel the urge to learn to play one instrument to perfection may be able to play an entirely different arsenal of music, for instance, by merely moving about in space and getting music as a response. The only difference is that you no longer hold the instrument in your hands.
Raes: A physical object, an instrument, always remains a prosthesis. But since all instruments are played with your body, why not make sensors that pick up your movements directly and take that as the input for directing sound sources? It seemed a logical step to me.

THE EVOLUTIONARY EFFECT OF HAIR CELLS IN THE VESTIBULAR SYSTEM OF FISH

Leman: It’s that higher level, those contents, that we are both working on. They are in fact patterns that are not subject to that direct causality, though they do allow for predictability. You can capture the movement of a person, for instance, a rather brusque gesture, and that is translated into the particular affective association most people will make. And you link that affective association back to sound, with no direct causal action between the movement and the sound. We conduct experiments, we make computer models of it, and these models can then be used by artists, if they will. How they use them is no concern of ours. Brain sciences are studying the localisation of functions. Well, however interesting that may be, it’s not immediately useful.
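A deliberately naive sketch, in Python, of the indirect mapping Leman describes: the movement is first classified as an affective quality, and only that label, not the raw movement, drives the sound. The threshold, the labels and the sound parameters below are all hypothetical.

```python
import numpy as np

def classify_gesture(positions, dt=0.01):
    """Label a movement trace 'brusque' or 'gentle' from its peak acceleration."""
    velocity = np.gradient(positions, dt)
    acceleration = np.gradient(velocity, dt)
    # hypothetical threshold separating abrupt from smooth movement
    return "brusque" if np.abs(acceleration).max() > 50.0 else "gentle"

# the affective label, not the raw movement, is what selects the sound:
AFFECT_TO_SOUND = {
    "brusque": {"attack_ms": 5,   "brightness": 0.9},
    "gentle":  {"attack_ms": 200, "brightness": 0.3},
}

trace = np.sin(np.linspace(0.0, np.pi, 200))   # a smooth sweep: 'gentle'
print(AFFECT_TO_SOUND[classify_gesture(trace)])
```

There is no causal path from the gesture to the sound here, only a shared affective label in between, which is the point Leman makes.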
Raes: Except if you wanted to develop an instrument with a direct effect on the prefrontal cortex. Because neuroscience could lead to the induction of emotions directly in the brain. What else does XTC do? What does cocaine do? It acts directly on what goes on in the brain.

Do certain sounds or sequences of sound have a universal effect?
Leman: Universal tonalities are a case in point. This happens in the prefrontal cortex but also in the temporal lobes of the brain, because there is another sensory centre there. Meanwhile, we also know that rhythm and pitch are processed separately.
Raes: But when you start filling that in with emotional effects, with meaning, you are working with highly cultural things. I think that music is common to the entire species of homo sapiens, otherwise we would not use it as a means of expression. Underneath the cultural layer, there has to be a biological foundation that is culture-independent.
Darge: What strikes me here is that we have been having this lengthy discussion on music and art as expressions of emotions, whereas I think that a whole series of works have been created since the beginning of the 20th century using sounds and audio that were deliberately detached from human emotions.

Leman: Yes, but that link with emotions has resurfaced in recent years.
Darge: Don’t forget that the expressionists and the neo-expressionists also gave emotions pride of place. Certain avant-garde artists and certain surrealists wanted to move away from that, but in fact they were only a small minority.
Leman: That minority wasn’t all that small, though, because cognition is a part of those emotions too. I prefer to speak of affect, a more loosely defined quality that is much more abstract, but one that can be expressed.
Darge: You can’t reduce everything to the concept of expression...
Raes: The classic example in music is the fugue. It’s formal and abstract. It has nothing to do with those affects. I am fascinated by the device, the way it develops over time. That’s all I see in it.

What about the specific sound then?
Raes: The sound is the tool you use to make the device run! Without sound, I can’t hear that fugue, or I would have to read it.
Darge: There is still another form of experiencing music. There are those sound experiences that give you the impression that time is standing still, or is tilted at an angle. That’s not an individual expression, it’s not algorithmic, it’s yet another type of musical experience.
Leman: That’s another phenomenon that’s being studied. It’s called ‘arousal’. It takes place at the level of one’s affective connection with one’s surroundings, which makes one disconnect from time completely. Still, I think that emotion and affect have a very strong cognitive component and that this component plays a major role in communication.

Is that a part of the research into the origins of music?
Leman: The research into those origins of music is largely situated in evolutionary neuropsychology and neurophysiology, with studies of the auditory system of species preceding the evolution of homo sapiens. In this way, researchers hope to discover which mechanisms have existed for a very long time. Mechanisms that lie deep in our brains and to some extent determine our perception of music, among other things.

Are we in for another procession of rats and mice?
Leman: No, this time, it’s fish.
Raes, Darge and Wynants in unison: Fish!
Leman: Yes. How would you explain the hedonism of young people, for instance? They go to pop concerts and want loud music. That’s not just because they want to hear that music. In fact, it’s played far too loud to be heard properly. But it stimulates certain nerves, hair cells to be precise, cells that do not actually belong to the auditory system, but to the vestibular system, a system that can be traced back to fish. In fact, what is stimulated here is a very primitive organ. People are conditioned in that respect. Why else would they do it, seeing that it ruins their ears?



WHISPERING TYPISTS AND VIBRATING SUBWAYS

But don’t you also get the effect of the vibrations you can’t hear?
Raes: I think you can get the same effect in practice without that kind of volume. Right here, under the stage, we’ve installed subwoofers. They go down to nearly 0 hertz. I could create a strongly unpleasant atmosphere here, inaudibly. If I put a sine wave through them, at say 6 or 7 hertz, you won’t hear it, but you’ll feel it. You’ll start to feel unwell.
Leman: Certainly. It reminds me of those relaxation armchairs that are used in therapy. They vibrate, and that seems to have a great impact.
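Generating such a signal is trivial, as the sketch below shows; whether it is actually reproduced depends entirely on the loudspeaker. The sample rate and duration are assumptions.

```python
import numpy as np

SR = 48000        # sample rate in Hz (assumed)
FREQ = 7.0        # the infrasonic frequency from Raes's example
DURATION = 10.0   # seconds (assumed)

t = np.arange(int(SR * DURATION)) / SR
signal = np.sin(2 * np.pi * FREQ * t)   # y(t) = sin(2*pi*f*t)
# 7 Hz lies well below the roughly 20 Hz lower limit of human hearing,
# so through a capable subwoofer it is felt as vibration, not heard.
```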
It’s similar to what they are doing in subway stations, putting certain molecules in the floor cleaning products that give off a certain inconspicuous odour when people walk over them for fifteen minutes. It’s hardly noticeable, but most people seem to find it pleasant. Can you do that with sound too?
Raes: That’s being done. That’s what muzak is. It’s manipulative and detestable.

But could you do it the other way around too, by taking those infrasonic vibrations away?
Raes: The subway is so noisy you can’t mask anything. In the visual arts, or in a visual environment, you can always cover things up, so to speak. You can whitewash a wall or paint it in a certain colour. But you can’t subtract noise from noise.
Darge: Theoretically, it would help to take those infrasonic vibrations away. But a subway will always reverberate, of course.
Raes: The whole of New York shakes. You can even measure it, anywhere in the city. It goes up to 20 hertz. The subway goes through rock and it vibrates. You can feel those infrasonic vibrations everywhere.
Leman: There hasn’t been any real scientific research into it.
Raes: The military have studied it though, because the Americans have tried to use sound as a weapon. At 7 or 8 hertz, you get a differential tone with your heart frequency and the result is an arterial rupture. But you cannot focus such low vibrations. In other words, you could kill an entire village by aiming an enormous cone at it, but you’d better make sure you get the hell out of there yourself.




Implants are being developed for people to improve their hearing for certain sounds. Could the opposite be done too?
Raes: There’s this known phenomenon. A while ago, research was done on typists who spent all day surrounded by their rattling machines. It consisted of taking audiograms, that is, graphs showing a person’s hearing set out against axes of loudness and frequency, of people who had worked as typists for ten years. These typists could typically hold entirely whispered conversations without being overheard by others, because they used a whisper that was inaudible to people with normal hearing. How was that possible? Simply because their audiogram showed, objectively and permanently, a 20-decibel dip, indicating a loss of hearing. They had become insensitive to typewriter noises. It was permanent damage, though.
Leman: The great advantage of an artificial cochlea would be the ability to decide that you didn’t like a particular frequency, and to simply switch it off.
Raes: Perhaps. But the theory of anti-noise is mostly a load of rubbish. That’s the theory behind attempts to generate noise in counterphase in order to suppress undesirable noise. Manufacturers are now selling so-called ‘noise cancellation’ in-ear headphones. They pick up ambient sound and send it back into your auditory duct in counterphase. And they’re supposed to keep out all the disturbing noises. Well... it’s feasible in theory.
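The principle Raes summarises is easy to state in code: add the same signal in counterphase and, in theory, nothing remains. The sketch below, with a synthetic 100 Hz tone standing in for the picked-up noise, also hints at why practice is harder: any latency in the cancelling path leaves a residue.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
noise = 0.5 * np.sin(2 * np.pi * 100 * t)   # stand-in for picked-up noise

anti_noise = -noise                          # the same signal in counterphase
print(np.abs(noise + anti_noise).max())      # 0.0: perfect cancellation

# a real system filters and replays the signal with some latency;
# even a single sample of delay already leaves a residue:
late = np.roll(anti_noise, 1)
print(np.abs(noise + late).max())            # small but nonzero
```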

THE SOUND PERCEPTION OF PEOPLE AND MACHINES

Raes: There is also the problem of ‘other minds’. How on earth could anyone possibly know how someone else perceives something? It’s a philosophical problem, in my opinion. Because my mind is not your mind. And that goes for pitch too. With my robots or automatons here, I’ve got a similar problem of perception. They ‘see’ movement, motion, and they ‘recognise’ gestures to a certain extent, but they do something else besides, and that’s listening. One of the experiments we do here is to have those machines play together with musicians. Moniek plays the electric violin and the robots have to play along. For them to be able to do that, their computers have to pick up and analyse that violin signal in real time. It’s still primitive, but it would certainly not be sufficient to simply map the electric violin signal onto the parameters of those machines. That would be rubbish. It might work, but it would still be fooling about. If you want to do it for real, you have to conduct a serious pitch analysis, you have to know the timbral evolution of that violin, and it has to work polyphonically. It’s not a simple matter.
Darge: Another thing is that such machines often react to the overtones produced by the violin instead of to the fundamentals you are playing. The timbre and colour of a violin is much richer and much broader... And so absolute pitch, yes, for a piano perhaps, but for a violin, it’s a great challenge. You can tell that from those machines, when they don’t know what to do with a particular tone.
Raes: I insist that it is purely acquired behaviour to identify a sound as a particular note. But a machine can’t do that. Not yet. It needs knowledge and the ability to learn in order to be able to do it.
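One common first step towards the ‘serious pitch analysis’ Raes calls for is autocorrelation: a periodic signal correlates strongly with itself at a lag of one period, so the strongest lag reveals the fundamental even when, as Darge notes, the overtones are prominent. A monophonic sketch (the frame length and frequency range are illustrative; real-time polyphonic tracking is, as Raes says, far harder):

```python
import numpy as np

def detect_pitch(frame, sr=44100, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental of one monophonic frame by autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lags for the plausible range
    lag = lo + np.argmax(corr[lo:hi])         # lag of strongest periodicity
    return sr / lag

# a violin-like tone: a 440 Hz fundamental with strong overtones
sr = 44100
t = np.arange(sr // 10) / sr
tone = (np.sin(2 * np.pi * 440 * t)
        + 0.8 * np.sin(2 * np.pi * 880 * t)
        + 0.6 * np.sin(2 * np.pi * 1320 * t))
print(round(detect_pitch(tone, sr)))   # ~440, despite the overtones
```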

Leman: There could be a great collaboration here between artistic research and scientific research, because scientists are currently very interested in this too, in the recognition of tones and other qualities of music. The applications include audio mining, for instance, that is, the mining of audio in large databases. With audio mining, the idea is to have your entire CD collection in one little box. And how can you retrieve your music then? You could enter the name of a composer or the title of a song, but perhaps you could also hum the tune of the particular piece of music you want to listen to.
Raes: I’m afraid a system like that will NEVER retrieve my music. After all, what kind of sound are you going to make then? ‘Grschh! Krstjjjchchch!’? It reminds me of an idea that SABAM launched as long ago as the twenties. The idea was to compile a library containing the first 20 notes of every piece of music, in order to know whether or not it had been copied. And they recorded only the rhythm and the pitch. Tatata Taaa! Tatata Taaa! Right, as if that was enough to identify the piece and see whether it had been written before!
Leman: But surely technology has advanced a lot since then, with fingerprinting and so on. That’s a term used for a spectral analysis that enables you to find out very quickly whether some piece of music you hear on the radio is in the database or not. In Barcelona, there’s a database of about 70,000 compositions, and it’s very fast.
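A deliberately naive illustration of the idea behind such fingerprinting; real systems, including whatever the Barcelona database uses, rely on far more robust features and hashing. Each frame of audio is reduced to its dominant spectral peak, and recordings are matched by comparing the resulting sequences.

```python
import numpy as np

def fingerprint(signal, frame=4096, hop=2048):
    """Toy spectral fingerprint: the loudest frequency bin of each frame."""
    peaks = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame]))
        peaks.append(int(np.argmax(spectrum)))   # index of the dominant bin
    return tuple(peaks)                          # hashable database key

song = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)  # stand-in audio
print(fingerprint(song)[:5])
```

Looking up a radio snippet then amounts to comparing peak sequences against stored fingerprints, which stays fast even for tens of thousands of works.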

What about copyright?
Raes: All my work is in the public domain, by definition. The idea that one can own information is nonsense. Information is form, and as such, it is transferable to any substrate whatsoever, without loss. You can’t steal information. Theft of information is no more than a stupid metaphor.
Darge: That’s why we have issued a CD to support those composers who take the ‘copyleft’ view. Information is free, though that does not mean that authorship is not important.
Raes: Of course, the whole discussion on copyright has picked up a lot recently, with all that sampling that’s being done. From my experience with computers and software, I can tell you that it is very easy to crash that kind of database. Any attempt to protect information can be hacked.
Leman: That’s why programmers are progressing from fingerprinting to representing contents, and then things get a bit harder, I agree.
Raes: Okay, so I want a copyright on all sad music! Or look out, you’ve written a few sad bars followed by a few cheerful ones, and that’s been done before! Really, that kind of thing takes us to a fascist-style society! I subscribe to the ideas of the Age of Reason, I believe in openness of information.
Leman: It’s a fact that the development of ‘physical modelling’ is being held back by all kinds of patents. There are more than 400 patents on all sorts of minor details that stop you from making headway, because you simply can’t keep on circumventing things or paying for everything... It’s a big problem.


Moniek Darge



THE PERCEPTION OF SPACE AND MUSIC

One last question: where does research into space and sound stand now?
Raes: We have built this new hall here, in the shape of a tetrahedron. It is an acoustically ideal shape. Because the space in which you play something is very important. It just won’t do to have a concert of contemporary or experimental music in St Bavo’s Cathedral.
Darge: Space and sounds can be linked together in different ways. You can see the space as the inner space where the music originates, you can see it as a music box that encompasses the event, but you can also say: space and music, that’s outside, that’s the universe, that’s the world around us in which all sounds together become one great mega-composition to which each individual can contribute an element with their audio.
Raes: When we’re discussing music, we tend to define it narrowly as music played in a hall, but there are composers in the broad sense of the word who create entirely different things, like performances or city events, in which the acoustic event can no longer be localised at one point. Take my symphony for singing bicycles: you can’t perform that in a hall. The audience would only see and hear it for a few seconds. It has to be performed outdoors, on a site.
Darge: You can give it a worldwide interpretation too. In the eighties, we organised an International Solstice Event. Here in Ghent, we played together with other musicians in Central Park in New York and on the Auckland volcanoes in New Zealand, at the same time. We were connected via satellite.
Leman: Now there’s a similar new development in laptop art.
Raes: We’ve done that here too, with someone in Austria sending input for a machine here during a concert... You can do that kind of thing, but it’s got a very high conceptual content, don’t you think? Because for an audience, it doesn’t make the slightest bit of difference whether an instrument goes ‘cling clang’ as the result of a person pressing a key in San Francisco or Paris.
Leman: But you could, for instance, mix the acoustics of a particular hall with your work. As a matter of fact, there are some very good auditory models in 3D.
Raes: But there’s a site that’s much more fun, one you can send music to and then you get it back with the transformed sound of that specific space: www.silophone.net
That’s no simulation!