Logos Foundation

Flanders Robotics

<Bakla>

an automated bass clarinet

Godfried-Willem RAES

2021- ...?

 

Version 1.0


<Bakla>


Tuning:

The clarinet was automated using the same mechanism as developed earlier for robots such as <Horny>, <So> and <Autosax>. Thus it uses a membrane compressor and an acoustic impedance convertor with a capillary to drive the air column into resonance.

As on those robots, the diapason can be selected using controller 20. By default the clarinet plays in equal temperament with the diapason set to A = 440 Hz.

Excitation:

Acoustic researchers have long been looking for methods to capture the source of the vibration in wind instrument mouthpieces. Practical methods to measure and record the vibration of lips or reeds on wind instruments directly have never led to convincing results, as the transducers required for the measurement influence normal sound production to a great extent. Hence our idea to derive the vibration of the excitation source indirectly. The method to generate the required waveform lookup tables for driving the membrane compressor coupled to the impedance convertor, which we developed and tested thoroughly in 2020 for robots such as <Flut>, <Autosax>, <So> and <Hunt>, and now also apply to <Bakla>, consists of the following steps:

1. Excite the membrane compressor with a waveform (at least 4 periods are required and these must be looped in the firmware) corresponding to what you would like the robot to sound like. Let's call it WavIn(). This waveform must be free of any modulation and recorded in an anechoic chamber using high quality microphones at a distance not larger than the size of the sound source. This signal can best be derived from a recording of the instrument, played in the traditional way. So, it should be recorded prior to the modifications required to build the actual robotic instrument. Make sure you record sound samples for a large series of different notes in different dynamics and registers, as the excitation waveforms differ greatly as a function of these parameters.

2. Record the sound of the robot, using a high quality condenser microphone, with this excitation and convert it to a format suitable for the microprocessor selected. Let's call this waveform WavOut(). Make sure the sizes of WavIn() and WavOut() are the same and take care to align the phase as well as possible. Normalization is also required. This is quite a tedious job, in particular for instruments where the contribution of the instrument to the sound result is relatively small compared to that of the playing style, the mouthpiece etc. For the saxophone this is noticeably the case, whereas we had fewer problems in this respect with the oboe and the flute. The clarinet comes in somewhere in between.

3. Calculate the required excitation waveform as WavEx() = (2 * WavIn()) - WavOut(), in the time domain. Normalize this wave and remove any DC component. This wave now is a model of the excitation wave, stripped of the influence of the instrument. Of course this cannot be fully true, as it does not take into account the mutual coupling of excitation and instrument. However, the model does work quite well on practical robots if enough waves are prepared to cover the different registers and dynamic levels. A code sketch of this calculation is given after step 4.

4. Reprogram the microprocessor to use WavEx() as an excitation waveform for as many notes and dynamics as the microprocessor can cope with.
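As an illustration of steps 2 and 3, the sketch below computes WavEx() from two recordings, using Python with numpy and scipy. It is only a minimal sketch: the file names, the cross-correlation based phase alignment and the resampling to a 1024-point table are illustrative choices of our own, not part of the firmware, and the sketch assumes both recordings already contain exactly the four looped periods.

import numpy as np
from scipy.io import wavfile

TABLE_SIZE = 1024                      # samples per lookup table in the firmware

def load_mono(path):
    # read a WAV file and return one channel, normalized to [-1, 1]
    rate, data = wavfile.read(path)
    data = data.astype(np.float64)
    if data.ndim > 1:
        data = data[:, 0]
    return rate, data / np.max(np.abs(data))

# WavIn(): the traditionally played instrument, WavOut(): the response of the robot
rate_in, wav_in = load_mono("wavin_note46_mf.wav")      # illustrative file names
rate_out, wav_out = load_mono("wavout_note46_mf.wav")
assert rate_in == rate_out

# crude phase alignment: circularly shift WavOut() to the lag of maximum cross-correlation
n = min(len(wav_in), len(wav_out))
wav_in, wav_out = wav_in[:n], wav_out[:n]
lag = np.argmax(np.correlate(wav_in, wav_out, mode="full")) - (n - 1)
wav_out = np.roll(wav_out, lag)

# WavEx() = (2 * WavIn()) - WavOut(), then remove the DC component and normalize
wav_ex = 2.0 * wav_in - wav_out
wav_ex -= np.mean(wav_ex)
wav_ex /= np.max(np.abs(wav_ex))

# resample the four looped periods to the 1024-point table used by the firmware
table = np.interp(np.linspace(0.0, n - 1.0, TABLE_SIZE), np.arange(n), wav_ex)
np.save("wavex_note46_mf.npy", table)

The resulting table still has to be converted to the integer format the firmware expects before it can be programmed into the 24EP256MC202.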

This method was also applied in the construction of the <Flut> and version 3 of the <So> robot in 2020. Of course, the procedure ought to be performed for a note in each register the instrument is supposed to sound in. It would be ideal, but tedious, to follow this procedure for each individual note. However, the microprocessor used should then have a very large memory. The 16 bit 24EP256MC202 type used for the <Bakla> robot is limited to 32 kBytes, enough for a maximum of 20 wavetables of 1024 bytes each. Note that with the standard sampling rate of 44.1 kS/s, the highest note for which our method can be applied is midi note 64 (329 Hz). With the much higher sampling rate of 192 kS/s it ought to be possible to calculate lookup tables useful for notes up to 90 (1480 Hz).
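The limits quoted above can be checked with a small calculation. The figures for 44.1 kS/s and 192 kS/s are consistent with requiring roughly 128 samples per period in the excitation table; that lower bound is our own inference from the numbers given, not a firmware constant.

import math

MIN_SAMPLES_PER_PERIOD = 128           # assumed lower bound, inferred from the figures above

def highest_usable_midi_note(sample_rate):
    # highest fundamental that still leaves enough samples per period
    f_max = sample_rate / MIN_SAMPLES_PER_PERIOD
    # convert frequency in Hz to a midi note number (A4 = 440 Hz = note 69)
    return 69 + 12 * math.log2(f_max / 440.0)

print(highest_usable_midi_note(44100))     # about 64.8, matching midi note 64 (329 Hz)
print(highest_usable_midi_note(192000))    # about 90.2, matching midi note 90 (1480 Hz)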

The theory behind this approach is that the excitation wave should correspond as much as possible with the vibration of the lips or reeds that cause the vibration in the instrument. As it is nearly impossible to capture this vibration by direct methods, we reason that the sound produced by the instrument is the sum of the excitation and whatever the instrument adds (or omits) to it: WavIn() = excitation + instrument contribution. Thus, by sending a sample of the normally produced sound to the membrane compressor, we should get the excitation wave plus twice the contribution of the instrument: WavOut() = excitation + 2 * instrument contribution. By calculating WavEx() = (2 * WavIn()) - WavOut() we therefore get a model of the excitation wave. When studying and analyzing waveforms produced by real instruments, you will notice that in fact no two periods are the same, neither in shape nor in length. That is why we take a minimum of four full periods. Do not use more than say 16 periods though, because this may introduce subharmonics, if not even rhythmical pulsation in the sound on long sustained notes. With four periods, you get a very soft subharmonic two octaves below the sounding pitch. For this reason we always add a tiny amount of jitter to the sampling rate. In theory it should be Gaussian, but in practice straight random jitter over a narrow range leads to very acceptable results. Thus no two periods have exactly the same length, just as in humanly played wind instruments.
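The jitter idea can be illustrated with a short sketch. The code below loops a four-period excitation table and stretches or shrinks every pass through the table by a small random factor; the jitter range and the use of Python instead of the dsPIC firmware are purely illustrative.

import numpy as np

def render_note(table, f0, duration, sample_rate=44100, jitter=0.01):
    # loop a 4-period excitation table at fundamental f0, adding a small uniform
    # random jitter to the length of every pass so that no two periods are identical
    rng = np.random.default_rng()
    periods_in_table = 4
    out = []
    t = 0.0
    while t < duration:
        # nominal duration of one pass = 4 periods of f0, scaled by a random factor
        pass_len = (periods_in_table / f0) * (1.0 + rng.uniform(-jitter, jitter))
        n = int(round(pass_len * sample_rate))
        idx = np.linspace(0.0, len(table), n, endpoint=False)
        out.append(np.interp(idx, np.arange(len(table)), table, period=len(table)))
        t += pass_len
    return np.concatenate(out)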

Construction:

This bass clarinet can move slowly up and down. It is mounted on a 3-wheel base.



Midi implementation and mapping:


The midi channel <Bakla> listens to is X (if counting from 1, this would be channel 3).

Lights:

note 120:
note 121:
note 122:
note 123:
note 124:
note 125:
note 126-127: not yet mounted lights, reserved for future uses.

 

Controllers:

#1: controller 1: Wind noise in the sound of the clarinet [default setting 48]
#3: controller 3: Vibrato depth for the clarinet [default setting 8]
#4: controller 4: Vibrato speed for the clarinet [default setting 94]
#5: controller 5: Tremolo depth (amplitude modulation) for the clarinet [default setting 4]
#6: controller 6: Tremolo speed for the clarinet [default setting 20]
#7: controller 7: Volume control - global volume controller for the clarinet [default setting: ]
#15: controller 15 - ADSR time scaling for the clarinet [default setting: 114]
#16: controller 16 - attack time controller for the clarinet [default setting: 32]
#17: controller 17 - attack level controller [default setting 127]
#18: controller 18 - decay time controller [default setting 91]
#19: controller 19 - release time controller for the clarinet (release time can also be controlled with the release byte of a note-off command) [default setting 100]
The interdependencies for the controllers 7, 15, 16, 17, 18 and 19 are shown in the graph below:

#20: controller 20 - tuning for the clarinet. By default equal temperament and A = 440 Hz for value 64.

#40: This controller selects the waveform lookup table for the register from midi note 41 to 52. Possible values are 0 to 6. The default value is 6.
#41: This controller selects the waveform lookup table for the register from midi note 53 to 70. Possible values are 0 to 6. The default value is 5.
#42: This controller selects the waveform lookup table for the register from midi note 71 to 81. Possible values are 0 to 6. The default value is 5.
#43: This controller selects the waveform lookup table for the register from midi note 82 to 91. Possible values are 0 to 6. The default value is 2.
#66: Power on / off. This command also resets all controllers to their default cold-boot values. Power on recalibrates the clarinet and brings it back to a central position.
#69: Enable or disable automation of the lights. Default value: > 0, ON. To switch this automation off, send this controller with value = 0.
#80: Dynamic range controller. Default is 32 for 30 dB dynamic range.
#123: All notes off

pitch bend: range 1 semitone (-50 to +50 cents) [note that pitch bend must follow a note-on].
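The example below shows how the mapping above could be driven from a computer. It is purely illustrative and assumes the Python mido library with an available MIDI output port; the port name and the channel number are placeholders, since the definitive midi channel has not been fixed yet.

import time
import mido

BAKLA_CHANNEL = 2                       # placeholder, counting from 0; not fixed yet
port = mido.open_output("MIDI OUT")     # placeholder port name

def cc(number, value):
    port.send(mido.Message("control_change", channel=BAKLA_CHANNEL,
                           control=number, value=value))

cc(66, 127)        # power on; this also resets all controllers to their cold-boot defaults
cc(20, 64)         # diapason: equal temperament, A = 440 Hz
cc(7, 100)         # global volume
cc(1, 48)          # wind noise, the default value
cc(40, 6)          # waveform lookup table for the lowest register (notes 41 to 52)

# play a note in the lowest register; a pitch bend must follow the note-on
port.send(mido.Message("note_on", channel=BAKLA_CHANNEL, note=46, velocity=80))
port.send(mido.Message("pitchwheel", channel=BAKLA_CHANNEL, pitch=4096))   # +25 cents of the +/-50 cent range
time.sleep(2.0)
port.send(mido.Message("note_off", channel=BAKLA_CHANNEL, note=46, velocity=64))  # the release byte steers the release time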

This midi implementation is subject to changes during the building process.



Technical specifications:

Design and construction: dr.Godfried-Willem Raes (2020- ...?)

Collaborators on the construction of this robot:



Music composed for <Bakla>:
none so far

 

This robot is projected to be ready by the end of 2025, if Godfried's health permits and if we can get some subsidy again to continue our research.

 


Construction diary:

15.02.2020: First sketches and designs.


 

 

 


Maintenance information:

 

 

 



Last update: 2024-03-05

by Godfried-Willem Raes

Further reading on this topic (some in Dutch):


Technical data sheet, design calculations and maintenance instructions:

 


References: