Categories
Specialising and Exhibiting E1

Pure Data Recorder

Another method to record in Pure Data is through the writesf~ object. To do this, I have to send three commands to writesf~: open (which creates a wav file), start, and stop. The problem is that if I just use open [filename], I would overwrite the previous recording. In order to record multiple tracks, I have to build a system that changes the file name every time I press open. To use one button for both record and stop (a toggle), I used a number box and a moses object to trigger the record system. When the toggle is on, the number box outputs a one, which triggers the bang on the right to create the file and start recording; when it is off, it triggers the bang on the left, which sends the stop message. An int object stores and recalls a number, and feeding its output through a + 1 object back into it creates a loop: every bang adds one to the number stored in the int object. That number then goes into a makefilename object, whose message template uses $1 to stand for the varying number. As a result, every time I press record, the file name increases by one, starting from zero, which overcomes the overwriting problem.
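Below is a minimal sketch of this counter patch in Pd's plain-text file format — my own reconstruction rather than the exact patch above, with two message boxes standing in for the toggle and rec-%d.wav as an arbitrary name pattern. Saved as a .pd file, it should open in vanilla Pd:

```
#N canvas 0 50 560 420 12;
#X text 30 10 1 = record \, 0 = stop (stand-ins for the toggle);
#X msg 30 40 1;
#X msg 70 40 0;
#X obj 30 80 moses 1;
#X obj 30 120 t b;
#X msg 30 160 stop;
#X obj 130 120 t b b;
#X obj 230 160 i;
#X obj 290 160 + 1;
#X obj 230 200 makefilename rec-%d.wav;
#X msg 230 240 open \$1;
#X msg 130 200 start;
#X obj 30 300 adc~;
#X obj 30 340 writesf~;
#X text 330 160 counter: stored int plus one;
#X connect 1 0 3 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 3 1 6 0;
#X connect 4 0 5 0;
#X connect 5 0 13 0;
#X connect 6 0 11 0;
#X connect 6 1 7 0;
#X connect 7 0 8 0;
#X connect 7 0 9 0;
#X connect 8 0 7 1;
#X connect 9 0 10 0;
#X connect 10 0 13 0;
#X connect 11 0 13 0;
#X connect 12 0 13 0;
```

The t b b object makes the order explicit: its right outlet fires first, bumping the counter and sending open rec-0.wav, rec-1.wav and so on, and only then does the left outlet send start.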

Categories
Specialising and Exhibiting E1

Wavetable Looper

As my project is to do recording with Bela, I started to explore how to record in Pure Data. One method is to use an array to capture the sound wave. The adc~ object converts the analog audio signal into digital, and a *~ object controls the input level. Here I found that I can use env~ (an envelope follower) with a VU meter to monitor the input signal. Then the tabwrite~ object sends and writes the data into a table. In order to record one second of sound, I have to set the table size to 44100, the same as the input sample rate (since the sample rate is the number of samples passing through per second).
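A minimal sketch of this in Pd's plain-text format, assuming a 44.1 kHz sample rate; the table name sample1 is my own choice, and I show env~'s dB reading in a number box instead of a VU meter to keep the sketch small:

```
#N canvas 0 50 460 300 12;
#X obj 30 30 adc~;
#X obj 30 70 *~ 1;
#X obj 160 110 env~;
#X floatatom 160 150 5 0 0 0 - - -;
#X msg 30 110 bang;
#X obj 30 150 tabwrite~ sample1;
#X obj 30 210 table sample1 44100;
#X text 70 110 click to capture one second;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 1 0 5 0;
#X connect 2 0 3 0;
#X connect 4 0 5 0;
```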

On the other side, it starts with a toggle running a metro whose speed is controlled by an Hslider. The metro sends a bang that triggers the tabwrite~ object instantly, and the bang also goes into a delay object (where I found later that I have to type in an argument — the patch above had no delay effect at all), which then triggers a message sending 0, the sample length and the ramp time into a vline~ object. The vline~ signal drives a tabread~ object to play back the sound. When I tried this patch with my computer's microphone and speaker, it created a feedback loop that made an interesting sound. The speed of the metro affects the sound, since it triggers both recording and playback, acting like a gate that chops up the feedback loop. It also creates a short attack, almost like a kick drum. It was fun, but I found that this method is good for short sampling, not suitable for field recording.
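Here is a rough sketch of that looper in Pd's file format; the numbers (50 ms delay, 1000 ms period and ramp) and the interpolating tabread4~ in place of tabread~ are my own choices:

```
#N canvas 0 50 520 420 12;
#X msg 30 20 1;
#X msg 70 20 0;
#X floatatom 160 20 5 100 2000 0 - - -;
#X obj 30 60 metro 1000;
#X obj 30 100 t b b;
#X obj 230 60 adc~;
#X obj 230 140 tabwrite~ loop1;
#X obj 30 140 del 50;
#X msg 30 180 0 \, 44100 1000;
#X obj 30 220 vline~;
#X obj 30 260 tabread4~ loop1;
#X obj 30 300 *~ 0.5;
#X obj 30 340 dac~;
#X obj 230 200 table loop1 44100;
#X text 160 45 period (ms);
#X connect 0 0 3 0;
#X connect 1 0 3 0;
#X connect 2 0 3 1;
#X connect 3 0 4 0;
#X connect 4 1 6 0;
#X connect 4 0 7 0;
#X connect 5 0 6 0;
#X connect 7 0 8 0;
#X connect 8 0 9 0;
#X connect 9 0 10 0;
#X connect 10 0 11 0;
#X connect 11 0 12 0;
#X connect 11 0 12 1;
```

On each metro tick the right branch re-records the table while the left branch, after the short delay, ramps vline~ from sample 0 to 44100 over one second, so playback chases the fresh recording — with an open mic and speaker this is exactly the gated feedback loop described above.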

Categories
Specialising and Exhibiting E1

Field Recording Kit

At first, when I thought about capturing more information like temperature and humidity in the field, my first instinct was to build a microphone by changing a mic capsule. Using a mic-like object to do field recording seems the most straightforward way, as field recordists are so used to pointing or setting up a mic or mics at the sound source or a specific spot. However, after I chatted with Milo and Joanne, I found that it would be very difficult to make something like that. First of all, mic capsules work differently from those sensors: a mic capsule transfers vibration in the air into an electrical signal using a magnetic diaphragm, while a sensor sends out an electronic signal of its own. Then I thought: is it possible to use the sensor's electronic signal to modulate the audio input (a mic signal)? It is possible, but it would need a lot of trial and error, and as Joanne suggested, it cannot be done in just one or two months. Both Milo and Joanne suggested I use a computer like Bela or Arduino. If I use a computer, then I can map the sensor data onto anything using code. Working this way is more accessible and gives more control over the data from the sensor. Choosing between Arduino and Bela, Bela seems the better option. As I want the sensor data to affect the audio in real time, Bela is better suited for processing audio, plus Bela can run Pure Data, which is the only programming language I have any real knowledge of.
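As a rough illustration of the kind of mapping they suggested (not a patch I have built yet): as far as I understand from the Bela documentation, Bela's analog sensor inputs show up in Pd as extra adc~ channels 3 to 10. The sketch below assumes a mic on audio input 1 and a sensor on analog input 0 (adc~ channel 3), and simply lets the smoothed sensor voltage scale the mic level:

```
#N canvas 0 50 420 260 12;
#X text 30 0 mic on ch 1 \, sensor on analog in 0 = adc~ ch 3;
#X obj 30 30 adc~ 1 3;
#X obj 110 70 lop~ 10;
#X obj 30 110 *~;
#X obj 30 150 dac~;
#X connect 1 0 3 0;
#X connect 1 1 2 0;
#X connect 2 0 3 1;
#X connect 3 0 4 0;
#X connect 3 0 4 1;
```

The lop~ 10 just smooths the sensor signal so it acts as a slow control rather than audio; mapping it onto anything else (a filter cutoff, a playback speed) would work the same way.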

Categories
Specialising and Exhibiting E1

PD wavetable synth

To explore more in Pure Data, I followed the tutorial at archive.flossmanuals.net/pure-data/. Pure Data only has two types of oscillator: the osc~ object creates a sine wave and the phasor~ object creates a sawtooth wave, although with some mathematical operations we can transform those waves into other waveforms. I was trying to make a triangle wave, but the maths was really difficult for me. While researching how to do that, I found there is another way to create different types of wave, which is using an array, a table in Pure Data. In the class exercise, we used it as a waveform monitor to see what wave we had created. However, we can actually read the table and use it as a wavetable synth.
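For the record, here is one way the triangle can be derived from phasor~ with only signal arithmetic — fold the 0-to-1 ramp around its midpoint, then rescale to ±1 (the 220 Hz and the 0.2 output gain are arbitrary choices of mine):

```
#N canvas 0 50 420 340 12;
#X obj 30 30 phasor~ 220;
#X obj 30 70 -~ 0.5;
#X obj 30 110 abs~;
#X obj 30 150 *~ 4;
#X obj 30 190 -~ 1;
#X obj 30 230 *~ 0.2;
#X obj 30 270 dac~;
#X text 120 110 fold the ramp into a V shape;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 4 0 5 0;
#X connect 5 0 6 0;
#X connect 5 0 6 1;
```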

This time an Hslider controls the input, which is converted to a frequency with the mtof object. The phasor~ object then outputs an audio signal running from 0 to 1, which is multiplied by 2051, the length of the table. The most important part is the tabread~ object, which plays back the table as an audio signal. Here I came across a new kind of box called a Message. A message box can send a number or a command into an object box. Into the tabread~ object I send three different messages saying set [waveform]; by pressing them, I can make tabread~ read a different table. The trick with the arrays is to set a command in a message box: first type the table name, then sinesum with the length and the partials. This message draws the desired shape into the array, and the command normalize 1 makes sure the waveform won't clip and stays inside the table. Finally the tabread~ signal goes into *~ 0.5 to lower the volume and then into dac~. In addition, I can also draw the waveform into the array with the mouse to create interesting sounds as well.
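A compressed sketch of the synth in Pd's file format — the table names wave1/wave2 and the partial amplitudes are my own stand-ins for the waveforms; clicking a sinesum message fills its table, and the set messages switch which table tabread~ plays:

```
#N canvas 0 50 640 320 12;
#X floatatom 30 20 5 0 127 0 - - -;
#X obj 30 60 mtof;
#X obj 30 100 phasor~;
#X obj 30 140 *~ 2051;
#X obj 30 180 tabread~ wave1;
#X msg 160 140 set wave1;
#X msg 160 180 set wave2;
#X obj 30 220 *~ 0.5;
#X obj 30 260 dac~;
#X msg 320 60 \; wave1 sinesum 2051 1 0.5 0.33 0.25 \; wave1 normalize 1;
#X msg 320 140 \; wave2 sinesum 2051 1 0 0.11 0 0.04 \; wave2 normalize 1;
#X obj 320 220 table wave1 2051;
#X obj 320 260 table wave2 2051;
#X text 30 0 MIDI note (stands in for the Hslider);
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 4 0 7 0;
#X connect 5 0 4 0;
#X connect 6 0 4 0;
#X connect 7 0 8 0;
#X connect 7 0 8 1;
```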

Categories
Specialising and Exhibiting E1

Pure Data Basics

I made my first PD (Pure Data) patch. By doing this, I learned the basics of Pure Data. Firstly, objects: rectangular boxes where you can type in commands, and different commands work differently. For example, osc~ is an oscillator, mtof is a MIDI-to-frequency converter, and key receives signals from the computer keyboard. An object with a ~ carries an audio signal, which connects to other objects with a thicker line, while the thin lines carry messages or numbers. Object boxes can also do maths, such as * for multiplication, and the signal version can be used as a VCA in Pure Data to amplify a signal. Another type of box is the number box. It can show the number arriving at its inlet, or its value can be changed in play mode (command + E toggles between edit and play mode). Finally, the most important object is dac~ (the digital-to-analog converter), which basically works as the output.

The patch I made is an additive synth. Firstly I used the key object to pick up key presses and the mtof object to convert them. The main oscillator is set to 440 Hz. The first number box sends its value to four extra oscillators, using * objects to scale the number into a harmonic sequence. Each sub-oscillator then gets a different volume through its own *~ object. All the signals go into a *~ 0.25 to limit the final output and prevent clipping, and that goes into dac~.
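A minimal sketch of the additive patch in Pd's file format — the harmonic gains (0.5 down to 0.05) are my own guesses at the mix, and a number box stands in for the key front end:

```
#N canvas 0 50 620 340 12;
#X floatatom 30 20 5 0 127 0 - - -;
#X obj 30 60 mtof;
#X obj 120 100 * 2;
#X obj 180 100 * 3;
#X obj 240 100 * 4;
#X obj 300 100 * 5;
#X obj 30 140 osc~ 440;
#X obj 120 140 osc~;
#X obj 180 140 osc~;
#X obj 240 140 osc~;
#X obj 300 140 osc~;
#X obj 30 180 *~ 0.5;
#X obj 120 180 *~ 0.25;
#X obj 180 180 *~ 0.15;
#X obj 240 180 *~ 0.1;
#X obj 300 180 *~ 0.05;
#X obj 30 240 *~ 0.25;
#X obj 30 280 dac~;
#X connect 0 0 1 0;
#X connect 1 0 6 0;
#X connect 1 0 2 0;
#X connect 1 0 3 0;
#X connect 1 0 4 0;
#X connect 1 0 5 0;
#X connect 2 0 7 0;
#X connect 3 0 8 0;
#X connect 4 0 9 0;
#X connect 5 0 10 0;
#X connect 6 0 11 0;
#X connect 7 0 12 0;
#X connect 8 0 13 0;
#X connect 9 0 14 0;
#X connect 10 0 15 0;
#X connect 11 0 16 0;
#X connect 12 0 16 0;
#X connect 13 0 16 0;
#X connect 14 0 16 0;
#X connect 15 0 16 0;
#X connect 16 0 17 0;
#X connect 16 0 17 1;
```

The five signal connections into the final *~ 0.25 sum implicitly at its left inlet, which is what makes the patch additive.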

Categories
Aural Culture

Tom Fisher

(guest lecture)

Tom Fisher works under the name Action Pyramid. His practice involves composition that facilitates a reconsideration of our surroundings, examining the relationship between ourselves and the nonhuman, and our part in the wider ecologies of landscapes. During the lecture, he shared a lot of the field recordings he has made, which amplify tiny or unheard sounds in nature, for example hoverflies and the yellow water lily. Firstly, the sounds of the water lily and the hoverflies are fascinating in themselves. I didn't know that water lilies or aquatic plants made such 'weird' sounds (almost like an alien or an old-school sci-fi synth), and yet we are so used to stereotypical nature sounds such as birdsong, wind, moving leaves, or certain insects.

He pointed out that in a visually dominated world, the sounds of many species are often missed in scientific studies; a lot of sounds remain undiscovered. He is also part of a bioacoustics study in which he records a pond, revealing its sonic cycle: aquatic plants dominate during the daytime, and insects take over at night.

Since he also works in scientific research, a lot of people ask him (something I also kept thinking about during his sharing) about realism in field recording. For him, field recording can't be 100% realistic. Therefore, his work is always fictional — a 'realistic' illusion, as he calls it. He stated that he is creating a space that is closer to the natural environment; hence he likes to use quadraphonic or other spatial setups in his compositions. His goal is to direct the audience towards certain sounds, an unheard nature.

Categories
Specialising and Exhibiting E1

Margaret Tait

Margaret Tait was a poet and filmmaker born in Scotland. At the age of 8, she was sent to Edinburgh to be educated, and she went on to study medicine at Edinburgh University. She worked as a doctor until she went to Rome to study filmmaking. Her scientific background shaped some of her artwork, which harbours doubts about reality and existence; this shows in some of her poems and films.

Litmus - Margaret Tait
I don’t know why is that acid turns blue litmus pink,
    And don’t tell me you know
    Because I’m sure you don’t
    – ‘Because it’s acid.’ –
    That’s no reason.
    Acid is pink.
    Nonsense.
    With other reagents the pink’s on the other side.
    Why pink?
    Why blue?
    Why change?

The short film Aerial combines different footage of rural and city scenes, nature, birds, the sky, raindrops, etc., along with single piano notes and natural sounds. According to Tait, it 'touches on elemental images. Air, water (and snow), earth, fire (and smoke), all come into it. For sound there's a drawn out musical sound, single piano notes and some neutral sounds.' She stated that it is 'just as a sort of song or even a kind of nursery rhyme'. The four elements simply intermingle; there is no narrative or argument. It just shows reality in a poetic way.

For me, I like to use field recording as the only material for composing, and what counts as reality has been the main argument in this field for a long time. There is no reality in field recording, as it transfers vibration into an electrical signal and converts it back into vibration at the speaker later; in this process, reality is lost. The recording we listen back to is different from what the recordist heard in the recording environment. The recorder can't capture everything, so what it captures is the recordist's choice, and different people wouldn't choose to listen to the same things in the same environment. The other problem is the other senses. When we are in a forest, for example, sensations other than vision and hearing, such as temperature and humidity, change our experience. Those sensory experiences can also shape the way we listen.

Therefore, I am wondering: is it possible to capture more elements while doing field recording, and by doing so, can field recording practice become more accurate, closer to reality?

Categories
CSP E2

Automation

I used a lot of volume automation, as I want to fade in and out or crossfade different tracks. For example, all the drones in the second half of the piece come in gradually because I want to slowly increase the tension. I also use automation to turn on some of the effects, such as the distortion and echo on the deer-barking track. Another use is panning. On some tracks I used spatial effects, but sometimes I just want simple panning, and that is where panning automation comes in: I can control the panning pattern easily by drawing the automation, and I can fine-tune it very easily. The new technique I started to use is live-recording automation. As I mentioned in the spatialisation blog, I used the DearVR plug-in, and by live-recording the panning automation I can control it directly and create more organic panning; that way I can follow my direct instinct as if improvising, rather than sitting in automation mode trying to figure out how much to put into each parameter at a specific time. I also used live-recorded automation on the Grain Delay's XY pad, along with automation of the wet/dry signal. I feel I can connect more with the track by recording the automation live.

Panning
Fading in and out with panning automation
Grain Delay XY pad
Categories
CSP E2

Spatialisation

I used four different tools to spatialise tracks, both for three-dimensional effects and to help with the mixing. Firstly, I used the DearVR plug-in for the fire sound and the deer's barking. The best thing about DearVR is the elevation: it is not just left and right, front and back, but can also simulate sounds above and below. Therefore, I put the whole campfire down a little to simulate the campfire's position low on the ground. For the deer's barking, I used an XY pad in Ableton's effect rack and mapped it to Azimuth and Elevation. Then I can record the automation of the effect and change it with the cursor. As a result, every bark sits in a different place, simulating deer being everywhere.

Barking track

The second plugin is CircularDoppler. The benefit of this plug-in is the fading in and out combined with a physical simulation of pitch shift, moving in a circular motion around a listener position that I can choose. I used it on one of the drone tracks to create a spinning effect. I didn't want it always moving in the same clockwise motion, so I put an LFO on the shift parameter to modulate it, so it goes back and forth rather than spinning in one direction. (But I found there is a bug in this version where it may lose all its settings, and sometimes not even work, when I reopen Ableton.)

PaulXStretch drone

The third one I used is Ableton's Auto Pan. The good thing about it is how easy it is to control the amount of panning, the pattern and the speed, and I can also choose different waveforms. I used it on some of the drones and on the Grain Delay birdsongs. The final one is the most basic: panning automation. It allows precise panning, as I can draw the pan curve directly.

Categories
CSP E2

Film Sound

For the first exercise, in my first attempt I used Ableton's Operator and MIDI-mapped the fine tune of every oscillator and the filter, and I found the result very droney, because I didn't want any silent moments. I didn't quite follow the instructions for this exercise: instead of making three separate attempts, I layered the three together. In the second attempt I used Collision and MIDI-mapped the Inharmonics of both resonators separately. The second track was higher-pitched with shorter notes, contrasting with the low drone of the first attempt. For the third attempt I used Collision again, but this time with more dissonance, and with the second resonator pitched even higher. Surprisingly, the original soundtrack is a bit similar to mine, and they share a similar feeling, but mine is more chaotic.

First exercise

For the second exercise, I followed the instructions and made three attempts. In the first attempt I used only Operator, as I had already prepared enough with it.

2nd exercise, first attempt

For the second attempt, I customised the waveforms of oscillators B to D, and MIDI-mapped the fine tuning and level of those three oscillators, and also the frequency knob of the low-pass filter. Since I routed all three oscillators into oscillator A, I found twisting the level of each oscillator quite useful for changing the texture of the sound. Finally, I added a phaser afterwards.

2nd exercise, second attempt (intro trimmed slightly)

The final attempt was based on the second one, but I also MIDI-mapped the phaser's wet/dry. This time I was more familiar with the setting: each modified oscillator has a different characteristic, and a small twist of the volume knobs can produce a completely different texture. With control over the phaser, I can get dreamier sounds and cut them out instantly when the scene changes. I also mapped the attack time of oscillator A.

2nd exercise, third attempt (intro trimmed slightly)

The original sound design of the second exercise shocked me, as it has a completely different feeling from what I had imagined. I found that sound design can completely change the audience's emotion even when the visuals are the same.