📅 Nov 2003 | Music Technology Projects
Given the task of creating music for a video that already had a soundtrack, I approached the project by writing in a style known as metonymy, in which sound takes on the representation of movement. What made this project so demanding was that every sound in the piece is triggered by the actions of the characters onscreen.
The first solid approach assigned a musical note to each individual; in this scenario the three main characters would together produce a chord of G major, heard whenever all three are present onscreen. This direct approach seemed to work well until the main characters began to show excessive movement, leading to results that were more a montage of sound than the relationship with the characters I had hoped for. Rethinking my original approach, I decided to use a pre-defined drum loop for each of the main characters, while introducing a main themed percussion loop that would be dominant throughout the duration of the video. The only exception to this occurs when all three characters are present on screen: at this point the main drum loop is removed, allowing the three characters' percussion parts to montage as one.
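The triggering rule described above can be sketched as a small piece of logic. Everything here is a hypothetical illustration: the character names, loop names, and presence data are invented, and the original work was done in a sequencer, not in code.

```python
# Sketch of the triggering logic: each main character carries a percussion
# loop, a main themed loop plays throughout, and the main loop is muted
# only when all three characters are onscreen at once.

CHARACTER_LOOPS = {
    "male_1": "high_snare_loop",
    "female_1": "low_zap_loop",
    "female_2": "mid_bongo_loop",
}

def active_loops(onscreen):
    """Return the set of loop names to play for one video frame.

    `onscreen` is the set of character names visible in that frame.
    """
    loops = {CHARACTER_LOOPS[c] for c in onscreen}
    # The main themed loop is dominant throughout, except when all three
    # characters are present and their parts montage as one.
    if onscreen != set(CHARACTER_LOOPS):
        loops.add("main_theme_loop")
    return loops

print(active_loops({"male_1"}))                          # theme plus snare
print(active_loops({"male_1", "female_1", "female_2"}))  # three parts only
```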
When choosing these parts I followed a set of rules in order to add a narrative approach to the piece. Firstly, each individual's main percussion sound would not collide tonally with any of the others in relation to its position in the frequency register. With this in mind I gave Male I a high snare sound, Female I a low zap-type drum, and Female II bongos with a middle-range timbre. Essentially, when placed together these create a new percussion loop, at the same time placing a degree of motion within the piece as different parts of the hearing register are emphasized.
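The register-separation rule can be illustrated with toy signals. This is a minimal sketch, assuming invented centre frequencies and a crude synthetic "drum hit"; the real sounds were sampled percussion, not sine bursts.

```python
import numpy as np

SR = 44100  # sample rate; an assumption for this sketch

def band_hit(centre_hz, dur=0.1):
    """A toy percussion hit: a fast-decaying sine burst at a chosen
    register centre. Purely illustrative, not the actual samples used."""
    t = np.arange(int(SR * dur)) / SR
    env = np.exp(-t * 40)  # fast decay gives a drum-like envelope
    return env * np.sin(2 * np.pi * centre_hz * t)

def centroid(x):
    """Spectral centroid in Hz, a rough 'where the energy sits' measure."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / SR)
    return float(np.sum(freqs * spec) / np.sum(spec))

# Hypothetical register choices echoing the rules above:
snare  = band_hit(4000)  # Male I: high register
zap    = band_hit(150)   # Female I: low register
bongos = band_hit(800)   # Female II: middle register

# Summing the three non-colliding parts yields a composite loop whose
# energy moves across the hearing register as characters appear.
composite = snare + zap + bongos
```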
From this point I began to follow the video through carefully, mapping each individual's parts and, where appropriate, removing percussion. During this stage I also created the main thematic percussion that is dominant throughout, removing it whenever all three characters are present on screen. I also removed parts from the main thematic percussion loop at the beginning of the video; this was to help create an overall difference in timbre when the first major scene change takes place, as the view moves from a facial close-up to a street bench scene.
Next came the issue of the characters' positioning and the extent of their gestures. Although used for some of their movements, percussion was not very useful in fulfilling this role. To compensate, I took sounds from the VST instrument Reaktor. Rather than looking for melody-based sounds (as is the case with the bass line in the final version), I chose to look for sounds that had motion as part of their timbral feel. This was essential to cater for the actions I had in mind, one of these being the yawn of Male I. Once all four drum loops were finished and placed accordingly within the arrangement, I began the task of breaking the loops down and adding emphasis where the characters dominate within a scene. For example, when Male I moves closest to the camera his snare is intensified to create a drum-roll timbre; when Male I is seen moving quickly from a close foreground position to a background position, velocity is also considered. This approach works best when all three characters are making bodily gestures and movements in the final bench scene; this section also helps to explain the motives behind the thematic parts, for the role of each percussion part now becomes almost self-narrating.
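The idea of emphasising a part as its character approaches the camera can be sketched as a simple distance-to-velocity mapping. The distance units, velocity range, and function name here are all hypothetical illustrations, not the actual method used in the sequencer.

```python
# Hypothetical mapping from a character's apparent distance to MIDI
# velocity: closer to the camera means a harder, more emphasised hit.

def velocity_for_distance(distance, near=1.0, far=10.0,
                          v_min=40, v_max=127):
    """Linearly map distance (near..far, arbitrary units) to MIDI velocity."""
    d = min(max(distance, near), far)      # clamp into the valid range
    frac = (far - d) / (far - near)        # 1.0 when closest, 0.0 when furthest
    return round(v_min + frac * (v_max - v_min))

print(velocity_for_distance(1.0))   # closest to camera: maximum velocity
print(velocity_for_distance(10.0))  # background: minimum velocity
```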
Originally, when approaching a narrative for the background images (shoppers in a high street), I had hoped to use pre-recorded street scenes manipulated by means of convolution, using shop-related sounds (tills, the jingle of money). Unfortunately this was very unsuccessful, leading to a piece that could almost be viewed as two separate arrangements juxtaposed against each other. To compensate, I began to play with the idea of adding speech via a text-to-speech engine. When contemplating the narrative of the speech, it seemed sensible to use words that were related to the title, but that also elaborated further on what the title actually aims to say about the video.
Although passable, the generated speech was far from acceptable, so I re-processed it through the virtual instrument Vokator. This led me to use convolution, layering the speech with an organ timbre. By morphing the two sounds together within one instrument I was able to apply tonal qualities to my speech, creating several chorded versions of the original.
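The speech-plus-organ morph follows the classic vocoder principle: impose the spectral shape of one sound (the speech) onto the tonal material of another (the organ). The sketch below is not Vokator, just a crude frame-by-frame illustration of that principle, with stand-in signals replacing the real speech and organ recordings.

```python
import numpy as np

SR = 8000  # low sample rate keeps this toy example small

def vocode(modulator, carrier, frame=256):
    """Crude vocoder sketch: for each frame, combine the modulator's
    spectral magnitudes with the carrier's phases."""
    n = min(len(modulator), len(carrier)) // frame * frame
    out = np.zeros(n)
    for i in range(0, n, frame):
        m = np.fft.rfft(modulator[i:i + frame])
        c = np.fft.rfft(carrier[i:i + frame])
        # Modulator magnitude shapes the carrier's tonal content.
        hybrid = np.abs(m) * np.exp(1j * np.angle(c))
        out[i:i + frame] = np.fft.irfft(hybrid, frame)
    return out

t = np.arange(SR) / SR
# Stand-ins: broadband noise for "speech", a chord of sines for "organ".
speech_like = np.random.default_rng(0).standard_normal(SR)
organ_like = sum(np.sin(2 * np.pi * f * t) for f in (220, 440, 660))
morphed = vocode(speech_like, organ_like)
```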
For the end sequence with the tramp, I was confronted with the issue of the original background music being present throughout the dialogue from both the tramp and the cameraman. In an attempt to eliminate the music I processed the desired audio section of the video through a noise reduction tool. Unfortunately it proved very difficult to eliminate all of the music without affecting the speech. On reaching the point where I felt I had eliminated as much as was acceptable, I exported to a new file. I then imported this file into my sequencer, layered it several times, and applied different EQ settings to each track, creating a final file to be mixed within the video arrangement. Finally, I took a 0.1 s sample from this file and time-stretched it by 900%, creating the sound that is heard first.
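Extreme time-stretching of a short sample, as in the final step above, is commonly done by reading overlapping grains of the input at a slower rate. This is a minimal overlap-add sketch under that assumption; the original stretch was done with a studio tool, and the grain size, hop, and test tone here are invented for illustration.

```python
import numpy as np

SR = 44100

def ola_stretch(x, factor, grain=2048, hop=512):
    """Naive overlap-add time-stretch: re-read overlapping, windowed
    grains of the input at 1/factor speed. factor=9.0 approximates
    a 900% stretch."""
    win = np.hanning(grain)
    n_out = int(len(x) * factor)
    out = np.zeros(n_out + grain)
    norm = np.zeros(n_out + grain)
    for out_pos in range(0, n_out, hop):
        in_pos = int(out_pos / factor)  # slow read position into the input
        g = x[in_pos:in_pos + grain]
        if len(g) < grain:
            g = np.pad(g, (0, grain - len(g)))
        out[out_pos:out_pos + grain] += g * win
        norm[out_pos:out_pos + grain] += win
    # Normalise by the summed window to keep amplitude roughly constant.
    return out[:n_out] / np.maximum(norm[:n_out], 1e-9)

# A 0.1 s test tone stretched by a factor of 9 becomes roughly 0.9 s long.
sample = np.sin(2 * np.pi * 440 * np.arange(int(0.1 * SR)) / SR)
stretched = ola_stretch(sample, 9.0)
```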
© 2019 http://www.ntmusic.org.uk/ All Rights Reserved. All Trademarks Recognised.