Source Filmmaker program

Over the past few weeks I have been experimenting with the Source Filmmaker program (SFM). The program is made by Valve, which is famous for making computer games such as Counter-Strike and Portal.

Layout of SFM

The layout of SFM is similar to that of many other filmmaking programs, such as Premiere Pro. The window is divided into four panels with two viewports – similar to Premiere’s film preview and source preview. The difference with SFM is that it focuses more on the movement of camera angles, lighting and the animation of game models to create storylines – mainly because the program was made to create promotional content in the form of short films and posters.

What I’ve been doing with SFM

I’ve spent most of my time learning the program, watching tutorials and example videos on YouTube to see what it can do. Last week I managed to export two quick videos at low render settings and high frame rates (50 and 100 FPS, respectively).

How I will use this program in my final project

There is a sequence in my film where a teenager plays an online game and makes friends with people online. SFM allows me to explore a wider range of options to create this sequence creatively, especially with animation (since I can’t draw or animate in other programs anyway). SFM also supports lip-syncing of game models, but this is very complicated and takes a lot of practice.

Script and research methodology update

I recently finished the script for the final project, which is a short film. It’s about a father telling a story to his son. Because my research topic is about the elements in a film, the story that the father tells his son will be made into different versions, where the elements will change in each version. The story is about a kid who befriends another “kid” (who is actually a grown man) on the Internet and gives away his home address; his parents end up murdered by the “kid”, and his newborn brother is taken away by the “kid”.

In the original script, the story is told using a narrator’s voice (the father’s), and as he tells the story, the character inside the story is shown, while Internet-related bits (such as video games and messaging, to depict the conversation between the character and the “kid”) are animations that pop up or are blended into the scene. The theme of the original script is horror, so ‘creepy’ music will accompany the scene, along with horror-themed animation. So, if the story is broken down into elements:

  • Speech: narrator’s voice, with standard story-telling voice and normal language
  • Music: creepy, horror-themed
  • Sounds: relevant sounds such as creaking doors, rainfall in the background, static
  • Visual image: horror-themed animation mixed with reality
  • Lighting: dark, nighttime

As a result, for the purpose of the research, each of these small elements can possibly be changed in different versions of the story.

Speech:

  • Remove narrator’s voice, depict everything using the character’s action
  • Change tone of voice
  • Change dialogue to poetry (?)

Music:

  • Change genre from horror to happy, comedic – positive in general

Sound:

  • Replace ‘creepy’ sound effects with comedic ones

Visual:

  • Change everything to animation (comic style) and remove reality
  • Change the creepy theme to a cartoon theme

Lighting:

  • Nighttime to daytime
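To keep track of the versions, the element variations above can be treated as a small combinatorial space: a baseline (the original horror version) plus one alternative per element, toggled on or off. A minimal Python sketch of this research design, assuming just two of the alternatives listed above (the dictionary structure and function names are my own, purely for illustration):

```python
from itertools import product

# Baseline elements of the original horror version of the story.
baseline = {
    "speech": "narrator's voice, standard story-telling tone",
    "music": "creepy, horror-themed",
    "sounds": "creaking doors, rainfall, static",
    "visual": "horror-themed animation mixed with reality",
    "lighting": "dark, nighttime",
}

# One alternative per element, taken from the lists above
# (only two are included here to keep the example small).
alternatives = {
    "music": "happy, comedic",
    "lighting": "bright, daytime",
}

def make_versions(baseline, alternatives):
    """Yield every story version obtained by toggling each alternative on or off."""
    keys = list(alternatives)
    for switches in product([False, True], repeat=len(keys)):
        version = dict(baseline)
        for key, on in zip(keys, switches):
            if on:
                version[key] = alternatives[key]
        yield version

versions = list(make_versions(baseline, alternatives))
print(len(versions))  # 2 alternatives -> 2**2 = 4 versions, baseline included
```

With all five elements varied this way, the space grows to 2^5 = 32 versions, which is why each cut for the project will probably change only one or two elements at a time.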

Sandwich analogy: The Bread

If a scene in a film is a sandwich and we are consuming it as a whole, what would the bread be? The image? The frame? The screen?

In the book Audio-Vision: Sound on Screen, Michel Chion writes:

“Why in the cinema do we speak of “the image” in the singular, when a film has thousands of them? The reason is that even if there were millions, there would still be only one container for them: the frame. What “the image” designates in the cinema is not content but container: the frame.

The frame can start out black and empty for a few seconds, or even for several minutes. But it nevertheless remains perceivable and present for the spectator as the visible, rectangular, delimited place of the projection. The frame thus affirms itself as a preexisting container, which was there before the images came on and which can remain after the images disappear.

What is specific to film is that it has just one place for images – as opposed to video installations, slide shows, sound and light shows, and other multimedia genres, which can have several.”

Like the frame, the bread is the “container” for the components within.

References:

Chion, M. (1994). Audio-vision: sound on screen. Columbia University Press.

Juxtaposition of music, sound and image; and audiovisual dissonance

Michel Chion developed the idea that there are two ways for music in film to create a specific emotion in relation to the situation depicted on the screen (Chion, 1985). On one hand, music can directly express its participation in the feeling of the scene by taking on the scene’s rhythm, tone and phrasing; obviously such music participates in cultural codes for things like sadness, happiness and movement. In this case we can speak of empathetic music, from the word empathy, the ability to feel the feelings of others (Chion, 1994). On the other hand, music can also exhibit conspicuous indifference to the situation, by progressing in a steady, undaunted and ineluctable manner: the scene takes place against this very backdrop of “indifference”. This juxtaposition of scene with indifferent music has the effect not of freezing emotion but rather of intensifying it, by inscribing it on a cosmic background. Chion calls this second kind of music anempathetic.

The anempathetic impulse in the cinema produces those countless musical bits from player pianos, celestas, music boxes and dance bands, whose studied frivolity and naivete reinforce the individual emotion of the character and of the spectator, even as the music pretends not to notice them. There also exist cases of music that is neither empathetic nor anempathetic, which has either an abstract meaning or a simple function of presence, a value as signpost: at any rate, no precise emotional resonance.

The anempathetic effect can also occur with noise – when, for example, in a very violent scene, after the death of a character some sonic process continues, like the noise of a machine, the hum of a fan or a shower running, as if nothing had happened. Examples of this can be found in Hitchcock’s Psycho and Antonioni’s The Passenger.

Sound can also influence the perception of movement, speed and time in the image.

Audiovisual dissonance

Audiovisual dissonance is when image and sound follow two totally different tracks. It is not enough if the sound and image differ in nature (their respective content, spatial characteristics). Audiovisual counterpoint will be noticed only if it sets up an opposition between sound and image on a precise point of meaning. This kind of counterpoint influences our reading by postulating a certain linear interpretation of the meaning of the sounds. Take, for example, the moment in Godard’s First Name Carmen when we see the Paris metro and hear the cries of seagulls. Critics identified this as counterpoint, because the seagulls were considered as signifiers of “seashore setting” and the metro image as a signifier of “urban setting”. This reduces the audio and visual elements to abstractions at the expense of their multiple concrete particularities, which are much richer and full of ambiguity. Thus this counterpoint reduces our reading to a stereotyped meaning of the sounds, drawing on their codedness (seagulls = seashore) rather than their own sonic substance, their specific characteristics in the passage in question.

So the problem of counterpoint-as-contradiction, or rather of audiovisual dissonance, is that counterpoint or dissonance implies a prereading of the relation between sound and image. It forces us to attribute simple, one-way meanings, since it is based on an opposition of a rhetorical nature (“I should hear X, but I hear Y”).

There exist hundreds of possible ways to add sound to any given image. Of this vast array of choices, some are wholly conventional. Others, without formally contradicting or “negating” the image, carry the perception of the image to another level. Audiovisual dissonance is merely the inverse of convention, and thus pays homage to it, imprisoning us in a binary logic that has only remotely to do with how cinema works.

References:

Chion, M. (1985). Le son au cinéma. Vol. 5. Cahiers du cinéma.

Chion, M. (1994). Audio-vision: sound on screen. Columbia University Press.

Montage (cont.) – the Kuleshov experiment and Eisenstein’s theory of montage

In the 1920s, a filmmaker named Lev Kuleshov took three identical shots of the well-known prerevolutionary actor Moszhukin and intercut them with shots of a plate of soup, a woman in a coffin, and a little girl. According to V. I. Pudovkin (a filmmaker and Kuleshov’s student), who later described the results of the experiment, audiences exclaimed at Moszhukin’s subtle and affective ability to convey such varied emotions: hunger, sadness and affection. In his two major works, Pudovkin developed, from the basic root of his experiments with Kuleshov, a varied theory of cinema centered on what he called “relational editing”. For Pudovkin, montage was “the method which controls the ‘psychological guidance’ of the spectator”. In this respect, his theory was simply Expressionist – that is, mainly concerned with how the filmmaker can affect the observer. But he identified five separate and distinct types of montage: contrast, parallelism, symbolism, simultaneity and leitmotif. He saw montage as the complex, pumping heart of film, but he also felt that its purpose was to support narrative rather than to alter it.

Eisenstein set up his own theory of montage – as collision rather than linkage – in opposition to Pudovkin’s theory. For Eisenstein, montage has as its aim the creation of ideas, of a new reality, rather than the support of narrative, the old reality of experience. As a student, he had been fascinated by Oriental ideograms that combined elements of widely different meaning in order to create entirely new meanings, and he regarded the ideogram as a model of cinematic montage. Taking an idea from the literary Formalists, he conceived of the elements of a film being “decomposed” or “neutralised” so that they could serve as fresh material for dialectic montage.

Eisenstein extended this concept of dialectics even to the shot itself. As shots related to each other dialectically, so the basic elements of a single shot – which he called its “attractions” – could interrelate to produce new meanings. Attractions as he defined them included “every aggressive moment … every element … that brings to light in the spectator those senses or that psychology that influence his experience – every element that can be verified and mathematically calculated to produce certain emotional shocks in a proper order within the totality …” [Film Sense, p. 231].

Because attractions existed within the framework of that totality, a further extension of montage was suggested: a montage of attractions. “Instead of a static ‘reflection’ of an event with all possibilities for activity within the limits of the event’s logical action, we advance to a new plane – free montage of arbitrarily selected, independent … attractions …” [p. 232].

Later, Eisenstein developed a more elaborate view of the system of attractions in which one was always dominant while others were subsidiary. The problem here was that the idea of the dominant seemed to conflict with the concept of neutralisation, which supposedly prepared all the elements to be used with equal ease by the filmmaker.

Possibly the most important ramification of Eisenstein’s system of attractions, dominants and dialectic collision montage lies in its implication for the observer of film. Whereas Pudovkin had seen the techniques of montage as an aid to narrative, Eisenstein reconstructed montage in opposition to straight narrative. If shots A and B were to form an entirely new idea C, then the audience had to become directly involved; it was necessary that they work to understand the inherent meaning of the montage. Eisenstein was thus suggesting an extreme Formalism in which photographed reality ceased to be itself and became instead simply a stock of raw material – attractions, or “shocks” – for the filmmaker to rearrange as he saw fit.

References:

Monaco, J. (2013). How to Read a Film: Movies, Media, and Beyond. Oxford University Press, pp. 448–56.

Eisenstein, S. (1943). The Film Sense. Ed. Jay Leyda. London: Faber & Faber.

Montage and Idea – Associative Montage

The content of this entry is taken from the source listed in the references below.

The great formula of montage:

1 + 1 > 2

(Following the logic of dialectics (thesis, antithesis and synthesis), the sum of two parts is bigger if they are connected.)

Soviet montage theory is an approach to understanding and creating cinema that relies heavily upon editing (montage is French for “putting together”). Although Soviet filmmakers in the 1920s disagreed about how exactly to view montage, Sergei Eisenstein marked a note of accord in “A Dialectic Approach to Film Form” when he noted that montage is “the nerve of cinema,” and that “to determine the nature of montage is to solve the specific problem of cinema.”

While several Soviet filmmakers, such as Lev Kuleshov, Dziga Vertov and Vsevolod Pudovkin, put forth explanations of what constitutes the montage effect, Eisenstein’s view that “montage is an idea that arises from the collision of independent shots”, wherein “each sequential element is perceived not next to the other, but on top of the other”, has become most widely accepted.

In formal terms, this style of editing offers discontinuity in graphic qualities, violations of the 180-degree rule, and the creation of impossible spatial matches. It is not concerned with the depiction of a comprehensible spatial or temporal continuity as is found in the classical Hollywood continuity system. It draws attention to temporal ellipses, because changes between shots are obvious, less fluid and non-seamless.

Eisenstein’s montage theories are based on the idea that montage originates in the “collision” between different shots, in an illustration of the idea of thesis and antithesis. This basis allowed him to argue that montage is inherently dialectical, and thus should be considered a demonstration of Marxism and Hegelian philosophy. His collisions of shots were based on conflicts of scale, volume, rhythm and motion (speed, as well as direction of movement within the frame), as well as more conceptual values such as class.
Idea – Associative Montage

Idea-associative montage is one of several types of montage. Here two unrelated events are juxtaposed to create a third meaning – a technique developed in the silent film era to express ideas and concepts that could not be shown in a narrative picture sequence. These fall under two categories:

Comparison montage

  • These comprise shots that are juxtaposed with thematically related events to reinforce a basic theme or idea.
  • Silent films would often juxtapose a shot of a political leader with a shot of a preening peacock to depict the politician’s vanity.
  • Comparison montage acts like an optical illusion to influence the perception of the main event.

The Russian filmmaker Kuleshov conducted several experiments on the aesthetics of montage to show the impact of juxtaposition and context: he interspersed the expressionless face of an actor with unrelated shots of emotional value, like a child playing, a plate of soup and a dead woman, and the viewers thought that they were seeing the actor’s reaction to each event.

Television advertisements often use this technique to convey complex messages to viewers quickly, e.g. a running tiger dissolves into a car gliding on the road – a hyperbole signifying that the car has the strength, agility and grace of a tiger.

Collision montage

Two events collide to reinforce a concept, feeling or idea. The conflict creates tension.


References:

R. N. S. (2008). Introduction to Montage. [online] Available from: <http://mediaelectron.blogspot.co.uk/2008/10/introduction-to-montage.html> [Last accessed 13/4/2015].

Montage and Juxtaposition

Montage is the European term for putting together the shots of a film, whereas the American term is “cutting” or “editing”. Montage suggests that a film is constructed rather than edited (Monaco, 2013).

Montage is used in a number of different ways. While maintaining its basic meaning, it also has the more specific usages of:

  • A dialectical process that creates a third meaning out of the original two meanings of the adjacent shots; and
  • A process in which a number of short shots are woven together to communicate a great deal of information in a short time

Montage, literally translated from French, means assembly: the process by which an editor takes two pieces of film or tape and combines them to emphasise their meaning (Azia, 2015). Visualise, for example, shot A, which is a pumpkin, and shot B, which is a hammer going down. Put the two shots together and you get a new meaning, C: the pumpkin is assumed to be destroyed by the hammer.

Sergei Eisenstein is an important individual within the world of editing because he developed “The Film Sense”, with fast editing and juxtaposition. The school of thought at the time was that shots complemented each other; if you showed a person walking, then the next shot should help continue the action. Eisenstein developed the idea of juxtaposition: the process of showing two unrelated things which, combined, create a new meaning.

References:

Monaco, J. (2013). How to Read a Film: Movies, Media, and Beyond. Oxford University Press, pp. 239–49.

Azia, R. (2015). Montage Theory. [online] Available from: <http://www.main-vision.com/richard/montage.shtml> [Last accessed 13/4/2015].

5 channels of information in film

In my previous posts, I talked about how a movie scene is split into different elements. This has been discussed and defined by Christian Metz, who identified five channels of information in film (Monaco, 2013):

  1. The visual image
  2. Print and other graphics
  3. Speech
  4. Music
  5. Noise (sound effects)

Interestingly, the majority of these channels are auditory rather than visual. Examining these channels with regard to the manner in which they communicate, we discover that only two of them are continuous – the first and the fifth. The rest are intermittent – they are switched on and off – and it is easy to conceive of a film without print, speech or music. The two continuous channels themselves communicate in distinctly separate ways. We “read” images by directing our attention; we do not read sound, at least not in the same conscious way. Sound is not only omnipresent but also omnidirectional. Because it is so pervasive, we tend to discount it. Images can be manipulated in many different ways, and the manipulation is relatively obvious; with sound, even the limited manipulation that does occur is vague and tends to be ignored.
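Metz’s taxonomy, as discussed above, can be encoded compactly. The following Python sketch is only an illustration of the classification (the attribute names are my own labels, not Metz’s terms):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Channel:
    name: str
    auditory: bool    # True for sound-based channels
    continuous: bool  # True if the channel is always "on" during a film

# Metz's five channels of information, tagged per the discussion above.
CHANNELS = [
    Channel("visual image", auditory=False, continuous=True),
    Channel("print and other graphics", auditory=False, continuous=False),
    Channel("speech", auditory=True, continuous=False),
    Channel("music", auditory=True, continuous=False),
    Channel("noise (sound effects)", auditory=True, continuous=True),
]

auditory = [c.name for c in CHANNELS if c.auditory]      # three of the five
continuous = [c.name for c in CHANNELS if c.continuous]  # image and noise only
print(auditory)
print(continuous)
```

This makes the two observations in the paragraph above easy to check at a glance: three of the five channels are auditory, and only the visual image and noise run continuously.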

It is the pervasiveness of sound that is its most attractive quality. It acts to realise both space and time. It is essential to the creation of a locale; the “room tone”, based on the reverberation time, harmonics, and so forth of a particular location, is its signature. A still image comes alive when a soundtrack is added that can create a sense of the passage of time. In a utilitarian sense, sound shows its value by creating a ground bass of continuity to support the images, which usually receive more conscious attention. Speech and music naturally receive attention because they have specific meaning. But the “noise” of the soundtrack – “sound effects” – is paramount. This is where the real construction of the sound environment takes place.

References:

Monaco, J. (2013). How to Read a Film: Movies, Media, and Beyond. Oxford University Press, pp. 235–6.

The Sandwich analogy

In my previous post, I talked about the comparison between a movie scene and a sandwich, as a way of simplifying and complicating the concept of elements within a movie scene at the same time. The components of a sandwich (filling [meat, egg, cheese, etc.], dressing [vegetables] and seasoning [sauces and spices]) work together within two pieces of bread, complementing each other and merging into one item of food that is consumed altogether in one bite. The sandwich analogy proposes that the elements of a movie scene work in the same way: they complement each other to generate a movie scene on the screen, with various possibilities to enhance the movie-goer’s experience – to create emotion, tension, deeper levels of meaning, dissonance, etc. This proposes a number of questions: if one element does not complement the others, will it affect the overall experience? If a scene already works with its existing elements, will adding or removing an element affect the overall experience? If one element is dominant over all the others, will it affect the overall experience?

With that in mind, this comparison has possible weaknesses, as it is merely a conceptual proposal.

Learning agreement change – Juxtaposition – “the Sandwich”

1. In the first module, I took “the usage of music in building tension” as my subject of study. As I researched and watched more films, the research question still seemed vague and hard to answer, and exploration seemed difficult. Watching a film is one thing, but remembering it, recalling its effect on a particular scene and then trying to discuss that effect is a difficult task. Additionally, it depends on my memory and music taste, and most of the time the music contributes only partly to a scene rather than determining its entire mood. Before encountering more obstacles and wasting more time on this subject, I decided to change the learning agreement, while keeping the usefulness of what I have read and discussed in previous blog entries.

2. So, instead of music, my new subject of study will be juxtaposition. Juxtaposition in film is defined as “the contiguous positioning of either two images, characters, objects, or two scenes in sequence, in order to compare and contrast them, or establish a relationship between them” (filmsite.org, 2015). I will go deeper into this subject in my new learning agreement and later blog entries. Previous entries about music can be interpreted as being about the juxtaposition of music/sound and visual elements.

3. “the Sandwich”

The sandwich is a comparison I came up with between a movie scene and a sandwich. Like a sandwich, a scene divides into a large number of elements as we break it down. Among the visual elements we have action, characters, colours, set design, costumes, etc.; among the aural elements we have dialogue, music and sounds. If the bread – the thing that holds all these elements together – is the picture, we have a very accurate comparison. When we eat a sandwich or watch a scene, we take in all of these elements at once. So my question would then be: if any of these elements are out of place, what effect would it have on the audience? (If any of the sandwich ingredients taste different or bad, what effect would it have on the sandwich?) Also, if the elements each give a different effect, what would the audience experience? By “effect on the audience”, I mean emotion, understanding, expectations, etc.

4. This is complicated, but it can be explored in much greater depth than my previous subject, and many theories and tests can be made. I need to refine my list of films, or just of scenes to watch.