Soundbarrier_Soundscrolls

  • AV Performance
Sound Scrolls is a project that grew out of reflections on the design horizons opened up by the possibility of composing and manipulating sounds and images through direct action on code. Inspired primarily by the concept of "transcoding" developed within media theory by Lev Manovich (2004), by Dina Riccò's research on synaesthesia in design communication and on audiovisual kinetics (1999 and 2007), and by Michel Chion's studies on "audiovision" (1999), Sound Scrolls sets out to explore some of the new connections between moving images and sound made possible by the fact that both, as digital media, can be handled and processed through similar procedures.

The concept of transcoding refers specifically to this last point, namely the fact that digital sound and digital images are essentially made of the same "matter", i.e. sequences of electrical impulses in a coded language. This material proximity allows a series of "translation" operations, managed directly in mathematical terms, between the two media: a sound can simply be "transcoded" into an image, and vice versa.
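
The text does not document how the project implements this, but the idea can be illustrated with a minimal Python sketch: the same buffer of numbers is read once as an audio signal and once as a grayscale image. The sample rate, tone, and image size below are arbitrary choices, not values from the project.

```python
import numpy as np

# Minimal illustration of "transcoding": the same array of numbers can be
# read as one second of audio or as a small grayscale image.

SAMPLE_RATE = 44100          # audio samples per second (assumed)
WIDTH, HEIGHT = 210, 210     # 210 * 210 = 44100, so the buffers line up exactly

# A 440 Hz sine tone, scaled to 8-bit values (0-255).
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
audio = ((np.sin(2 * np.pi * 440 * t) * 0.5 + 0.5) * 255).astype(np.uint8)

# Sound -> image: reshape the sample stream into a 2-D pixel grid.
image = audio.reshape(HEIGHT, WIDTH)

# Image -> sound: flatten the pixel grid back into a sample stream.
audio_again = image.reshape(-1)

assert np.array_equal(audio, audio_again)  # the underlying "matter" is identical
```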

Starting from this "figure", a great many design directions could technically be developed. In the case of Sound Scrolls, the starting point is a "classic" of experimental film: the work that Oskar Fischinger, at the end of the 1930s, in a pre-digital era, was already pursuing with great foresight in the field of the hybridisation of sound and image.

The "sound scrolls" were the graphical abstract, which mimicked the shape of a sound wave, which Fischinger composed with the explicit intention of creating, from the "signal", for an audiovisual work consistently. Sound Scrolls The project proposes to re-invent the process by Fischinger potential opened up by the digital medium.

A series of film sequences from Fischinger's original work are digitized and analyzed in real time by software created specifically for the project. From these images, data describing certain visual aspects are extracted, such as the average brightness of the frame, the amount of movement, shape recognition, etc.
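
The project's own analysis software is not published here; purely as an illustration, a per-frame measurement of average brightness and a rough "amount of movement" score could be sketched with OpenCV as follows (the file name is a placeholder):

```python
import cv2

# Hypothetical stand-in for the project's analysis stage: for each frame,
# compute the average brightness and a simple motion score.

cap = cv2.VideoCapture("fischinger_sequence.mp4")  # placeholder file name
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    brightness = float(gray.mean())  # average luminance, 0-255

    motion = 0.0
    if prev_gray is not None:
        # Mean absolute difference between consecutive frames as a motion proxy.
        motion = float(cv2.absdiff(gray, prev_gray).mean())
    prev_gray = gray

    print(f"brightness={brightness:.1f}  motion={motion:.1f}")

cap.release()
```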

These data are then converted into a set of parameters that control the sound synthesis algorithms. The whole process takes place in real time, so that the image is literally "played" as it runs.
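
The actual feature-to-parameter mapping is not specified in the text; a hedged sketch of such a mapping, scaling hypothetical brightness and motion values into MIDI-style 0-127 control ranges, could look like this:

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly rescale value from one range to another, clamped."""
    value = max(in_min, min(in_max, value))
    ratio = (value - in_min) / (in_max - in_min)
    return out_min + ratio * (out_max - out_min)

def features_to_params(brightness, motion):
    """Map visual features to synthesis control values (0-127, MIDI-style)."""
    return {
        "filter_cutoff": int(scale(brightness, 0, 255, 0, 127)),
        "amplitude":     int(scale(motion,     0,  50, 0, 127)),
    }

# Example: a bright, fairly static frame.
print(features_to_params(brightness=200.0, motion=3.2))
```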

The same data are also used to control graphical visualizations, superimposed on the images, which allow the analysis process to be "seen in action". A computer running the software receives the video signal from a standard DVD player. The signal is analyzed and "sorted" into two channels: one, graphically processed, goes to a video projector; the other, "translated" into MIDI signals, is sent to a second computer that processes the sound.
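
As an assumption rather than the project's documented code, the "translated into MIDI" leg of this chain could be sketched with the mido library (requires a working MIDI backend; the port choice and controller numbers below are arbitrary):

```python
import mido

# Send the mapped parameters to the second computer as control-change messages.
out = mido.open_output()  # default MIDI output port (assumption)

def send_params(params):
    # CC 74 (filter cutoff) and CC 7 (volume) are conventional choices, not
    # values documented by the project.
    out.send(mido.Message("control_change", control=74, value=params["filter_cutoff"]))
    out.send(mido.Message("control_change", control=7,  value=params["amplitude"]))

send_params({"filter_cutoff": 99, "amplitude": 8})
```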

The source signal can be analyzed in several different ways, and each corresponds to a different graphical treatment and a different mode of sound generation. Switching between one mode and another happens rapidly and in real time through simple keyboard commands. In this way the range of audiovisual treatments obtained is multiplied and can also be managed in a performative context.
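
The concrete key bindings and modes are not documented; a minimal sketch of keyboard-driven mode switching over a live video loop, with purely illustrative mode names, might be:

```python
import cv2

# Number keys select which analysis/rendering mode runs on the next frame.
MODES = {ord("1"): "brightness", ord("2"): "motion", ord("3"): "shapes"}
mode = "brightness"

cap = cv2.VideoCapture(0)  # any video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.putText(frame, f"mode: {mode}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    cv2.imshow("soundscrolls", frame)

    key = cv2.waitKey(1) & 0xFF
    if key in MODES:
        mode = MODES[key]      # switch analysis/sound mode instantly
    elif key == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```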

The project involves the use of two projections side by side: the first showing the original, unaltered images of the source films, the second showing the graphical visualizations.

This arrangement allows the viewer to follow the analysis process in real time and to perceive more clearly its connection with the flow of sound events it triggers, resulting in a highly immersive experience.

Duration (minutes)

20

What is needed

- PROJECTOR / 1

- DVD PLAYER

- PROJECTOR / 2

- AUDIO MIXER

- SPEAKERS

- DVD PLAYER

- COMPUTER 1

- COMPUTER 2: VIDEO PROCESSING

- AUDIO GENERATOR

- MIDI HUB

- FRAME GRABBER

Authors

SoundBarrier

Casamarciano, Caserta, Italy

Events

Videos (1)