+.x-i.net
Private Multi-Verse Perception
Interactive 'molecules' of 'audiovisual space and time'

Technology

The developed technological solutions facilitate and emphasise the main formal concept - multi-modal inter-translation of image, sound, space and movement, allowing control of (almost) every pixel, tone and millisecond.

The use of a database describing scenes, object properties and relationships organises the playground, where specific behaviours mediate the user's interaction. Objects carry their identity type, accumulate 'infection' from contact with other 'molecules', and record their interaction history during the session.
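
A minimal sketch, in C++, of how such an object record could be structured; the type and field names (Molecule, InteractionEvent, infection, history) are illustrative assumptions, not the project's actual database schema:

    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical record for one interaction between two 'molecules'.
    struct InteractionEvent {
        double      timestamp;   // seconds since session start
        std::string otherId;     // the other 'molecule' involved
        std::string kind;        // e.g. "touch", "collision", "infection"
    };

    // Hypothetical record for one 'molecule' in the scene/object database.
    struct Molecule {
        std::string id;
        std::string identityType;                 // the object's identity type
        std::map<std::string, double> infection;  // accumulated 'infection' per identity type
        std::vector<InteractionEvent> history;    // interaction history for this session

        // Accumulate 'infection' from contact with another molecule
        // and log the event in the session history.
        void infectFrom(const Molecule& other, double amount, double timestamp) {
            infection[other.identityType] += amount;
            history.push_back({timestamp, other.id, "infection"});
        }
    };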

Extensive use (and abuse) of non-standard, multi-layer rendering effects, e.g. multiple camera views rendered to texture, Z-depth traces, etc.
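
A rough, generic illustration of the render-to-texture step, assuming an OpenGL-style pipeline of that era; the helper functions setCamera() and drawSceneFromCamera() are placeholders, not the project's code:

    #include <GL/gl.h>

    // Application-side placeholders (assumed, not from the project):
    void setCamera(int cameraIndex);            // position the extra camera view
    void drawSceneFromCamera(int cameraIndex);  // draw the scene for that camera

    // Render one extra camera view into the back buffer, then copy it into a
    // texture that a later pass can map onto scene geometry - the usual
    // render-to-texture approach on 2002/2003-era consumer graphics cards.
    void renderViewToTexture(GLuint texture, int texSize, int cameraIndex) {
        glViewport(0, 0, texSize, texSize);   // render into a texture-sized region
        setCamera(cameraIndex);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawSceneFromCamera(cameraIndex);

        // Copy the freshly rendered pixels from the framebuffer into the texture.
        glBindTexture(GL_TEXTURE_2D, texture);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, texSize, texSize);
    }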

The technological platform is based on mass-consumer-grade equipment with a few specific add-ons. At the time of production (2002/2003), the computer used had a 2.6 GHz processor, 1 GB RAM and a graphics card with 128 MB of memory.

Self-(co-)developed technical solutions in VRML and C++:
- Overlay of abstract computer graphics on stereoscopic photos and movie files.
- Translation of object geometry, coordinates and movements from VRML to MIDI signals and vice versa (MIDI-to-VRML); see the mapping sketch after this list.
- Scene, Object and Animation Database.
- Additional touch-screen navigation/control interface (also written in VRML/MPEG-4).
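
A minimal sketch of the VRML-to-MIDI / MIDI-to-VRML translation mentioned above: scaling a continuous object coordinate to a 7-bit MIDI controller value and back. The function names and value ranges are illustrative assumptions, not the project's actual mapping or controller assignments.

    #include <algorithm>
    #include <cstdint>

    // Scale a continuous object coordinate (e.g. a VRML translation component)
    // into a 7-bit MIDI controller value in the range 0..127.
    std::uint8_t coordinateToMidiCC(float value, float minValue, float maxValue) {
        float normalized = (value - minValue) / (maxValue - minValue);
        normalized = std::min(1.0f, std::max(0.0f, normalized));
        return static_cast<std::uint8_t>(normalized * 127.0f + 0.5f);
    }

    // Reverse direction (MIDI-to-VRML): expand a controller value back into a
    // coordinate inside the given range.
    float midiCCToCoordinate(std::uint8_t cc, float minValue, float maxValue) {
        return minValue + (cc / 127.0f) * (maxValue - minValue);
    }

In a set-up like this, each mapped parameter would also need a fixed MIDI channel and controller number; those routing choices are project-specific and not documented here.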

The work's structure and software components allow it to be expanded (scaled up) to a very large set-up where the on-site hardware resources permit (e.g. HDTV-resolution projectors, a CAVE-like setup, additional loudspeakers, etc.).

Another direction is to implement it as a 'VR-organ' for multimedia/VR 'orchestra' performances.

 
Project by Jaanis Garancs © 2003 garancs.net