Evaluation of the
Specialist Project

BA Digital Media Production

Ilze Kavi Briede
2011

Through the course of my Specialist Project I learned a new set of skills preparing me for the final stage of my BA degree, namely the Extended Major Project. I had to follow a strict time management plan in order to meet every small deadline and ensure that my work was up to date and progressing. At the start of this project I created a small but concise planning sheet with tasks for each week, and I succeeded in completing every single one. My online blog reflects the development stages, while my sketchbook shows more of my research work and idea sketching.

For my Specialist Project I have produced a prototype version of a stage installation art piece in conjunction with sound. I have created a 3-dimensional sculptural object consisting of a few pieces suspended in the air and presented using a set design approach:

The idea behind this piece is to create an unusual and artistic solution for projections, with the prospect of being used on a larger scale for music bands, DJs or an orchestra. This piece should be viewed as a form of art with the possibility of being incorporated into different music events as a visual element. My approach in making it has its roots in a desire to provide a new platform for visual art using digital tools, and to push the boundaries of existing practice in this medium.

This project consists of a few elements: form, technology and generative visuals. The form is the shape of the face; the technology is explored through the use of sensors; and the generative side is handled by different pieces of software operating in real time and processing the data received from the sensors. The outcome is presented in the form of a digital canvas mapped onto the pieces of a face.

I set out to create a sound responsive visual sculpture. My work consisted of three main stages: exploring the idea of the face, materialising it and working on the style and technicalities of the projections.

Exploring the idea of the face deepened alongside the research for my essay. I was looking into the human cognitive faculties involved in recognising a face and the characteristics of human behaviour in perceiving faces, and drawing on my long-standing interest and many years of practice in drawing faces. All of this contributed to the subject. I deliberately came up with very abstract shapes to trigger new sensations in the viewer's perception, which is the underlying purpose of my work. My research was geared towards understanding the science behind the human perception of faces and how I could use these findings to influence the design of my set piece.

Turning my idea into a physical object went quite smoothly. I started by turning the final design into vector graphics and cut it from a perspex sheet with a laser cutter. Then I gave each piece a relief with the help of a heat gun and sprayed them white to maximise the quality of the projection on them. I was very lucky to be able to observe a friend of mine who is a set design student in the Costume department, and I received some valuable advice on how to build a set design platform of my own. It was my first attempt and I think I did very well. The idea to use this approach came from looking at different set designs and how clearly they allow a spectator to envision a real scene in a real theatre set-up just by looking at a smaller version of it. This display technique looks professional and at the same time is perfectly suited to showing my idea. I used several sheets of thick black foam-board to build the main structure, PVA glue and metal pins to secure the walls, and small copper poles from which I hung the various pieces that make up the face.

The last stage involved the aesthetics of the digital generative visuals and the programming. I am using four programs which run simultaneously and communicate with each other to meet the needs of my project. The first program (an Arduino sketch running with the microcontroller board) reads the analog input from the sensors and passes the data to the second piece of software, Processing, which generates visual graphics in real time from the incoming readings. The third piece of software, Syphon, grabs the screen of visuals and makes it accessible to the fourth, VPT (Video Projection Tools), which projects these graphics onto the mapped surfaces of the face. VPT is also used to mask the visuals so that they only show up on the face.
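To give a flavour of the Arduino to Processing link, here is a rough sketch of the kind of code I use on the Processing side: it opens the serial port, splits the comma-separated readings sent by the Arduino sketch shown in the Sensors section below, and lets one of the values drive a simple geometric drawing. The serial port index and the mapping ranges are only placeholders and depend on the machine; Syphon and VPT then pick up whatever appears in this window.

// Rough sketch of the Processing side (port index and mapping ranges are
// placeholders; the three values arrive as "reading,average,average2").
import processing.serial.*;

Serial arduino;
float reading = 0, average = 0, average2 = 0;

void setup() {
  size(640, 480);
  // Serial.list() shows the available ports; index 0 is an assumption here
  arduino = new Serial(this, Serial.list()[0], 9600);
  arduino.bufferUntil('\n'); // call serialEvent() once per full line
}

void serialEvent(Serial port) {
  String line = port.readStringUntil('\n');
  if (line == null) return;
  String[] parts = split(trim(line), ',');
  if (parts.length == 3) {
    reading  = float(parts[0]);
    average  = float(parts[1]);
    average2 = float(parts[2]);
  }
}

void draw() {
  background(0);
  stroke(255);
  // one sensor value sets the spacing of simple vertical lines,
  // in keeping with the minimal geometric style of the visuals
  float spacing = map(average, 0, 1023, 4, 40);
  for (float x = 0; x < width; x += spacing) {
    line(x, 0, x, height);
  }
}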

The only difficulty I have faced is working with the sensors. In order to ensure a smooth workflow with electronic components I have to understand the basics of circuit boards and the specifications of each sensor. I managed to get the analog readings and implement them in my work, but I had difficulty dealing with errors in the readings when the values started to fluctuate in a regular pattern and I couldn't find the real cause of it. I tried to troubleshoot, rewire the circuit and test for the source of the error, and it seemed to point to the Arduino board or the sensors themselves, where my knowledge isn't broad enough to solve the problem. To overcome this I altered the Arduino code to smooth the flow of data coming into the computer and stop it fluctuating so erratically.

The style of the visuals is minimal and based on simple geometric forms. I chose to work with such shapes because of inspiration from the artist Bridget Riley and various works of early Op Art (Optical Art), which is rooted in the Bauhaus. I found it very exciting and challenging to find the right emphasis on the form using geometric shapes. In Op Art, lines are usually presented on flat surfaces, and by specific arrangement they can suggest an optical illusion of three dimensions. My work is already 3-dimensional, and the reason I project lines is not to contribute to its shape but to do the opposite: I alter the shape of the line. Straight lines projected onto a relief surface become curved; it is an optical lie.

Of the three stages of my project I found the work with sensors the most challenging. I concluded that in order to achieve an organic union between the sound and the visuals, sensors have to play a very significant role. It is essential to pair the right sensor with the right musical instrument in order to best represent the instrument through the visuals. Depending upon which sensor is used, the musical instrument can be read and interpreted by the computer in a number of different ways. I found that working with vibration and sound sensors along with a drum kit was not enough. If I am to work with a band or an orchestra that uses many diverse instruments, I will need extensive knowledge of sensor technologies, of what is available on the market and of how practical and reliable each option is. Reflecting on my work with the sensors, I must say that it is one of the hardest mediums I have chosen to work with; I found it hard to find the right sensor to fulfil my exact requirements. Yet sensors excite me, because they are a representation of the physical world encoded in numbers, and it is up to us how we interpret them.

The skills I have learned throughout this unit will allow me to pursue my career as an installation artist. With this piece I am challenging myself to find a form of collaboration with a musician or a band which can push my work to a higher level. It will allow me to sell my work and present it to the creative industries as a product with strong artistic value. But to gain such recognition I have to work very hard at knowing the technical side of it. I can see myself working as an artistic director with a strong conceptual approach, backed up by an understanding of technologies and the range of possibilities they can be applied to.

Sensors

I would like to explain the sensors I am using for my installation.

At this stage of development I didn't plan to do many experiments with sensors, intending to tackle this subject in much greater depth in my Extended Major Project. I decided to use just a few in order to get a feel for them, how they work, and what sort of coding is needed to extract the data. I obtained two different sensors.

Vibration sensor:

This sensor is called the MiniSense 100; it is a lead-free horizontal vibration sensor: a low-cost cantilever-type sensor loaded by a mass to offer high sensitivity at low frequencies. The pins are designed for easy installation and are solderable. The sensor has excellent linearity and dynamic range, and may be used for detecting either continuous vibration or impacts.
The reason I picked this sensor is that it was relatively cheap and versatile. It seemed easy to attach to different instruments, and it didn't seem very complicated to code for either.

Sound/voice sensor:

This is a plug-and-play microphone (MIC) voice sensor for the Arduino series, with a built-in amplifier circuit around a miniature microphone. The sensor's output can be connected to any microcontroller with an AD converter and is especially suited to an Arduino controller, which can use it to perceive and interact with the surrounding sound environment.

I chose this sensor because it lets me gather a different source of data. Compared to the vibration sensor, which needs a physical surface to be functional, the sound sensor can be placed anywhere and is still capable of reading and sending data. Also, having two different ways of sourcing data will make the piece more diverse and extroverted.

Here are some images of soldering the sensors and wiring them to the Arduino breadboard:

Pedro helped me solder these sensors to small cables and then I wired them into a breadboard. I followed a tutorial on the Arduino website which describes how to get data from analog sensors.

In order to read the data I have to upload to my Arduino board a small program which lets me monitor the analog readings in an external monitor. Here is the code I use for reading three analog sensors and sending the readings to the serial port. The serial port is read by Processing, which picks up these readings and uses them in another piece of code that deals with the visual generation.

/* Knock Sensor

This sketch reads a piezo element to detect a knocking sound.
It reads an analog pin and compares the result to a set threshold.
If the result is greater than the threshold, it writes
"knock" to the serial port, and toggles the LED on pin 13.

The circuit:
* + connection of the piezo attached to analog in 0
* - connection of the piezo attached to ground
* 1-megohm resistor attached from analog in 0 to ground

http://www.arduino.cc/en/Tutorial/Knock

created 25 Mar 2007
by David Cuartielles <http://www.0j0.org>
modified 30 Aug 2011
by Tom Igoe

This example code is in the public domain.

*/

// these constants won't change:
const int ledPin = 13; // LED connected to digital pin 13
const int knockSensorred = A0; // first piezo/vibration sensor on analog pin 0
const int knockSensorgreen = A1; // second sensor on analog pin 1
const int knockSensorblack = A2; // third sensor on analog pin 2
const int threshold = 100; // threshold value to decide when the detected sound is a knock or not

const int numReadings = 3;

int readings1[numReadings];
int readings2[numReadings];
int index = 0;
int total = 0;
int total2 = 0;
int average = 0;
int average2 = 0;
// these variables will change:
int sensorReading = 0;
int sensorReading1 = 0;
int sensorReading2 = 0;
// variable to store the value read from the sensor pin
int ledState = LOW; // variable used to store the last LED status, to toggle the light

void setup() {
pinMode(ledPin, OUTPUT); // declare the ledPin as an OUTPUT
Serial.begin(9600); // use the serial port

for (int thisReading = 0; thisReading < numReadings; thisReading++){
readings1[thisReading] = 0;
readings2[thisReading] = 0;
}
}

void loop() {

total = total - readings1[index];
total2 = total2 - readings2[index];

// read the sensors: the red sensor is used raw, while the green and
// black sensors feed the running averages
sensorReading = analogRead(knockSensorred);
readings1[index] = analogRead(knockSensorgreen);
readings2[index] = analogRead(knockSensorblack);

total = total + readings1[index];
total2 = total2 + readings2[index];
index = index +1;

if (index >= numReadings) {
index = 0;
}

average = total / numReadings;
average2 = total2 / numReadings;

//Serial.println(sensorReading);
// if the sensor reading is greater than the threshold:
//if (sensorReading > threshold) {
// toggle the status of the ledPin:
//ledState = !ledState;
// update the LED pin itself:
//digitalWrite(ledPin, ledState);
// send the three readings to the computer as one comma-separated line,
// followed by a newline
Serial.print(sensorReading);
Serial.print(",");
Serial.print(average);
Serial.print(",");
Serial.println(average2);
//}
delay(100); // delay to avoid overloading the serial port buffer
}

————

I am using three analog pins on the Arduino board to receive raw data from the analog sensors. I am using rather long cables and no resistors, which is not a very desirable condition. The shorter the connections the better, because with long cables the current can alter the readings unexpectedly. Recently I have been facing the issue of value fluctuation; I haven't found the reason yet, but it might be connected with the long cable (I could shorten it), power interference from the laptop via USB (I could solve that by using an external power supply such as a battery), or the absence of resistors.
Nevertheless I have managed to even out the value fluctuation by calculating the average value of 10 readings and sending only the average to Processing. The only problem with this is that sharp hits are smoothed out, so the sensor seems very "slow" and not immediately responsive.
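One thing I could try instead of the block average is an exponential smoothing filter, which damps the jitter while keeping a bit more responsiveness. This is only a sketch of the idea, on the Processing side, and the smoothing factor is an assumed starting point that would have to be tuned by experiment:

// Exponential smoothing of an incoming sensor value (a sketch only;
// the factor 0.2 is an assumed starting point to be tuned).
float smoothed = 0;
float alpha = 0.2; // closer to 1 = more responsive, closer to 0 = smoother

// call this with each raw reading, e.g. from serialEvent()
void smooth(float raw) {
  smoothed += alpha * (raw - smoothed);
}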

In future I will spend more time learning about sensors, wiring them up properly with resistors and understanding from the written schematics how the current flows through them. In order to achieve a very organic and flexible translation of audio into visuals I have to be very careful in choosing sensors for particular instruments. Different sensors can either expose or neglect the possibilities of each instrument. The physicality and function of the instrument has to match the specification of the sensor (vibration, sound, distance, temperature, pressure, etc.) to achieve a perfect match and a smooth translation into visuals. My examples in this project are very crude and at a beginner's stage.

Here is a small video showing how the vibration sensor changes some attributes of the visuals:

[vimeo vimeo.com/33082294]

Creating masks

As I mentioned before, my project involves projecting an overlaid mask onto a corresponding shape, and it is not technically 3D mapping even though I use 3-dimensional shapes and 3D mapping software.

In order to acquire a perfect mask I have to ensure that the footage taken for editing the masks perfectly matches the projection. The angle of the projector lens has to be identical to that of the camera lens. It is not possible to take an exact picture by holding the camera on the same perspective track and at the needed angle, but I can make some slight adjustments in the mapping software. Still, the result will not be perfect.

Adam suggested a method in which I draw a mask directly on top of the face. I overlaid the Illustrator working area on top of the face and, using the Pen tool, sketched out the exact shape. Here are some pictures of how I did it:

When I finished tracing all of the shapes I used a grid layout to adjust each anchor point individually to match the physical face object. Then I saved the mask as a whole, and each piece individually, as black and white PDF images. I will need to place them in the designated folder inside the mapping software in order to access and use them.

There shouldn't be any projection shining on the back wall of my set box. Only the face should have projections on it, and no projection should overlap the edges of the shapes. I used a global mask, which makes it harder to adjust a perfect mask; this leads me to the conclusion that in the final project I will need to use each mask individually and adjust it to the corresponding shape.
Here are some images showing projections with the mask on:

It needs more refinement and more experimentation with the visual aesthetics.

Exploring plastic

After a successful laser cutting workshop with cardboard I had the confidence to execute the shape in my chosen material: perspex. This material is widely obtainable and relatively cheap; I bought four A4 sheets of 2mm thick clear acrylic perspex for £1 each from eBay. I chose this material because it is friendly with the laser cutter, lightweight, and can be altered in shape.

The laser cutting session went very smoothly. I used the knowledge gained in the previous workshop and even managed to finish everything without the teacher's help; he noted that I am a very fast learner. This image shows the laser cutting in progress on the perspex sheet. It still has its protective layer of coating on, which is green in colour; the sheet itself is transparent.

I cut two copies of the face, one for testing and one for the final work. It is crucial to do as many tests as possible to achieve accomplished results. My tests will involve shape and coating, and possibly some modifications of the surface, to see which looks better with projections on it.

I treated the plastic pieces with the heat gun and altered their shape.

The reason is that it gives an overall 3-dimensional feel to the face. Even though, looked at from the front, the flat pieces would already give the feel of a face, the added relief makes the projection look more interesting than when projected onto a flat surface. So I had two objectives in making these pieces 3D: firstly to emphasise the real characteristics of the face, and secondly to achieve an interesting platform for the visuals.

Before covering them with paint I laid them out on the stage floor.

I wanted to run a small test where I project onto the transparent shapes. There was no doubt that the projection would not stay on the surface, but I was curious how it would look anyway. The results were eye-watering and I absolutely adored the blurry, morph-like light projection which reached the wall through the pieces of plastic. Suddenly the idea appeared in my head to project onto the whole face and keep it transparent. But then it would no longer be my original idea, which was about projections on the face, not through the face, even though they looked very appealing:

It was rather clear to me that I would have to cover these plastic pieces with white paint in order to catch the projection. But I wanted to experiment first.
I decided to scratch the surface of one side, or of both, and rub white paint into those engravings and experiment with that. I made a few different examples by spraying pieces with white paint using different techniques and from different sides; I wanted to cover as many possibilities as I could. Here are some test pieces without projections:

— and with projections:

The piece above has only partly been sprayed with white paint, while the piece underneath has a solid coat sprayed from one side. The projection side I left unsprayed and scratched with a sharp tool.

After analysing all of these examples I came to certain observations:

[: :] All the pieces which had insufficient paint sprayed on them didn't produce clear visuals. Places left untouched let the projection reach the back wall: a nice effect, but not the intended one.

[: :] Pieces where spray paint had been applied on one side and which were hung so that the unsprayed side faced the projection again proved unsuccessful. The unsprayed side was too shiny, and therefore reflective, and made the visuals look poor, unclear and a bit blurry.

[: :] Scratched pieces looked best when they had been treated with paint. They looked very nice close up, but on a larger scale they trapped the clarity of the visuals. Again, as a light effect they looked very amusing but were useless for my aim of achieving a distinctive visual pattern.

The conclusion was:

[X] Spray the pieces with matte white paint and let the projection shine straight onto them, avoiding reflection or obscurity caused by a damaged surface. Simple as that.

[vimeo vimeo.com/32931187]
The whole face became as blank as an uncanny sheet of white paper.


Why not a stage design?

A tutorial with Liam made me think more deeply about the purpose of my current work. I had presented it as a prototype version of a 'stage design', whereas it doesn't really fall into that category. A few reasons why:

1) My work is inspired by, and purely based on, my personal imagination and interpretation;

2) It doesn't have a potential client;

3) There are none of the restrictions to consider which would apply if I were working for the industry.

Therefore I am working purely within artistic practice, creating a work of art which will be presented within a stage environment. The reason I made the mistake of calling it a 'stage design' in the first place is that I hadn't really researched what it is to be a stage designer. I did visit my friend Kie, who is a stage design student, and we had long conversations about designing a stage prototype; I have seen her at her workplace and gained a lot of knowledge that helped me build the set myself. But in reality I don't have a purpose for my work apart from my imaginative vision that one day it will be on a stage.

There is nothing wrong with that. What I have to do is define my work in more detail.

I am working on experimental stage art. This piece is meant to be presented in collaboration with musicians or a band, or any sound source, to be precise, but it has artistic value of its own.

My idea is to create a piece which will help me communicate my style, way of thinking and creative realisation. I hope that it will help me find people to collaborate with, or at least show my creative skills in approaching the brief.

Theoretically this piece can be presented in various forms. As a stage piece it has the potential to be viewed alongside a live band or orchestra, a live music producer or a DJ.

I would like to concentrate on the piece itself, express the meaning behind it, and show what it means to me as a creator and what it is I am trying to say.

———–

The idea

The face is the most complex part of the human body. It has the ability to communicate messages effectively, verbally as well as visually. The face is an identity, something unique which stays with you for your whole life. We tend to remember a particular person by looking at his or her face. It is a very strong medium that we humans utilise every single day.
I like drawing faces. Any kind of: imaginative, anatomically improbable, strange, scary, inhuman, but always using human features such as eyes and a mouth. This gives the face credibility: from whatever corner of the universe it might come, it has the main tools of communication, such as eyes and a mouth. It feels familiar even while looking very unfamiliar.
I am interested in live visual projections. Traditionally those visuals are projected onto a solid surface, either flat, curved or 3-dimensional (3D mapping). My aim is to break the screen into pieces, reshape them and give them another meaning. The projection area by itself already forms a message: 'a face'. Then, by projecting onto it, I can give it some additional live features.
My main challenge is to make people stop and look at it, identify with the face and realise how different it is. My interest lies in that moment of realisation, which leads to self-realisation, investigating different faculties and learning about one's own. It is like tackling consciousness at a different level. The brain will pick out certain features and define the object as a face, while the live visuals on top of it will constantly change this perception into another one, into something new.

I want to create a new thread, a new stimulus, which will broaden the ability to encounter something new and encourage people to trip their heads out.

————

Laser cutting workshop

I haven't decided on the final design of the face, but I have made more drawings and I feel that I need more experimentation regarding my design. I decided to make one test design which I wanted to cut out using the laser, to get to know more of the laser cutting possibilities; hopefully that would lead me to my final piece, knowing what I can achieve with this wonderful technique. I went back to my original drawings and created a new model in Illustrator. I still don't know what material I will use for the final piece, whether it will be cardboard, wood or acrylic; I was expecting that Edward from the workshop would give me a rough idea of what materials can go under the laser beam and how wide my field of experimentation is.

I managed to look at the wide range of materials which can be treated by the laser; some pieces are made from 2D images while others can be modelled in different 3D software such as Rhino, Maya and even Google SketchUp. The materials include leather, resin, cardboard, thin wood, thick wood, fibreboard, acrylic and many more. You can do two things with the laser: cut and engrave. Engraving can be done in different shades, evaporates only a thin layer of the material, and can be used to produce stunning effects. Here are some examples lying around in the workshop:

All these examples gave me knowledge of the materials and some new inspirational ideas. At the moment I have a very simple basic shape, but I can definitely expand it into a more complex one after this session.

I arrived at the workshop without material, so to accomplish my task I was given some leftover cardboard just to get started and learn the basic steps of operating the laser machine. I would like to reflect, step by step, on the whole process I went through from the sketch to the final piece.

1) I picked a drawing which I would turn into a vector shape. I decided to work on this one:

2) I imported it into Illustrator and, using the Pen tool, traced it and created a black and white image. I made some alterations and experimented a little with the design:

3) I was asked, prior to starting the laser tutorial, to upload my design onto the SkyDrive storage provided by AUCB. When we opened my file in the classroom it was obvious it had a slight issue. It is a requirement that the pieces the laser is supposed to cut out are outlined with a blue path line, and it should be one continuous path which the laser follows in the process of cutting. My teacher Edward had to spend some time correcting my piece, and here is the final file we fed to the laser cutter:

4) Before pressing the "Play" button which starts the laser, we had to go through some parameter settings and make sure the laser job was set up appropriately. Firstly we had to make sure the file was in RGB with the blue channel set to maximum, and that the thickness of the stroke was 0.01mm. We also had to select the power the laser would use to cut, and the speed. There is a special sheet dedicated to explaining these two important settings depending on the kind of material; for cardboard it suggests a laser power of 50 and a speed setting of 2. It is also important to align the material in the laser cutting machine and measure the distance between the laser head and the surface of the material. We used a specially designed little plastic tool to measure the distance:

5) When the distance is measured I have to align the paper edge to the laser tracking line. I can do this by positioning the laser beam exactly on the top left corner and then, using a control button, guiding the laser head all the way to the right until it reaches the paper's edge:

If the paper is a little askew, I have to press the top left corner down with one finger and adjust the top right corner directly onto the laser's track line. The last part is to connect the computer to the laser by pressing a special button, after which the laser head position is automatically synchronised on the computer screen. I just have to snap the cutting file to this beam indicator and start cutting the shape with the laser:

[vimeo 31101904]

Here are my happily cut out (and a little bit chargrilled) pieces of the face.

My first experiment

My first experiment involves mapping a video onto a 2D plane using 3D video mapping software. My final project will use 3D mapping to project onto a three dimensional shape. This first experiment will strictly involve projecting onto a 2D plane in order to familiarise myself with the various mapping techniques within my chosen software.

3D video mapping is the process of taking a video source and mapping it onto a 2D plane or a 3D object, creating a false perception of three dimensions and of motion attributed to that particular piece of projection. I will explain at each stage what I accomplished and how. I downloaded two versions of VPT (Video Projection Tools), versions 4 and 3, and started to familiarise myself with the software. There is extensive tutorial material explaining the usability and functions of the application, and by following the step-by-step guidelines I managed to create a mapped surface on my design with the help of the masking function.

I would like to explain why this experiment isn't considered 3D mapping and what the differences between 3D mapping and 2D masking are. Firstly, I am using a drawing which sits on a 2D plane; I am projecting onto it as if it were any other flat surface, i.e. a traditional screen in normal VJing practice. Secondly, in order to map the video onto specific areas of my drawing I have to use a mask to block out the unwanted areas of the video projection. Whereas 3D mapping uses a 3D object and maps video onto the object's planes, in my case I am using solely a 2D surface; there is no third dimension or surface to address, so it can't be regarded as 3D mapping and is consequently 2D video mapping only.
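The masking principle itself can be shown in a few lines of Processing: a black and white image is used to cut a frame so that only the white regions let the picture through. This is just a sketch of the principle, with placeholder file names, and in the actual piece VPT performs this step:

// Masking a frame with a black and white image (sketch of the principle;
// "visuals.png" and "face_mask.png" are placeholder file names and the
// two images are assumed to be the same size).
PImage visuals, faceMask;

void setup() {
  size(640, 480);
  visuals  = loadImage("visuals.png");   // the frame to be projected
  faceMask = loadImage("face_mask.png"); // white = show, black = hide
  visuals.mask(faceMask);                // the mask becomes the alpha channel
}

void draw() {
  background(0);
  image(visuals, 0, 0);
}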

To start with, here is the interface of the program (version 3):

The reason I started with version 3 is that it has a very well explained, straightforward manual, and I thought that starting with a simpler version of the application would be a better way to understand the basic tools and logic. I found the tutorial videos (included in the download package) very helpful, and my previous experience with the Max/MSP layout made me feel like a fish in water: I was familiar with the controls and file loading features as well as the navigation. To kick off I decided to do a very basic first test using my original stage design sketch and map a video onto it.

I chose this drawing for the test because I think it is one of the most satisfying designs I have come up with, and it will be the root sketch from which I will develop my final design:

Firstly, I thought about how I could map a video onto the white surfaces of this face. In total I have six disjointed areas which require individual masks. I had the option to draw a mask directly in the program and adjust it, but I found it insufficient, with a rather clumsy outcome. Drawing a mask or distorting the corner pins of a layer inside the program proves successful only with rectangular shapes, where only the four main corner points need adjusting. In my case I have a complex curved shape and it requires a more sophisticated masking tool. The ideal is to create a mask and use it to define the specific areas. I have to compromise on one feature, though: I lose a sense of the borders.

If I have a video mapped onto a distorted rectangle, the picture will be affected by a strange re-coordination of the plane. If I managed to stretch my video plane into this particular curved shape, the video content would still appear distorted and stretched in some areas. That can be both a good thing and a bad thing. The good thing is this: if I use a generative video tool where the borders work as part of my video, with elements bouncing off those borders, it gives an innate feel of the area they are confined to (a rough sketch of this idea follows below). If I am using ordinary video, let's say ducks swimming in a pool, then by morphing the video plane to adjust to the curvature of the shape my ducks will become like alien zombies and totally unrecognisable! I intend to use generative mathematical visuals without any worldly associations, so respecting the outline borders would be a cool feature, but it is very complex to achieve when I am using one piece of software to project and another to generate the visuals. I will do more experiments in the next stages of development and update accordingly.
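Here is that rough sketch in Processing: a single element bounces off the edges of the canvas, so that when the canvas is warped onto one of the face pieces the motion still reads as confined to that piece. The sizes and speeds are arbitrary:

// One element bouncing off the canvas borders (sizes and speeds arbitrary).
// When this canvas is stretched onto a face piece, the borders it bounces
// off become the edges of that piece.
float x = 100, y = 100; // position
float vx = 3, vy = 2;   // velocity

void setup() {
  size(640, 480);
  noStroke();
}

void draw() {
  background(0);
  x += vx;
  y += vy;
  if (x < 0 || x > width)  vx = -vx; // bounce off left/right borders
  if (y < 0 || y > height) vy = -vy; // bounce off top/bottom borders
  fill(255);
  ellipse(x, y, 20, 20);
}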

In the end I chose to generate a mask, and the way to obtain a perfect mask is to work with the source file itself. I imported the image into Photoshop and created a black and white image, where the black areas mask out the video and the white areas let the video come through. Here are some production stages and the resulting image (please excuse my sloppiness):

This is my final mask, which I placed in the Mask folder and dragged into the VPT operating window.

Then I chose just one image plane and applied the mask to it, which resulted in this:

[vimeo 30914772]

This is only a test, and these are just the first steps to familiarise myself and get comfortable with the program. I succeeded in mapping onto the dedicated fields of my design using a single mask in a very crude, basic version. I created a single mask for all the pieces, which limited me to using only one video at a time. The next step is to finalise the sketch which I intend to use for the actual project and generate separate masks for each piece of the face individually. That will allow me to introduce more video sources for separate parts of the face and make the whole more dynamic and diverse. I still need to do more experiments implementing a live generative video feed from Processing, and only then will I expand to more complex structures and solutions based on what is realistically achievable.

This video demonstrates the main idea of video mapping using a mask. The next video is a little longer, with the same concept but using a few different video sources. This is only a test and does not reflect the final visual aesthetics of the piece; I was using the default built-in footage from the program just to do basic tests with different types of footage. My initial aim is to create black and white visuals only.

[vimeo 30915866]

At this stage of the project it is not a 3D mapping practice, and I have to admit it is not quite a simple video projection either. It is a specific kind of video projection in which I want to affect only particular areas of the design. By projecting onto 2D planes I am bringing out the overall shape of the object and optically separating it from the background. Everything that doesn't have a projection mapped onto its surface instantly becomes background, while parts of the face that do have a projection mapped onto them virtually extrude from the flat surface. I love the idea of tricking the eye, and consequently the brain, into perceiving this unusual light distribution on a flat surface, and testing how it reacts. It doesn't look 3D if I observe the projection from different angles, but it looks a little unusual when viewed from directly in front without moving. It gives a strange feel to the face's surface, which instantly obtains some quality of being "alive" and constantly changing. This is a realm I want to explore more deeply by adding 3D qualities to this face design and projecting a live video "skin" onto it, constantly changing and evolving, almost like a separate living entity inhabiting planes resembling a face. This is a creepy subject and I hope people will have as much fun enjoying it as I do making it.