In terms of my final piece I was planning to show all of the elements at the critique. That includes showing and explaining all of the software used in this project, demonstrating the use of the headset and how I detect brain activity with it, asking someone to wear the EPOC headset, and then showing the final visualization triggered by that person’s brain readings. My main concern is some of the programs crashing at the time of the presentation, because I have a lot of them running at the same time!

In creating the visual style I took into consideration the TEST PROGRAMME which I carried out in the previous sessions. It was important to test it with different people in order to understand the extremes of situations and the brain output during those situations. I knew only roughly what the responses would be if, for example, a person closes their eyes or sings: particular readings were higher in one case, and other readings higher or lower in another. I concluded that in order to show this device in action I will need to ask the person to do small tasks, to make the changes happening in the brain visible. But I have to stress this again: the TEST PROGRAMME was created solely for my particular project and it wasn’t made for drawing any conclusions about how people think. People think differently, and each person’s output is totally individual and belongs to that person. There are some patterns emerging which I was hoping to reflect through the live visuals.

After lots of experimentation with style and geometry solutions I managed to find something which appealed to me, and I thought that the chosen visual style would explain my work best. I was influenced by one patch which I found inside the vvvv support files and expanded from it. The patch created a single spiral in 3D space.

I started to work with it and saw its potential to be turned into a final piece more complex in geometry and more beautiful in style.

These images show the progress made to complete the final style and the alterations I made to the geometry to make it look better:

In the examples above I used geometry rendered as a WireFrame mesh. I decided to change it to the Point look because WireFrame looked too rough and incomplete. It didn’t look like a finished piece and it was too heavy graphically. I wanted something more airy and intricate, so I looked into the different mesh display options:

The Point option was exactly what I was looking for. It changed the whole feel of the piece and I knew instantly that would be my final style. The background would also have to be black in order to have a high-contrast projection image. The next step was working more with the geometry and seeing how it all comes together when played with the Emotiv EPOC headset. The final test was crucial to ensure everything was working fine. Here are some screenshots displaying the progression of the geometry shape and how I want the final piece to look:

Here is the video showing and explaining my whole project in full detail:

I think the project went very well and I am very satisfied with the final outcome. I would like to present this piece at galleries and shows, maybe even festivals, but I need to refine the presentation method and think of something very interesting projection-wise. I will be updating this blog with further movements on this project, starting with our London show in July.

I created the TEST PROGRAMME entirely by myself in order to investigate brainwave activity in different situations, as well as to familiarize myself with the EPOC headset. I was interested in testing different subjects’ brain responses to logical tasks as well as tasks which involved imagination and creative thinking. My aim was to create a set of different situations and, using a loose scientific-method approach, make observations and draw conclusions based on the gathered data. I am not following the scientific method strictly but using some elements of it, such as running tests (experiments), gathering and analyzing data, drawing conclusions and reporting results. My tasks purely consisted of testing the EPOC headset with a few individuals and then summing up the results.

I used available software (EPOC Control Panel, Mind Your OSCs) in order to gather and read data. I also created custom-built programs (in Max/MSP and Processing) in order to transmit data using the OSC protocol and display it as visual graphs. The brainwave data streamed continuously in real time, so capturing the incoming data in a still visual layout was crucial in order to conduct any further analysis. The graph displays brain activity recorded over a one-minute period.
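Conceptually, the capture step amounts to keeping a rolling one-minute window of readings per channel and then freezing it for drawing. This is not my actual Max/MSP/Processing code, just a Python sketch of the idea, assuming one sample per second and the five channel names listed below:

```python
from collections import deque

# Roughly one minute of readings at 1 sample per second (an assumption).
SECONDS = 60
channels = {name: deque(maxlen=SECONDS)
            for name in ("Engaged/Bored", "Excitement",
                         "Excitement Long Term", "Meditation", "Frustration")}

def record(name, value):
    """Append one incoming reading (0.0-1.0) to its channel's rolling buffer."""
    channels[name].append(value)

def snapshot(name):
    """Freeze the last minute of readings so a still graph can be drawn."""
    return list(channels[name])
```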

There are a total of five different emotional readings:

  • Engaged/Bored
  • Excitement
  • Excitement Long Term
  • Meditation
  • Frustration

The software I am using registers brain activity in the five categories mentioned above. I am measuring and recording each individual’s brain response to the TEST PROGRAMME across these five categories defined by the developers of the headset. An article on Wikipedia reports that “the names may not perfectly reflect exactly what the emotion is.” (Wikipedia, 2012. Emotiv Systems. [online] Available at: <http://en.wikipedia.org/wiki/Emotiv_Systems> [Accessed 20 March 2012].) I will take this into consideration when summarizing the results, focusing more on detecting brain activity patterns rather than defining what they could signify in any expressive form.

I altered the original questions a little and decided to remove the last question due to its inappropriate nature. It sounded discriminatory about sexuality and intimidating as a task, so I decided to get rid of it.

Here are the updated questions I used throughout the TEST PROGRAMME:

TEST PROGRAMME

Questions and tasks.

  1. Sit calm and relaxed with eyes closed. No stimuli.

  2. Sit calm and relaxed with eyes open.

  3. Let the subject answer these questions:

    1. What is your name and surname?

    2. How old are you?

    3. What year is it now?

    4. Where are we?

    5. What colour is sugar?

  4. Let the subject answer these questions:

    1. How would you describe the taste of concrete?

    2. Describe the feeling of having a tail.

  5. Ask the subject to imagine the following situations with eyes closed:

    1. Swimming in the ocean at night.

    2. Continuously falling down from a tall building.

    3. Sitting in a field full of flowers.

  6. Ask the subject to imagine the following situations with eyes closed:

    1. Being a huge colourful giant with long legs and flying.

    2. Being in a horror story and fighting zombies.

    3. Imagine pure bright light, nothing else.

  7. Ask the subject to perform mathematical calculations on the paper:

    1. 2+2=

    2. 32-12=

    3. 40/7=

    4. (1378/3)+(6382/12)=

  8. Ask the subject to sing a chosen song. Record it.

  9. Ask the subject to listen to the song (Bob Marley, “Jammin’”).

  10. Play back the recorded song the subject just sang.

  11. Ask the subject to draw:

    1. An apple.

    2. A death.

    3. A self-portrait.

  12. Ask the subject to stand up and fall back into someone’s arms with eyes closed.

  13. Ask the subject to dance to any chosen song.

I compiled a folder which contains all of the readings from my 6 test subjects and covers the whole TEST PROGRAMME. I also included a conclusion section for each of them, to explain what I learned from it.

The most important thing I learned from this research is that every single person reacts to the same situations totally differently. There are some similarities in the brain readings, but only very roughly sketched out. The individual inputs are always specific to the particular person. In order to make any assumptions I would have to test with many more people and then draw some general conclusions, and even then, I think those would be too daring assumptions to make about people. It is impossible to put people in boxes and expect everyone in a box to produce a scripted brain activity. Every single individual reacts individually due to life experiences, the condition of the brain and personality.

After conducting all of these tasks I came to another conclusion: they helped me greatly to understand how the EPOC device works. That gave me important pointers on how to wire up the final visual projection. By knowing the types of emotions the headset can detect, I will be able to apply them to a generative visual body.

I started off getting familiarized with vvvv by looking through the tutorials and other people’s work. It seemed complex and confusing at the start, not knowing what each node does; luckily there are explanatory help files available for each node, with accompanying examples, which made my workflow more rapid.

After looking through different works and tutorials I decided to start from scratch and create a simple graphic first, then add more elements to the scene to make it more complex. I started off with cubes.

I managed to create simple cubes and stack them on top of each other. The most difficult thing I found at the start was working with the camera. When I added a camera into the scene it altered the actual image quite a lot. I had to read through the help files as well as test things through trial and error. I managed to figure it out, and in order to maintain the proportions along with the camera I used an AspectRatio node. I found it very strange how much the camera settings change the look of the image: by changing the FOV settings just a little, the perspective seems to be stretched to extremes. Here are the step-by-step processes I took to create a cube animation in vvvv:

I enjoyed the look of the cubes and the animation I managed to create, but what I was looking for would be more like a centre piece. I wanted to create something which would hold the viewer’s attention in the middle of the projection and have a lot of things happening around the centre. The cubes looked nice but they seemed to fill too much of the space. I also thought it might be strange as a brain-reading visual due to the complex arrangement. I decided to work with spheres.

Here are step-by-step images of arranging the spheres, then attaching different attributes to them and finally animating them:

Here is the vvvv patch for creating 5 three-dimensional spheres, and this is how they look rendered:

I wanted to have many different spheres overlapping each other, with different parameters applied to each of them. I started off by creating 5 identical spheres and distributing them side by side on the x axis. Then I used a camera to change the angle of view:

This image looks like there is only one sphere but, actually, there are 5 spheres right behind each other. As a next step, keeping this angle of view, I changed the size of each sphere:

The next step was choosing colours and mesh types. Here are the results:

I experimented with the spheres by altering the number of vertices and managed to get different shapes. Here is the final design I was pretty happy with:

Here is a little video showing these spheres animated:

I thought it was cool to create shapes in vvvv, but they didn’t give me much freedom to change their meshes. I decided to try to create simple sphere discs in Maya, import them into vvvv and see how I managed with those:

As one can see in this video, the perspective is very extreme, and when this sphere rotates it gives an unnatural feel. It is a little bit confusing to understand the whole object. Because I wanted to manipulate each of this sphere’s segments separately, I had to import each slice individually. I succeeded in doing that but, again, it looked very weird. I decided to drop this idea and see if I could create something inside vvvv.

Before I started to play around with different visual styles and geometry in vvvv, I tried to implement an OSC signal receiver inside vvvv. That was very important; without it I would not be able to continue. To explain the whole path of the data transfer from the headset into vvvv, I will describe the sequence of applications involved.

Firstly, I am using the Control Panel software to detect all of the headset’s sensors and their statuses. If all of the little circles are green, that means everything is working fine. This software came bundled with the headset and is a good tool to see the sensor activity. If the communication fails, the sensors will go black, and as the communication re-establishes they will go from red to yellow and eventually green. Sometimes it takes a while for all of them to turn green, and one needs to constantly check each of them by slightly moving it to a different spot or pressing down. I found this software crucial for proceeding to the next steps.

Then I am using the Mind Your OSCs application. I am particularly interested in using these five readings:

The next step is to make Max/MSP access them through port 7400. Here is the Max/MSP patch where the values are unpacked and then directed further through port 8080:

All of the values received from Mind Your OSCs I will make accessible to vvvv through port 8080. Here is the updated patch:

The next step was to create a new vvvv patch which will listen on port 8080 and pick up the values. I will also need to unpack them and make them available for further use. I looked into the OSC nodes and, with the help of the tutorial files on the website (http://vvvv.org/), I managed to build this patch:

It managed to listen on port 8080 and receive the incoming values. The problem I was facing was that it didn’t unpack the data successfully: numbers only came through from the first OSC Decoder node. I am using Max/MSP to parse the messages further, and it seems to me that vvvv is unable to unpack all of them in one bundle. I had to look for other options, so I tried to create a separate port for each message. In total I needed 5 different ports for 5 different message bundles. I had to change the Max/MSP patch a little; here is the updated version:

Followed by vvvv patch:

It proved to work very well, and I was happy I managed to solve this issue without too much hassle. I was again glad I chose vvvv, because it works fast and doesn’t use too much processing power. I also chose to work with vvvv rather than other applications such as Processing or Flash due to its style of workflow. It was familiar to use because it is another visual, node-based programming environment similar to Max/MSP and Pure Data. It has a massive community forum with loads of help and tutorials, and its 3D rendering is very smooth and easy on processing power.

Here are a few examples of what other people have done in vvvv:

Abstract Birds., 2011. Les Objects Impossibles – Objet 2. Available at:  <vimeo.com/15835540> [Accessed 10 May 2012].

Timpernagel, J., 2011. Skyence – INSCT. Available at:  <vimeo.com/16219591> [Accessed 10 May 2012].

Defetto, 2009. Moiré. Available at:  <vimeo.com/2525839> [Accessed 10 May 2012].

Data visualisation, or in other words information graphics, is a way to display data in a visually perceivable form. The main purpose underlying information graphics is to make the data more understandable. Gerlinde Schuller (2008, p. 111) describes information design in her book as ‘the art and science of translating complex, unstructured data into useful information that can be used with efficiency and effectiveness. The discipline is always addressed to a broad target group and is mostly concerned with public and mass communication.’

During my research I have come across very fascinating data visualisation systems, and here I would like to list a few which have left the most impact on how I perceive information.

A truly artistic turn on visualising data by Thomas Briggs:

Briggs, T., 2005. Burst of Energy 3. [image online] Available at: <http://www.salientimages.com/BurstOfEnergy3.htm> [Accessed 20 April 2012].

An aesthetic piece of number crunching using algorithmic composition by Reza Ali:

Ali, R., 2010. LORMALIZED. Available at: <http://www.syedrezaali.com/blog/?tag=graphics> [Accessed 20 April 2012].

Another interesting record, of exploding particles inside the Relativistic Heavy Ion Collider, is displayed in this image:

Berger, J., n.d. [image online] Available at: <http://backreaction.blogspot.co.uk/2006_08_01_archive.html> [Accessed 20 April 2012].

The Opte Project was created to make a visual representation of a space that is very much one-dimensional, a metaphysical universe: it aims to map every class C network on the Internet from a single computer and a single Internet connection, with the overall goal of creating a map of the entire Internet.

The Opte Project, 2003. [image online] Available at: <http://www.opte.org/maps/> [Accessed 20 April 2012].

I have researched an enormous number of visual graphic styles, and I must say that there are many recurring styles which no longer strike me, due to repetition. What I was looking for were more original visualisations which would amaze me with the complexity of their graphics and the beauty of their character.

I think I will learn to use more alternative styles to achieve the stylistics of my own piece. I would like to use 3D space to position the graphics, and use movement and changing liveliness to represent the data.

I have added more research and imagery to my sketchbook to support my investigations into visualising data.

———————————–

References:

Schuller, G., 2008. Designing Universal Knowledge. Baden: Lars Müllers Publishers.

The graph structure I created previously needed some refinement. Firstly, I wanted to create rectangular segments underneath the individual graphs so that they look confined in their own boxes, which will also make analysis easier by showing the limits of the bars. I also had to move the names of the readings so that they relate to their specific graph boxes. Here are the steps of the development:

I created a rectangle underneath the bars. I had to do the major calculations defining the coordinates of this black box, which is a shape drawn through 4 defined vertices. This is how it is written in Processing code:

It is important to start drawing a shape and then, after specifying the vertices (there can be as many as one pleases), close the shape by typing endShape(CLOSE). It will then be filled with the colour one has defined beforehand with fill(). In a screen capture like this I have used fill(100, 30). That means the fill will be grey (100 is grey, considering that 0 is black and 255 is white in greyscale values) and the second variable defines the transparency/alpha. After adjusting the text placement and using my new fill colour, the final look of the graph is as follows:
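For readers unfamiliar with Processing's two-argument fill(), its greyscale-plus-alpha behaviour can be restated in a few lines. This is a Python illustration only, not part of my Processing sketch:

```python
def fill_to_rgba(gray, alpha=255):
    """Translate Processing's two-argument fill(gray, alpha) into an RGBA tuple.

    The grey value is replicated across R, G and B (0 = black, 255 = white);
    the second argument is the transparency/alpha."""
    for v in (gray, alpha):
        if not 0 <= v <= 255:
            raise ValueError("values must be in 0..255")
    return (gray, gray, gray, alpha)
```

So fill(100, 30) corresponds to a mid-grey that is almost fully transparent.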

How will this work?

I am going to use this graph to record the reaction of the subject’s brain for 1 min after I have read a question or given a task. The TEST PROGRAMME is my own set of questions created to test human brain activity. I have explained and published the TEST PROGRAMME in one of the previous posts. If you want to read it now, press here.

I want to record the brain activity in such graphs and then organize them into one big folder which will serve as the main reference documentation of the TEST PROGRAMME.

I also programmed Processing to save each test frame, which I will be able to print out later. All of the files are automatically saved into my sketch folder; pressing the spacebar saves an individual frame.

The save() function lets the program save one frame and add it to the rest of the saved ones in the same folder.
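The sequential naming this produces can be sketched as a simple filename generator. The prefix, digit count and extension here are placeholders, not the names my Processing sketch actually uses:

```python
import itertools

def frame_names(prefix="graph", digits=4, ext="png"):
    """Yield sequential filenames like graph-0001.png, graph-0002.png, ...

    Mimics the numbered-frame convention of Processing's save()/saveFrame();
    all names are hypothetical."""
    for i in itertools.count(1):
        yield f"{prefix}-{i:0{digits}d}.{ext}"
```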

The reason I would like to run a series of tests and have a visual display of graphs is that it will help me to understand the usage of the EPOC headset and when I can expect certain responses. My aim is to recognize particular patterns of behaviour and decide whether they are enough to construct visuals from, or whether I will need to introduce some additional stimuli.

In order to present my rough plan of action I made this sketch explaining the two major parts of my project. It consists of two stages: one is [UNDERSTANDING], through experiments and tests, how the device operates and how the reaction feedback differs in different situations; the second is [DRAWING CONSCIOUSNESS], based on stage one:

It is a very exciting process to learn about people’s emotions and neurological responses to different stimuli in an empirical fashion using the latest technology. But in order to enjoy this fascinating project in its completed form, I must structure very systematic and logical steps of production and devote my utmost care and knowledge to each stage.

My main challenge is creating a visualising graph and then, at a later stage, building my own visualising tool. These tasks will require a lot of programming and an overall understanding of the beauty of visualising data. This discipline has been tried and mastered for many decades, and creating something original will demand knowledge and a unique style.

Firstly, I will break down the systematic tasks and tools I will be using for the first part:

1) Prepare the EPOC headset (charge, connections, testing).
2) Engage subjects and run all TEST PROGRAMME questions with each of them.
3) Run the Mind Your OSCs software tool to access the values.
4) Use Processing or Max/MSP to access the values through the specified port.
5) Feed the values into Processing to draw a graph.
6) Export an image which is a recording of the 1 min reaction time.
7) Compile the data and make a folder for easy evaluation.

I am confident with the first 3 tasks, which are completed and ready to go. My main and hardest task is creating the graph and getting the cross-communication between Mind Your OSCs and Processing or Max/MSP working. As I mentioned in a previous post, I have to ensure all programs run on the same computer and are Windows compatible.

Establishing communication.

To begin with, I started the EPOC Control Panel to ensure all electrodes were in position and indicated green, then I started the Mind Your OSCs application. I could see the values constantly fluctuating. Then I connected Mind Your OSCs to the port and opened a Processing sketch which would pick up these streaming values:

At this stage I was happy that the connection was established and Processing could pick up the incoming messages and filter them. My first task was to unpack the messages and get the values streaming into Processing. By changing different print modes I was trying to get some numbers coming in, but without success:

My main issue was that Processing was not picking up the number value; it was printing “typetag: f”. Typetag “f” means a floating-point number, but my sketch was trying to display integers. The difference is that a float carries a decimal value while an integer is a whole number. To sidestep the float/integer issue I tried printing the raw data, thinking it might make the situation better, but it didn’t. In return I saw very strange data streaming in, and it seemed unusable:

I believed the solution was somewhere near, but my beginner’s programming skills and limited experience with Processing weren’t helping me much. I was already giving up on Processing, finding it very difficult to extract the OSC message values using these sketches. My only hope was Max/MSP, and from my previous experience I felt more confident using it.

My first attempt was creating a simple port reader and examining the Max window to see if anything was streaming in:

It was a first-time hit, and I knew I was on the right path. I not only got the right messages in but also the corresponding numeric values of those messages. My next step was to try to unpack these values and direct them into separate streams of number boxes. Here are my first few attempts:

I was obviously doing something wrong. The Max window clearly showed that all the incoming values were streaming in, but I was unable to access and display them using the “unpack” object. I did extensive research on the Cycling ’74 (official Max/MSP website) community forum and found a thread describing a problem similar to mine. The person in question suggested downloading a great library pack for Max/MSP/Jitter developed by people from The UC Berkeley Center for New Music and Audio Technologies (CNMAT). The full package of supporting tutorials, examples and patches can be downloaded for free from their website. This is a great site, and I found it very inspirational and full of great material for learning!

I ended up using the OSC-route object, and eventually I managed to write a new, fully working patch:


Creating a graph.

My initial idea was to create a simple vertical line graph to record the changing values of the different OSC messages over a 60-second period. Here is the sketched idea:

I decided to create this graph in Processing because it is more of a drawing program than Max/MSP and it has more visual styles at hand to experiment with. I started off creating a simple vertical-line drawing program and selected the thickness of the lines, the colour, and the distance between them. Here is a small piece of code which generates green lines of random length:

The graphics look like this (in reality these lines are constantly changing and it looks like an animation):

Here are some different visual styles I was experimenting with:

I think I would like to go for a very simple and clear design. At this stage of visualising the incoming values I don’t need to worry about stylistics or visual aesthetics. The graph has to be easy to understand and overall explicable in order to make the final comparisons and conclusions. I decided to dedicate a different colour to each mood, so it will be easier to differentiate between them as well as to do comparisons across different test subjects and questions.

In all of these graph examples the height of the line was selected randomly. For my particular project I will need to define the height of the line according to the incoming value. But first I will try to make Max/MSP communicate with Processing and see if I can manage to send some float values over to Processing. Here is an example of the Max patch:

After routing the signals from Mind Your OSCs and navigating them into dedicated packets, I can direct those values further down to Processing. I am using the “udpsend” object for this task. Here is the Processing sketch which receives the OSC values sent from Max/MSP:
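The step from an incoming value to a bar height is essentially Processing's map() function, a linear rescale. Here is a Python re-implementation as an illustration; the 0.0-1.0 input range and the 200 px plot height are assumptions, not values taken from my sketch:

```python
def map_value(v, in_min, in_max, out_min, out_max):
    """Linearly rescale v from [in_min, in_max] to [out_min, out_max],
    like Processing's map()."""
    return out_min + (v - in_min) * (out_max - out_min) / (in_max - in_min)

def bar_height(reading, plot_height=200):
    """Convert an affective reading (assumed 0.0-1.0) into a bar height in pixels."""
    return round(map_value(reading, 0.0, 1.0, 0, plot_height))
```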

Here are the further development and completion steps of the Max/MSP patch:

Here is the graph drawn in Processing based on the incoming values from Max/MSP. The height of each line is defined by the value shown in the blue box. That’s how it looks written in code; the highlighted green area shows how the data values are stored in an array and used to define the length of the bars:

I also want to colour-match the Max/MSP value boxes and apply those identical colours to each chart drawn by Processing, for better understanding.

Here is the final edit of the Max/MSP patch and a test with 5 different colour charts.

These tests have been done manually, without the EPOC device in action, as I had already tested the device beforehand and it worked fine. I had to establish flawless communication between Max/MSP and Processing, and I could do that without the EPOC device. I have been adjusting the changing values myself in order to program things faster, without fiddling with saline liquids and firing up two more programs unnecessarily. I will need to do the final test when I have eventually installed Windows on my machine, so I can do it with everything working at once. Fingers crossed!

So far I am very pleased with the progress, which means that I can finally start my TEST PROGRAMME. The test subjects are confirmed, and I will have a few days of hectic testing and recording. I have already prepared all of the questions and a folder to organise all my resulting data for further analysis. I am a little bit behind schedule, but not by much; I will just have to do more work in a shorter period of time. My goal is to finish all of the tests and analysis by the end of this month and start on the final piece at the beginning of April.

For people who are interested in seeing the full Processing code, please click here.
