The Final Piece.

For my final piece I was planning to show all of the elements at the critique. That includes: showing and explaining all of the software used in this project, demonstrating the headset and how I detect brain activity with it, asking someone to wear the EPOC headset, and then showing the final visualization triggered by that person’s brain readings. My main concern is that some programs might crash during the presentation, because I have a lot of them running at the same time!

In creating the visual style I took into consideration the TEST PROGRAMME which I carried out in the previous sessions. It was important to test it with different people in order to understand the extremes of situations and the brain output during them. I knew only roughly what the responses would be, for example, when a person closes their eyes or sings: certain readings were higher in one case and higher or lower in another. I concluded that in order to show this device in action I would need to ask the person to do small tasks to make the changes happening in the brain visible. But I have to stress this again: the TEST PROGRAMME was created solely for my particular project and was not meant for making any assumptions about how people think. People think differently, and each person’s output is totally individual and belongs to that person. There are some patterns emerging, which I was hoping to reflect through the live visuals.

After lots of experimentation with style and geometry I managed to find something which appealed to me, and I thought the chosen visual style would explain my work best. I was influenced by a patch which I found inside the vvvv support files and expanded on it. The patch created a single spiral in 3D space.

I started to work with it and saw its potential to be turned into a final piece more complex in geometry and more beautiful in style.

These images show the progress made towards the final style and the alterations I made to the geometry to improve its look:

In the examples above I used geometry rendered as a WireFrame mesh. I decided to change it to the Point look because WireFrame looked too rough and unfinished. It didn’t look like a finished piece and it was too heavy graphically. I wanted something more airy and intricate, so I looked into the different mesh display options:

The Point option was exactly what I was looking for. It changed the whole feel of the piece, and I knew instantly that it would be my final style. The background would also have to be black in order to get a high-contrast projection image. The next step was working more with the geometry and seeing how it all came together when played with the Emotiv EPOC headset. The final test was crucial to ensure everything was working fine. Here are some screenshots displaying the progression of the geometry and how I want the final piece to look:

Here is the video showing and explaining my whole project in full detail:

I think the project went very well and I am very satisfied with the final outcome. I would like to present this piece at galleries and shows, maybe even festivals, but I need to refine the presentation method and think of something very interesting projection-wise. I will be updating this blog with further movement on this project, starting with our London show in July.


I created the TEST PROGRAMME entirely by myself in order to investigate brainwave activity in different situations, as well as to familiarize myself with the EPOC headset. I was interested in testing different subjects’ brain responses to logical tasks as well as tasks which involved imagination and creative thinking. My aim was to create a set of different situations and, using a loose scientific-method approach, make observations and draw conclusions based on the gathered data. I am not strictly following the scientific method but using some elements of it, such as running tests (experiments), gathering and analyzing data, drawing conclusions and reporting results. My tasks consisted purely of testing the EPOC headset with a few individuals and then summing up the results.

I used available software (EPOC Control Panel, Mind Your OSCs) in order to gather and read the data. I also created custom-built programs (in Max/MSP and Processing) to transmit the data using the OSC protocol and display it as visual graphs. The brainwave data was transmitted continuously in real time, so capturing the outcoming data into a still visual layout was crucial in order to conduct any further analysis. Each graph displays brain activity recorded over a 1-minute period.
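As a rough illustration of the OSC protocol carrying these readings (this is not the actual Mind Your OSCs or Max/MSP code, and the address string is only an assumption for the example), a single float reading can be packed into a raw OSC message with a few lines of Python:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate a field and pad it to a 4-byte boundary (OSC 1.0)."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_float_message(address: str, value: float) -> bytes:
    """Build a raw OSC message carrying a single 32-bit big-endian float."""
    return (osc_pad(address.encode("ascii"))   # address pattern
            + osc_pad(b",f")                   # typetag: one float
            + struct.pack(">f", value))        # the reading itself

# Hypothetical address for one Affective reading:
msg = osc_float_message("/EMO/Affectiv/Meditation", 0.5)
```

Every field is padded to a 4-byte boundary, which is why a receiver can step through the message in fixed chunks.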

There are a total of 5 different emotional responses:

  • Engaged/Bored
  • Excitement
  • Excitement Long Term
  • Meditation
  • Frustration

The software I am using registers brain activity in the 5 categories mentioned above. I am measuring and recording each individual’s brain response to the TEST PROGRAMME across these 5 categories defined by the developers of the headset. An article on Wikipedia reports that “the names may not perfectly reflect exactly what the emotion is” (Wikipedia, 2012. Emotiv Systems. [online] Available at: <> [Accessed 20 March 2012]). I will take this into consideration when summarizing the results, focusing more on detecting brain activity patterns rather than defining what they could signify in any expressive form.

I altered the original questions a little and decided to remove the last question due to its inappropriate nature. It sounded discriminatory about sexuality and intimidating as a task, so I decided to get rid of it.

Here are the updated questions I used throughout the TEST PROGRAMME:


Questions and tasks.

  1. Sit calm and relaxed with eyes closed. No stimuli.

  2. Sit calm and relaxed with eyes open.

  3. Let the subject answer these questions:

    1. What is your name and surname?

    2. How old are you?

    3. What year is it now?

    4. Where are we?

    5. What colour is sugar?

  4. Let the subject answer these questions:

    1. How would you describe the taste of concrete?

    2. Describe the feeling of having a tail.

  5. Ask the subject to imagine the following situations with eyes closed:

    1. Swimming in the ocean at night.

    2. Continuously falling down from a tall building.

    3. Sitting in the field full of flowers.

  6. Ask the subject to imagine the following situations with eyes closed:

    1. Being a huge colourful giant with long legs and flying.

    2. Being in the horror story and fighting zombies.

    3. Imagine pure bright light, nothing else.

  7. Ask the subject to perform mathematical calculations on paper:

    1. 2+2=

    2. 32-12=

    3. 40/7=

    4. (1378/3)+(6382/12)=

  8. Ask the subject to sing a chosen song. Record it.

  9. Ask the subject to listen to a song (Bob Marley, “Jammin’”).

  10. Play back the recorded song the subject just sang.

  11. Ask the subject to draw:

    1. An apple.

    2. A death.

    3. A self-portrait.

  12. Ask the subject to stand up and fall back into someone’s arms with eyes closed.

  13. Ask the subject to dance to any chosen song.

I compiled a folder which contains all of the readings of my 6 test subjects and reflects the whole TEST PROGRAMME. I also included a conclusion section for each of them to explain what I learned from it.

The most important thing I learned from this research is that every single person reacts to the same situations totally differently. There are some similarities in the brain readings, but only very roughly sketched out. The individual readings are always specific to a particular person. In order to make any general claims I would have to test with many more people and then draw some general conclusions, and even then I think those would be too daring assumptions to make about people. It is impossible to put people in boxes and expect everyone in them to produce a scripted brain activity. Every individual reacts individually due to life experiences, the condition of the brain and personality.

After conducting all of these tasks I reached another conclusion: they helped me greatly to understand how the EPOC device works. That gave me important pointers for how to wire the final visual projection. By knowing the types of emotions the headset can detect, I will be able to apply them to the generative visual body.

Experiments in vvvv

I started off getting familiar with vvvv by looking through the tutorials and other people’s work. It seemed complex and confusing at the start, not knowing what each node does; luckily there are explanatory help files available for each node, with accompanying examples, which made my workflow more rapid.

After looking through different works and tutorials I decided to start from scratch and create a simple graphic first, then add more elements to the scene to make it more complex. I started off with cubes.

I managed to create simple cubes and stack them on top of each other. The most difficult thing at the start was working with the camera. When I added a camera to the scene, it altered the image quite a lot. I had to read through the help files as well as test things through trial and error. I figured it out, and in order to maintain the proportions along with the camera I used an AspectRatio node. I found it very strange how much the camera settings distort the image: by changing the FOV settings just a little, the perspective gets stretched to extremes. Here is the step-by-step process I took to create a cube animation in vvvv:

I enjoyed the look of the cubes and the animation I managed to create, but what I was looking for was more of a centre piece. I wanted to create something which would hold the viewer’s attention in the middle of the projection and have a lot of things happening around the centre. The cubes looked nice but they filled the space too much. I also thought such a complex arrangement might be strange as a brain-reading visual. I decided to work with spheres.

Here are step-by-step images of arranging the spheres, then attaching different attributes to them and finally animating them:

Here is the vvvv patch for creating five 3-dimensional spheres, and this is how they look rendered:

I wanted to have many different spheres overlaying each other, with different parameters applied to each of them. I started off by creating 5 identical spheres and distributing them side by side on the x axis. Then I used a camera to change the angle of view:

This image looks like there is only one sphere, but there are actually 5 spheres right behind each other. As the next step, remaining at this angle of view, I changed the size of each sphere:

The next step was choosing colours and mesh types. Here are the results:

I experimented with the spheres by altering the number of vertices and managed to get different shapes. Here is the final design, which I was pretty happy with:

Here is a little video showing these spheres animated:


It was fun creating shapes in vvvv, but they didn’t give me much freedom to change their meshes. I decided to try creating simple sphere discs in Maya, import them into vvvv and see how I managed with those:

As one can see in this video, the perspective is very extreme, and when this sphere rotates it gives an unnatural feel. It is a little confusing to understand the whole object. Because I wanted to manipulate each segment of the sphere separately, I had to import each slice individually. I succeeded in doing that but, again, it looked very weird. I decided to drop this idea and see if I could create something inside vvvv instead.

Data transferring using OSC

Before I started to play around with different visual styles and geometry in vvvv, I tried to implement an OSC signal receiver inside vvvv. That was very important; without it I would not be able to continue. To explain the whole path of data transfer from the headset into vvvv, I will describe the sequence of applications involved.

Firstly, I am using the EPOC Control Panel software to check all of the headset’s sensors and their statuses. If all of the little circles are green, everything is working fine. This software came bundled with the headset and is a good tool for seeing the sensor activity. If the communication fails, the sensors go black, and as the communication re-establishes they go from red to yellow and eventually green. Sometimes it takes a while for all of them to turn green, and one needs to keep checking each of them by moving it slightly to a different spot or pressing down. I found this software crucial before proceeding to the next steps.

Then I am using the Mind Your OSCs application. I am particularly interested in using these five readings:

The next step is to make Max/MSP access them through port 7400. Here is the Max/MSP patch where the values are unpacked and then sent onwards through port 8080:

All of the values received from Mind Your OSCs I make accessible to vvvv through port 8080. Here is the updated patch:

Next, I created a new vvvv patch which listens to port 8080 and picks up the values. I also needed to unpack them and make them available for further use. I looked into the OSC nodes and, with the help of tutorial files on the website, I managed to build this patch:

It managed to listen to port 8080 and receive incoming values. The problem I was facing was that it didn’t unpack the data successfully: only the numbers from the first OSC Decoder node came through. I am using Max/MSP to parse the messages, and it seems that vvvv is unable to unpack all of them in one bundle. I had to look for other options, so I tried sending the values from Max/MSP through a separate port for each message. In total I needed 5 different ports for 5 different message bundles. I had to change the Max/MSP patch a little; here is the updated version:

Followed by vvvv patch:

It proved to work very well, and I was happy I managed to solve this issue without too much hassle. I was glad once again that I chose vvvv, because it works fast and doesn’t use too much processing power. I also chose to work with vvvv rather than other applications such as Processing or Flash because of its style of workflow: it felt familiar because it is another node-based programming environment similar to Max/MSP and Pure Data. It has a massive community forum with loads of help and tutorials, and its 3D rendering is flawless and easy on processing power.
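The one-port-per-message workaround can be sketched outside Max/MSP too. The snippet below is only an illustration of the idea, with hypothetical port numbers and bare big-endian floats in place of full OSC messages:

```python
import socket
import struct

# Hypothetical port assignment: one UDP port per Affective channel,
# so the receiver never has to unbundle anything.
CHANNEL_PORTS = {
    "Engaged/Bored":        8081,
    "Excitement":           8082,
    "Excitement Long Term": 8083,
    "Meditation":           8084,
    "Frustration":          8085,
}

def send_readings(sock, readings, host="127.0.0.1", ports=CHANNEL_PORTS):
    """Send each 0-1 reading as a bare big-endian float to its own port."""
    for channel, value in readings.items():
        sock.sendto(struct.pack(">f", value), (host, ports[channel]))
```

Because each channel arrives on its own socket, the listener on the other side only ever has to decode a single float per packet.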

Here are a few examples of what other people have done in vvvv:

Abstract Birds, 2011. Les Objets Impossibles – Objet 2. [video online] Available at: <> [Accessed 10 May 2012].

Timpernagel, J., 2011. Skyence – INSCT. [video online] Available at: <> [Accessed 10 May 2012].

Defetto, 2009. Moiré. [video online] Available at: <> [Accessed 10 May 2012].

Data visualisation.

Data visualisation or, in other words, information graphics, is a way of displaying data in a visually perceivable form. The main purpose underlying information graphics is to make the data more understandable. Gerlinde Schuller (2008, p. 111) writes in her book about information design, ‘the art and science of translating complex, unstructured data into useful information that can be used with efficiency and effectiveness. The discipline is always addressed to a broad target group and is mostly concerned with public and mass communication.’

During my research I came across very fascinating data visualisation systems, and here I would like to list a few which left the most impact on how I perceive information.

A truly artistic take on visualising data by Thomas Briggs:

Briggs, T., 2005. Burst of Energy 3. [image online] Available at: <> [Accessed 20 April 2012].

An aesthetic piece of number crunching using algorithmic composition by Reza Ali:

Ali, R., 2010. LORMALIZED. Available at: <> [Accessed 20 April 2012].

Another interesting record, of exploding particles inside the Relativistic Heavy Ion Collider, is displayed in this image:

Berger, J., n.d. [image online] Available at: <> [Accessed 20 April 2012].

The Opte Project was created to make a visual representation of a space that is very much one-dimensional, a metaphysical universe. It aims to map every Class C network on the Internet from a single computer and a single Internet connection, with the overall goal of creating a map of the entire Internet.

The Opte Project, 2003. [image online] Available at: <> [Accessed 20 April 2012].

I have researched an enormous number of visual graphic styles, and I must say that there are many recurring styles which no longer strike me, due to repetition. What I was looking for were more original visualisations which would amaze me with the complexity of their graphics and beauty of character.

I think I will learn to use more alternative styles to achieve the stylistics of my own piece. I would like to use 3D space to position the graphics, and use movement and changing liveness to represent the data.

I have added more research and imagery to my sketchbook to support my investigations into visualising data.



Schuller, G., 2008. Designing Universal Knowledge. Baden: Lars Müller Publishers.

Recent updates on the graph.

The graph structure I created previously needed some refinement. Firstly, I wanted to create rectangular segments underneath the individual graphs so that each one looks confined in its own box; knowing the limits of the bars will also make analysis easier. I also had to move the names of the readings so that they relate to their specific graph box. Here are the steps of the development:

I created a rectangle underneath the bars. I had to do the calculations defining the coordinates of this black box, which is a shape drawn through 4 defined vertices. This is how it is written in the Processing code:

It is important to start drawing a shape and then, after specifying the vertices (there can be as many as one pleases), close the shape by typing endShape(CLOSE). It will then be filled with the colour one has defined beforehand with fill(). In this screen capture I have used fill(100, 30): the first value makes the fill grey (100 on a scale where 0 is black and 255 is white) and the second value defines the transparency/alpha. After adjusting the text positions and using my new fill colour, the final look of the graph is as follows:
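The box-coordinate arithmetic can be shown in a few lines. This is a hypothetical Python rendering of the calculation, not the actual Processing sketch, with made-up margin and size values:

```python
# Hypothetical layout numbers -- the real sketch's margins differ.
GRAPH_W, GRAPH_H = 120, 200   # one channel's box, in pixels
MARGIN, GAP = 20, 10          # outer margin and gap between boxes

def box_vertices(i):
    """Four corners (clockwise from top-left) of channel i's background
    box -- the same quad the Processing sketch draws with
    beginShape()/vertex()/endShape(CLOSE)."""
    x = MARGIN + i * (GRAPH_W + GAP)
    y = MARGIN
    return [(x, y), (x + GRAPH_W, y),
            (x + GRAPH_W, y + GRAPH_H), (x, y + GRAPH_H)]
```

Computing the corners from one index like this keeps all five boxes aligned no matter how the margins are later tuned.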

How will this work?

I am going to use this graph to record the reaction of the subject’s brain for 1 minute after I have read the question or given the task. The TEST PROGRAMME is my own set of questions created to test human brain activity. I explained and published the TEST PROGRAMME in one of the previous posts; if you want to read it now, press here.

I want to record the brain activity in such graphs and then organize them into one big folder which will serve as the main reference documentation of the TEST PROGRAMME.

I also programmed Processing to save each test frame so that I can print them out later. All of the files are automatically saved into my sketch folder after pressing the spacebar.

The save() function lets the program save one frame and add it to the rest of the saved ones in the same folder.

The reason why I would like to make a series of tests and have a visual display of graphs is that it will help me to understand the usage of the EPOC headset and when I can expect certain responses. My aim is to recognize particular patterns of behaviour and decide whether they are enough to construct visuals from, or whether I will need to introduce some additional stimuli.

Accessing and displaying EPOC headset values.

In order to present my rough plan of action I made this sketch explaining the two major parts of my project. It consists of two stages: one is [UNDERSTANDING], through experiments and tests, how the device operates and how the reaction feedback differs in different situations; the second is [DRAWING CONSCIOUSNESS], based on stage one:

It is a very exciting process to learn about people’s emotions and neurological responses to different stimuli in an empirical fashion using the latest technology. But in order to enjoy this fascinating project in its completed form, I must structure very systematic and logical steps of the production ladder and devote my utmost care and knowledge to each stage.

My main challenge is creating a visualising graph and then, at a later stage, building my own visualising tool. These tasks will require a lot of programming and an overall understanding of the beauty of visualising data. This discipline has been tried and mastered for many decades, and creating something original will demand knowledge and a unique style.

Firstly, I will break down the systematic tasks and tools I will be using for the first part:

1) Prepare the EPOC headset (charge, connections, testing).
2) Engage subjects and run all TEST PROGRAMME questions with each of them.
3) Run the Mind Your OSCs software tool to access the values.
4) Use Processing or Max/MSP to access the values through the specified port.
5) Feed the values into Processing to draw a graph.
6) Export an image which is a recording of the 1-minute reaction time.
7) Compile the data and make a folder for easy evaluation.

I am confident with the first 3 tasks, which are completed and ready to go. My main and hardest task is creating the graph and making the cross-communication between Mind Your OSCs and Processing or Max/MSP work. As I mentioned in a previous post, I have to ensure all programs run on the same computer and are Windows-compatible.

Establishing communication.

To begin with, I started the EPOC Control Panel to ensure all electrodes were in position and indicated green, then I started the Mind Your OSCs application. I could see the values constantly fluctuating. Then I connected Mind Your OSCs to the port and opened the Processing sketch which would pick up these streaming values:

At this stage I was happy that the connection was established and Processing could pick up the incoming messages and filter them. My first task was to unpack the messages and get the values streaming into Processing. By changing different print modes I was trying to get some numbers coming in, but without success:

My main issue was that Processing was not picking up the number value; it was printing “typetag: f”. The typetag “f” means the payload is a float, but my sketch was trying to display integer numbers. The difference is that a float carries a decimal value while an integer is a whole number. To avoid the float/integer problem I tried printing the raw data, thinking it might make the situation better, but it didn’t; in return I saw very strange data streaming in which seemed unusable:
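To illustrate why the typetag matters, here is a rough Python sketch (not the Processing code above) that walks through a single-float OSC message. The payload after the “,f” typetag must be unpacked as a 32-bit float; reading those four bytes as an integer just yields the raw bit pattern:

```python
import struct

def decode_osc_float(msg: bytes):
    """Parse a single-float OSC message and return (address, value)."""
    # Address: null-terminated ASCII, padded to a 4-byte boundary.
    end = msg.index(b"\x00")
    address = msg[:end].decode("ascii")
    cursor = (end + 4) & ~3          # skip the padding
    # Typetag string, e.g. ",f" for one float argument.
    tend = msg.index(b"\x00", cursor)
    typetags = msg[cursor:tend].decode("ascii")
    cursor = (tend + 4) & ~3
    assert typetags == ",f", "this sketch only handles one float"
    # The argument must be unpacked as a big-endian 32-bit float.
    (value,) = struct.unpack(">f", msg[cursor:cursor + 4])
    return address, value
```

Swapping `">f"` for `">i"` in the last unpack is exactly the float/integer mix-up described above: the bytes come through, but the printed number is meaningless.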

I believed the solution was somewhere near, but my beginner’s programming skills and experience with Processing weren’t helping me much. I was already giving up on Processing, finding it very difficult to extract the OSC message value using these sketches. My only hope was Max/MSP, and from my previous experience I felt more confident using it.

My first attempt was creating a simple port reader and examining the Max window to see if anything was streaming in:

It was a first-time hit, and I knew I was on the right path. I not only got the right messages in but also the corresponding numeric values of those messages. My next step was to try to unpack these values and direct them into separate streams of number boxes. Here are my first few attempts:

I was obviously doing something wrong. The Max window clearly showed that all incoming values were streaming in, but I was unable to access and display them using the “unpack” object. I did extensive research on the Cycling ’74 (official Max/MSP website) community forum and found a thread describing a problem similar to mine. The person in question suggested downloading a great library pack for Max/MSP/Jitter developed by The UC Berkeley Center for New Music and Audio Technologies (CNMAT). The full package of support tutorials, examples and patches can be downloaded for free from their website. It is a great site and I found it very inspirational and full of great learning material!

I ended up using the OSC-route object, and eventually I managed to write a new, fully working patch:

Creating a graph.

My initial idea was to create a simple vertical line graph to record the changing values of the different OSC messages over a 60-second period. Here is the sketched idea:

I decided to create this graph in Processing because it is more of a drawing program than Max/MSP and has more visual styles at hand to experiment with. I started off creating a simple vertical line drawing program and selected the thickness of the lines, the colour, and the distance between them. Here is a small piece of code which generates green lines of random length:

The graphics look like this (in reality these lines are constantly changing and it looks like an animation):

Here are some different visual styles I was experimenting with:

I think I would like to go for a very simple and clear design. At this stage of visualising the incoming values I don’t need to worry about stylistics or visual aesthetics. The graph has to be easy to understand and explicable overall, in order to make the final comparisons and conclusions. I decided to dedicate a different colour to each mood, so it will be easier to differentiate between them as well as to do comparisons across different test subjects and questions.

In all of these graph examples the height of the line was selected randomly. For my particular project I will need to define the height of the line according to the incoming value. But first I will try to make Max/MSP communicate with Processing and see if I can manage to send some float values over to Processing. Here is an example of the Max patch:

After routing the signals from Mind Your OSCs and navigating them into dedicated packets, I can direct those values further down to Processing. I am using the “udpsend” object for this task. Here is the Processing sketch which receives the OSC values sent from Max/MSP:

Here are the further development and completion steps of the Max/MSP patch:

Here is the graph drawn in Processing based on the incoming values from Max/MSP. The height of each line is defined by the value shown in the blue box. That’s how it looks written in code. The highlighted green area shows how the incoming data values are stored in an array and used to define the length of the bars:

I also want to colour-match the Max/MSP value boxes and apply those identical colours to each chart drawn by Processing, for better readability.

Here is the final edit of the Max/MSP patch and a test with 5 different colour charts.
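The bar-length logic itself is tiny. As a sketch of the idea in Python (the real version lives in Processing, and the sizes here are made up), it amounts to keeping the last minute of readings and scaling each 0-1 value to a pixel height:

```python
from collections import deque

SAMPLES = 60    # one reading per second over the 1-minute window
MAX_BAR = 180   # bar height in pixels for a reading of 1.0

history = deque(maxlen=SAMPLES)   # old samples fall off automatically

def record(value):
    """Clamp an incoming 0-1 reading and keep only the last minute."""
    history.append(min(max(value, 0.0), 1.0))

def bar_heights():
    """Pixel height of each bar, as the sketch would draw them."""
    return [round(v * MAX_BAR) for v in history]
```

The clamp matters because the headset values are specified as 0 to 1, but a glitching sensor can momentarily spike outside that range.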

These tests were done manually, without the EPOC device in action, as I had already tested the device beforehand and it worked fine. I had to establish flawless communication between Max/MSP and Processing, and could do that without the EPOC device. I adjusted the changing values myself in order to program things faster, without messing with saline liquids and firing up two more programs unnecessarily. I will need to do the final test once I have installed Windows on my machine, so I can run everything at once. Fingers crossed!

So far I am very pleased with the progress, which means I can finally start my TEST PROGRAMME. The test subjects are confirmed and I will have a few days of hectic testing and recording. I have already prepared all of the questions and a folder to organise the resulting data for further analysis. I am a little behind schedule, but not by much; I will just have to do more work in a shorter period of time. My goal is to finish all of the tests and analysis by the end of this month and start on the final piece at the beginning of April.

For those interested in seeing the full Processing code, please click here.

First trial with EPOC headset device.

I was very excited to test the EPOC headset and see what it is capable of. First, I took it out of the box and read through the manual. Before I could start, the headset had to be charged and the sensors prepared by hydrating them in a special saline solution.

I charged the headset with the special USB cable provided. Here is the headset on charge (without sensors, with the red light on). When the battery is full, the green light indicates so.

The sensors came packed in a plastic case which is suitable for soaking. In total there are 16 felt-tip sensors with gold plates. They must be soaked and then mounted on the headset arms. It is important that they are not dry, or the result will be insufficient conductivity and therefore weak brainwave detection. They are soaked in standard multipurpose contact lens solution, which can be purchased in any local drugstore in case I run out of it.

After preparing the headset I had to fit it correctly on someone’s head. There are a few tips on how to place it and get good sensor contact. The manual says that “Good contact of reference sensors is the key for a good signal.”

In order to establish the connection between the headset and the computer, I used the supplied USB transceiver dongle and placed it into one of the USB slots. Then I turned on the headset and checked the signal strength as well as the sensor connectivity with the help of the provided software tool, EPOC Control Panel, which came with the purchase.

In this image one can see that all sensors apart from one are green. Green means a good connection; there is a slight issue with one sensor on the forehead, possibly with its wetness or its position. The manual says that my “objective is to achieve as many green lights as possible by adjusting the position of the various arms on the headset. Note that the EPOC will still function with some sensor locations showing yellow or orange however the detections will be less reliable in this state. Often the contact quality will gradually improve after a few minutes use.”

I noticed that the EPOC must be within range of the USB dongle. The PC tower was underneath the table, and at the start it was hard to get a connection, but when we placed the EPOC closer to the dongle, the connection was established straight away. In order to maintain it, it was important that nothing blocked the signal, such as a table or a chair.

To test this device I used two software tools:
1. EPOC Control Panel
2. Mind Your OSCs

Here is a small video preview of the Mind Your OSCs software responding to the headset in use:

This software has a very good response in Affective mode, and all the subcategories, Engaged/Bored, Excitement, Excitement Long Term, Meditation, and Frustration, are working fine. The value amplitude ranges from 0 to 1. The other modes, Cognitive and Expressive, seem to fail to pick up any numeric changes; there are inconsistent spurts of values coming up, which are therefore not reliable for my project.

What is good about this particular software is that all values can be made accessible through the port which is a main bridging element in case of using additional software to interpret these values. I can select the port number but for some odd reason i can’t change the IP address. That means that i HAVE to run this software on same computer to parse the values through the chosen port. I think this is very disappointing due to the fact that this software is for Windows only and should be used on same machine. From previous experience I have worked on several computers and linked them in order to communicate for a single task. In this instance i will need to install Windows onto my Mac and run all programs in it. It is not an issue for me but a unnecessary hassle and limitation provided by software development kit only compatible with Windows operating system and restricted development of Mind Your OSC. I am aware that for additional cost one can make this EPOC headset more suitable for research grounds and i am only confined by my choice of cheaper version. So i will need to go extra mile in order to make maximum use of it!

I think the reason why not all values read properly in Mind Your OSC is either instability in transferring values over the Bluetooth link or poor programming. I had another chance to test the Expressive values using the built-in 3D face model within the EPOC Control Panel, and it seemed to work very well. That leads me to conclude that it is rather the Mind Your OSC program that performs poorly.

Here is a video showing how actual facial expressions are translated onto a 3D model. Please note that the footage of the 3D model and of the actual person was filmed separately and is not in sync, but it gives a rough idea that the software actually picks up different facial expressions such as eye blinking, eyebrow frowns and smiling:

I also tried a little app inside the EPOC Control Panel which allows me to map different facial expressions to certain keystrokes. These keystrokes are global, so they can be picked up by any text-related software; to demonstrate this I used a simple text editor:

This opportunity to use different programmable assets allows me to expand the device's usability. I intend to use Mind Your OSC as the main contributor for value parsing and detecting mood changes, while the EPOC Control Panel will let me map facial expressions to keystrokes and so trigger commands in any software that detects keystroke input. The two programs complement each other, and by using them simultaneously I will get more feedback from the headset into my chosen software.


The TEST PROGRAMME is something I designed specifically to conduct multiple tests with the EPOC headset, in order to understand how the device works and how I can succeed in creating the final installation.

The TEST PROGRAMME is a fixed set of tests which will be conducted with each individual wearing the Emotiv headset. Each test will be recorded on a specially designed chart. I will use the charts to compare results and draw the conclusions necessary for developing the aesthetics of my end product. They will also help me understand the specifics of the device and how I can use certain body exercises to boost brain activity and get the most feedback.
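To compare charts between subjects, each task's raw readings can be reduced to a few summary numbers. Here is a small Python sketch of that idea; the subjects, task names and sample values are entirely hypothetical, and the real charts would of course be filled in by hand or by the visualisation software:

```python
import csv
from statistics import mean

def summarise_task(subject: str, task: str, samples: dict) -> dict:
    """Reduce raw 0..1 samples per channel to one chart row: mean and peak."""
    row = {"subject": subject, "task": task}
    for channel, values in samples.items():
        row[f"{channel}_mean"] = round(mean(values), 3)
        row[f"{channel}_peak"] = round(max(values), 3)
    return row

# Hypothetical readings from two TEST PROGRAMME tasks for one subject:
rows = [
    summarise_task("A", "2. lit room", {"Excitement": [0.2, 0.3, 0.25],
                                        "Meditation": [0.6, 0.7, 0.65]}),
    summarise_task("A", "8. sing song", {"Excitement": [0.7, 0.8, 0.9],
                                         "Meditation": [0.1, 0.2, 0.15]}),
]

with open("test_programme_chart.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```

A CSV per subject makes it easy to line the charts up side by side and spot the extremes between tasks.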

Firstly, I will look at the software I will be using with the EPOC headset. This free app can be obtained from the Emotiv website and is for the Windows operating system only. Here is a screenshot of how it looks and what it does:

As one can see from the picture above, it measures different emotional states and simple gestures. It is slightly different from traditional EEG devices, which pick up the different waves that describe different states of the brain. This application is preprogrammed to pick up specific emotional states. It seems very limited in depicting emotional states, but maybe that is because it is reduced to very basic brain patterns. The other half of the readings are based purely on facial muscle movements and can take only two values: either ticked or not. Those values can be used like an 'on'/'off' switch.
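Since these expression readings behave like switches, detecting a gesture reduces to noticing the moment a channel flips from off to on. A tiny Python sketch of that idea (the channel names are invented for illustration):

```python
def expression_events(prev: dict, curr: dict):
    """Yield names of channels whose switch flipped from off (0) to on (1)."""
    for channel, state in curr.items():
        if state and not prev.get(channel, 0):
            yield channel

# Hypothetical two consecutive readings: a blink starts, the smile continues.
before = {"blink": 0, "smile": 1}
now = {"blink": 1, "smile": 1}
triggered = list(expression_events(before, now))  # only the new event fires
```

Treating each flip as a one-off event, rather than reacting to the raw 0/1 state every frame, stops a held expression from retriggering the same command.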

This application limits me as a user. I have only bought the headset, and therefore my possibilities with this device are reduced purely to the set of applications designed for it. The developer's edition was too costly for me, so I knew the consequences of buying just the headset on its own. There is other software developed for this device, and I intend to use as much of it as possible in order to get maximum data out of the headset and then interpret that data in visualisation software.

I will start with this app first and see how it goes.

The main ideas for my TEST PROGRAMME I will draw from medical tests performed with traditional EEG devices and from the OSC application mentioned above.

When an EEG is performed at a hospital the procedure may vary according to the technology used and the type of diagnosis doctors are after, but most of the time the EEG records the brain in different states. In some tests patients are kept in a dark room for some time, cutting out any unnecessary stimuli while brain activity is recorded in that state; in others the patient is subjected to tests with flashing lights to investigate the chance of epilepsy, etc. I also found an interesting experiment where 'people listen to tapes of spoken Danish played forward and backward. When tape is played forward, the listening and language centers are activated along with other relevant centres in order to understand the message in what is being said. But when the tape is played backward, the entire brain is activated!' (Nørretranders, p.117)
This particular test shows how the brain works to decode information, both familiar and unfamiliar. It is fascinating to see how the brain reacts when it receives an unknown source of information: it engages completely to solve the task. I can imagine this is what happens when a child learns about the world from scratch, when nothing is stored in its memory and it has to comprehend what it faces, create a meaning and understanding for it, and pack this information away so that it can be accessed later. As we grow older we establish a rigid comprehension of the world and how it runs. We keep developing meaning at deeper levels of consciousness through philosophy and religion, but in terms of reason the world seems perfectly clear to us. When we listen to something played backwards, our immediate reaction is to know that it is totally impossible to comprehend, but that doesn't mean the brain thinks the same: it will engage to find meaning in what is being played, and when that doesn't happen it will keep working on it.

For me this moment of ‘learning’ new things is the most important aspect.


  1. Stay calm in dark, silent room. No stimuli.
  2. Stay calm in lit room.
  3. Let the subject answer these questions:
    -What is your name and surname?
    -How old are you?
    -What year is it now?
    -Where are we?
    -What colour is sugar?
  4. Let the subject answer these questions:
    -What is the taste of the concrete?
    -What does it feel like to have a tail?
    -Which song is it? (Play a song backwards)
  5. Let the subject imagine these things being silent with eyes closed:
    -Swimming in the ocean at night. (after describe the feelings in words)
    -Falling from the 100th floor.
    -Sitting in the field of flowers.
  6. Let the subject imagine these things being silent with eyes closed:
    -Being a huge colourful giant with long legs and flying
    -Being in the horror story and fighting real zombies
    -Pure bright light, nothing else
  7. Ask the subject to do mathematical calculations on paper:
    :: 2+2=
    :: 32-12=
    :: 40:7=
    :: (1378:3) + (6382:12)=
  8. Ask the subject to sing a favourite song.
  9. Playback some very popular song.
  10. Playback recorded song which subject just sang.
  11. Ask subject to draw:
  12. Stand up, close eyes and fall back. (Someone catches the person from behind.)
  13. Dance to some catchy tune.
  14. Show the subject:
    -Disturbing image
    -Sexually arousing image of the opposite or same sex, depending on orientation
    -Funny image
* This test will be conducted with people aged 18+. I want to exclude younger children because of the nature of these questions, as well as the necessity of parental permission. This test is designed by myself and the results will be used for my personal project only. I will ask participating subjects to sign a disclaimer stating that they are comfortable with the questions and that none of the questions/exercises cause any personal or sexual offence. I expect my chosen subjects to be open-minded and to participate in this project out of interest and on their own initiative.



Nørretranders, T., 1999. The User Illusion. London: Penguin Books.

Brain activity and how to measure it.

Our brain consists of billions of cells called neurons. They communicate using electric impulses and therefore emit continuous electrical activity inside the brain. Here is a single nerve cell from the cerebellum of the brain:

[image online] Available at: <; [Accessed 28 February 2012].

[image online] Available at: <; [Accessed 28 February 2012].

Guyton (1971, pp.511-512) states that 'both the intensity and patterns of this electrical activity are determined to a great extent by the overall excitation of the brain resulting from functions in the reticular activation system. … Much of the time, the brain waves are irregular and no general pattern can be discerned in the EEG. However, at other times, distinct patterns do appear. Some of these are characteristic of specific abnormalities of the brain, such as epilepsy. Others occur even in normal persons and can be classified into alpha, beta, theta, and delta waves.' Here is an image showing these waves:

[image online] Available at: <; [Accessed 28 February 2012].

'Alpha waves are rhythmic waves which are found in the EEG's of almost all normal persons when they are awake in a quiet, resting state of cerebration. During sleep the alpha waves disappear entirely, and when the awake person's attention is directed to some specific type of mental activity, the alpha waves are replaced by asynchronous higher frequency but lower voltage waves.' (Guyton, p.512)
This image demonstrates it efficiently:


Beta waves usually appear during activation of the central nervous system: maximum mind power. They are associated with all five external senses, the logical mind, memory from the five senses and logical thinking.

Theta waves occur during emotional stress such as disappointment and frustration, and also in deep meditation and deep inward thought. They are associated with life-like imagination, a high state of mental concentration, a 'magical mind', internal pictures and visualisation, intuition and inner guidance, access to unconscious material, and dreaming.

Delta waves occur mainly in deep sleep, in infancy, or in serious organic brain disease: deep dreamless sleep, deep relaxation, a state of oneness and whole-body feeling.
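These four bands are conventionally distinguished by frequency, and the state a reading suggests follows from which band its dominant frequency falls into. A small Python sketch of that classification; note that the exact cut-off frequencies vary slightly between sources:

```python
# Conventional EEG band boundaries in Hz (approximate; sources differ).
BANDS = [
    ("delta", 0.5, 4.0),   # deep, dreamless sleep
    ("theta", 4.0, 8.0),   # deep meditation, imagery, dreaming
    ("alpha", 8.0, 13.0),  # awake, quiet, resting state
    ("beta", 13.0, 30.0),  # active concentration, external senses
]

def classify_wave(freq_hz: float) -> str:
    """Return the name of the band a dominant frequency falls into."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "out of range"
```

For example, a dominant frequency around 10 Hz would be classed as alpha, which matches Guyton's description of the quiet, resting waking state.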

Apart from the EEG method of displaying brain activity in the form of waves, there are several other technologies which allow people to measure and display the activity of the human brain. Functional magnetic resonance imaging, or functional MRI (fMRI), is another way to visualise the activity of neurons inside the brain, by detecting changes in the blood. It uses a very complex method to measure oxygen levels in the blood, and these changes are displayed in different colours:

[image online] Available at: <; [Accessed 28 February 2012].

This technique is used more in research than in clinical treatment, but it can also be used to investigate ill patients. 'Physicians use fMRI to assess how risky brain surgery or similar invasive treatment is for a patient and to learn how a normal, diseased or injured brain is functioning. They map the brain with fMRI to identify regions linked to critical functions such as speaking, moving, sensing, or planning. This is useful to plan for surgery and radiation therapy of the brain. Clinicians also use fMRI to anatomically map the brain and detect the effects of tumors, stroke, head and brain injury, or diseases such as Alzheimer's.' (Wikipedia article on fMRI)

Diffusion spectrum imaging (DSI)

[image online] Available at: <; [Accessed 28 February 2012].

[image online] Available at: <; [Accessed 28 February 2012].


Guyton, A. C., 1971. Basic Human Physiology: Normal Function and Mechanisms of Disease. London: W. B. Saunders Company.

Huddleston, N., 2008. Brain Wave States & How To Access Them. [online] Available at: <; [Accessed 28 February 2012].

Wikipedia, 2012. Functional magnetic resonance imaging. [online] Available at: <; [Accessed 28 February 2012].