Final outcome & Evaluation

To conclude, I feel as though my interactive installation has taught me a lot about design iteration. I was constantly required to improve and develop my idea through prototyping and research in order to achieve my desired outcome.

Embodied Music Cognition was recognised by at least one person, who noticed his feeling of embodiment with the piece, while others were experiencing embodiment through laughter and dance without even realising.

I also feel that a sense of Augmented Reality was achieved, as users were able to see a computer-generated version of themselves, which also inspired laughter, dance and a positive mood.

My music choice also seemed to be effective, as everyone interacted through dance to some extent, and one user even commented, “Love the track”.

Although I am happy with the outcome, this is still ultimately a prototype that can be developed further: people mentioned that it would be more vibrant and entertaining with the addition of colour and maybe more interactive graphics, while a friend of mine also commented that this installation could have potential in clubs and bars.

I feel that Processing has taught me a lot about programming, even though I have only really scratched the surface.

Below is my final code.

import SimpleOpenNI.*;
import ddf.minim.*;

Minim minim;
AudioPlayer player;
SimpleOpenNI context;
PImage img;
void setup(){

size(640, 480);
// we pass this to Minim so that it can load files from the data directory
minim = new Minim(this);

player = minim.loadFile("Vanross_-_Horizon_Foundry_Louder_Master.mp3");

player.play();

context = new SimpleOpenNI(this);

context.enableDepth();

context.enableUser();

context.setMirror(true);
img=createImage(640,480,RGB);
img.loadPixels();
}

void draw(){

background(255);

context.update();

PImage depthImage=context.depthImage();
depthImage.loadPixels();

int[] upix=context.userMap();

for(int i=0; i < upix.length; i++){
if(upix[i] > 0){
img.pixels[i]=color(255,255,255); // pixels belonging to a tracked user become a white silhouette
}else{
img.pixels[i]=color(0); // everything else becomes a black background
}
}

img.updatePixels();

image(img,0,0);

int[] users=context.getUsers();

ellipseMode(CENTER);

for(int i=0; i < users.length; i++){
int uid=users[i];

PVector realCoM=new PVector();

context.getCoM(uid,realCoM);
PVector projCoM=new PVector();

context.convertRealWorldToProjective(realCoM, projCoM);
fill(255,0,0);
ellipse(projCoM.x,projCoM.y,10,10);

if(context.isTrackingSkeleton(uid)){
//draw head
PVector realHead=new PVector();

context.getJointPositionSkeleton(uid,SimpleOpenNI.SKEL_HEAD,realHead);
PVector projHead=new PVector();
context.convertRealWorldToProjective(realHead, projHead);
fill(0,255,0);
ellipse(projHead.x,projHead.y,10,10);

PVector realLHand=new PVector();
context.getJointPositionSkeleton(uid,SimpleOpenNI.SKEL_LEFT_HAND,realLHand);
PVector projLHand=new PVector();
context.convertRealWorldToProjective(realLHand, projLHand);
fill(255,255,0);
ellipse(projLHand.x,projLHand.y,10,10);

float y1, yA;
yA = height * 0.5; // waveform amplitude
y1 = height * 0.25; // vertical position of the left-channel waveform

for(int g = 0; g < player.bufferSize() - 1; g++)
{
float x1 = map( g, 0, player.bufferSize(), 0, width );
float x2 = map( g+1, 0, player.bufferSize(), 0, width );

line( x1, y1 + player.left.get(g)*yA, x2, y1 + player.left.get(g+1)*yA ); // left channel waveform
line( x1, 150 + player.right.get(g)*50, x2, 150 + player.right.get(g+1)*50 ); // right channel waveform
}

}
}

}
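One thing to note about the code above: the skeleton section of draw() only runs once tracking has been started for a user, so the onNewUser callback described in Prototype 3# further down this page is assumed to sit alongside it. A tidied-up version of those callbacks looks like this:

void onNewUser(SimpleOpenNI curContext, int userId)
{
println("onNewUser - userId: " + userId);
curContext.startTrackingSkeleton(userId); // request skeleton data as soon as a new user appears
}

void onLostUser(SimpleOpenNI curContext, int userId)
{
println("onLostUser - userId: " + userId);
}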

 

Media Foyer – Monitors

After receiving such strong feedback from displaying my installation in the television area, I debated whether it was even worth trying my work out on the monitors; however, since I still had time to do so, I tried it anyway.

Unfortunately I couldn’t figure out how to get the Kinect feed to go full screen on the monitors, and because I was one of the few, if not the only person, using the Kinect for my installation, I was unable to get help from my peers.

I also would have liked a busier foyer; however, I was still able to test with the people present.

 

IMG_2290 IMG_2287

For future development, I feel the only adjustment I would need to make is figuring out how to resize the display when working with the Kinect.
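One likely route, if I revisit this, would be to open the sketch window at the monitor's native resolution and stretch the 640x480 Kinect image to fill it. This is only a sketch of the idea, untested on the foyer monitors:

PImage img;

void setup() {
  size(displayWidth, displayHeight); // open the sketch window at the monitor's full resolution
  img = createImage(640, 480, RGB);  // same size as the Kinect depth image
}

void draw() {
  background(0);
  // img would be filled from context.userMap() exactly as in my final code;
  // the key change is drawing it scaled up to fill the whole window:
  image(img, 0, 0, width, height);
}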

Media foyer testing – Feedback

While I was fairly happy with the interactive response users gave me during testing, I was interested to find out their views on certain points, so for future development I decided to ask a few questions to two of my participants, Tom (User A) and Sam (User B).

1) What did this piece represent to you?
(To find out whether this matched my concept of Embodied Music Cognition)
2) Did this interaction make you feel ‘better’?
(To see if the user’s mood improved)
3) What was your first natural instinct when interacting?
(To get a good idea of how users initially react to my installation)
4) How do you feel this could be improved?
(For ideas on future development)
Question 1)
User A: “I think it represented a sense of embodiment and connection with music”
User B: “This piece to me represented the need to dance whenever you can, as it is a positive outlet and shouldn’t be one to be ignored.”
 
Question 2)
User A: “in a sense, the piece was upbeat and cheerful and provided some laughter in testing its response”
User B: “when the waveforms started to get sharper, it was very soothing to watch. I also absolutely loved the mix in the piece.”
Question 3)
User A: “first natural instinct was to test the mechanism of interaction by moving around and see how the program responded in turn, potentially seeing how far the program will go in tracking movements once the way of interaction had been established.”
User B: “my first instinct was to see whether it picked up all of my movements while dancing, whether they be quick or slow. but while i was too busy figuring that out, i found myself dancing anyway.”
Question 4)
User A: “improvements could include development on a strong colour scheme or visual clarity”
User B: “possibly by adding more colour, as the song and the whole vibe of the piece triggers a very colourful imaginative thought but portrays no colour.”
 
I found the feedback very useful, as User A spoke of embodiment and “a connection to the music”, which links into Embodied Music Cognition, while both also suggested that they felt happier after interacting, which was something I was trying to achieve.
Interestingly, both also suggested adding more colour, which was something I had already included in an earlier prototype; due to time constraints this is something I won’t change now, but will consider for future development.

Media Foyer testing

Initially today I planned on testing my installation on the monitors; however, I didn’t anticipate not being able to reach any plug sockets, as the Kinect needs to be plugged into a socket as well as into your laptop. Thinking on my feet, I decided to use the television area platform instead. This ultimately turned out to be a great test, as it gave users plenty of room to move and dance, while it also gave the Kinect a much greater range in terms of depth to pick up passers-by.

IMG_2302 IMG_2301

These are some of the results that I found today.

- People seemed interested by the sound coming from the area.

- The installation dealt well with the high volume of people within the space, as the people dancing were still visible without becoming too blurred by the numbers.

- The users who engaged with the installation seemed to be happy, based on the smiles and laughter.

- The platform was a great position, as it gave me slight leverage over everyone else and was highly visible.

- The installation encouraged social interaction, as two users held hands and danced in time with the music through natural progression with the installation.

A lot of these findings link to my concept of Embodied Music Cognition and Augmented Reality, as the installation encouraged users to dance and, in turn, become an extension of the installation through technology, with a computer-generated version of themselves present alongside graphical additions.

Initial testing

Now that I have a working piece of code, I decided to test my work before trying it out in the media school. As the video shows, the tracking of moving bodies works fine; however, whether or not the camera will be able to deal with more erratic movement and a busier background is yet to be discovered.

 

I’ve also experienced slight flickering and loss of users being tracked, so this is something I will need to try and amend.

Song choice – Horizon

My installation will mainly be viewed by students, so I feel that when picking a song for my installation it should appeal to a young demographic.

House parties and nightlife are a big part of student culture, and arguably the most popular genre right now is House due to its widespread use. Schawbel (2012) describes how House DJs have “taken over pop culture by producing for America’s most prominent music figure heads and by immersing themselves in clubs and at major venues.”

He goes on to explain how “house is the thing of the moment”, while my earlier point about how widespread it has become is backed up by Makarechi (2011), who explains that “Suddenly, house is everywhere. It’s on the radio, it’s at the pool party, it’s at small, downtown bars and clubs.”

10437337_646487105432517_6062730667120209405_n

Due to difficulty with copyright, I contacted one of my close friends, a popular DJ in Portsmouth who goes by the alias Vanross, and he has granted me permission to use his track ‘Horizon’.

My logic behind using a house track is that if passers-by can relate to a familiar sound, they may feel more comfortable participating.

Schawbel, D. (2012). House Music Has Become a Global Phenomenon. [online] Forbes. Available at: http://www.forbes.com/sites/danschawbel/2012/03/09/house-music-has-become-a-global-phenomenon/

Makarechi (2011). House Invasion: How Club Music Sneaked Into Mainstream Pop. [online] The Huffington Post. Available at: http://www.huffingtonpost.com/2011/08/11/house-music-pop-music_n_922912.html

SoundCloud, (2014). Vanross. [online] Available at: https://soundcloud.com/vanross 

 

Prototype 5#

Introducing the Minim library was a fairly easy task in terms of integrating it within my code. I simply added the PlayAudioFile example to my work in order to give the impression that the music was flowing through the body of the user.
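Stripped back, the integration amounts to only a few lines, shown here roughly as they appear in my final code (assuming the track sits in the sketch's data folder):

import ddf.minim.*;

Minim minim;
AudioPlayer player;

void setup() {
  size(640, 480);
  minim = new Minim(this); // Minim uses the sketch to locate the data directory
  player = minim.loadFile("Vanross_-_Horizon_Foundry_Louder_Master.mp3"); // load the track from data/
  player.play(); // start playback as soon as the sketch launches
}

void draw() {
  // the waveform drawing is added here (see the loop below)
}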

I mentioned previously in my ‘inspiration for ideas’ blog post that I appreciated the iPod silhouette advert, although I needed something to draw in my audience in the same way the actual iPod does in the advert. I feel as though the visualised audio running through the participants could do this job.

Screen Shot 2015-01-21 at 14.02.50

 

 

Now, wherever the user moves, they will be accompanied by a sound wave.

This was the main piece of code from the example that I edited, which gave me my final result.

 

for(int g = 0; g < player.bufferSize() - 1; g++)
{
float x1 = map( g, 0, player.bufferSize(), 0, width );
float x2 = map( g+1, 0, player.bufferSize(), 0, width );

line( x1, y1 + player.left.get(g)*yA, x2, y1 + player.left.get(g+1)*yA ); // left channel waveform
line( x1, 150 + player.right.get(g)*50, x2, 150 + player.right.get(g+1)*50 ); // right channel waveform
}
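For context, y1 and yA are simply the waveform's vertical position and amplitude, set earlier in draw():

float yA = height * 0.5; // waveform amplitude
float y1 = height * 0.25; // vertical position of the left-channel waveform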

Prototype 4#

As I begin to build a clear vision in my head of how I want to incorporate my audio visuals, I feel as if the coloured silhouettes would be better if they were white. My reasoning behind this is that I want a waveform to run through the bodies of the users, to give them the impression that music is running through them and that there is some sort of connection between them and the music.

This I feel relates to my concept of Embodied Music Cognition as the users will have the impression that they are becoming an extension of the sound through interacting with the camera.

To remove clutter from the background, I’ve also decided to colour it black instead, while movement far in the background will still be picked up.

I simply changed:

img.pixels[i]=color(0,0,255);

to

img.pixels[i]=color(255,255,255);

and I changed

img.pixels[i]=depthImage.pixels[i];

to

img.pixels[i]=color(0);
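Put together, the pixel loop in draw() now reads like this (the same userMap loop as before, just with the new colours):

int[] upix = context.userMap(); // per-pixel user labels from the Kinect
for (int i = 0; i < upix.length; i++) {
  if (upix[i] > 0) {
    img.pixels[i] = color(255, 255, 255); // anyone labelled as a user becomes a white silhouette
  } else {
    img.pixels[i] = color(0); // everything else is flattened to black
  }
}
img.updatePixels();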

The final result of this is here.

Screen Shot 2015-01-25 at 22.48.49

 

Prototype 3#

As I continue to mess around with the code, I have been interested in adding in another dimension to my project. To give the user a better feel of interactivity I feel as if Skeleton tracking would be a nice feature to include.

In order to track the skeletons of my users, I call this method:

context.startTrackingSkeleton(int userid)

If I want the tracking to stop I simply need to add

context.stopTrackingSkeleton(int userid)

This is necessary because if too many skeletons are tracked it can lower performance, so keeping the number of users in the scene to a minimum is preferred.

After tampering around slightly, I found it worked best when requesting the skeletons automatically through the user callbacks:

void onNewUser(SimpleOpenNI curContext, int userId)
{
println("onNewUser - userId: " + userId);
curContext.startTrackingSkeleton(userId); // request skeleton data as soon as a user appears
}

void onLostUser(SimpleOpenNI curContext, int userId)
{
println("onLostUser - userId: " + userId);
}

Using ‘context.isTrackingSkeleton(int userId)’ I can check whether a user has skeleton data.

For the position of a joint, I call

context.getJointPositionSkeleton(int userId, int SKEL_TYPE, PVector position)

In order to project the real-world coordinates onto the 2D canvas, I convert them using the method

context.convertRealWorldToProjective(PVector real, PVector proj)
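Chained together, drawing a single joint (the head, in this case) follows this pattern, which is the same one used in my final code:

PVector realHead = new PVector();
context.getJointPositionSkeleton(uid, SimpleOpenNI.SKEL_HEAD, realHead); // 3D joint position
PVector projHead = new PVector();
context.convertRealWorldToProjective(realHead, projHead); // map the 3D point onto the 2D canvas
fill(0, 255, 0);
ellipse(projHead.x, projHead.y, 10, 10); // mark the head with a green dot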

As you can see below with the finished code, the silhouette now shows the positions of the head, the left hand and the body’s centre of mass.

Screen Shot 2015-01-25 at 22.14.52

Prototype 2#

I found my experimenting to be a good starting point, so I began looking into ways that I could pick up users when they entered the camera shot.

The OpenNI library will do a lot of the work for me when identifying moving objects in a scene. The scene analyser does this once it is turned on:

context.enableScene( );

The depth image also needs to be enabled in order for the scene to be analysed:

context.enableDepth( );

So that I can access the labelling data for each object in the foreground, I need to add this to the draw function:

int[] map = context.sceneMap();

This in turn returns an array of integers, one for each pixel. The value at each index labels the object in my scene that the pixel belongs to: if the pixel is analysed as being part of my background it is ‘0’, otherwise it is assigned the ID of whichever moving object has been picked up.

This code below is what I’ve used to identify whether the camera has found someone. It will print ‘Found Person’ whenever someone has been picked up in the scene.

void draw()
{
context.update();

image(context.sceneImage(), 0, 0); // show the analysed scene

int[] map = context.sceneMap(); // per-pixel object labels

boolean foundPerson = false;

for (int i=0; i<map.length; i++){
if(map[i] > 0) { // any non-zero label means a moving object has been detected
foundPerson = true;
}
}

if (foundPerson) println("Found Person");
}
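For completeness, the setup this relies on is just the calls mentioned above; a sketch of what mine looked like at this stage (using the scene-analysis API of the SimpleOpenNI version I had installed):

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup()
{
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth(); // the depth image is needed for the scene to be analysed
  context.enableScene(); // turn on the scene analyser so moving objects get labelled
}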

With all the code this was the final result.

tracking