Tuesday 30 December 2014

Design Iterations - User Testing

User Testing 


As part of the iterative design process, it is important that there is some user testing. This ensures that the product being made is following the correct route and satisfying the needs of the brief. User testing is also important because, as mentioned in previous blog posts, people simply do not act in the way designers think they will. So just as it is important to conduct research on an environment such as Weymouth House, it is also incredibly important to conduct testing on the product itself, because people may not interact with the installation in the way that is expected. 

Because of this I have conducted user testing on my installation. I got a few people to test it, first watching to see whether they would interact with the installation in the way that I expected, and then quizzing them on what they thought about all aspects of the project, such as the design and the interaction element.



From the user testing I gained some very valuable feedback. The first point was that people did interact with the installation in the way that I had expected, which is good because it means that people know how to use it. I also gained some feedback on the design aspect of the project. From the feedback it is clear that there will need to be something else within the installation because, as someone said, "it gets a bit boring". I have previously mentioned that I would be creating a particle system to go with the installation, and based on this feedback I believe it will be very necessary, as there does seem to need to be some other element to the piece. The particle system is also likely to make the installation more eye-catching, which will be needed because, as I found through the poster brief and the information it gave me about the space, for something to get noticed it does need to be eye-catching. 


Another piece of feedback that I received was that not everyone was fond of the design of the main circle. One person stated, "I don't like the moving stuff in the circle". The design of the installation is something that I have recently been looking at, because I have not been particularly pleased with it myself, and the fact that some people have also mentioned the design means that I will have to look at changing it. The final piece of feedback that I received was that there should be more than one circle so that more people can use the installation at once. 

Overall this user testing has been very beneficial for many reasons. The first is that people do interact with the installation in the way it has been designed to be used, which is a very big positive. The testing has also been beneficial because it has allowed me to identify areas of the installation that need improving, such as the introduction of another visual element and the overall design of the main object. In the next few days I will begin to use this feedback and make changes to the installation. 

Sunday 21 December 2014

Design Iterations - Face Tracking Project

Face Tracking Project - 2


I have decided that this project will not include the punktiert behaviour attraction work. Although I thought that simply placing the tracking function I previously created (see blog post Face Tracking - 1) into the work, in place of the mouse tracking, would be enough, it does not work. This might be down to the vector systems used within the work, which for some reason do not seem to react to the face tracking. This is unfortunate, as I believe it could have created a very interesting installation. 

However, I will now push on with the particle system idea, as I believe it could make for a very interactive and interesting installation. In the meantime I have been playing around with the visual aspects of this piece of work, as a plain circle moving around a screen is not the most appealing thing in the world, and as I found through the poster brief (see blog post ---), for something to get noticed within the fast-paced area that is Weymouth House it needs to be visually appealing and striking. 


Above shows the idea that I am currently going with: having movement within the circle to make it less of a still, boring, ambient object. With the circles within the main object moving and changing colour, the installation gains more visual interest. However, this will not be the final design for the circle, and I will continue to experiment with it to find a way to make it more visually striking. 
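
As a rough illustration of this idea, a minimal Processing sketch along these lines might look as follows. The sizes, speeds and colours here are my own assumptions, not the installation's actual values.

// A minimal sketch of the idea above: a main circle containing
// smaller circles that orbit and change colour over time.
// All sizes, speeds and colours are illustrative assumptions.
int innerCount = 8;
float[] angles = new float[innerCount];

void setup() {
  size(800, 600);
  noStroke();
  for (int i = 0; i < innerCount; i++) {
    angles[i] = random(TWO_PI);
  }
}

void draw() {
  background(255);
  float cx = width / 2;
  float cy = height / 2;
  // the main circle
  fill(230);
  ellipse(cx, cy, 200, 200);
  // smaller circles moving inside it, cycling through colours
  for (int i = 0; i < innerCount; i++) {
    angles[i] += 0.02;
    float x = cx + cos(angles[i]) * 60;
    float y = cy + sin(angles[i]) * 60;
    fill((frameCount + i * 30) % 255, 100, 200);
    ellipse(x, y, 30, 30);
  }
}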

Thursday 18 December 2014

Design Iterations - Face Tracking Project

Face Tracking Project - 1


As discussed within a previous blog post, I have decided that I will be creating another installation. This installation will use face detection to track the user's location, and then either have a circle follow the user and draw in the environment with a particle system, or draw upon the behaviour attraction work and have the objects within the environment become attracted to the user. I believe the latter would be the more interesting to make and use; however, the use of a particle system could also be a very interesting route to go down. 

However, as both of the ideas will involve using face detection and tracking the user's movement, I have begun to experiment with face tracking. After talking with my lecturer, he suggested that the best way would be to map the face and then use the face's x and y co-ordinates to determine the location where the circle will be drawn each frame. 



Above is the code that is being used to track the faces. Essentially, all the code is doing is finding the x and y location of the user's face and then using this to draw the circle. However, if the mapped face position were used directly, the installation would not work correctly, because the camera image is the opposite way round: if you were to move right, the circle on screen would move left. To counteract this I had to mirror the camera, which means subtracting the mapped position from the width (width - map(...)) in the tracking code. This means that the user will not become confused by the objects on screen moving in the opposite direction to them. 


The above video shows a circle following my face using this mapping. 
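
For reference, here is a minimal sketch of this mapping and mirroring approach, assuming the OpenCV for Processing (gab.opencv) and video libraries used in Shiffman's examples. The mapping values and names are my own assumptions, not the exact code shown above.

// A sketch of the mapping approach described above, assuming the
// gab.opencv and processing.video libraries. Mapping values and
// variable names are illustrative assumptions.
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;

void setup() {
  size(800, 600);
  cam = new Capture(this, 320, 240);
  opencv = new OpenCV(this, 320, 240);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  opencv.loadImage(cam);
  background(255);
  Rectangle[] faces = opencv.detect();
  for (int i = 0; i < faces.length; i++) {
    // map the centre of the face from camera space to screen space,
    // then subtract the x value from width to mirror the movement
    float x = width - map(faces[i].x + faces[i].width / 2, 0, cam.width, 0, width);
    float y = map(faces[i].y + faces[i].height / 2, 0, cam.height, 0, height);
    fill(0);
    ellipse(x, y, 100, 100);
  }
}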

Wednesday 10 December 2014

Project Rethink

Punktiert / Circle Project 


I have recently found a library named punktiert. This library contains a number of different examples of Processing drawings with objects that have different behaviours. The one that caught my eye is called Behaviour Attraction. In terms of what it does, it is very simple: the circles are given an attractor, in this case the mouse, and when the mouse comes within a certain distance of the objects they begin to be attracted to it, moving towards it at different speeds depending on how far away they are. 

Although this is a relatively simple drawing, it has given me inspiration for another installation. I am going to create an installation that uses face tracking to draw a circle that follows the face. Once this has been created I will either add a particle system to the circle that will allow the user to draw, or draw inspiration from this piece of work and use the circle as the attractor to attract the other circles. 


I am also not going to be continuing with the face swap project. Despite stating in the previous blog post that a PImage could possibly be the answer to storing faces, I have not been able to find a way to make this work. Along with this, I have never been completely sold on the idea of creating the face swap as my installation, so I will be focusing only on this face tracking project.

The Code 

import punktiert.math.Vec;
import punktiert.physics.*;

VPhysics physics;
BAttraction attr;
int amount = 200;

public void setup() {
  size(800, 600);
  noStroke();
  physics = new VPhysics();
  physics.setfriction(.4f);
  // attraction behaviour centred on the middle of the screen,
  // with a radius of 400 and a strength of 0.1
  attr = new BAttraction(new Vec(width * .5f, height * .5f), 400, .1f);
  physics.addBehavior(attr);
  // create the particles at random positions and sizes,
  // each with collision behaviour so they do not overlap
  for (int i = 0; i < amount; i++) {
    float rad = random(2, 20);
    Vec pos = new Vec(random(rad, width - rad), random(rad, height - rad));
    VParticle particle = new VParticle(pos, 4, rad);
    particle.addBehavior(new BCollision());
    physics.addParticle(particle);
  }
}

public void draw() {
  background(255);
  physics.update();
  // move the attractor to the mouse and draw its radius
  noFill();
  stroke(200, 0, 0);
  attr.setAttractor(new Vec(mouseX, mouseY));
  ellipse(attr.getAttractor().x, attr.getAttractor().y, attr.getRadius(), attr.getRadius());
  // draw the particles
  noStroke();
  fill(0, 255);
  for (VParticle p : physics.particles) {
    ellipse(p.x, p.y, p.getRadius() * 2, p.getRadius() * 2);
  }
}
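
Looking ahead, if the face tracking works out, the attractor in draw() could in principle follow the face instead of the mouse. As a sketch of that single change, where faceX and faceY are hypothetical values supplied by the face tracking, the setAttractor line above would become:

// hypothetical: faceX and faceY would come from the face tracking
// rather than the mouse (these variables are assumptions)
attr.setAttractor(new Vec(faceX, faceY));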

Reference List 

Koehler, D., 2013. Punktiert. Lab-Eds, Available from: http://www.lab-eds.org/punktiert [Accessed 10 December 2014].

Friday 5 December 2014

Processing

Colour Capture Cam



Within a recent Processing workshop I was working with Declan Barry and Aaron Baker, and through playing with a piece of code that we were given we managed to create a simple yet quite effective installation. Essentially, using the video library, the installation uses the camera to detect colours in the environment, and the code then updates the pixels on the screen to reflect the colours it sees. 

What the camera sees


What the code changes it to 


Although this is very simple, it could work as an installation because as someone passes through the camera's view, the colours that they are wearing will be picked up and will cause an update through the work, so that the person would be able to see themselves, in colour form, move through the environment.

The Code 

import processing.video.*;

Capture cam;
float blocks = 1024;

void setup() {
  size(1024, 768);
  cam = new Capture(this, 160, 120);
  cam.start();
  frameRate(25);
}

void draw() {
  if (cam.available() == true) {
    grabPixel();
  }
}

// sample colours along the middle row of the camera image
void grabPixel() {
  cam.read();
  color c;
  for (int i = 0; i < blocks; i++) {
    c = cam.get(int((cam.width * 0.5) / blocks) + int(i * cam.width / blocks), int(cam.height * 0.5));
    drawBlocks(i, c);
  }
}

// draw each sampled colour as a thin vertical stripe,
// filling the screen from right to left
void drawBlocks(int i, color c) {
  fill(c);
  noStroke();
  rect(width - (i * (width / blocks)), 0, 0 - width / blocks, height);
}


Declan's blog - https://declanbarrydesign.wordpress.com/
Aaron's blog - http://abaker.co/blog/ 

Design Iterations - Face Swap Project

Face Swap Project - 5 


Having found a way of saving and storing faces, thanks to Shiffman's GitHub repository, I have since been experimenting with ways to use these stored faces. I first began researching whether there was any way that an image could be saved, placed into a folder and then immediately called upon again. Unfortunately, I was not able to find any examples of people doing this. This has made me have to find another way of storing the faces, because I do not believe the use of a folder to be the answer. 

However, looking at Shiffman's code, I believe the answer to storing the faces is within it. Within his code he creates a PImage called "cropped". This image has the width and height of the detected face, essentially storing whatever is within the rectangle that is drawn around the detected face. I believe that if I can find a way of having each detected face stored within a different PImage, then I would be able to draw upon these detected faces and create the face swap by simply swapping them over. 
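
As a sketch of how this might work, building on Shiffman's cropping approach rather than his actual code, each detected face could be copied into its own slot in a PImage array. The array size, variable names and gab.opencv usage here are my own assumptions.

// A sketch of the idea above: crop each detected face into its own
// PImage so it can be drawn again later. Names and sizes are
// illustrative assumptions, not Shiffman's code.
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
PImage[] storedFaces = new PImage[10];
int faceCount = 0;

void setup() {
  size(800, 600);
  cam = new Capture(this, 320, 240);
  opencv = new OpenCV(this, 320, 240);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  opencv.loadImage(cam);
  image(cam, 0, 0);
  Rectangle[] faces = opencv.detect();
  for (int i = 0; i < faces.length; i++) {
    // crop the detected face out of the camera image, like
    // Shiffman's "cropped" PImage, and keep it in the array
    PImage cropped = cam.get(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    storedFaces[faceCount % storedFaces.length] = cropped;
    faceCount++;
  }
}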




Tuesday 2 December 2014

Design Iterations - Face Swap Project

Face Swap Project - 4


Following on from the previous blog post (see blog post Face Swap - 3), I have found a GitHub repository from Daniel Shiffman which has been very helpful in creating the live face detection feed that is needed for my project. However, as stated in that previous blog post, my next goal was to find a way of storing/saving the detected faces. After some research looking for ways to store a face, I found myself back at the same GitHub repository. Within the OpenCV folder there is an example called "LiveFaceDetect_SaveImages". Essentially this piece of code does the same thing as the live face detection; however, when the mouse is pressed it saves an image of everything within the rectangle that is drawn around the face. It then takes this image and places it within a folder named "faces". 


I believe that this code will be very valuable to me, because if I can find a way of making the code save the image when it sees a face, and can then find a way of calling upon these images, then I will have a way of creating the face swap by taking the saved faces and overlaying them over the current face position of another person. 
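
A rough sketch of that save step, as my own reconstruction rather than Shiffman's exact code, might look like the snippet below. It assumes a cam and opencv set up as in the face tracking sketch above, and the file naming is an assumption.

// Reconstruction of the save step described above: when the mouse
// is pressed, crop each detected face and write it into a "faces"
// folder inside the sketch folder. Names here are assumptions.
void mousePressed() {
  Rectangle[] faces = opencv.detect();
  for (int i = 0; i < faces.length; i++) {
    PImage cropped = cam.get(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    // saves relative to the sketch folder, e.g. faces/face-120-0.png
    cropped.save("faces/face-" + frameCount + "-" + i + ".png");
  }
}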

Reference List

Shiffman, D., 2013. Shiffman-FaceIt/LiveFaceDetect_SaveImages. GitHub, Available from: https://github.com/shiffman/Face-It/blob/master/OpenCV/LiveFaceDetect_saveimages/LiveFaceDetect_saveimages.pde [Accessed 2 December 2014].