Processing: Face Tracking

OpenCV is an open-source computer vision library that mainly focuses on real-time image processing. The OpenCV library for Processing allows more complex control over video than the basic video library, giving more ways to interpret a live video feed. The main example I was interested in was object detection: the ability to track certain objects or body parts.

One of the examples that came with the OpenCV library was face tracking on a live video feed. The code loads a Haar cascade for the frontal view of a face as the basis for recognition, so it can track faces. Different cascades included with the library can be loaded instead, such as ones for the full body, the upper body, or even something more specific like the right ear. The cascade files contain all the information OpenCV needs to recognise what a face (or other object) looks like and how to detect it in an image.
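Switching what the sketch detects is just a case of loading a different cascade constant in setup(); as a minimal sketch of the idea (I have only tested the frontal face cascade myself), detecting upper bodies instead would look something like this:

// load a different cascade to detect upper bodies instead of frontal faces
opencv = new OpenCV(this, 640/2, 480/2);
opencv.loadCascade(OpenCV.CASCADE_UPPERBODY);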

Here is the basic code from the face tracking example:

// import libraries
import gab.opencv.*;
import processing.video.*;
import java.awt.*;

// initiate video and openCV
Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  //scale video down to make it run smoother
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

  video.start();
}

void draw() {
  //scale everything back up to fit window
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0 );

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  //create array for faces
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    //draw rectangle around faces 
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
}

Here is the result of what that code can do. The sketch has an array called ‘faces’ which stores a rectangle for each detected face, holding its coordinates and dimensions; the length of the array gives the number of faces detected. The code then uses a for loop to iterate through the faces in the array and draw a green rectangle around each one at the coordinates and dimensions it has detected.

[Screenshot: the face-tracking sketch drawing a green rectangle around a detected face]

With a basic understanding of how this face tracking sketch worked, it was time to start experimenting to see what potential it had by adjusting and manipulating the variables of each face. My first test was an adaptation of my earlier sketch which changed an image based on the average location of movement. To do this, I used a series of if statements, just as before, to work out which horizontal third of the screen the tracked face was in (using its x coordinate) and then change the image accordingly. Using the face as a tracking point gives a much more accurate location for a person than the average area of motion, and works much better for tracking.

// only runs the code once it sees a face
if (faces.length >= 1) {
  // reduce opacity of images so the video shows through, and scale the images down to fit the screen
  tint(255, 127);
  scale(0.5);
  // track the centre point of the face in x instead of its top left corner
  // (the video is half the sketch width, so its thirds fall at width/6 and 2*width/6)
  // IF STATEMENTS FOR ONE FACE ONLY
  if (faces[0].x + faces[0].width/2 < width/6) {
    image(one, 0, 0);
  }
  if (faces[0].x + faces[0].width/2 > width/6) {
    if (faces[0].x + faces[0].width/2 < 2*width/6) {
      image(two, 0, 0);
    }
  }
  if (faces[0].x + faces[0].width/2 > 2*width/6) {
    image(three, 0, 0);
  }
}

[gfycat data_id="ShabbyFrenchClownanemonefish"]

The main problem with this is that it only works for one face at a time, changing the image based on the location of the first face it sees and ignoring any others. The sketch checks whether there is at least one face and, if there is, displays an image overlaid on the video depending on the coordinates of the first face in the array. Obviously this wouldn't work too well in a public space, as there is often more than one person in the space to track, yet the sketch would only respond to the first face it sees each frame.
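One way around this, which I haven't built yet but sketch out here, would be to stop relying on faces[0] and instead average the centre points of every face in the array, so the image choice responds to everyone in frame rather than whoever happens to be first:

// possible fix (not built yet): average the centre x of every detected face
if (faces.length >= 1) {
  float avgX = 0;
  for (int i = 0; i < faces.length; i++) {
    avgX += faces[i].x + faces[i].width/2;
  }
  avgX /= faces.length;

  tint(255, 127);
  scale(0.5);
  if (avgX < width/6) {
    image(one, 0, 0);
  } else if (avgX < 2*width/6) {
    image(two, 0, 0);
  } else {
    image(three, 0, 0);
  }
}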

Before starting another adaptation I knew it would probably suffer the same fate, with the fatal flaw of only being able to interpret the location of the first face in the array, but I followed through anyway just to see what it would look like. I adapted my experiment with a flock of agents tracking a colour so that they would track faces instead, which seemed simple enough. I created a new tracking point for the flock at the centre of the face tracking rectangle so that the flock would congregate on top of the face.

int faceX = faces[0].x + faces[0].width/2;
int faceY = faces[0].y + faces[0].height/2;
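The flock code itself isn't shown here, but the idea is that each agent steers towards that point instead of towards a tracked colour. A stripped-down version of that seek behaviour might look something like this (a hypothetical, simplified agent, not the one from my flocking sketch):

// hypothetical, simplified agent that seeks the face centre
class Agent {
  PVector position = new PVector(random(width), random(height));
  PVector velocity = new PVector();
  float maxSpeed = 3;
  float maxForce = 0.1;

  void seek(float targetX, float targetY) {
    PVector desired = PVector.sub(new PVector(targetX, targetY), position);
    desired.setMag(maxSpeed);                       // head towards the target at full speed
    PVector steer = PVector.sub(desired, velocity); // steering = desired - current velocity
    steer.limit(maxForce);                          // cap how sharply it can turn
    velocity.add(steer);
    position.add(velocity);
  }
}

In draw(), each agent in the flock would then call seek(faceX, faceY) whenever faces.length is at least one.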

[gfycat data_id="OldfashionedConsiderateBackswimmer"]

A potential development of this could be having multiple flocks of agents, one to follow each face. However, this would require a lot more computing power, as the sketch already struggles to run on my laptop, and having that many agents on screen would take up a lot of valuable screen real estate.
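If I did try it, the simplest approach I can see (again, just a hypothetical sketch, assuming a Flock class with its own seek() method) would be to keep a list of flocks and hand each one the centre of a different face every frame:

// hypothetical: give each flock its own face to follow
ArrayList<Flock> flocks;  // assumed: one Flock object per potential face

for (int i = 0; i < faces.length && i < flocks.size(); i++) {
  int fx = faces[i].x + faces[i].width/2;
  int fy = faces[i].y + faces[i].height/2;
  flocks.get(i).seek(fx, fy);  // flock i congregates on face i
}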

Due to my apparent interest in interpreting the video feed, the OpenCV library appeals to me a lot and is something I want to develop further in the future. Being able to track people and faces fairly accurately seems useful for creating an interactive graphic for a public space, as it can follow people whether or not they actively decide to interact with it. Each face in the array has an x and y coordinate plus a width and a height, and the array itself gives the number of faces detected at the time. These variables give me a lot of scope to experiment with and find an interesting and unique interaction to make.
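For example (just an idea, not something I've built), the width of the face rectangle roughly corresponds to how close someone is standing, so it could be mapped to the size of whatever is drawn on screen:

// hypothetical: use the width of the first face rectangle as a rough measure of distance
if (faces.length >= 1) {
  float closeness = constrain(map(faces[0].width, 20, 200, 0, 1), 0, 1);  // range would need tuning
  float diameter = 50 + closeness * 200;  // bigger circle as the face gets closer
  ellipse(faces[0].x + faces[0].width/2, faces[0].y + faces[0].height/2, diameter, diameter);
}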
