Processing: Capturing Faces

In a recent post I started looking at OpenCV for Processing, and more specifically its face tracking. Applying this to my idea of surveillance, I wanted to work out how to capture the faces it detects and save them. Before starting, I knew the sketch would be recognising faces every frame, so if I wrote the capture code straight into that loop it would save far too many images, far too quickly. To get around this I decided to implement a basic timer from an example I found. The example printed a line in the log and changed the background colour every 5 seconds so you knew it was keeping track of time; I adjusted it so that it would save an image of a detected face every 5 seconds instead.
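For reference, that kind of millis()-based timer can be sketched on its own like this (the random background colour here is just my placeholder for the example's colour change):

```processing
// Minimal millis()-based timer: fires every 5 seconds.
int savedTime;        // when the timer was last reset
int totalTime = 5000; // interval in milliseconds

void setup() {
  size(200, 200);
  savedTime = millis();
}

void draw() {
  int passedTime = millis() - savedTime;
  if (passedTime > totalTime) {
    println("5 seconds have passed!");
    background(random(255)); // visual confirmation the timer fired
    savedTime = millis();    // reset the timer
  }
}
```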

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

PImage face1 = createImage(0, 0, RGB);
PImage face2 = createImage(0, 0, RGB);
PImage face3 = createImage(0, 0, RGB);
PImage face4 = createImage(0, 0, RGB);

// Values for the timer
int savedTime;
int totalTime = 5000;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 15);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
  savedTime = millis();
}

void draw() {
  opencv.loadImage(video);
  image(video, 0, 0);

  noFill();
  stroke(0, 255, 0);
  Rectangle[] faces = opencv.detect();

  // Calculate how much time has passed
  int passedTime = millis() - savedTime;

  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }

  // Only runs if a face is seen:
  // captures each face inside its box
  if (passedTime > totalTime) {
    if (faces.length >= 1) {
      face1 = get(faces[0].x, faces[0].y, faces[0].width, faces[0].height);
      face1.save("output1.jpg");
      if (faces.length >= 2) {
        face2 = get(faces[1].x, faces[1].y, faces[1].width, faces[1].height);
        face2.save("output2.jpg");
        if (faces.length >= 3) {
          face3 = get(faces[2].x, faces[2].y, faces[2].width, faces[2].height);
          face3.save("output3.jpg");
          if (faces.length >= 4) {
            face4 = get(faces[3].x, faces[3].y, faces[3].width, faces[3].height);
            face4.save("output4.jpg");
          }
        }
      }
    }
    println("5 seconds have passed!");
    savedTime = millis(); // Save the current time to restart the timer!
  }

  // Display captured faces in corners for now
  if (faces.length >= 1) {
    image(face1, 0, 0);
    image(face2, width - face2.width, 0);
    image(face3, 0, height - face3.height);
    image(face4, width - face4.width, height - face4.height);
  }
} // close draw

void captureEvent(Capture c) {
  c.read();
}

The timer block contains another if statement to make sure that the face-capturing part only runs if there is at least one face detected. This part took a long time to work out: to capture the faces I needed to access the array the faces are stored in, but when no faces are detected the code tries to access an element of the array that doesn't exist yet, which causes it to crash. The series of nested if statements which follow handle up to 4 faces, using Processing's get() function to capture the pixels within each rectangle and store them in the PImages created before setup(). For now I've only got it saving 4 faces. OpenCV will be able to detect and track more, but for now I feel that this is enough.
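As an aside, the same bounds check could be written as a single capped loop rather than nested if statements; a rough sketch, assuming the four PImage variables were replaced by a hypothetical faceImages array:

```processing
// Hypothetical refactor: store captures in an array and loop,
// capped at 4 faces, instead of nesting if statements.
int maxFaces = 4;
PImage[] faceImages = new PImage[maxFaces];

// inside draw(), once the timer has elapsed:
for (int i = 0; i < min(faces.length, maxFaces); i++) {
  faceImages[i] = get(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  faceImages[i].save("output" + (i + 1) + ".jpg");
}
```

Because the loop condition uses min() against faces.length, it never indexes past the end of the array, which avoids the crash described above.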

Currently there is one big flaw: when I save the output for each face it always has the same name, so each saved face is overwritten every 5 seconds by a new one. My next iteration will hopefully fix this flaw.
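One possible fix (untested here) would be to work the current date and time into the filename so each capture gets a unique name; the "face-" prefix is just an example:

```processing
// Sketch of a unique filename built from Processing's time functions,
// e.g. "face-20150312-143055.jpg". nf() zero-pads the numbers.
String filename = "face-" + year() + nf(month(), 2) + nf(day(), 2)
                + "-" + nf(hour(), 2) + nf(minute(), 2) + nf(second(), 2)
                + ".jpg";
face1.save(filename);
```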

For now, I have the current captured faces appearing in the corners of the sketch, as I'm not sure what I'm doing with them yet; it's just a more visual way of seeing what the code is doing as it runs.

Here is an example of it working:

[gfycat data_id="UnrulyMediocreHellbender"]

The sketch runs at 15 frames per second as my laptop struggles to run it at a higher frame rate, but currently it's good enough for what I want it to do. Towards the end of the clip you can see it detecting a face on my wall where there isn't one. This is one of the main problems with the face detection at the moment: it sometimes sees patterns which it thinks are faces where there are none. The best way I've found to get around this is to work in a well-lit area; it's a slight improvement but still by no means perfect. Below is the very flattering image it captured and saved. From my experimenting, it seems to capture the image when you least expect it (even though I know it's coming), meaning you never really get time to pose for it, leading to more natural representations of people.


This intrigued me and made me wonder about what kind of faces I could capture when people don’t expect it. Even when I can see the video feed and know it is on a 5 second timer it still manages to capture images when I really don’t expect it, often of me concentrating reading through the code or mid-conversation. If I was to have something else on the screen, rather than the video feed, people would be completely unaware that they’re being watched (other than the presence of a camera of course) and I would be able to capture and save images of their faces without them ever being aware of it. Then with these images I could do just about anything I wanted (within reason) and they would be none the wiser.

