Processing: Face Swap Start

To get started on face swapping I began by altering one of my old sketches, which captured and saved faces as images. Before, I was trying to get the image out of Processing to do something with it elsewhere, but now everything can just stay within the Processing environment and I can work from there. Below is my code, commented so that hopefully it makes sense to whoever is reading it. A lot of the code in there isn't being used (the timer, for example), but I left it in to show how this developed out of one of my older ideas, where I was experimenting with what I could do.

Quick note: I've recently noticed that the plugin I use for inserting code sometimes changes my greater-than (>) and less-than (<) symbols to &gt; and &lt; for some reason.

(There's a summary of what the code does below.)

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

//PImages to store captured faces
PImage face0 = createImage(0, 0, RGB);
PImage face1 = createImage(0, 0, RGB);
PImage face2 = createImage(0, 0, RGB);
PImage face3 = createImage(0, 0, RGB);

//values for timer
int savedTime;
int totalTime = 5000; // 5 second delay
int number = 0; //number of picture

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 20);
  opencv = new OpenCV(this, 640, 480);
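  //load the frontal face classifier that detect() will use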
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
  frameRate(20);
  video.start();
  savedTime = millis();
}

void draw() {
  // Calculate how much time has passed
  int passedTime = millis() - savedTime;

  //should be scale 2 but made it 1 so it looks better, but makes it laggy
  scale(1);

  //pass the current frame to OpenCV and display the video feed
  opencv.loadImage(video);
  image(video, 0, 0 );

  //style face rectangle
  //NOT BEING USED
  noFill();
  stroke(0, 255, 0);
  noStroke();
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  //draw rectangle around seen faces
  //NOT BEING USED
  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }

  // only runs if a face is seen
  // captures a face in the box
  // start timer
  
  //if (passedTime > totalTime) {
  number++; // adds number to picture so they count up and not overwrite
  if (faces.length >= 1) {
    face0 = get(faces[0].x, faces[0].y, faces[0].width, faces[0].height);
    //String number0 = "0_" + number + ".jpg";
    //face0.save(number0);
    if (faces.length >= 2) {
      face1 = get(faces[1].x, faces[1].y, faces[1].width, faces[1].height);
      //String number1 = "1_" + number + ".jpg";
      //face1.save(number1);
      if (faces.length >= 3) {
        face2 = get(faces[2].x, faces[2].y, faces[2].width, faces[2].height);
        //String number2 = "2_" + number + ".jpg";
        //face2.save(number2);
        if (faces.length >= 4) {
          face3 = get(faces[3].x, faces[3].y, faces[3].width, faces[3].height);
          //String number3 = "3_" + number + ".jpg";
          //face3.save(number3);
        }
      }
    }
  }

  //println( " 5 seconds have passed! " );
  //savedTime = millis(); // Save the current time to restart the timer!
  //}

  //swap two faces over
  if (faces.length == 2) {
    //resize images to current tracked faces
    face0.resize(faces[1].width, faces[1].height);
    face1.resize(faces[0].width, faces[0].height);

    //place swapped faces
    image(face1, faces[0].x, faces[0].y);
    image(face0, faces[1].x, faces[1].y);
  }

} // close draw


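//called by Processing whenever a new frame is available from the camera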
void captureEvent(Capture c) {
  c.read();
}

To summarise the code above: it initialises the face tracking using OpenCV and stores the information about the rectangle drawn around each face (its x & y coordinates and its width & height) in an array called faces. The sketch then checks how many faces there are (up to 4 so far) and, using Processing's get() function, grabs the pixels within each rectangle and saves them into the blank PImages created before setup(). Once it has done that it checks whether there are two faces on the screen; if there are, it resizes face0 (the first face in the array) to be the same size as face1 (the second face) and vice versa. After that it simply draws each image on top of the video at the coordinates of the other face.
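As a side note, the four nested if statements used for capturing could be collapsed into a loop over a small array. The following is only a rough sketch of that idea (the names faceImages and maxFaces are mine, not part of the sketch above):

//sketch of the same capture step using an array instead of face0, face1, face2, face3
int maxFaces = 4;
PImage[] faceImages = new PImage[maxFaces];

//inside draw(), after opencv.detect():
for (int i = 0; i < faces.length && i < maxFaces; i++) {
  //grab the pixels inside each detected rectangle
  faceImages[i] = get(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
}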

As this was just a test I've only written enough to swap over 2 faces (even though it's capturing up to 4). As I develop this idea I will write the extra code to swap over 3 & 4 faces appropriately.
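As a rough idea of where that could go, here is a minimal sketch (not finished code) that reuses the hypothetical faceImages array from the note above and simply rotates each captured face onto the next detected rectangle, rather than doing strict pairwise swaps:

//rough sketch: rotate each captured face onto the next detected rectangle
//(face 0 goes to rectangle 1, face 1 to rectangle 2, and the last face wraps back to rectangle 0)
if (faces.length >= 2) {
  int count = min(faces.length, maxFaces);
  for (int i = 0; i < count; i++) {
    int target = (i + 1) % count;
    PImage swapped = faceImages[i].get(); //get() with no arguments returns a copy, so the stored face isn't resized
    swapped.resize(faces[target].width, faces[target].height);
    image(swapped, faces[target].x, faces[target].y);
  }
}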

Here is a small example of it working:

[gfycat data_id="MinorFoolhardyCaterpillar"]

As the whole rectangle taken from each face is being swapped over, it creates a very crude swap which isn't the best of fits, but it definitely makes it clear that the faces are being swapped. My initial testing as I was making it got a good response, as everyone found the results quite funny. Some people said they weren't sure whether to be watching their face or their body, which led to some rather interesting and awkward interactions as people tried to change their behaviour.
