My Final Idea

My experimentation with Processing has led me down the route of face-tracking based interactions. My ideas all stemmed from the basic example in the OpenCV library, which puts a green rectangle around the faces it sees on screen. With this example I am able to get the x & y coordinates, the width & height of each face, and the number of faces it sees in the room. It was then a matter of experimenting with what could be done with these parameters to create an interesting interaction.

My final idea for the project is to swap the faces tracked on screen. This builds upon one of my previous ideas, where I captured the pixels of a tracked face and saved them as an image. The process can initially be broken down into a few steps that will get me started in the right direction (a rough sketch of the swap step follows the list):

  1. Track the faces on the video feed.
  2. Capture the tracked faces and save them as an image within Processing.
  3. Resize the images to match the face they will be swapped with.
  4. Display the resized faces on top of the video feed in the appropriate location.
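
As a very rough sketch of steps 3 and 4, assuming two faces are detected and that face1 and face2 are PImages already captured from faces[0] and faces[1] (as in my capturing sketches further down), the swap could look something like this:

// Sketch only: swap two captured faces, resizing each to fit the other's box.
// Assumes face1/face2 were captured from faces[0]/faces[1] earlier in draw().
if (faces.length >= 2) {
  PImage swapA = face2.get();                      // copy of face 2
  swapA.resize(faces[0].width, faces[0].height);   // resize to fit face 1's box
  PImage swapB = face1.get();                      // copy of face 1
  swapB.resize(faces[1].width, faces[1].height);   // resize to fit face 2's box
  image(swapA, faces[0].x, faces[0].y);            // draw face 2 over face 1
  image(swapB, faces[1].x, faces[1].y);            // draw face 1 over face 2
}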

The piece is going to be linked with ideas about changing behaviour, drawing on notions of identity and representation, audience theories (i.e. reactions to being watched), and playing with people's sense of self via altered representations.

My initial thought is that the piece will interrupt and alter people's self-image: they will see their face and their body, but not together as they should be. This should change their behaviour, as they will have to consider their facial expression and body language separately, torn between watching one or the other. The aim is to get some playful interactions where people actually stop to look at the piece rather than just glancing at it in passing.

Face swapping doesn’t really seem like that original an idea, as I’m sure it’s something anyone with Photoshop has done at some point, including me. For example:

[Image: an old Photoshop face swap of mine]

The process for face swapping involved choosing a face, cutting it out and placing it over other faces in the image. However, this process has never been quick or easy, and it requires a decent degree of accuracy to look good.

I then wondered if there were any apps which did this. I knew there were apps which would do face swapping on static images, but I was looking for something which would do it on real-time video. The closest thing I could find was Face Stealer. However, this swaps a face with another face from an image, using a series of control points to match up the eyes, mouth and jawline. From my searching I couldn’t find anything that swaps the faces of two (or more) people in a live video feed while they watch it, and this only pushed me further to give it a try.


How being watched changes you

While researching ideas I found an interesting BBC article about how being watched changes the way we behave and interact without us knowing. Humans have become sensitive to the presence of others, and it influences how we behave when we know, or think, we’re being watched. When we feel like we’re being watched, even if it’s just by a drawing, a painting, or a pair of eyes, it influences the decisions we make as we try to adjust our self-presentation. The article gives an example of a psychology experiment which took place in the 1970s.

It was Halloween night, and children were out knocking on doors collecting candy. Psychologists positioned themselves inside 18 different homes, and prepared themselves for the stream of costumed children seeking sweets. After opening the door and chatting with the children for a minute or two, they’d tell them to take a single piece of candy from a bowl chockfull with treats, and no more. The researchers then left the children alone with the candy bowl and, half the time, with a mirror. A second hidden experimenter covertly recorded the kids’ behavior. The researchers reasoned that children might be less likely to take a sneaky handful of sweets if they could see their own reflection in the mirror.

And that’s just what they found. When faced with a reflection of their own faces, even masked by a Halloween costume, the kids were more likely to behave.

When the children felt like they were being watched, even by themselves via a mirror, they felt the pressure of their actions and appearance being scrutinised and were more likely to behave in an acceptable way, i.e. not stealing extra sweets.

This could be an interesting idea to play with (as mentioned at the end of my Panopticon post): basing interactions around watching the audience, in an attempt to change their behaviour. This could mean something that actually displays the camera feed, so that it is blatantly obvious they’re being watched, or something more subtle that does the tracking in the background and watches them in a more abstract way, for example a pair of eyes on screen that follows people as they move.

UPDATE: With my face-tracking route I’ve opted for the blatant watching of the audience, as the video is shown on screen. The interactors are likely to notice it and change their behaviour because they know they’re being watched. Their representation on screen is also altered, which may make them even more conscious of their appearance and overall self-presentation, as their self-image has been manipulated without their direct consent.

Panopticon

The Panopticon is a type of prison designed by English philosopher and social theorist Jeremy Bentham in the late 18th century. The idea was that by using a circular design, with the watchman in the middle and the inmates in cells around the outside, a single watchman would be able to watch over all of the inmates without the inmates being able to tell whether or not they were being watched. Though the watchman can’t physically watch all the inmates at once, the fact that inmates cannot know when they’re being watched means that they act as though they always are, altering and controlling their behaviour at all times and assuring the automatic functioning of power.

Panopticon diagram

Another philosopher and social theorist, Michel Foucault, used the idea of the Panopticon in his book Discipline and Punish as a metaphor for modern disciplinary societies and their inclination to observe and normalise. He argues that the Panopticon is the ultimate architectural figure of disciplinary power, as it uses a consciousness of permanent visibility to instil power rather than the bars and chains of traditional prisons.

Building upon this idea, modern technology has brought constant surveillance to current society through the deployment of panoptic structures carrying CCTV cameras in public spaces. When people feel that they are constantly being watched they change their actions and behaviour, the idea being that this will cut down on crime. I’ve noticed this myself when walking around campus: when I spot a camera I instantly become more conscious of my actions and appearance, thinking that I’m being watched. Panopticism provides us with a model of a self-disciplined society, in which we govern ourselves and control our behaviour without the need for constant surveillance and intervention by an external agency.

Similarly, Panopticism has been linked to internet usage, as ISPs (Internet Service Providers) are able to track our every move online and view our data (dataveillance). Many websites also use cookies to track the sites we visit and the products we look at, providing targeted advertising to try and persuade us to buy products or visit websites similar to ones we’ve looked at in the past. One prime example that people are becoming more aware of is booking hotels or flights. Booking websites use cookies to track how many times you’ve been on the site to check prices, and each visit can push the prices up, making you think the rooms or flights are being booked up and pressuring you to spend your money now. Using private browsing mode blocks these cookies, so your visits can’t be tracked and the prices aren’t pushed up, allowing you to get a better deal.


The idea of the Panopticon and panopticism could be used in my project by playing with the idea of the audience being watched without knowing it. In a sense it is a role reversal: as the audience, they’re watching one thing on the screen when in reality it is watching them, potentially reacting to their presence or movements without them knowing.

UPDATE: Looking back at this post after I’ve advanced a bit further with my project has made me consider whether it’s still relevant. Above I mention how the Panopticon is ‘a metaphor for modern disciplinary societies and their inclination to observe and normalise’; however, in relation to my face-swapping idea, I’m using observation to do the opposite of ‘normalise’. While the cameras are observing the audience/participants, instead of changing their behaviour to conform to what would be considered ‘normal’, the piece does the opposite: it plays with their representations on screen in an attempt to alter their behaviour and push it even further from the norm by invoking play.

Processing: Capturing faces FIX

In my last post about capturing faces I had the problem that the output of each captured face was constantly being overwritten, and I could only have 4 faces saved at once. I finally managed to work out a way around this: changing the name of each image saved so that it isn’t overwritten and each is saved with a unique name.

//Before the setup
int number = 0; //number of the picture being saved

//.... below written in the draw

// only runs if a face is seen
  // captures a face in the box
  // start timer
  if (passedTime > totalTime) {
    number++; // adds number to picture so they count up and not overwrite
    if (faces.length >= 1) {
      face1 = get(faces[0].x, faces[0].y, faces[0].width, faces[0].height);
      String number1 = "1_" + number + ".jpg";
      face1.save(number1);
      if (faces.length >= 2) {
        face2 = get(faces[1].x, faces[1].y, faces[1].width, faces[1].height);
        String number2 = "2_" + number + ".jpg";
        face2.save(number2);
        if (faces.length >= 3) {
          face3 = get(faces[2].x, faces[2].y, faces[2].width, faces[2].height);
          String number3 = "3_" + number + ".jpg";
          face3.save(number3);
          if (faces.length >= 4) {
            face4 = get(faces[3].x, faces[3].y, faces[3].width, faces[3].height);
            String number4 = "4_" + number + ".jpg";
            face4.save(number4);
          }
        }
      }
    }
    
    println( " 5 seconds have passed! " );
    savedTime = millis(); // Save the current time to restart the timer!
  }

Inside the timer I have a new variable called ‘number’, which is the number of the image being taken. Every 5 seconds, 1 is added to number, making it count up each time the timer runs. This value is used in the name of the image being saved, so the name changes each time.

For each face a string is created for the filename. For example, the first face would output ‘1_1.jpg’, ‘1_2.jpg’, ‘1_3.jpg’ etc., continually counting until stopped.

With this there are still only 4 PImages in Processing that are continually being updated; it is just the name of the saved image which changes. Once a face has been saved and its PImage overwritten, Processing no longer sees it. I may be able to save the images into an array instead, or simply add more image variables, but for now this will stay until I work out the best course of action.
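
If I do go down the array route, a minimal sketch of the idea (my own, untested) could keep every capture inside the sketch as well as saving it to disk with a unique name:

// A growing list instead of four fixed PImages (names here are my own).
ArrayList<PImage> capturedFaces = new ArrayList<PImage>(); // before setup()

// inside the timer in draw():
for (int i = 0; i < faces.length; i++) {
  PImage f = get(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  capturedFaces.add(f);                     // keep the capture available inside Processing
  f.save((i + 1) + "_" + number + ".jpg");  // still saved to disk with a unique name
}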

Processing: Capturing Faces

In a recent post I started looking at OpenCV for Processing, and more specifically face tracking. Applying this to my idea of surveillance, I wanted to work out how to capture the faces it detects and save them. Before starting, I knew that the sketch would be recognising faces every frame, and if I wrote some code to capture the faces straight into this it would be capturing far too many images, at too high a speed. To get around this I implemented a basic timer from an example I found. The example printed a line in the log and changed the colour of the background every 5 seconds so you knew it was keeping track of time. I adjusted this so that it would save an image of a detected face every 5 seconds instead.

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

PImage face1 = createImage(0,0,RGB);
PImage face2 = createImage(0,0,RGB);
PImage face3 = createImage(0,0,RGB);
PImage face4 = createImage(0,0,RGB);

// Values for clock
int savedTime;
int totalTime = 5000;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 15);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
  frameRate(15);
  video.start();
  savedTime = millis();
}

void draw() {
  scale(1);
  opencv.loadImage(video);

  image(video, 0, 0 );

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);


  // Calculate how much time has passed
  int passedTime = millis() - savedTime;


  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }

  //only runs if a face is seen
  // captures a face in the box
  if (passedTime > totalTime) {
    if (faces.length >= 1) {
      face1 = get(faces[0].x, faces[0].y, faces[0].width, faces[0].height);
      face1.save("output1.jpg");
      if (faces.length >= 2) {
        face2 = get(faces[1].x, faces[1].y, faces[1].width, faces[1].height);
        face2.save("output2.jpg");
        if (faces.length >= 3) {
          face3 = get(faces[2].x, faces[2].y, faces[2].width, faces[2].height);
          face3.save("output3.jpg");
          if (faces.length >= 4) {
            face4 = get(faces[3].x, faces[3].y, faces[3].width, faces[3].height);
            face4.save("output4.jpg");
          }
        }
      }
    }

    println( " 5 seconds have passed! " );
    savedTime = millis(); // Save the current time to restart the timer!
  }

  //display captured faces in corners for now 
  if (faces.length >= 1) {
    image(face1, 0, 0);
    image(face2, width-face2.width, 0);
    image(face3, 0, height-face3.height);
    image(face4, width-face4.width, height-face4.height);
  }
  
} // close draw

void captureEvent(Capture c) {
  c.read();
}

The timer loop contains another if statement to make sure that the face-capturing part only runs if there is at least one face detected. This part took a long time to work out: to capture the faces I needed to access the array the faces are stored in, but when no faces are detected the code tries to access a part of the array that doesn’t exist yet, which causes it to crash. The series of nested if statements which follows handles up to 4 faces, using Processing’s get() function to capture the pixels within each rectangle and save them as the PImages created before setup. For now I’ve only got it saving 4 faces; OpenCV will be able to detect and track more, but for now I feel that this is enough.
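
Looking at it again, the nesting could probably be collapsed into a single loop. A rough sketch of that alternative (my own, untested, capped at 4 to match the four PImage variables) might look like this:

// Loop alternative to the nested if statements
PImage[] captured = new PImage[4];
int n = min(faces.length, 4);           // never go past the 4 slots
for (int i = 0; i < n; i++) {
  captured[i] = get(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  captured[i].save("output" + (i + 1) + ".jpg");
}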

Currently there is one big flaw: when I save the output for each face, it always has the same name, which causes the previous face to be overwritten every 5 seconds with a new one. My next iteration will hopefully fix this.

For now, I have the current captured faces appearing in the corners of the sketch, as I’m not sure what I’m doing with them yet; it is just a more visual way of seeing what the code is doing as it runs.

Here is an example of it working:

[gfycat data_id=”UnrulyMediocreHellbender”]

The sketch runs at 15 frames per second, as my laptop struggles to run it at a higher frame rate, but currently it’s good enough for what I want it to do. Towards the end of the clip you can see it detecting a face on my wall where there isn’t one. This is one of the main problems with the face detection at the moment: sometimes it sees patterns which it thinks are faces where there are none. The best way I’ve found to get around this is to work in a well-lit area; it’s a slight improvement but still by no means perfect. Below is the very flattering image it captured and saved. From my experimenting, it seems to capture the image when you least expect it (even though I know it’s coming), meaning you never really get time to pose for it, leading to more natural representations of people.

[Image: output1.jpg, the face it captured and saved]

This intrigued me and made me wonder about what kind of faces I could capture when people don’t expect it. Even when I can see the video feed and know it is on a 5-second timer, it still manages to capture images when I really don’t expect it, often of me concentrating on reading through the code or mid-conversation. If I had something else on the screen, rather than the video feed, people would be completely unaware that they’re being watched (other than the presence of a camera, of course) and I would be able to capture and save images of their faces without them ever being aware of it. Then with these images I could do just about anything I wanted (within reason) and they would be none the wiser.

Media Concept Route

With all my experimenting with Processing I’ve noticed that my work has steered towards interpretation of the live video feed through face tracking and the like, rather than just reactions to the video feed. This has led me to aim my project towards doing something with the face (or other body part) tracking in the OpenCV library.

From here I had to think about which media concepts this could be applied to, and my first instinct was the idea of surveillance culture. In 2013, the Telegraph reported that the British Security Industry Association (BSIA) estimated there are up to 5.9 million CCTV cameras in the UK, with 750,000 in sensitive locations such as schools and hospitals. This is approximately 1 camera for every 11 people. Previous estimates of the number of cameras ranged from 1.5-4 million across the UK. The number of cameras was criticised by Nick Pickles, then director of the privacy campaign group Big Brother Watch, who said:

“This report is another stark reminder of how out of control our surveillance culture has become … This report should be a wake up call that in modern Britain there are people in positions of responsibility who seem to think ‘1984’ was an instruction manual.”

Many argue that this sheer amount of surveillance is comparable to George Orwell’s novel 1984, which is set in a dystopian world of perpetual war, omnipresent government surveillance and public manipulation. In many instances we aren’t even aware that we’re being watched, and this is the idea that I wanted to exemplify in my project.

This surveillance is capable of taking pictures and video of people without them being aware of it happening. It has the ability to follow and recognise faces, and even attach them to a name and other personal information stored online. One example of this is Facebook’s experimental facial recognition software, DeepFace, which can recognise faces almost as well as the human brain can, regardless of differences in lighting or angle. DeepFace can look at two photos and say with 97.25% accuracy whether or not they contain the same face, compared to humans’ 97.53% accuracy. You’re probably already aware that Facebook has another, more basic facial recognition algorithm that prompts you with tags when you upload pictures. I already find it quite unnerving how Facebook can recognise who you are by your face and match it to all the personal information you post about yourself.

Obviously my Processing project using OpenCV doesn’t have this level of complexity; it is capable of detecting faces rather than recognising them. This is something I want to build upon in my project, seeing just what I can do by detecting the faces of people as they walk past or interact. The idea of surveillance is my first building block for development; as I go on and experiment more with Processing and do more research, I will hopefully narrow this down and head in a more specific and interesting direction with the project.

 

Iterative Design

The main point of my work in this unit is to look at design as an iterative process, meaning that when designing we should be constantly testing and re-evaluating our work based on the requirements of the brief and reactions to the work. The iterative process encourages constant refinement and improvement of designs before they are released, to make sure they engage properly with the audience and convey the intended message.

The earliest widely used design method was the Waterfall Model: incremental progress in steps, a linear approach to design. The process was very popular due to its simplicity and the ease of planning timeframes and deadlines. In practice the Waterfall Model doesn’t work too well, as clients often don’t know their requirements straight away, which can delay the design process and the rest of the steps. Clients may also change their requirements during the design process, or once the project has been finalised, and the model doesn’t allow them to go back easily; instead they have to start again at the top of the waterfall.

Iterative design is a more cyclical process that was developed in response to the weaknesses of the Waterfall Model. It keeps the same steps of requirements gathering, design, implementation etc., but carries them out in a more flexible manner, keeping analysis in mind at each step. In each iteration of the cycle most of the development processes are used, making it a very effective process. Each iteration has a defined set of objectives and produces a partial working implementation of the final system. Each successive iteration builds upon the work of previous iterations, continually evolving and refining until a finished product is made.

 

UPDATE: The iterative design process is something that I will be, and have been, using as I progress towards the final product in this project. There is a process of development and constant improvement as I edge ever closer to creating a final, finished product. At each step along the way I set myself a goal (e.g. work out how to capture and display faces, improve performance, etc.) and complete it before moving on. At the end of each stage of improvement there is a bit of user testing. This involves me experimenting with my updated code, seeing if I’ve fixed the previous problem or made the appropriate adjustments, and then considering what the next stage could be. Occasionally I get third-party feedback from other people on the course to see if they have any suggestions for future improvements, as these people are representative of the end user (if a little more informed, as they’re also doing the same project) and have an idea of what they’d expect from a face-swapping interactive display.

Processing: Face Tracking

OpenCV is an open source computer vision library which mainly focuses on real-time image processing. The OpenCV library for Processing allows for more complex control over video than the basic library giving more ways to interpret a live video feed. The main example I was interested in was object detection; the ability to track certain objects or body parts.

One of the examples that came with the OpenCV library was face tracking on a live video feed. The code loads a Haar cascade for the frontal view of a face as the basis for recognition, so it can track faces. Different cascades included with the library can be loaded instead, such as full-body, upper-body, or even something more specific such as the right ear. The cascade files contain all the information OpenCV needs to recognise what a face (or other object) looks like and how to detect it in an image.
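
Swapping the cascade is just a one-line change; something along these lines, assuming the other included cascades follow the same constant-naming pattern as CASCADE_FRONTALFACE:

// load a different included cascade instead of the frontal face one
opencv.loadCascade(OpenCV.CASCADE_UPPERBODY);    // track upper bodies instead
// or something more specific:
// opencv.loadCascade(OpenCV.CASCADE_RIGHT_EAR); // track right ears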

Here is the basic code from the face tracking example:

// import libraries
import gab.opencv.*;
import processing.video.*;
import java.awt.*;

// initiate video and openCV
Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  //scale video down to make it run smoother
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

  video.start();
}

void draw() {
  //scale everything back up to fit window
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0 );

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  //create array for faces
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    //draw rectangle around faces 
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
}

Here is the result of what that code can do. The sketch has an array called ‘faces’ which stores the coordinates and dimensions of each detected face, and whose length gives the number of faces detected. The code then uses a for loop to iterate through the faces in the array and draw a green rectangle around each one, at the coordinates and dimensions of the detected face.

[Screenshot: the face tracking example drawing a green rectangle around a detected face]

With a basic understanding of how this face tracking sketch worked, it was time to start experimenting to see what potential it had by adjusting and manipulating the variables of each face. My first test was an adaptation of my earlier sketch which changed an image based on the average location of movement. To do this, I used a series of if statements, just as before, to work out which horizontal third of the screen the tracked face was in (using the x coordinate) and then changed the image appropriately. Using the face as a tracking point gives a much more accurate location for a person than the average area of motion, and works much better for the tracking.

// only runs the code once it sees a face
if (faces.length >= 1) {
  // reduce opacity of the images so the video shows through, and scale them down to fit the screen
  tint(255, 127);
  scale(0.5);
  // track the centre point of the face (x coordinate) instead of the top left corner
  // IF STATEMENTS FOR ONE FACE ONLY
  if (faces[0].x + faces[0].width/2 < width/6) {
    image(one, 0, 0);
  }
  if (faces[0].x + faces[0].width/2 > width/6) {
    if (faces[0].x + faces[0].width/2 < 2*width/6) {
      image(two, 0, 0);
    }
  }
  if (faces[0].x + faces[0].width/2 > 2*width/6) {
    image(three, 0, 0);
  }
}

[gfycat data_id=”ShabbyFrenchClownanemonefish”]

The main problem with this is that it only works for one face at a time: it changes the image based on the location of the first face it sees and ignores any others. The sketch checks whether there is one or more faces, and if there is, it displays an image overlaid on the video dependent on the coordinates of the first face in the array. Obviously this wouldn’t work too well in a public space, as there is often more than one person to track, and it would only be affected by the first face it sees each frame.

Before starting another adaptation I knew it would probably suffer the same fate as before, with the fatal flaw of only being able to interpret the location of the first face in the array, but I followed through anyway just to see what it would look like. I adapted my experiment with a flock of agents tracking a colour so that they would track faces instead; it seemed simple enough. I created a new tracking point for the flock, the centre of the face tracking square, so that the flock would congregate on top of the face.

// centre point of the first detected face, used as the flock's target
int faceX = faces[0].x + faces[0].width/2;
int faceY = faces[0].y + faces[0].height/2;

[gfycat data_id=”OldfashionedConsiderateBackswimmer”]

A potential development of this could be having multiple flocks of agents, one to follow each face, but this would require a lot more computing power (the sketch already struggles to run on my laptop) and would take up a lot of valuable screen real estate with so many agents on screen.
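
If I did try it, a rough sketch might look something like this, assuming a Flock class along the lines of the Nature of Code example with a method for seeking a target (the run(x, y) signature here is my own assumption, not the example’s actual API):

// Hypothetical: one flock per detected face (untested sketch)
ArrayList<Flock> flocks = new ArrayList<Flock>();

// in draw(), after opencv.detect():
while (flocks.size() < faces.length) {
  flocks.add(new Flock());      // add a flock for each newly seen face
}
for (int i = 0; i < faces.length; i++) {
  float fx = faces[i].x + faces[i].width/2;
  float fy = faces[i].y + faces[i].height/2;
  flocks.get(i).run(fx, fy);    // each flock seeks the centre of its own face
}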

Due to my apparent interest in interpreting the video feed, the OpenCV library appeals to me a lot and is something I want to develop further. Having the ability to track people and faces quite accurately seems useful for creating an interactive graphic for a public space, as it can follow people whether or not they actively decide to interact with it. The array of faces gives me the x & y coordinates, width and height of each face, along with how many faces are detected at the time. These variables give me a lot of scope to experiment with and find an interesting and unique interaction to make.

Processing: Colour Tracking

On Daniel Shiffman’s GitHub I also found an example for colour tracking. The code compares the colour of each pixel to the colour the user tells it to track, finds the closest match, and draws an ellipse on it to show where it is.
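
Condensed down, the matching idea looks roughly like this (my own paraphrase of the example; trackColour is an assumed variable holding the target colour):

// find the pixel whose colour is closest to trackColour (sketch, untested)
float closestDist = 999999;
int closestX = 0;
int closestY = 0;
video.loadPixels();
for (int x = 0; x < video.width; x++) {
  for (int y = 0; y < video.height; y++) {
    color c = video.pixels[y * video.width + x];
    float d = dist(red(c), green(c), blue(c),
                   red(trackColour), green(trackColour), blue(trackColour));
    if (d < closestDist) {   // keep the best match seen so far
      closestDist = d;
      closestX = x;
      closestY = y;
    }
  }
}
ellipse(closestX, closestY, 16, 16); // mark the best match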

Here’s an example of the colour tracking; I made it track the green of a highlighter. There are a few issues with this tracking: it tracks an exact colour value, so sometimes, due to differences in lighting and such, it can’t see the colour anymore and stops tracking. It also jumps around a lot as it tries to find the best match for the colour it’s looking for. To track well, the colour needs to be very different from everything else in the picture, otherwise it has trouble following it and jumps around a lot.

[gfycat data_id=”AptMemorableGreatargus”]

Again I experimented with where I could go with colour tracking, using the coordinates it gives for other things. While looking for inspiration, I stumbled across another of Shiffman’s examples, this time from his book The Nature of Code. The example I found had an array of agents which autonomously followed the mouse around the screen. They also have a behaviour to avoid each other, so the agents don’t end up on top of one another, which makes the movement look more natural.

I combined the colour tracking example and this flock example so that the flock of agents follows a colour around the screen. For this I used my trusty green highlighter again, as it stands out against the other colours in my room and won’t confuse the tracking.

[gfycat data_id=”ObedientElderlyKawala”]

My next idea was to see how well it could track skin tones, as then it could be used to follow people. It does quite a good job of finding the colour, but it jumps a lot between areas of skin such as the face, arms and hands, which separates the flock and doesn’t really track very well. It also wouldn’t do too well if there were a lot of people, as there would be too many possible colours to follow. For my example I made it follow my own skin tone, which was all fine and well, but someone with a different skin colour wouldn’t be followed at all, which clearly isn’t acceptable.

[gfycat data_id=”AmusingPastDrake”]

In the space I don’t think the colour tracking will work very well, due to how busy the space is and its limited tracking abilities. However, the flock which follows something could be a good starting point for development, looking into the realms of augmented reality, which could be an interesting route to follow.

 

Processing: Interpreting Motion

On GitHub, I stumbled across a repository with updated examples from Daniel Shiffman’s book, Learning Processing. One of the examples was a basic motion sensor using the camera feed. The basic notion of how it works is that it compares the colour of each pixel in the current frame to the previous frame and works out the difference. The differences across all the pixels are summed to give a value for the total amount of motion seen, and this total is divided by the number of pixels to get the average amount of motion, a single variable which can then be used.
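
Condensed into a few lines, the idea looks roughly like this (my own paraphrase, not the exact example code; prevFrame is assumed to be a PImage holding a copy of the previous frame):

// frame differencing sketch (untested): sum the colour difference per pixel
float totalMotion = 0;
video.loadPixels();
prevFrame.loadPixels();
for (int i = 0; i < video.pixels.length; i++) {
  color current = video.pixels[i];
  color previous = prevFrame.pixels[i];
  totalMotion += dist(red(current), green(current), blue(current),
                      red(previous), green(previous), blue(previous));
}
float avgMotion = totalMotion / video.pixels.length; // one number per frame
// remember this frame for the next comparison
prevFrame.copy(video, 0, 0, video.width, video.height,
               0, 0, video.width, video.height);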

In the example the variable is used to change the size of an ellipse in the centre of the screen, but obviously this isn’t very interesting, so I experimented with using the variable to control different things. In the log I printed the value for the average motion to get an idea of its range. The value seemed to stay between 0 and ~25, occasionally going up to 30 if I moved around a lot.

To use this value for certain parameters I created a new variable to store a mapped version of the average motion.

int FILTER = (int) map(avgMotion, 0, 25, 1, 255); // remap average motion (0-25) to a tint value (1-255)

The line above maps the value of avgMotion from the range 0-25 to the range 1-255, so I could use it to apply a tint to the video feed. When there is more motion the tint increases from 1 (near black) up to 255 (white, or clear), which makes it look as if the brightness of the video changes.
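
Applying it is then just a couple of lines (my own condensed version):

tint(FILTER);       // brightness follows the amount of motion
image(video, 0, 0); // draw the tinted video feed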

[gfycat data_id=”CarefreeCheerfulAntlion”]

I also experimented with filters, in this case a blur, mapping the value to between 0 and 16 for the amount of blur in pixels.
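
Roughly (again my own condensed version, not the exact code I ran):

int blurAmount = (int) map(avgMotion, 0, 25, 0, 16); // more motion = more blur
image(video, 0, 0);
filter(BLUR, blurAmount); // blur everything drawn so far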

[gfycat data_id=”SaltyElegantChinchilla”]


After experimenting with this example, I found another motion-related one which tracked the average area of motion rather than just the amount of motion. The example used an ellipse to show what it interprets as the average area of motion; if you stayed still and waved your hand, it would follow the hand. Again I wanted to see what I could do with this type of motion interpretation, finding something to change based on location. In the end I came up with the idea of splitting the screen into thirds and displaying 3 different images based on where the movement was.

To do this I used if statements to compare avgX (the x coordinate of the average area of motion) against the width of the screen split into thirds. If it fell into a given third, a certain image would appear, at a reduced opacity so that the video could still be seen behind it.

 

if (avgX < width/3) {
  tint(255, 127); // reduce opacity
  image(one, 0, 0);
}
if (avgX < (2*width)/3) {
  if (avgX > width/3) {
    tint(255, 127);
    image(two, 0, 0);
  }
}
if (avgX > (2*width)/3) {
  tint(255, 127);
  image(three, 0, 0);
}

Here is the result of my experimentation:

[gfycat data_id=”WillingGoodnaturedBluebird”]

One obvious limitation of tracking the average area of motion is that if there is movement on both sides of the screen, the average ends up somewhere in the middle, limiting the usefulness of the tracking.

I considered what would happen if these were put into the Weymouth House foyer and concluded that they wouldn’t really be much use. As it is quite a busy space, the amount of motion would constantly be quite high, only occasionally dropping down. As for tracking the area of motion, that isn’t much use either, as there would constantly be people walking in both directions during the day, making the tracking pretty pointless. If there was a smaller area without much passing traffic, these could work a lot better and be developed further, but for now I think more experimentation is needed to find a better direction to head in.