User Testing

Today I was finally able to test my face swapping in the foyer space. To do this I plugged my laptop into one of the screens. The image below shows the basic setup: my face swapping was displayed on the bottom monitor, and the webcam (circled so you can see where it is) was placed just above it. The camera was directed towards the main pathway through the building so it could see the faces of people leaving the foyer and walking towards where the camera and screen were. I placed the camera on the right edge of the screen, closest to where people would be walking past, giving it the best opportunity to pick up faces, and just above the screen so that it would be roughly at eye level with the majority of people passing through the space. From my testing it was clear this was definitely the ideal position, as it captured full frontal faces with the camera directly in front of the audience, just like in my initial testing on my laptop.

The screen ran my project full screen at 720p (1280 x 720px), which was a good enough resolution as the video was clear and easy to see. Running at 720p rather than 1080p possibly worked to my advantage, as I had a big problem with the code thinking displays on the wall were faces, but more about that later.

A video compilation of all my testing is at the bottom of this post if you just want to skip to the point.

workInSpace

When it was first up and running on the screen, the face detection was working, just not very well. It was struggling to detect any faces that were more than about 5 metres from the camera and therefore wasn't swapping them. I found this problem was caused by the scale being too high. If you can't remember, my tracking uses two different videos: the video which is displayed, and a smaller scaled-down version which the OpenCV tracking searches, as this gives much better performance and frame rate. The scale was set to 4, meaning the video for OpenCV was a quarter of the size of the original, and it turned out this was too small for faces in the distance to be detected. I never encountered this problem in my initial laptop testing, as the distance between the audience and the camera was never really that large; fortunately, the way I wrote the code made this very easy to change. I reduced the scale to 2 so OpenCV's video was half the size of the original; this solved my problem and allowed faces in the distance to be tracked, without affecting the performance of the sketch.
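To illustrate the scale relationship, here is a minimal Python sketch (my actual project is a Processing sketch, and all names here are hypothetical, but the arithmetic is the same): detection runs on the downscaled frame, and each detected box is multiplied back up to display coordinates.

```python
# Hypothetical illustration of the two-video tracking setup:
# OpenCV searches a downscaled copy of the frame, then the detected
# boxes are scaled back up to match the full-size display video.

def tracking_size(display_w, display_h, scale):
    """Size of the downscaled frame the tracker actually searches."""
    return display_w // scale, display_h // scale

def to_display_coords(box, scale):
    """Map an (x, y, w, h) detection box back to display coordinates."""
    x, y, w, h = box
    return (x * scale, y * scale, w * scale, h * scale)

# At 1280x720, scale=4 gives the tracker a 320x180 frame, so a distant
# face occupies very few pixels; scale=2 gives 640x360, doubling the
# face's size in pixels while still being much cheaper than full 720p.
print(tracking_size(1280, 720, 4))  # → (320, 180)
print(tracking_size(1280, 720, 2))  # → (640, 360)
```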

However, this larger tracking video exaggerated another small problem I experienced while testing in the space. As shown circled in the image below, the face detection thought that some of the displays on the wall in the background were faces, and would then capture them and swap them about with the actual faces. With the higher resolution video, the tracking was almost certain those displays were faces, no matter how much I shouted at it to stop. Whilst in my eyes the code wasn't working properly, a few people passing through the space found it hilarious, as they saw their face mounted on a wall while a poster appeared where their face should be.
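A code-side mitigation I didn't try at the time would be to reject any detection that falls inside a known static region, like the wall displays. A rough Python sketch of that idea (all names hypothetical, zones marked out by hand once in tracking-frame coordinates):

```python
def overlaps(a, b):
    """True if two (x, y, w, h) rectangles intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def drop_static_detections(boxes, exclusion_zones):
    """Discard any detection that touches a known false-positive region
    (e.g. a wall display, or the Costa coffee machine)."""
    return [box for box in boxes
            if not any(overlaps(box, zone) for zone in exclusion_zones)]
```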

Screen-Shot-2015-01-15-at-15.29.50

My quick and easy solution to this was to get a few pieces of paper and stick them over the wall displays in an attempt to cover them up. Looking back at it, I should have got some larger paper to cover more of them, as it would still occasionally swap with the wall, just not as much. There was also the interesting problem (shown below) that the coffee machine at Costa was seen as a face, though there wasn't much I could do to cover that up as the Costa employees need to use it. Circled in yellow are a couple of people I spotted who seemed pretty interested in and entertained by what was happening on the screen while they were in the queue for Costa, even though their faces weren't being swapped.

IMG_1264

Here (below) is a picture of the audience of the piece. It shows two people walking towards where the screen and camera were, and two people waiting in the Costa queue looking around. They all seem interested in what was happening on screen, and it definitely drew in people's attention.

IMG_1259

There was one notable group of girls who actually stopped in front of the screen to look at and interact with it. Unfortunately, for some reason my laptop crashed and froze the face swapping video while the girls were looking at it, and I wasn't quick enough to record a video of them interacting while it was running properly. I still managed to capture a reaction to the frozen video, which was still very positive, as they had stopped and were pointing and enjoying the face swapping. Initially they saw the screen just as their reflection and proceeded to adjust their hair and general appearance, but once the face swapping kicked in and their faces were jumping between people, they found it funny and were pointing and interacting further with the piece. Below is the image that was left on the screen when my project froze and crashed.

Screen-Shot-2015-01-15-at-12.25.10-(2)

These reactions to what was on the screen caused people walking in the opposite direction to stop and turn around to see what everyone was so interested in, which I hadn't even considered would happen. A few people sitting in the space were also keeping an eye on the screen and watching the reactions of people as they walked through. While observing people over an extended period of time, I noticed that a lot of people walking through on their own would glance at the screen and look rather confused, because nothing was happening: when only one face is seen, no face swapping takes place, so it's just a standard video of the space playing. A lot of people walking through were looking down at their phones and didn't even notice that anything was happening. Not much can be done about these people, as they obviously have far more important things to look at on their phones than my work on the screen. On many occasions people passed through the space too quickly for the face tracking to notice them, and it would instead be interested in the 'faces' it saw on the walls. These people would occasionally glance over at the screen while it quickly tried to detect their faces, but a lot of the time their faces weren't swapped.

As well as changing the scale of the video, I made a few other changes during my testing and observations. With the face tracking constantly thinking there were faces on the walls, I increased the number of swapped faces from 4 to 6 to accommodate the false positives. This gave the tracking a better opportunity to swap the faces of people rather than wall displays. It involved creating a couple of new PImages and a couple of new blocks of code to do the resizing, masking and placing of the extra faces. Fortunately, capturing, masking and resizing 6 faces at once didn't affect the performance of the sketch like I thought it would, so I was still able to run it at the full frame rate, looking nice and smooth.
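The swap itself can be thought of as a permutation over however many faces are currently tracked. Here is a minimal Python sketch of one way to assign them (hypothetical; my actual sketch may pair faces differently): rotate the captured faces by one slot, so nobody is drawn with their own face whenever two or more are tracked.

```python
def swap_assignment(n_faces):
    """For each detected face slot i, return the index of the face that
    gets drawn in its place. Rotating by one guarantees no face stays on
    its own head whenever two or more faces are tracked."""
    return [(i + 1) % n_faces for i in range(n_faces)]

# With 6 slots, face 0 wears face 1's capture, face 1 wears face 2's,
# and so on, with the last slot wrapping around to face 0.
print(swap_assignment(6))  # → [1, 2, 3, 4, 5, 0]
```

Note that with a single tracked face the rotation maps it back to itself, which matches the behaviour I observed: a lone passer-by just sees an unswapped video of the space.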

I overheard a comment from someone walking past who said "I think it's augmented reality", which is something I never actually considered while making this project. Reflecting on this, I can see where he's coming from, and I will research and write about this idea in the future.

After putting together the video from today's testing, I noticed that I didn't get much footage of the swapping actually working properly on people as they walked past. Thinking about it further, I realised that around 65% of the time the swapping wasn't working properly: it was just making faces jump around on the walls rather than causing the disruptions to identity and representation that I wanted. Tomorrow I'm going to go back to the space and do some more testing. There is another screen on the other side of the wall from the one I used today, facing the entrance to the building, and I hope to test there instead. The main benefit of this screen is that the background isn't as busy: the camera won't be looking into the foyer space, but towards the doors. My aim is to slow down or stop people as they enter the building, so the face swapping will hopefully work on their actual faces and not on the wall behind.
