Category Archives: Testing and Evaluation

My Final Code

As required, here is my final code.

The one I tested with can be found here on my GitHub:

https://github.com/kuuurttyy/Face-Swapping

My new, updated and enhanced code, which I wrote about here, can also be found on GitHub:

https://github.com/kuuurttyy/newFaceSwappingShortCode

As a backup I’ll post them both here (better safe than sorry, right?).


Evaluation

With all my testing done, it’s time to draw together some conclusions based on my work. The brief was to:

Create a piece of interactive information design for a shared public space, which is intended to elucidate/explain an idea or concept you perceive as key to our 21st century media experience.

When I came up with my final idea of face swapping, I laid out some goals regarding what I wanted to achieve visually with the swapping itself, and theoretically as an embodiment of a 21st century media experience.

When I started I broke the process down into 4 steps (a stripped-down sketch of how they fit together follows the list):

  1. Track the faces on the video feed.
  2. Capture the tracked faces and save them as an image within Processing.
  3. Resize the images to match the face they will be swapped with.
  4. Display the resized faces on top of the video feed in the appropriate location.
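
As promised, here is a minimal version of that pipeline in Processing, using the OpenCV for Processing library. This is a simplification for illustration (the sizes and variable names are examples rather than my exact code):

    import gab.opencv.*;
    import processing.video.*;
    import java.awt.Rectangle;

    Capture video;
    OpenCV opencv;

    void setup() {
      size(1280, 720);
      video = new Capture(this, width, height);
      opencv = new OpenCV(this, width, height);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
      video.start();
    }

    void draw() {
      if (video.available()) video.read();
      image(video, 0, 0); // the live feed underneath everything

      // Step 1: track the faces on the video feed
      opencv.loadImage(video);
      Rectangle[] faces = opencv.detect();

      // The swap only makes sense with at least 2 faces in view
      if (faces.length >= 2) {
        // Step 2: capture the tracked faces as images within Processing
        PImage faceA = video.get(faces[0].x, faces[0].y, faces[0].width, faces[0].height);
        PImage faceB = video.get(faces[1].x, faces[1].y, faces[1].width, faces[1].height);

        // Step 3: resize each image to match the face it will be swapped with
        faceA.resize(faces[1].width, faces[1].height);
        faceB.resize(faces[0].width, faces[0].height);

        // Step 4: display the resized faces on top of the feed, swapped
        image(faceB, faces[0].x, faces[0].y);
        image(faceA, faces[1].x, faces[1].y);
      }
    }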

My final version was able to achieve all the steps, plus one more (the masking), which made the end project even better. The masking addition meant my face swapping was better able to play with people’s identities and representations, as the blurring of the swapped faces made them blend in far better than they ever did before.
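
For anyone wondering what the masking involves, the core of it is a soft-edged ellipse used as an alpha mask, roughly like this (the ellipse proportions and blur amount are example values rather than the exact ones I settled on):

    // Build a soft-edged elliptical mask the same size as a captured face,
    // then apply it so the edges fade out instead of ending in a hard rectangle.
    PImage feather(PImage face) {
      PGraphics m = createGraphics(face.width, face.height);
      m.beginDraw();
      m.background(0);   // black = fully transparent
      m.noStroke();
      m.fill(255);       // white = fully opaque
      m.ellipse(face.width/2, face.height/2, face.width*0.9, face.height*0.9);
      m.filter(BLUR, 8); // the blur is what creates the soft blend
      m.endDraw();
      face.mask(m);      // mask() uses the image's brightness as alpha
      return face;
    }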

Based on my user testing, I feel I was definitely able to achieve my goal of getting some playful reactions. I wanted to interrupt people passing through the foyer space, making them slow down or stop to interact with my piece. The camera-based interaction was enough to make this happen, as people stopped to play around with the face swapping, altering their usual behaviour in the space and making them stand out. In my post about Goffman’s Performance Theory, I mentioned how, in theory, face swapping should alter front stage performances. The face swapping on screen was able to successfully engross (some) people in their altered representations and make them forget about their performance and actions in the actual foyer space. It might not seem like much, but it takes quite a lot to get some people to deviate from social norms in public, even if not everyone was interested.

The interactive information design was supposed to reflect ideas of our 21st century media experience. While I was focused on playing with identity and representation, I don’t think these ideas were as apparent to the audience as they were to me during my observations. However, the idea of Augmented Reality was apparent to at least one audience member, and it is quite clear in my real-time face swapping.

If I were to do this again, or had longer to work with, I would first like to leave the face swapping running for a longer period of time with a static camera set up to record potential interactions. Looking back, my testing wasn’t really a true test of my interactive information graphic, as there were constantly people hovering about the screens, myself included, as we waited in turn to use them and tried to record people’s interactions. This could have put some people off interacting with it, since having so many observers puts a lot of pressure on them. On the other hand, it could also have increased interest in the screens, as a crowd of people eagerly hanging around makes passers-by curious about what they’re doing.

I would love to try this out in another location too, perhaps one where people would be waiting around more, giving them a longer opportunity to notice and interact with my face swapping. For example, in the foyer space there is a screen in front of the elevators, and that could be a really interesting place to try it. As people wait for the elevator to come, they would be facing in the direction of the screen (as it’s on the same wall) and therefore should be more likely to notice the face swapping.

From my experiments and testing, the face swapping works a lot better with people who are sitting or standing still rather than those passing through a space. I recently had an opportunity to try it out with some friends at Exeter University and quite possibly got the best reaction out of all my tests. They loved seeing what they would look like as their friends; the masking made some of the faces fit in so well with the alternate bodies that they sometimes couldn’t work out whose face was where. Not being able to recognise your own face says quite a lot about altered representations and just how well my piece worked to achieve this. There’s something unnerving and disturbing about seeing someone else’s face placed and blended perfectly onto your head.


User Testing Day 2

Today I did another session of testing my face swapping in the space, with a few variations. This time I was using a different screen in a different location within the foyer. Below is an image of the location of the screen (I used the bottom screen, as someone else used the top one to test theirs). The screen was facing the entrance to the media building, and the background scene is far less cluttered than when using the first screen location. While testing in this new location I added a snippet of code which allowed me to save frames while holding down the spacebar on my laptop. This let me capture what was on the screen directly rather than videoing it, giving much better quality footage to show.
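
The snippet itself is tiny: something along these lines, using Processing’s built-in keyPressed flag and saveFrame() function (the folder and file names are just examples):

    void draw() {
      // ... all the face swapping as normal ...

      // Hold the spacebar to dump numbered frames straight from the sketch
      if (keyPressed && key == ' ') {
        saveFrame("frames/frame-####.png"); // #### becomes the frame number
      }
    }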

I have included gifs of each part of the video as I talk about them, and the full video of my testing can be found at the bottom of this post. The gifs have controls so you can slow them down or pause them to better see the face swapping if need be.

IMG_1281

When I was first setting up in the space, with the code open and the face swapping running to make sure it was working properly, a person passing by stopped to ask me a few questions about the work. He said he was a Software Analyst and was interested in the language and technologies used to make it. I explained how it was using an adapted version of the OpenCV library within Processing, and he seemed very interested in how it all worked. When looking at the real-time face swapping he said that it works ‘surprisingly well’ and that I’d done ‘a good job’, which was a great thing to hear about a project I’ve been working on for quite a long time.

I did my testing a bit later in the day than yesterday, and as a result there were far fewer people passing through the building at a time, so there was a lot of waiting around for people to come through so I could capture the interactions. Another thing to note here is the glare coming from the glass doors of the entrance. Having an area disproportionately brighter than the rest of the room meant the camera had a bit of trouble detecting faces as they first came through the door, and it subsequently struggled to focus the image. However, once audience members were a bit further into the building, the face tracking was able to detect and capture their faces as expected.

I noticed that as people entered the building they would glance over at the screen, and if they looked for long enough it was able to capture their faces and swap them while they were still walking past. This solved a problem I had yesterday, where people would look over but it wasn’t able to detect people passing through the space quickly enough.

[gfycat data_id=”EasyUnhealthyHaddock”]

While the tracking worked significantly better than before, there were still a couple of faces detected which weren’t actually there, as shown in the gif below. A lot of the people who walked through the space were on their own, so I wasn’t able to face swap them, which was quite disappointing. There was also the issue that, even with a group of people, it requires at least 2 people to look over at the screen/camera for the face swap to happen, so there were a fair few missed opportunities when only one person was interested in the screen.

[gfycat data_id=”UncomfortableFlimsyAlaskanhusky”]

As with yesterday, it was interesting to see people walking in the opposite direction (toward the exit) stop and turn around to look at the screen. The video I captured shows a group of people walking towards the exit, and the last 2 people in the group stop to have a little play around with the face swapping, which was good to watch. I like how the piece is eye-catching and engaging enough for this to happen.

[gfycat data_id=”BountifulRapidBluefintuna”]

My favourite part of my testing is the two people in the video below. As they were walking through, I overheard one of them say “what the f*ck is that?!” while stopping to see what was happening on the screen. He then made his friend come back, and they stood around for a while playing with it, moving and making funny faces while watching the screen. It was interesting to see them tilting and turning their heads to test the limits of the face swapping, to see if it would still work. It’s also interesting how one of them actually hid behind his jacket and said “I don’t like it” when he noticed what was happening on the screen. Seeing these two interact and play with my piece made it all worth it in the end, as it was great to see the kind of reaction I was hoping for from the audience as my face swapping messed with their image and representation.

[gfycat data_id=”ClearcutPlasticBlackbuck”]

A few other things to note about the testing. A lot of the attention could be due to there being a table with people sitting around it in a place where there usually isn’t one, which would definitely attract more attention than if it wasn’t there. There was also someone else testing their work at the same time, who can be seen in some of the clips videoing their work on the screen. This could also have attracted more attention, especially from people walking out of the building, as they could be interested in what she was looking at and filming.

If time permitted, and there weren’t other people waiting to test their work on the screens, it could definitely be worth leaving the face swapping up for a longer period of time, without the laptop out and all the people standing around, which attracts too much attention and adds a bias to the testing. Also, with someone else doing their testing at the same time, people could actually have been looking at her work on the screen above rather than mine.

In another post I will do some further analysis of my testing and compare it to my aims and media concepts used in the creation of this project.

User Testing

Today I was finally able to test my face swapping in the foyer space. To do this I plugged my laptop into one of the screens. The image below shows the basic setup: my face swapping was displayed on the bottom monitor, and the webcam was placed just above it (circled so you can see where it is). The camera was directed towards the main pathway through the building, so it could see the faces of people leaving the foyer space and walking towards where the camera and screen were. The camera was placed on the right edge of the screen, closest to where people would be walking by, giving it the best opportunity to pick up faces. It was also placed just above the screen so that it would be roughly at eye level with the majority of people passing through the space. From my testing it was clear this was definitely the ideal position, as it could perfectly capture full frontal faces directly in front of the audience, just like in my initial testing on my laptop.

The screen ran my project at 720p (1280 x 720px) full screen, which was a good enough resolution as the video was clear and easy to see. Running at 720p rather than 1080p possibly worked to my advantage, as I had a big problem with the code thinking displays on the wall were faces, but more about that later.

A video compilation of all my testing is at the bottom of this post if you just want to skip to the point.

workInSpace

When it was first up and running on the screen, the face detection was working, just not very well. It was struggling to detect any faces that were more than ~5 metres away from the camera, and therefore wasn’t swapping them. I found this problem was caused by the scale being too high. If you can’t remember, my tracking uses two different videos: the video which is displayed, and a smaller, scaled-down version which the OpenCV tracking uses, as this gives much better performance and frame rate. The scale was set to 4, meaning the video for OpenCV was a quarter of the size of the original, and it turned out this was too small to be able to detect the faces in the distance. I had never encountered this problem in my earlier testing, as the distance between the audience and the camera was never really that large; however, the way I wrote the code made this very easy to change, which I am thankful for. I reduced the scale down to 2 so OpenCV’s video was only half the size; this solved my problem and allowed faces in the distance to be tracked, without affecting the performance of the sketch.
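
In code terms, the two-video setup looks roughly like this (simplified, but it shows why reducing the scale from 4 to 2 is a one-variable change):

    Capture video;
    OpenCV opencv;
    int scale = 2; // was 4: OpenCV tracks a video 1/scale the size of the original

    void setup() {
      size(1280, 720);
      video = new Capture(this, width, height);
      // OpenCV only ever sees the small version
      opencv = new OpenCV(this, width/scale, height/scale);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
      video.start();
    }

    void draw() {
      if (video.available()) video.read();
      image(video, 0, 0);

      // Detect on a shrunken copy for speed...
      PImage small = video.get();
      small.resize(width/scale, height/scale);
      opencv.loadImage(small);
      Rectangle[] faces = opencv.detect();

      // ...then scale the rectangles back up to full-resolution coordinates
      for (Rectangle f : faces) {
        f.x *= scale;
        f.y *= scale;
        f.width *= scale;
        f.height *= scale;
      }
      // the capturing and swapping then happens on the full-size video as before
    }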

However, this larger tracking video exaggerated another small problem I experienced while testing in the space. As shown circled in the image below, the face detection thought that some of the displays on the wall in the background were faces, and would then capture them and swap them about with the actual faces. With the higher resolution video, the tracking was almost certain those displays were faces, no matter how much I shouted at it to stop. Whilst in my eyes the code wasn’t working properly, a few people passing through the space found it hilarious as they saw their face mounted on a wall, with a poster where their face should have been.

Screen-Shot-2015-01-15-at-15.29.50

My quick and easy solution to this was to get a few pieces of paper and stick them over the wall displays in an attempt to cover them up. Looking back, I should’ve got some larger paper to cover them up a bit more, as faces would still occasionally swap with the wall, just not as much. There was also the interesting problem (shown below) that the coffee machine at Costa was also seen as a face, though there wasn’t much I could do to cover it up as the Costa employees need to use it. Circled in yellow are a couple of people I spotted who seemed pretty interested in and entertained by what was happening on the screen while they were in the queue for Costa, even though their faces weren’t being swapped.

IMG_1264

Here (below) is a picture directly of the audience of the piece. It shows 2 people walking towards where the screen and camera were, and 2 people waiting in the Costa queue looking around. They all seem to be interested in what was happening on screen, and it definitely drew in people’s attention.

IMG_1259

There was one notable group of girls who actually stopped in front of the screen to look at and interact with it. Unfortunately, for some reason my laptop crashed and froze the face swapping video while the girls were looking at it, and I wasn’t quick enough to record a video of them interacting while it was running properly. I still managed to capture a reaction to the frozen image, which was still very positive, as they had stopped and were pointing and enjoying the face swapping. Initially they saw it just as their reflection and proceeded to adjust their hair and general appearance, but once the face swapping kicked in and their faces were jumping between people, they found it funny and were pointing and interacting further with the piece. Below is the image that was left on the screen when my project froze and crashed.

Screen-Shot-2015-01-15-at-12.25.10-(2)

These reactions to what was on the screen caused people walking in the opposite direction to stop and turn around to see what everyone was so interested in, which I didn’t even consider would happen. A few people who were sitting in the space were also keeping an eye on the screen and watching the reactions of people as they walked through. While observing people over an extended period of time, I noticed a lot of people walking through on their own would glance at the screen and look rather confused, because nothing was happening. When only one face is seen, it doesn’t do any face swapping, so it’s just a standard video of the space playing. A lot of people walking through were looking down at their phones, so they didn’t even notice that anything was happening. Not much can be done about these people, as they obviously have far more important things to look at on their phones than my work on the screen. On many occasions people were passing through the space too quickly: the face tracking wouldn’t notice them fast enough and was more interested in the ‘faces’ it saw on the walls. The people passing through speedily would occasionally glance over at the screen while it tried to detect and place their faces, but a lot of the time their faces weren’t swapped.

As well as changing the scale of the video, a few other changes were made as I did my testing and observations. With the face tracking constantly thinking there were faces on the walls, I increased the number of swapped faces from 4 to 6 to accommodate. This gave the tracking a better opportunity to swap the faces of people rather than wall displays. This involved creating a couple of new PImages and a couple of new blocks of code which just did the resizing, masking and placing of all the faces. Fortunately, capturing, masking and resizing 6 faces at once didn’t affect the performance of the sketch like I thought it would, so I was still able to run it at the full frame rate, looking nice and smooth.
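
My actual change was just duplicating those capture, mask and place blocks, but the same thing can be written more compactly by holding everything in arrays. Here is a sketch of the idea, reusing the feather() helper from earlier (again illustrative rather than my exact code):

    int maxFaces = 6; // was 4; bumped up to outnumber the 'faces' on the walls
    PImage[] captured = new PImage[maxFaces];

    // assumes the global Capture video and feather() from the earlier sketches
    void swapFaces(Rectangle[] faces) {
      int n = min(faces.length, maxFaces);
      if (n < 2) return; // one face on its own has nothing to swap with

      // Capture and feather every detected face first
      for (int i = 0; i < n; i++) {
        Rectangle f = faces[i];
        captured[i] = feather(video.get(f.x, f.y, f.width, f.height));
      }

      // Then draw each face at the next person's location, wrapping around
      for (int i = 0; i < n; i++) {
        Rectangle target = faces[(i + 1) % n];
        captured[i].resize(target.width, target.height);
        image(captured[i], target.x, target.y);
      }
    }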

There was a comment overheard from someone walking past: he said “I think it’s augmented reality”, which is something I never actually considered while making this project. Reflecting on this, I see where he’s coming from with his thinking, and will research and write about this idea in the future.

After putting together the video from today’s testing, I noticed that I didn’t get much footage of the swapping actually working properly on people as they walked past. When I thought about it further, I realised that around 65% of the time the swapping wasn’t actually working properly; it was just making faces jump around on the walls rather than causing the disruptions to identity and representation that I wanted. Tomorrow I’m going to go back to the space and do some more testing. In the space there is another screen, on the other side of the wall from the screen I used today, which faces the entrance to the building, and I hope to test it there instead. The main benefit of this one is that the background isn’t as busy, as the camera will be looking towards the doors rather than into the foyer space. My aim here is to slow down or stop people as they enter the building, and hopefully the face swapping will actually work on their faces and not on the wall behind.

Testing at Christmas

I took Christmas as a good opportunity to try out my face swapping project with a different audience. Up till now (or then) I had only tested it with people on the course, so they knew and understood the idea behind it and had seen it develop over time. My new audience included my parents and grandparents, which is a big change in demographic from my usual testing due to the large age difference.

Unfortunately I didn’t take any screenshots of the testing, but you’ll have to take my word for it that it actually happened and I’m not lying about all this. As a whole the face swapping went well; it managed to confuse and impress my grandparents while swapping 4 faces at once. The masking worked really well, sufficiently blurring the edges of the captured faces so they blend into their new hosts. This actually made them feel really self-conscious, as they saw their old wrinkly faces (sorry) contrasted on a younger, less wrinkly head and body. So from this initial testing I can quite safely say it does what I intended and brings up questions about identity and representation, as audience members are made to feel uncomfortable when their usual image is disrupted.

A few criticisms here about my work. The swap and masking work best when people keep a blank(ish) facial expression (i.e. not smiling too widely or talking), as the shape of the captured face doesn’t change. Otherwise, the face underneath the swapped one can often be seen around the bottom or edges of the swapped face, which pulls the audience out of the experience and ruins the fun. I don’t think there is much I can do about this, due to how the face tracking works using a predefined rectangle, but I could give it a go and try to improve it. There is also an issue when swapping wider faces with thinner faces, which has pretty much the same problem, though I’m not sure what can be changed to fix it.

The testing was done by holding my laptop, so it wasn’t really an ideal representation of how it would work in the foyer space. One big problem was that tilted faces/heads couldn’t be detected. I think this was largely a problem because we were seated and wanted to lean in to try and fit within the sight of the camera. This shouldn’t really be a problem when people are walking through the space, unless people are walking around with their heads tilted at weird angles.