Category Archives: Brief

My Final Code

As required, here is my final code.

The one I tested with can be found here on my Github

My new, updated and enhanced code which I wrote about here can also be found on Github

As a backup I’ll post them both here (better safe than sorry, right?).




With all my testing done, it’s time to draw together some conclusions based on my work. The brief was to:

Create a piece of interactive information design for a shared public space, which is intended to elucidate/explain an idea or concept you perceive as key to our 21st century media experience.

When I came up with my final idea of face swapping, I laid out some goals with regards to what I wanted to achieve visually with the swapping itself, and theoretically with an embodiment of a 21st century media experience.

When I started I broke the process down into 4 steps:

  1. Track the faces on the video feed.
  2. Capture the tracked faces and save them as an image within Processing.
  3. Resize the images to match the face they will be swapped with.
  4. Display the resized faces on top of the video feed in the appropriate location.
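
The four steps above map fairly directly onto Processing’s draw loop. As a pseudocode-level sketch only (the `opencv` object and `captureFace()` helper here are placeholders for illustration, not my actual code):

```java
// Pseudocode sketch of the four-step loop for a simple two-face swap.
void draw() {
  image(video, 0, 0);                               // show the live feed
  Rectangle[] faces = opencv.detect();              // 1. track faces in the frame
  if (faces.length >= 2) {
    PImage faceA = captureFace(faces[0]);           // 2. capture each tracked face
    PImage faceB = captureFace(faces[1]);
    faceA.resize(faces[1].width, faces[1].height);  // 3. resize to match the other face
    faceB.resize(faces[0].width, faces[0].height);
    image(faceA, faces[1].x, faces[1].y);           // 4. draw each face in the other's place
    image(faceB, faces[0].x, faces[0].y);
  }
}
```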

My final version was able to achieve all the steps, plus one more (the masking) which made the end project even better. The masking addition meant my face swapping was able to better play with people’s identities and representations, as the blurring of the swapped faces made them blend in far better than they ever did before.

Based on my user testing I feel I was definitely able to achieve my goal of getting some playful reactions. I wanted to interrupt people passing through the foyer space, making them slow down or stop to interact with my piece. The camera-based interaction was enough to make this happen, as people stopped to play around with the face swapping, altering their usual behaviour in the space and making them stand out. In my post about Goffman’s Performance Theory, I mentioned how, in theory, face swapping should alter front stage performances. Face swapping on screen was able to successfully engross (some) people in their altered representations and make them forget about their performance and actions in the actual foyer space. It might not seem much, but it takes quite a lot to get some people to deviate from social norms in public, even if not everyone was interested in it.

The interactive information design was supposed to reflect ideas of our 21st century media experience. While I was focused on playing with identity and representation, I don’t think these ideas were as apparent to the audience as they were to me during my observations. However, the idea of Augmented Reality was apparent to at least one audience member and is quite clear in my real-time face swapping.

If I were to do this again, or had longer to work with, I would first like to leave the face swapping running for a longer period of time with a static camera recording potential interactions. I feel my testing wasn’t really a true test of my interactive information graphic, as there were constantly people hovering around the screens, myself included, as we waited in turn to use them and to try and record people’s interactions. This could’ve put some people off interacting with it, as having so many observers puts a lot of pressure on them. On the other hand, it could’ve increased interest in the screens, as a group of people eagerly hanging around makes passers-by curious about what they’re doing.

I would love to try this out in another location too, perhaps one where people would be waiting around more, giving a longer opportunity to notice and interact with my face swapping. For example, in the foyer space there is a screen in front of the elevators, and that could be a really interesting place to try it. As people wait for the elevator to come, they would be facing in the direction of the screen (as it’s on the same wall) and therefore should be more likely to notice the face swapping.

From my experiments and testing, the face swapping works a lot better with people who are sitting/standing still rather than those passing through a space. I recently had an opportunity to try it out with some friends at Exeter University and quite possibly got the best reaction out of all my tests. They loved seeing what they would look like as their friends, and the masking made some of the faces fit in so well with the alternate bodies that they sometimes couldn’t work out whose face was where. Not being able to recognise your own face says quite a lot about altered representations and just how well my piece worked to achieve this. There’s something unnerving and disturbing about seeing someone else’s face placed and blended perfectly onto your head.



Augmented Reality

On my first day of user testing in the space, one of the people passing by commented ‘I think it’s augmented reality’, which is something I never really considered while making it. Augmented Reality takes computer-generated information such as images, audio and video, and overlays it onto a real-time environment (Kipper & Rampolla, 2012, p.1). Augmented Reality is often confused with Virtual Reality, which immerses the user in a synthetic, digital world where they can’t see the real world around them. Augmented Reality instead allows digital objects to be superimposed on or composited with the real world, and can be used to supplement and enhance reality.

In the context of my face swapping, an Augmented Reality is created on screen where people have their faces swapped over. It uses a video feed of the space in front of the screen and superimposes the faces it sees into different locations. The faces are digitally altered before they’re placed back down on screen, their edges blurred in an attempt to blend them into their new location. The face swapping also managed to keep up with the video in real time, be it a bit jittery due to capturing a new face, resizing and masking it 20 times a second.
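
The edge blurring boils down to per-pixel alpha compositing: the greyscale mask is white over the middle of the captured face and fades to black at its edges, and each channel of a face pixel is mixed with the background in proportion to the mask value. Stripped of the Processing API, the per-channel arithmetic looks something like this (a standalone Java illustration, not the project code itself):

```java
public class AlphaBlend {
    // Mix one 8-bit channel of a face pixel over the background, weighted by
    // an 8-bit mask value (255 = fully face, 0 = fully background).
    static int blend(int face, int background, int mask) {
        return (face * mask + background * (255 - mask)) / 255;
    }

    public static void main(String[] args) {
        System.out.println(blend(200, 100, 255)); // centre of the mask: pure face -> 200
        System.out.println(blend(200, 100, 0));   // edge of the mask: pure background -> 100
        System.out.println(blend(200, 100, 128)); // halfway: roughly midway -> 150
    }
}
```

Running this mix over every pixel is what makes a swapped face fade smoothly into its new host instead of sitting on top as a hard rectangle.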

A real-world implementation of Augmented Reality is the Magic Mirror. The Magic Mirror is a digital screen which allows you to try on different clothes and outfits in an Augmented Reality space. It uses a Kinect sensor to track body movements so it can superimpose 3D clothes onto you, letting you move and rotate to see how they would look while you’re wearing them. Using the Kinect it is able to recognise gestures, changing the clothes being modelled with a swipe to the side, or taking a photo when you raise a hand. Obviously this technology is far more advanced than what I’m doing with my face swapping, but it goes to show that the approach has real-world implementations.

I feel that using Augmented Reality is a good idea for creating an interactive information graphic display, as it makes it much harder for people to resist looking at it. Most people are naturally quite narcissistic, in that they can’t resist looking at their own reflection, be it in a mirror, a shop window or on a screen fed from a video camera. The Augmented Reality element of this reflection then makes it more engaging for the user/audience, as their appearance and representation has been manipulated without their explicit consent.

Kipper, G., Rampolla, J., 2012. Augmented Reality: An Emerging Technologies Guide to AR. Elsevier.

User Testing Day 2

Today I did another session of testing my face swapping in the space with a few variations. This time I was using a different screen in a different location within the foyer. Below is an image of the location of the screen (I used the bottom screen as someone else used the top one to test theirs). The screen was facing the entrance to the media building, and the background scene is far less cluttered than when using the first screen location. While testing in this new location I added a snippet of code which allowed me to save the frames as I hold down the spacebar on my laptop. This let me capture what was on the screen directly rather than videoing it, giving much better quality footage to show.
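
The frame-saving snippet only needs Processing’s built-in `keyPressed` flag and `saveFrame()` function; the idea is roughly this (a sketch of the approach, not my exact code):

```java
// At the end of draw(): while the spacebar is held, write the current
// frame to disk. Processing expands #### to a zero-padded frame number.
if (keyPressed && key == ' ') {
  saveFrame("frames/frame-####.png");
}
```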

I have included gifs of each part of the video as I talk about them, and the full video of my testing can be found at the bottom of this post. The gifs have controls so you can slow them down/ pause them to better see the face swapping if need be.

When I was first setting up in the space, with the code open and the face swapping running to make sure it was working properly, a person passing stopped to ask me a few questions about the work. He said he was a Software Analyst and was interested in the language and technologies used to make it. I explained how it was using an adapted version of the OpenCV library within Processing and he seemed very interested in how it all worked. When looking at the real-time face swapping he said that it works ‘surprisingly well’ and that I’ve done ‘a good job’, which was a great thing to hear about a project I’ve been working on for quite a long time.

I did my testing a bit later in the day than yesterday, and as a result there were far fewer people passing through the building at a time, so there was a lot of waiting around for people to come through so I could capture the interactions. Another thing to note here is the glare coming from the glass doors of the entrance. With an area that was disproportionately brighter than the rest of the room, the camera had a bit of trouble detecting faces as people first came through the door, and it subsequently struggled to focus the image. However, once audience members were a bit further into the building, the face tracking was able to detect and capture their faces as expected.

I noticed that as people entered the building they would glance over at the screen, and if they looked for long enough it was able to capture their faces and swap them while they were still walking past. This solved a problem I had yesterday, where people would look over but it wasn’t able to detect them because they were passing through the space quite quickly.

[gfycat data_id=”EasyUnhealthyHaddock”]

While the tracking worked significantly better than before, there were still a couple of faces detected which weren’t actually there, as shown in the gif below. A lot of the people who walked through the space were on their own, so I wasn’t able to face swap them, which was quite disappointing. There was also the issue that, even with a group of people, it requires at least 2 people to look over at the screen/camera for the face swap to happen, so there were a fair few missed opportunities when only one person was interested in the screen.

[gfycat data_id=”UncomfortableFlimsyAlaskanhusky”]

As with yesterday, it was interesting to see people walking in the opposite direction (toward the exit) stop and turn around to look at the screen. The video I captured shows a group of people walking towards the exit, and the last 2 people in the group stop to have a little play around with the face swapping, which was good to watch. I like how the piece is eye-catching and engaging enough for this to happen.

[gfycat data_id=”BountifulRapidBluefintuna”]

My favourite part of my testing is the two people in the video below. As they were walking through, I overheard one of them say “what the f*ck is that?!” while stopping to see what was happening on the screen. He then made his friend come back and they stood around for a while playing with it, moving and making funny faces while watching the screen. It was interesting to see them tilting and turning their heads to test the limits of the face swapping and see if it would still work. It’s also interesting how one of them actually hides behind their jacket and says “I don’t like it” when they notice what is happening on the screen. Seeing these two interact and play with my piece made it all worth it in the end, as it was great to see the kind of reaction I was hoping for from the audience as my face swapping messes with their image and representation.

[gfycat data_id=”ClearcutPlasticBlackbuck”]

A few other things to note about the testing. A lot of the attention could be due to there being a table with people sitting around it in a place where there usually isn’t one, which would definitely attract more attention than if it wasn’t there. There was also someone else testing theirs at the same time, who can be seen in some of the clips videoing their work on the screen. This could’ve also attracted more attention, especially from people walking out of the building, as they could be interested in what she was looking at and filming.

If time permitted and there weren’t other people waiting to test their work on the screens, it could definitely be worth leaving the face swapping up for a longer period of time, without the laptop out and all the people standing around, which attracts too much attention and adds a bias to the testing. Also, with someone else doing their testing at the same time, people could actually have been looking at her work on the screen above rather than mine.

In another post I will do some further analysis of my testing and compare it to my aims and media concepts used in the creation of this project.

User Testing

Today I was finally able to test my face swapping in the foyer space. To do this I plugged my laptop into one of the screens. The image below shows the basic setup; my face swapping was being displayed on the bottom monitor, and the webcam was placed just above it (circled so you can see where it is). The camera was directed towards the main pathway through the building, so it could see the faces of people leaving the foyer space walking towards where the camera and screen were. The camera was placed on the right edge of the screen so that it was closest to where people would be walking by, giving it the best opportunity to pick up faces. It was also placed just above the screen so that it would be roughly eye level with the majority of people passing through the space. From my testing it was clear this was definitely the ideal position, as it could perfectly capture full frontal faces directly in front of the audience, just like in my initial testing on my laptop.

The screen ran my project at 720p (1280 x 720px) full screen, which was a good enough resolution as the video was clear and easy to see. Running at 720p rather than 1080p possibly worked to my advantage, as I had a big problem with the code thinking displays on the wall were faces, but more about that later.

A video compilation of all my testing is at the bottom of this post if you just want to skip to the point.


When it was first up and running on the screen, the face detection was working, just not very well. It was struggling to detect any faces that were more than ~5 metres away from the camera and therefore wasn’t swapping them. I found this problem was caused by the scale being too high. If you can’t remember, my tracking uses two different videos: the video which is displayed, and a smaller scaled-down version which the OpenCV tracking uses, as it gives much better performance and frame rate. The scale was set to 4, meaning the video for OpenCV was a quarter of the size of the original, and it turned out this was too small to be able to detect faces in the distance. I never encountered this problem in my earlier testing, as the distance between the audience and the camera was never really that large; however, the way I wrote the code made this very easy to change, which I am thankful for. I reduced the scale down to 2 so OpenCV’s video was only half the size; this solved my problem and allowed faces in the distance to be tracked, without affecting the performance of the sketch.
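
The trade-off with the scaled-down tracking video is simply that a distant face can shrink below the size the detector can find, and any rectangle it does return has to be multiplied back up for the display video. A standalone Java illustration with made-up numbers (not the project code):

```java
public class TrackingScale {
    // Size of a face on the scaled-down tracking video that OpenCV sees.
    static int trackingSize(int sizeOnDisplay, int scale) {
        return sizeOnDisplay / scale;
    }

    // Map a coordinate detected on the tracking video back to the display video.
    static int toDisplay(int trackingCoord, int scale) {
        return trackingCoord * scale;
    }

    public static void main(String[] args) {
        // A distant face ~48px tall on the 720p feed:
        System.out.println(trackingSize(48, 4)); // 12 - likely too small for the detector
        System.out.println(trackingSize(48, 2)); // 24 - large enough to be found
        // A detection at x=160 on the half-size video sits at x=320 on screen.
        System.out.println(toDisplay(160, 2));   // 320
    }
}
```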

However, this larger tracking video exaggerated another small problem I experienced while testing in the space. As shown circled in the photo below, the face detection thought that some of the displays on the wall in the background were faces, and would then capture them and swap them about with the actual faces. With the higher resolution video, the tracking was almost certain those displays were faces, no matter how much I shouted at it to stop. Whilst in my eyes the code wasn’t working properly, a few people passing through the space found it hilarious as they saw their face mounted on a wall, while there was a poster where their face should be.


My quick and easy solution to this was to get a few pieces of paper and stick them over the wall displays in an attempt to cover them up. Looking back at it, I should’ve got some larger paper to cover them up a bit more, as it would still occasionally swap with the wall, just not as much. There was also the interesting problem (shown below) that the coffee machine at Costa was also seen as a face, though there wasn’t much I could do to cover it up as the Costa employees need to use it. Circled in yellow are a couple of people I spotted who seemed pretty interested in and entertained by what was happening on the screen while they were in the queue for Costa, even though their faces weren’t being swapped.

Here (below) is a picture of the audience of the piece. It shows 2 people walking towards where the screen and camera were, and 2 people waiting in the Costa queue looking around. They all seem to be interested in what was happening on screen, and it definitely drew in people’s attention.


There was one notable group of girls who actually stopped in front of the screen to look and interact with it. Unfortunately, for some reason my laptop crashed and froze the face swapping video while the girls were looking at it, and I wasn’t quick enough to record a video of them interacting while it was running properly. I still managed to capture a reaction to the frozen video, which was still very positive, as they had stopped and were pointing at and enjoying the face swapping. Initially they saw it just as their reflection and proceeded to adjust their hair and general appearance, but once the face swapping kicked in and their faces were jumping between people, they found it funny and were pointing and interacting further with the piece. Below is the image that was left on the screen when my project froze and crashed.


These reactions to what was on the screen caused people walking in the opposite direction to stop and turn around to see what everyone was so interested in, which I didn’t even consider would happen. A few people who were sitting in the space were also keeping an eye on the screen and watching the reactions of people as they walked through. While observing people over an extended period of time, I noticed a lot of people walking through on their own would glance at the screen and look rather confused because nothing was happening. When there is only one face seen, it doesn’t do any face swapping, so it’s just a standard video of the space playing. A lot of people walking through were looking down at their phones, so they didn’t even notice that anything was happening. Not much can be done about these people, as they obviously have far more important things to look at on their phones than my work on the screen. On many occasions people passed through the space too quickly for the face tracking to notice them, and it would instead be interested in the ‘faces’ it saw on the walls. The people passing through speedily would occasionally glance over at the screen while it quickly tried to detect their faces, but a lot of the time their faces weren’t swapped.

As well as changing the scale of the video, a few other changes were made as I did my testing and observations. With the face tracking constantly thinking there were faces on the walls, I increased the number of swapped faces from 4 to 6 to accommodate. This gave the tracking a better opportunity to swap the faces of people rather than wall displays. This involved creating a couple of new PImages and a couple of blocks of code to do the resizing, masking and placing of all the faces. Fortunately, capturing, masking and resizing 6 faces at once didn’t affect the performance of the sketch like I thought it would, so I was still able to run it at the full frame rate with it looking nice and smooth.
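
One tidy way to handle a growing number of faces is to treat the swap as a rotation, drawing each detected face at the position of the next one along. This is a hypothetical standalone sketch of that assignment, not my actual code (which works on PImages rather than indices):

```java
import java.util.Arrays;

public class FaceRotation {
    // For n detected faces, draw face i at the location of face (i + 1) mod n,
    // so the same scheme works unchanged for 2, 4 or 6 faces.
    static int[] swapTargets(int n) {
        int[] targets = new int[n];
        for (int i = 0; i < n; i++) {
            targets[i] = (i + 1) % n;
        }
        return targets;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(swapTargets(2))); // [1, 0] - a plain pair swap
        System.out.println(Arrays.toString(swapTargets(6))); // [1, 2, 3, 4, 5, 0]
    }
}
```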

There was a comment overheard from someone walking past: “I think it’s augmented reality”, which is something I never actually considered while making this project. Reflecting on this, I see where he’s coming from, and I will research and write about this idea in the future.

After putting together the video from today’s testing, I noticed that I didn’t get much footage of the swapping actually working properly on people as they walked past. When I thought about it further, I realised that around 65% of the time the swapping wasn’t actually working properly; it was just making faces jump around on the walls rather than causing the disruptions to identity and representation that I wanted. Tomorrow I’m going to go back to the space and do some more testing. In the space there is another screen on the other side of the wall from the one I used today, facing the entrance to the building, and I hope to test it there instead. The main benefit of this one is that the background isn’t as busy, as the camera won’t be looking into the foyer space, rather towards the doors. My aim here is to slow down or stop people as they enter the building, and hopefully the face swapping will actually work on their faces and not on the wall behind.

Processing: Final Touches

As I’m doing my testing tomorrow in the foyer, I took it upon myself to make the final touches to my sketch to make sure it’s as good as possible before I test it. The code at the moment doesn’t need any changes, so I decided to perfect the mask image to make sure it cuts the face out as well as possible.

Before my final touches I was on mask 9; this slightly changed soon after to mask 10, which I made a bit longer to try not to cut off the chin as much (this was changed a long time ago). I came back to it today and started afresh with the mask shape. Using an image taken from the face tracking, I drew around the face to get the shape, and it’s more oval than before, as it should be, since faces aren’t rectangular. After testing, it was a bit too wide at the top and was showing some of the background, so I trimmed it even further and ended with mask 12.

Mask 10

Mask 11

Mask 12 (final mask)

Here are some screenshots using the (currently) final mask shape. The first shows how it works at a distance, which is pretty good. While not perfect, the masking does a pretty good job of cutting out faces, especially as they’re not always the same shape and size.

When I do some testing I expect to need to make some alterations to my project, as I’m sure there will be some slight variations in the actual space, which could mean, for example, that the mask doesn’t work as expected and needs its shape changed. A lot of my testing has been done while sitting directly in front of the camera, so it always captures full frontal faces. In the space, the position of the camera and the angle of people could vary hugely, meaning the mask and swapping don’t work as well as they do at the moment. Only time will tell…

The Hawthorne Effect

I have recently been introduced to a psychological theory known as the Hawthorne effect, also called the observer effect. The Hawthorne effect is a type of reactivity in which individuals improve or modify their behaviour in response to knowing they’re being watched. The original study took place in a workplace setting, where the experimenters varied the levels of light to see if it made the workers more or less productive. The results showed an increase in productivity no matter what the lighting, but when the experiment was over (and the workers were no longer being observed) productivity dropped again. They concluded that the increase in productivity was due to a motivational effect: the workers were being watched and interest was shown in them and their activities.

I have touched on an idea like this in a previous blog post about a psychological experiment on how being watched changes behaviour. While this isn’t directly related to my product (as I’m not measuring productivity or things like that), it does help to exemplify my idea of people changing behaviour and reacting to being observed. In my face swapping project, the participants (willing or otherwise) are being observed by a camera and shown on one of the screens. In theory, this in itself should change people’s behaviour as they become aware that they’re being watched, especially as it is out of the norm for the media foyer space, where there isn’t usually a camera watching them. With their representations being altered (i.e. face swapped), it should further influence their behaviour as they notice and hopefully play around with it.

Arguably, none of this may hold, as the original experiment involved a person (or group of people) doing the observing rather than a camera, which is far more obtrusive and would have a larger effect on people’s behaviour in my opinion. However, I have high hopes for it working, providing the interactive nature of the piece is obvious enough and people are interested.


Testing at Christmas

I took Christmas as a good opportunity to try out my face swapping project with a different audience. Up till now (or then) I had only tested it with people on the course, so they knew and understood the idea behind it and had seen it develop over time. My new audience included my parents and grandparents, which is a big change in demographic from my usual testing due to the large age difference.

Unfortunately I didn’t take any screenshots of the testing, but you’ll have to take my word for it that it actually happened and I’m not lying about all this. As a whole the face swapping went well; it managed to confuse and impress my grandparents while swapping 4 faces at once. The masking worked really well, sufficiently blurring the edges of the captured faces so they blend into their new host. This actually made them feel really self-conscious as they saw their old wrinkly faces (sorry) contrasted on a younger, less wrinkly head & body. So from this initial testing I can quite safely say it does what I had intended and brings up questions about identity and representation, as audience members are made to feel uncomfortable when their usual image is disrupted.

A few criticisms of my work here. The swap and masking work best when people keep a blank(ish) facial expression (i.e. not smiling too wide or talking etc.), as the shape of the captured face doesn’t change. Often a changing expression caused the face below the swapped one to be seen around the bottom or edges of the swapped face, which pulls the audience out of the experience and ruins the fun. I don’t think there is much I can do about this due to how the face tracking works, using a predefined rectangle, but I could give it a go and try to improve it. There is also an issue when swapping wider faces with thinner faces, which has pretty much the same problem, though I’m not sure what can be changed to fix it.

The testing was done by holding my laptop, so it wasn’t really an ideal representation of how it would be done in the foyer space. One big problem was that if faces/heads were tilted, it couldn’t detect them. I think this was largely a problem because we were seated and wanted to lean in to try and fit within the sight of the camera. This shouldn’t really be a problem when people are walking through the space, unless people are walking around with their heads tilted at weird angles.

Interaction Design

Interaction design is concerned primarily with interactions between computers and users, often referred to as human-computer interaction (or HCI). Interaction design helps to determine the initial user experience, such as navigation or how to use something. Good interaction design means that it is intuitive for the user; there isn’t a steep learning curve to work out how the technology works, and they can pick it up in seconds. For my face swapping I am designing a human-computer interaction for people walking through the Weymouth House foyer space.

From the user’s perspective, the experience is continuous, as the environment, the user, the screen and what’s on the screen all feed back to one another (Kuniavsky, 2003, p.43). With my real-time face swapping, the feedback needs to be instantaneous: users should see their faces swapped as soon as they notice what’s on screen, and the swapping should keep up with their movements and actions as they pass through the space.

There are four main pieces of information an interaction designer (myself in this case) needs to know during the development process, either about whether the designs are on the right track or whether people can actually do what they’re supposed to be able to:

  • Task flows
    • Task flows are the actions which are needed for something interesting to happen. For my face swapping, the users need to be facing the camera and screen with their heads relatively straight for it to be able to detect them. It then needs at least 2 visible faces for any interaction to happen, so that is part of the task flow too.
  • Predictability and consistency
    • The predictability determines how comfortable users will feel with the task flows: does the face swapping work intuitively, or does it take a really complex method to get it working? For my work, I feel the only thing that might not be obvious initially is that it is supposed to be swapping faces; if there is only one person present at a time, they wouldn’t be able to understand what it’s supposed to do and why it isn’t working.
  • The relationship between features and emphasis on specific elements
    • This regards the relationship between the normal video feed and the swapped faces. The face swapping needs to be obvious enough that it gets noticed, but subtle enough that it blends back into the video to look relatively seamless. While this seems quite paradoxical, I feel the key to creating a good user experience is making the swapped faces look as real as possible, so that they possibly aren’t noticed initially, until the audience goes in for closer inspection and sees that their on-screen representation is different.
  • Different audiences
    • My face swapping has to work for all the different types of people who could pass through the foyer space. This means the camera needs to be in a place where it can detect tall people and short people, while still capturing, masking and swapping their faces. It’s important for all ages to understand how it works and what is happening on screen too. One thing that has become apparent in my testing is that it doesn’t always recognise people wearing glasses, as the Haar cascade used for the face tracking was trained on faces without them.


Kuniavsky, M., 2003. Observing the User Experience: A Practitioner’s Guide to User Research. Morgan Kaufmann.

Performance Theory

(Mentions of face swapping have been highlighted for skim-reading)

In 1959, the American sociologist Erving Goffman published a book titled The Presentation of Self in Everyday Life. In this book he uses the imagery of theatre to portray the importance of human and social action and interaction, referring to it as ‘the dramaturgical model of social life’. The model relates social interactions to a theatre, with the people you interact with in everyday life as actors on stage who each play a varying role. The audience is the other people who observe the roleplaying and in turn react to the performance.

Goffman uses the term ‘performance’ to describe all of the activities of an individual in front of an audience or set of observers. It is through this performance that the individual can give social meaning to themselves, to others, and to their context. The audience is not always aware of the performance, but they are constantly attributing meaning to it and to the actor themselves. This idea can be related to one I mentioned in a previous post about how our behaviour changes when we are being watched. The audience is always affecting the performance, whether the actor is aware of it or not. It is also important for the actor to stay ‘in character’. The performance has to conform to the correct set of signals and behaviours, and anything outside of this detracts from the performance and could mislead the audience. All of our actions form part of our identity: who we, and other people, think we are. Our behaviour needs to conform to our previous patterns of behaviour (our character) or it seems out of place and weird.

The appearance of the actor or individual functions to portray social statuses and people’s roles in society. These can include gender, class, status, age, occupation etc. Appearance includes clothes, body language, hair style etc. The way we choose to present ourselves plays a big part in the way others view us. My face swapping idea plays with the idea of appearance as the on-screen representation is altered, disrupting the interactor’s sense of appearance and self. The person’s face (usually unique to them) is then associated with a different sartorial discourse, age and possibly gender, creating tension between the performance and the actor’s front.

‘The actor’s front’, as defined by Goffman, is the part of the performance which defines the situation for the audience. It is the image or impression they are trying to give off with their appearance and performance. The front can be seen as a standardised mask for the performer to control the way in which they’re perceived by the audience. Goffman likens a front to a script for the actor containing stereotyped expectations of how they should behave. A personal front contains all the items needed to perform and is usually identifiable by the audience as a representation of the specific actor. Face swapping could be seen as a way of altering these personal fronts by interchanging pieces between actors. When the appearance is disrupted, the script is also disrupted, as there is a contrast between the head and the new body it’s imposed upon. As the actors watch an altered version of themselves, they have to try to manage two separate discourses of the self rather than just one. Certain situations and scenarios have social scripts that define how the actor should behave in the given situation. When the actor is put into a new situation or establishes a new role, they usually construct a new front or script from a combination of past fronts, rarely creating something completely new. The actor has to use their past experience to react to the environment and find or create the front which best suits it.

In a staged performance there are three main locations for interactions: front-stage, back-stage and off-stage. Front-stage is where the audience is watching. The actor needs to conform to their performance, appearance and front, following social conventions which have meaning to the audience. The actor is often aware they’re being watched and therefore acts accordingly. Back-stage, the actor may be able to act a little differently and is able to step out of character. It is a place where no members of the general audience can see, and usually where the actor can be representative of their true self and shed the roles they have to play in public. The back-stage area can occur at home with a close group of friends, for example, where people can be more informal and act completely differently to what is usually expected. It has been argued that there is no true back-stage, as there will always be members of the back-stage audience who aren’t as trusted and stand on the fringes of the group. Finally, off-stage is when the actor isn’t involved with the performance and can interact with members of the audience directly and independently of their performance on stage. This is where a specific performance can be given, as the audience is selected and segmented. For me, an example of this could be interacting with the employee at the till in a shop. When you go to the till to buy a product, you briefly step out of your usual performance and front and put on a new one specifically for that interaction. The new front is usually more polite and courteous than the usual self and is put on specifically for the interaction with a certain person in the audience.

Face swapping playfully alters front-stage performances, creating two stages with different audiences – one in the foyer space and one on-screen. As people become engrossed with their altered performance on-screen, hopefully they forget about their performance in the actual space as they adapt to fit the face-swapped reality. The aim is to change the way people behave, encouraging them to deviate from the social norms people often try to follow when out in public. For people who are in the space but can’t see the screen, the behaviour of those interacting with the piece would appear to sit outside the norms, creating inconsistencies and contradictions with everyone else on the stage. It would be interesting to see whether this actually happens, or whether people aren’t interested in their face-swapped performance and simply ignore it and carry on walking.


Goffman, E. (1956). The Presentation of Self in Everyday Life. New York: Doubleday.