As a little experiment while I was working on the topic of identity and representation, I decided to make a little sketch that would put a mask on every face the video feed sees. Using the basic face tracking example as my starting point, I worked out how I could go about doing this.
The idea was relatively simple: it required creating a PImage to store the mask image, loading the mask image into it, then resizing and drawing the mask for each face the sketch sees. This is the mask picture it uses. It's a .PNG so that it can have a transparent background for the best effect possible, letting the eyes and part of the mouth show through the mask. The mask uses the x, y, width and height of each face, taken from the array which stores all the details. It took a little experimenting to get the eyes to line up right, which ended up meaning increasing the size of the mask by 30px each way and shifting it up and left by 10px. This meant it would align perfectly on a forward-facing face.
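For anyone curious, the per-face loop might look roughly like this. This is a sketch on my part, assuming the faces array comes from the OpenCV for Processing library's detect() call; the variable names (mask, faces) are my own, not necessarily the original code:

```processing
// Inside draw(), after the face detector has filled the faces array
for (int i = 0; i < faces.length; i++) {
  // Enlarge the mask by 30px each way and nudge it up and left by 10px
  // so the eye holes line up on a forward-facing face
  mask.resize(faces[i].width + 30, faces[i].height + 30);
  image(mask, faces[i].x - 10, faces[i].y - 10);
}
```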
However, when I first made it, it ended up looking like this (below), where the mask gets blurrier and blurrier until it is unrecognisable. To solve this I realised that I needed to move the line which loads the mask into Processing from the setup section to the draw section. With it in setup, the image was loaded once and then resized continually, 30 times a second (or whatever frame rate it was running at), so the quality kept dropping as the same image was squashed and enlarged over and over. With it at the start of the draw section, the original image is loaded at the beginning of each frame before being resized, so a fresh copy of the mask is used each time rather than the already-degraded one. This problem has actually been mentioned in my main project here with the problem of resizing the masking layer, and this little experiment is how I actually solved it (confusing backwards posting, I know).
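A minimal sketch of the whole thing might look like the following. I'm assuming the OpenCV for Processing library and a mask file called mask.png here (both assumptions, not necessarily the original setup); the key detail is that loadImage() sits at the top of draw() rather than in setup():

```processing
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;
PImage mask;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}

void draw() {
  // Reload the original PNG every frame so resize() always starts
  // from a full-quality copy instead of compounding the blur
  mask = loadImage("mask.png");
  opencv.loadImage(video);
  image(video, 0, 0);
  Rectangle[] faces = opencv.detect();
  for (int i = 0; i < faces.length; i++) {
    mask.resize(faces[i].width + 30, faces[i].height + 30);
    image(mask, faces[i].x - 10, faces[i].y - 10);
  }
}

void captureEvent(Capture c) {
  c.read();
}
```

Reloading a small PNG once per frame is cheap enough in practice, which is why this is simpler than keeping a second untouched PImage around and copying from it.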
Here is the fixed version; as you can see, the mask quality stays the same throughout: