I’ve been thinking about ideas for the interactive project and have had difficulty settling on a media concept I want to portray. Based on my current understanding of Processing, I’m leaning towards doing something involving colour.
After learning how to access the colours of individual pixels in a workshop, I wanted to apply this method to a live video feed as the basis for what I’m doing. As a starting point I looked at an example in my book, Generative Design, for inspiration. There I found that its Generative Design library has a method for sorting the colours in an array by hue, brightness, saturation or luminance. I wanted to apply these sorting methods to a video to see what would happen.
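I don’t have the library’s exact implementation to hand, but the basic idea of sorting an array of pixel colours by hue can be sketched in plain Java. This is my own illustration, not the Generative Design library’s code: `java.awt.Color.RGBtoHSB` stands in for Processing’s `hue()` function, and pixels are packed ARGB ints as in Processing’s `pixels[]` array.

```java
import java.awt.Color;
import java.util.Arrays;
import java.util.Comparator;

public class HueSort {
    // Extract the hue component (0.0 to 1.0) of a packed ARGB pixel.
    static float hue(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        // RGBtoHSB returns {hue, saturation, brightness}
        return Color.RGBtoHSB(r, g, b, null)[0];
    }

    // Return a copy of the pixel array sorted by hue (red, hue 0, first).
    static int[] sortByHue(int[] pixels) {
        Integer[] boxed = Arrays.stream(pixels).boxed().toArray(Integer[]::new);
        Arrays.sort(boxed, Comparator.comparingDouble(p -> hue(p)));
        return Arrays.stream(boxed).mapToInt(Integer::intValue).toArray();
    }

    public static void main(String[] args) {
        int red = 0xFFFF0000, green = 0xFF00FF00, blue = 0xFF0000FF;
        int[] sorted = sortByHue(new int[]{blue, green, red});
        // Hues: red = 0.0, green ~ 0.33, blue ~ 0.67, so red sorts first.
        System.out.println(Integer.toHexString(sorted[0])); // ffff0000
    }
}
```

Sorting by brightness or saturation would work the same way, just comparing a different element of the HSB triple.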
Below is an excerpt of the code I used, commented so it’s clearer what it is doing. The grid size is variable based on mouse position, so I can choose how pixelated the image is.
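The core of the grid idea — sampling one colour per grid cell, so a coarser grid gives a more pixelated image — can be sketched in plain Java. This is an illustrative stand-in, not the sketch’s actual code: in Processing, the video library would supply the `pixels[]` array and `mouseX` would drive `gridSize`.

```java
public class PixelGrid {
    // Sample one pixel per grid cell (the cell's top-left corner),
    // producing the coarse "pixelated" set of colours for a frame.
    static int[] sampleGrid(int[] pixels, int width, int height, int gridSize) {
        int cols = width / gridSize, rows = height / gridSize;
        int[] samples = new int[cols * rows];
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                // Index of the cell's top-left pixel in the flat array.
                samples[y * cols + x] = pixels[(y * gridSize) * width + (x * gridSize)];
            }
        }
        return samples;
    }

    public static void main(String[] args) {
        // A tiny 4x4 "frame" whose pixel value encodes its index.
        int[] frame = new int[16];
        for (int i = 0; i < 16; i++) frame[i] = i;
        int[] coarse = sampleGrid(frame, 4, 4, 2);
        // gridSize 2 picks pixels at (0,0), (2,0), (0,2), (2,2):
        System.out.println(java.util.Arrays.toString(coarse)); // [0, 2, 8, 10]
    }
}
```

A larger `gridSize` means fewer, bigger cells; each sampled colour would then be drawn as a filled rectangle covering its cell.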
Below is the new version, where the pixels have been sorted by hue. The result gives these interesting colour bands that show the main colours composing the image.
I wanted to see how it looked when I introduced something brightly coloured. The gif below is an example of this: me waving around some blue thing I had in my room. The sorted colours clearly show all the different shades of blue the camera sees, and the band really stands out against the rest of the colours. I thought this may be interesting when tracking in the Weymouth House space, as the camera will clearly pick up people in brightly coloured clothes walking past and will show a colour band accordingly. The abstract and quickly changing grid of pixels would definitely be eye catching, and may cause people to play around with it, but I don’t think anyone will really understand what’s going on, other than it being a mashup of random-looking colours.