Merrin Macleod, August 2019
I'm interested in exploring the traces left on images by the people, machines and media that they go through.
I made one-minute sketches of the profile pictures of the last 100 people I followed on Twitter.
I trained a pix2pix TensorFlow GAN on these sketches, and asked it to sketch some other profile pictures.
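Common pix2pix implementations expect each training example as a single image with the input (A) and target (B) placed side by side. A minimal sketch of preparing one sketch/photo pair in that format, assuming the side-by-side convention and Pillow for image handling (the function name and 256-pixel size are my own choices, not from the original project):

```python
# Prepare one paired training image for a pix2pix-style GAN.
# Assumes the common convention of input (A) and target (B)
# concatenated side by side in a single file.
from PIL import Image

def make_pair(sketch_path, photo_path, out_path, size=256):
    sketch = Image.open(sketch_path).convert("RGB").resize((size, size))
    photo = Image.open(photo_path).convert("RGB").resize((size, size))
    pair = Image.new("RGB", (size * 2, size))
    pair.paste(sketch, (0, 0))     # A: the hand-drawn sketch
    pair.paste(photo, (size, 0))   # B: the original profile photo
    pair.save(out_path)
```

Swapping which image goes on which side is what lets the same dataset be used in either direction: sketches from photos, or photos from sketches.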
It quickly picked up the paper and pencil colours, and eventually the generator produced something plausible enough to fool the discriminator most of the time, even though this was just a few epochs on my MacBook.
More interesting was flipping the direction: generating photos from the sketches.
After a short period of training, it sort of superimposed the sketches onto some photo-like colours, then started taking pieces of the photos and putting them into the outputs.
After I left it running overnight it seemed to figure out eyes, but not where to put them.
Usually with a pix2pix GAN you'll have separate training and test datasets, but I wanted to see how specialised it had become on the training data, so I piped the training dataset back in as the test dataset.
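Feeding the training set back in as the test set can be as simple as pointing the test directory at copies of the training pairs, so the generator is evaluated on images it has already memorised. A minimal sketch under that assumption (the function name and directory layout are illustrative, not from the original project):

```python
# Reuse the training set as the test set by copying the paired
# images into the directory the test run will read from.
import os
import shutil

def reuse_training_as_test(train_dir, test_dir):
    os.makedirs(test_dir, exist_ok=True)
    for name in sorted(os.listdir(train_dir)):
        # Copy only image files; skip checkpoints, logs, etc.
        if name.lower().endswith((".png", ".jpg", ".jpeg")):
            shutil.copy(os.path.join(train_dir, name),
                        os.path.join(test_dir, name))
```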
The images were easy to recognise from their colours and layout, but had traces of the shapes of my sketches, and characteristic GAN glitches.
The final step: translating the images back through my brain and hands, and introducing the traces of a new medium, oil paint on canvas.
I'm interested in what was lost and what was gained through each of these translations and interpretations, and the difference between what is lost and gained through human and machine eyes. There are hints of what I've seen before and hints of what the machine has seen before, filling in gaps or informing shorthand depictions. The programming of my brain means that I will always reproduce eyes and arms in a sketch or painting, but won't always be true to exact positioning or colours; the GAN has its own prejudices too.
There are traces of the size of pencils and paintbrushes and patches; the texture of canvas and paper and pixels; the imperfections of printing and scanning and moving between sizes and shapes; the limitations of my technical skills sketching and painting and the limitations of the GAN's training.