Interactive Media

Glitch Art

Colourful, abstract textures generated by intentionally caused errors in digital image display.

Background

I joined the Glitch Artist Collective on Facebook in October 2015, along with its sub-group glitch//request, where people post selfies to have them edited by glitch artists using their custom tools. There was a lot going on in these groups, and the lively exchange of creativity inspired me to make glitch art myself.
Since then I have been programming fragment shaders and controllers in Processing to recreate image errors (glitches).
I use these to create entirely new abstract images rather than to edit existing ones; the source images serve purely as raw data. The results of this kind of distortion have a character of their own: they neither resemble the output of conventional image editing nor look like something generated by fully procedural shaders.
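
As an illustration of the general idea (not the actual project code), here is a minimal fragment shader in Processing's GLSL dialect that fakes one classic glitch look, an RGB channel shift. The file names, the uniform name and the parameter range are invented for this sketch.

```glsl
// rgbshift.frag (hypothetical example shader)
#ifdef GL_ES
precision mediump float;
#endif

uniform sampler2D texture;  // the image being drawn
uniform float shift;        // horizontal channel offset in texture coordinates

varying vec4 vertTexCoord;

void main() {
  vec2 uv = vertTexCoord.st;
  // Sample each colour channel at a slightly different position.
  float r = texture2D(texture, uv + vec2(shift, 0.0)).r;
  float g = texture2D(texture, uv).g;
  float b = texture2D(texture, uv - vec2(shift, 0.0)).b;
  gl_FragColor = vec4(r, g, b, 1.0);
}
```

A matching Processing sketch loads the shader and drives its parameter with a simple controller (here just the mouse):

```java
PImage img;
PShader rgbShift;

void setup() {
  size(800, 600, P2D);            // shaders require an OpenGL renderer
  img = loadImage("source.jpg");  // any source image in the data folder
  rgbShift = loadShader("rgbshift.frag");
}

void draw() {
  // Map the mouse position to the glitch strength; any controller works.
  rgbShift.set("shift", map(mouseX, 0, width, 0.0, 0.05));
  shader(rgbShift);
  image(img, 0, 0, width, height);
}
```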

Gerkzeuk
Also in 2015, I built a computer I named “Gerkzeuk”, an artificial artist.


This machine is able to produce glitch art independently (automatically). Project Page
Under the machine's name, I exhibited several prints in Saarbrücken in early 2016.

Objects created by the virtual persona “Iris”, who “lives” inside the computer “Gerkzeuk”

Later, I configured the machine to automatically create and publish NFTs on Teia (then “hicetnunc”), Instagram and Twitter.
On Teia, these snippets of digital imagery are still available to buy.

Virtual humanoids were the topic of both my Bachelor's and my Master's thesis.
Today, I identify myself, not the machine, as the artist, and I prefer physical exhibitions to virtual ones.

Work process

The software unit “manglr”, which I use to produce the images, works like a compositor for 2D shaders. (Link to the repository)
I drop an image or video file in via drag and drop and choose the shaders and parameters I want to edit it with.
This selection can also be made automatically/randomly. The shader parameters can also change slowly over time, animating the shaders.
I watch this autopilot mode (sketched below) or edit the controllers manually until a composition emerges that I like; then I save the images.
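
A rough sketch of the autopilot idea in Processing (the shader file and the uniform names “amount” and “blocks” are invented for this example; manglr's actual controllers are more elaborate):

```java
PImage img;
PShader glitch;
float t = 0;

void setup() {
  size(800, 600, P2D);
  img = loadImage("source.jpg");
  glitch = loadShader("glitch.frag");  // hypothetical shader with two uniforms
}

void draw() {
  // Each parameter drifts along its own Perlin-noise curve, so the
  // composition changes slowly and smoothly without any input.
  t += 0.005;
  glitch.set("amount", noise(t));          // values stay in 0..1
  glitch.set("blocks", noise(t + 100.0));  // offset noise = an independent curve
  shader(glitch);
  image(img, 0, 0, width, height);
}

void keyPressed() {
  if (key == 's') saveFrame("keeper-####.png");  // keep compositions I like
}
```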
Resolution is variable, up to a maximum side length of 20,000 pixels, which corresponds to an image about 1.70 m long at a print sharpness of 300 DPI. Depending on the shaders used, rendering can take some computing time. Smaller images (up to about 20 cm) can be animated in real time (24 fps or more). For some images the pixel aesthetic works well, which allows low-resolution images to be scaled up to larger sizes. Then comes the selection process, in which I feel out which images I like most and sort the wheat from the chaff.
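
The print-size figure follows from simple arithmetic (pixels ÷ DPI = inches, × 2.54 = centimetres):

```java
int maxSide = 20000;                   // maximum side length in pixels
int dpi = 300;                         // print sharpness
float inches = maxSide / (float) dpi;  // ≈ 66.7 inches
float cm = inches * 2.54;              // ≈ 169.3 cm, i.e. about 1.70 m
println(cm);
```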