In recent years, the discipline of (generative) AI art has evolved: from merely enhancing or applying visual effects to existing images, to near-indistinguishable photorealistic imitation of existing imagery, and finally to the generation of new images with their own visual characteristics and themes.
Through the technological development of (multimodal) artificial intelligence systems such as CLIP, DALL·E, and Stable Diffusion, (every-)one can now create artworks by communicating with a neural network in natural language.
These current systems can not only apply and combine particular styles, but also grasp the essence and content of a concept and transfer it to other situations. This has led to a general shift in how generative models are used and has sparked ongoing discussions about the conditions and implications for (human) creativity & intelligence, (creative) labor & (dataset) bias, intellectual property & consent, authenticity & value, and other matters such as the relationship with photography.
In this seminar, we will examine, discuss, and reflect on these topics together through readings and, most importantly, practice our prompting skills intensively to launch our careers as professional “prompt engineers” ;).