ChatGPT Art Is Boring, but AI Art Doesn’t Have to Be

Kailin Shi ‘25

Bringing up the use of AI in art is almost always bound to provoke angry reactions. In the past decade, text-to-image AI art has risen to both public attention and infamy; while platforms such as DALL-E (ChatGPT’s image generator) and Midjourney have shown “remarkable potential” in the immediate generation of (mostly) convincing images, they have also raised difficult questions about their ethical use.

Machine learning models are trained on a database of images and text, ‘learning’ to recognize visual patterns and predicting output that replicates the visual qualities of that database; the final result is based on, and limited by, the contents of its learning materials. Since text-to-image platforms are often trained on large datasets scraped from across the internet, artists and creators argue that using their work without permission or compensation infringes on their intellectual property rights.
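
To make “based on and limited by the training data” concrete, here is a deliberately tiny toy sketch in Python. It is not how DALL-E or Midjourney actually work; the “model” below simply memorizes per-pixel statistics from a small, made-up set of images, and anything it “generates” stays confined to those statistics.

    # Toy illustration only: a "model" that learns nothing but per-pixel
    # statistics from its training images, then "generates" by sampling
    # from those statistics. Its output can never exceed its data.
    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Hypothetical training set: 100 tiny 8x8 grayscale "images".
    training_images = rng.uniform(0.0, 1.0, size=(100, 8, 8))

    # "Training": record the mean and spread of each pixel across the dataset.
    pixel_mean = training_images.mean(axis=0)
    pixel_std = training_images.std(axis=0)

    # "Generation": sample new images from the learned statistics.
    def generate(n_samples):
        noise = rng.standard_normal(size=(n_samples, 8, 8))
        samples = pixel_mean + noise * pixel_std
        return samples.clip(0.0, 1.0)  # pixels stay within the range the data defined

    new_images = generate(4)
    print(new_images.shape)  # (4, 8, 8): new combinations, but nothing beyond the dataset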

Ethical problems aside, it would be a total waste to use a tool as groundbreaking as AI simply to synthesize existing elements into new images; painters have been able to do this forever. Because AI can only output what it has learned from its database, the end product cannot add any constructive or innovative value that a human artist could not. And while the speed of AI-assisted artmaking is a compelling (albeit controversial) argument for its use, AI has far more potential in art than mere image replication. Unlike traditional mediums, this is an artmaking tool armed with processes of learning and perception that operate outside the artist’s direct control; the medium is no longer passive but independent, capable of making decisions and judgments that may even surprise the artist!

Popular text-to-image models such as Midjourney and DALL-E are problematic for many reasons, but especially because they reduce the artist’s control of the artmaking process to the input of words. This inability to assert artistic control is the root of many controversies surrounding AI art: critics argue that merely typing a word prompt into ChatGPT is not enough to claim authorship over the generated image, which appropriates and rearranges preexisting images and text scraped from across the internet. An AI artist who programs their own algorithms and compiles their own datasets of intentional imagery, on the other hand, asserts artistic control over the entire process and has the freedom to curate training material specific to their art practice.
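
As a rough sketch of what that curation step can look like in practice, the snippet below builds a training set from a folder of the artist’s own images rather than from web-scraped material. The folder name, file format, and image size are hypothetical placeholders.

    # A minimal curation sketch: only images the artist has chosen are loaded.
    from pathlib import Path
    from PIL import Image

    DATASET_DIR = Path("my_paintings")   # hypothetical folder of the artist's own work
    TARGET_SIZE = (256, 256)             # hypothetical training resolution

    def load_curated_dataset(folder):
        """Load and normalize only the images the artist has selected."""
        images = []
        for path in sorted(folder.glob("*.png")):
            img = Image.open(path).convert("RGB").resize(TARGET_SIZE)
            images.append(img)
        return images

    dataset = load_curated_dataset(DATASET_DIR)
    print(f"Curated dataset of {len(dataset)} images, every one chosen by the artist.")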

One way AI art pioneers have moved beyond image replication is by experimenting with the controlled distortion of the original dataset images. Artist Robbie Barrat generated nude paintings using an algorithm equipped with ‘incomplete’ rules: it learned local features (stomach, breasts, folds of fat) but not bodily structure (arms, legs, proportions), resulting in novel images of surreal pools of flesh (Barrat). This meaningful control of algorithmic learning allows for serendipitous interpretations beyond mimicry, opening new pathways for innovative imagery.
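
Barrat’s experiments are widely described as using generative adversarial networks (GANs). The sketch below is a generic, minimal GAN training loop in PyTorch, not his actual code; the structural point is that the discriminator only judges whether generated images look locally plausible, so nothing in the setup forces the generator to respect global anatomy or proportion.

    # Minimal GAN sketch (hypothetical sizes, random stand-in data).
    import torch
    import torch.nn as nn

    IMG_DIM = 28 * 28     # flattened toy image size
    NOISE_DIM = 64

    generator = nn.Sequential(
        nn.Linear(NOISE_DIM, 256), nn.ReLU(),
        nn.Linear(256, IMG_DIM), nn.Tanh(),
    )
    discriminator = nn.Sequential(
        nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCELoss()

    # Stand-in for a curated batch of paintings, scaled to [-1, 1].
    real_images = torch.rand(32, IMG_DIM) * 2 - 1

    for step in range(100):
        # Train the discriminator to tell real images from generated ones.
        noise = torch.randn(32, NOISE_DIM)
        fake_images = generator(noise).detach()
        d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1))
                  + loss_fn(discriminator(fake_images), torch.zeros(32, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Train the generator to fool the discriminator. The loss only rewards
        # plausible-looking patches of texture; it says nothing about anatomy.
        noise = torch.randn(32, NOISE_DIM)
        g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()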

London-based artist Memo Akten’s series of works “Learning to See,” on the other hand, treats AI almost as a conscious collaborator rather than a passive tool. Akten’s machine learning algorithm is trained on a limited database of images of flowers, flames, or waves, and then interprets new input based on its past knowledge of pattern and form. As a result, the artist’s hand holding an extension cord is transformed into multicolored flames, blossoming flowers, or a tranquil sea view.
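
Akten’s actual system is a far more sophisticated image-to-image network; as a loose stand-in for the underlying idea, the sketch below trains a tiny autoencoder on a single kind of imagery and then shows it something it has never seen, so the result can only be expressed through the model’s limited visual vocabulary. All data and dimensions here are hypothetical.

    # Simplified stand-in, not Akten's method: an autoencoder that knows only
    # one kind of image must describe everything in terms of that imagery.
    import torch
    import torch.nn as nn

    IMG_DIM = 16 * 16

    autoencoder = nn.Sequential(
        nn.Linear(IMG_DIM, 32), nn.ReLU(),    # compress into a small "memory"
        nn.Linear(32, IMG_DIM), nn.Sigmoid()  # reconstruct from that memory
    )
    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Train only on "wave" images (random stand-in data).
    wave_images = torch.rand(64, IMG_DIM)
    for _ in range(200):
        recon = autoencoder(wave_images)
        loss = loss_fn(recon, wave_images)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Show it something it has never seen: the output is forced through the
    # model's wave-only knowledge of pattern and form.
    hand_image = torch.rand(1, IMG_DIM)
    reinterpretation = autoencoder(hand_image)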

TMP Editorial Board