The Future of AI-Generated Art Is Here: An Interview With Jordan Tanner


AI-generated art created by Jordan Tanner using DALL-E.

Will machine-made art mean the end of art as we know it, or will it unleash new frontiers of creativity?

Unlike many of the so-called “artists” strewing junk around the halls of modern art museums, Jordan Tanner is actually pushing the frontiers of his craft. His eclectic portfolio includes vaporwave-inspired VR experiences, NFTs & 3D-printed figurines for Popular Front, and animated art for this very magazine. His recent AI-generated art made using OpenAI’s DALL-E software was called “STUNNING” by Lee Unkrich, the director of Coco and Toy Story 3.

We interviewed the UK-born, Israel-based artist about the imminent AI-generated art revolution and why all is not lost when it comes to the future of art. In Tanner’s eyes, AI-generated art is similar to having the latest, flashiest Nikon camera—it doesn’t automatically make you a professional photographer. Tanner also created a series of unique, AI-generated pieces for this interview, which can be enjoyed below.

Thanks for talking to Countere, Jordan. Can you tell us a little about your background as an artist?

I was one of those annoying kids who could draw, sculpt, and generally just create things. Combine that with the fact that, when I was 10, my dad brought home a CD with cracked copies of Photoshop and Illustrator; from then on, my career path was set.

I never studied, dropped out of school at 17, and went straight into work. One year later, I was renting half an art studio in a dilapidated chocolate factory in Hackney Wick, East London.

DALL-E’s accurate depiction of Hackney Wick, circa 2004.

I spent a couple of years hiding out in the studio all night, experimenting with different kinds of media. I made oil paintings, sculpted objects, developed a novel method of image making with thermochromic leuco dyes, and collected a shit-ton of images and video footage. However, I upped and abandoned everything when I decided to emigrate to Israel; my works, materials, and most of my equipment I just left behind or chucked.

One memorable piece from that period was a fully functioning umbrella, retrofitted with a .177 air rifle mechanism in place of the central tube, its stock replaced by an umbrella handle. It was a recreation of the weapon used in the assassination of the Bulgarian dissident Georgi Markov on Waterloo Bridge in 1978. I even designed and printed an instruction manual for its operation. Wherever it is now, I hope it’s in good hands.

The umbrella’s current owner.

Since then, I’ve mostly been producing art and design works for clients and employers. In 2018, I picked up Blender, which was a real turning point that opened up a lot of opportunities for personal and commercial work. Getting access to DALL-E feels like another one of these moments.

What have you been creating in DALL-E over the past few weeks?

When I first got access to DALL-E, I went apeshit, trying all kinds of random prompts in an attempt to get a feel for what was possible. I noticed that the results the AI was returning had great storytelling potential. By the end of the first day, I had developed an outline for an animation that I wanted to make from the stream of images that each prompt generated.

The base prompt for the animation was “a psychic in a 1970s secret government program,” and I combined that with a sometimes very detailed scene description and a modifier for the style of the image. The style was either “polaroid photograph” or “polaroid fisheye photograph.” Most astoundingly, it took just two days to generate and compile all of the images that comprised the final animation:

“A psychic in 1970s secret government program.” Jordan Tanner, 2022

Having worked with this technology for a few months, what are some far-out possibilities it might hold for art and culture?

After playing with it for a couple of months and also seeing what other people are doing, I think that we are a few generations away from it making a massive impact on art production and culture. Take, for example, the cover I made for Justin Geoffrey’s article for Countere: The Dark Magic Behind the #DarkMAGA Movement. It took me two days to prepare a mockup for the scene I’d envisaged. I was lucky that it all came together and that I didn’t end up wasting that much time. Now with DALL-E I can take the same pitch as the prompt “A victorian era man sitting cross legged in front of a bowl, surrounded by candles in a dark 19th century wood paneled room” and in fifteen seconds I’ll get a set of near-production-ready images.

As for what’s on the horizon, there are two advancements that I am anticipating. First: maintaining persistent visual elements between prompts. For instance, imagine that you ask the AI for a wooden room with a woman sitting in the middle and there’s one image you particularly like. Now you take that image and ask the AI to pose the same woman in a different way, move some of the objects, make it sunrise instead of midday, or even place the woman in an entirely different but thematically consistent scene. The possibilities would be immense. 

A person with enough imagination could potentially concoct an entire movie from just text prompts. It’s coming.

Second: animation. Imagine asking the AI to produce the same scene with the woman in the wooden room, but adding movement such as “she is sipping a cup of tea,” with the output being an image sequence or video of that action occurring. If you combined these two advancements, a person with enough imagination could potentially concoct an entire movie from just text prompts. It’s coming.

As an artist, how did you feel when working with this technology? Replaced? Awe-inspired? Something else?

I have to admit, my initial reaction was that it was game over, man—anyone would be able to produce any image they desired on demand. But that’s not really the case. It takes a level of understanding within image-making to produce specific and desired results. Having the latest, most expensive Nikon camera does not make you a professional photographer; this is somewhat similar.

I believe a new role within art is being created. Maybe we can call it “prompt designing,” or maybe not because that’s a bit fucking lame. To be successful at it will require imagination, visualization, design/art history knowledge, and creative writing skills. If I ask for a photograph, what kind of camera and lens will the AI implement? If it’s a painting, what kind of style/brush/paints/colors will it use? How should the image be composed, and who/what is in it? It’s not (yet) a replacement for creatives; it’s an augmentation. A lot of what I’ve seen people doing with it on social media is just “X” pop-culture reference in “Y” out-of-context situation. I don’t think endless pictures of Shrek in the backrooms or Homer Simpson as Mad Max on a motorbike with doughnuts for wheels are going to put artists out of work just yet.

What were some of the biggest surprises that you’ve seen while working with the technology?

Sometimes DALL-E makes very interesting composition choices, things that I just wouldn’t think of. My favorites are the Polaroid diptychs; these just blew me away:

“The same person meeting themselves on a street in New York.”

“Cosmonaut on mars during a dust storm, a woman screams at him.”

What were some of the craziest, most specific results that DALL-E generated?

Here’s my favorite image so far. It’s not exactly what I asked for; it’s even better. The translucent finger looks like it’s forming before your eyes, from the veins outwards. It’s a subtle detail that conveys a lot of the story I was trying to capture with the images.

“A woman in white presses her hand against the glass window of the mars habitat pod, a man's reflection is visible on the window, a dust storm rages.” 

Has DALL-E done anything really spooky while you were creating with it? If so, what was it?

The output is generally as spooky as the prompt you put in, and the majority of the mistakes within the system are quite innocuous. I might ask for a specific scene of people on Mars inside a glass habitat building, and DALL-E decides to throw out a couple of curveballs: a man standing in a football stadium, or a stock photo of a woman posing.

That said, I got a creepy, ghost hunter-style image when I tried asking for “a woman turns to a monster in front of him” during this cosmonaut-on-Mars scene.

“Cosmonaut on mars during a dust storm, a woman turns to a monster in front of him.”

I also ended up making a particularly disturbing set of images of doll collectors when I couldn’t remember the word “mannequin” and used “shop dolls” instead. It returned a nice set of Texas Chainsaw Massacre basement dwellers.

What limitations does the DALL-E technology have?

The technology is limited by the initial data that was used to train the model, which is bound to inject a certain level of bias. Notice that in all the images in this article everyone is white. If you ask for a person, 90% of the time it’s going to be a guy. I imagine they’ll address this over time.

Sometimes you’ll find it struggles with faces to the point where they go all mushy or just disappear completely.

“Someone is watching me through the window.”

The other main limiting factor is yourself. You’ll find a particular prompt that works and suddenly everything you make is a derivative—like me and the Polaroids.

What other possibilities or impact might DALL-E technology have on society?

Sometimes you’ll get an image so good that people question whether DALL-E is for real and accuse you of secretly putting the images together in Photoshop. When it gets more consistent, we’ll be diving headfirst into a new realm of fake news production, the kind of thing we were promised with deepfakes but for print media. Case in point: earlier in the summer, Yahoo News illustrated a story with a crude DALL-E Mini-generated image of Mark Zuckerberg holding a katana.

I believe a new role within art is being created...To be successful at it will require imagination, visualization, art history knowledge, and creative writing skills.

Think about the impact thispersondoesnotexist.com has had on everyday internet life. All the scammers use it. Look at the team page of any NFT scam project or scroll through LinkedIn for 10 minutes, and you’ll find endless cold, thousand-yard stares from an inordinate number of GAN-generated faces.

OpenAI, the creators of DALL-E, have put a large focus on safeguards. When I joined the beta, I had to attend an hourlong group induction session over Google Meet to go over all of the things that are and are not allowed. There are automatic filters that I’ve triggered a number of times: “shooting” lightning from the eyes? Not allowed. But it’s only a matter of time before more open projects like DALL-E Mini match it in terms of fidelity, and to be honest, I’m all for embracing the chaos.

Follow Jordan Tanner on Instagram and Twitter.

John Flowers

Editor-in-chief of Countere. South American military vet. Currently lives in Montana.
