How Terence Broad makes AI art without any training data, and without imitating anyone
But Broad didn’t train his AI on Rothko; he didn’t train it on any data at all. By hacking a neural network and locking elements of it into a recursive loop, he was able to coax the AI into producing images without any training data whatsoever — no inputs, no influences. Depending on your perspective, Broad’s art is either a pioneering display of pure artificial creativity, a glimpse into the very soul of AI, or a clever but meaningless electronic by-product, closer to guitar feedback than to music. Either way, his work points toward a more ethical way of using generative AI, one that moves beyond the derivative slop flooding our culture.
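Broad hasn’t published a recipe, but the basic idea of a network locked in a loop with itself can be sketched in a few lines. The toy below is my own illustration, not Broad’s actual setup: take a randomly initialized, untrained network, feed its output back in as its next input, and render whatever state it settles into. No dataset is ever involved.

```python
import numpy as np

rng = np.random.default_rng(42)

# An untrained "network": one random weight matrix plus a tanh nonlinearity.
# The only inputs are the random initial weights and a random starting state.
W = rng.normal(scale=1.2, size=(256, 256))
x = rng.normal(size=256)

# Lock the network in a recursive loop: each output becomes the next input.
for _ in range(50):
    x = np.tanh(W @ x)

# Render the final state as a 16x16 grayscale "image" rescaled into [0, 1].
image = (x.reshape(16, 16) + 1) / 2
```

Whatever structure appears in `image` comes purely from the network’s own dynamics — which is exactly why critics can call the result either pure creativity or mere feedback.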
While talking to him about his process, one thing that came up was how poorly most people understand how generative artificial intelligence actually works. With their emphasis on “prompt engineering” and a nearly endless number of settings, generative tools such as Midjourney have more in common with conventional creative software like Adobe Photoshop than their mystique suggests. We can’t see exactly what happens inside the black box, but we do know that what comes out of it reflects the data that was fed in. Broad notes the irony of companies being highly secretive about their models and training inputs.
Broad has deep reservations about the ethics of training generative AI on other people’s work, but his main inspiration for (un)stable equilibrium wasn’t philosophical; it was a crappy job. In 2016, after searching for a machine learning job that didn’t involve surveillance, Broad found employment at a firm that ran a privacy-conscious network of traffic cameras in Milton Keynes. “My job was training these models and managing these huge datasets, like 150,000 images all around the most boring city in the UK,” says Broad. “I got so sick of managing data. When I started my art practice, I was like, I’m not doing it — I’m not making [datasets].”
Broad received a PhD in computer science from the University of London, and it was there, he says, that he started to think through the consequences of his vow of data abstinence. “How could you train a generative AI model without imitating data? It took me a while to realize that it was an oxymoron. A generative model is just a statistical model of data that imitates the data it’s been trained on. So I kind of had to find other ways of framing the question.” Broad soon turned his attention to the generative adversarial network, or GAN, an AI model that was then much in vogue. In a conventional GAN, two neural networks — the discriminator and the generator — train each other. The discriminator studies a dataset, and the generator tries to fool it with fake data; when the generator fails, it adjusts its parameters, and when it succeeds, the discriminator adjusts its own. By the end of training, this tug-of-war ideally reaches an equilibrium in which the GAN can produce data on par with the original training set.
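The adversarial tug-of-war can be shown in miniature. The toy below is my own sketch, not Broad’s code: a one-parameter generator and a one-parameter discriminator fight over a simple 1D Gaussian, using the same alternating updates that scale up to image-producing GANs.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Real data the GAN should learn to imitate: samples from N(3, 0.5).
def sample_real(n):
    return rng.normal(3.0, 0.5, n)

a, c = 1.0, 0.0   # generator g(z) = a*z + c, starts at the standard normal
w, b = 0.1, 0.0   # discriminator d(x) = sigmoid(w*x + b), estimates P(real)

lr = 0.01
for _ in range(2000):
    x_real = sample_real(64)
    z = rng.normal(size=64)
    x_fake = a * z + c

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    b -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: adjust a and c to fool the updated discriminator
    # (non-saturating loss, -log d(fake), backpropagated through d).
    d_fake = sigmoid(w * x_fake + b)
    gx = -(1 - d_fake) * w
    a -= lr * np.mean(gx * z)
    c -= lr * np.mean(gx)

# Near equilibrium, the generator's samples should sit near the real mean of 3.
samples = a * rng.normal(size=1000) + c
```

Notice that the generator never sees the dataset directly — it only learns through the discriminator’s reactions, which is what made the architecture a natural target for Broad’s no-data experiments.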
Legal threats from a multinational corporation pushed him further away from inputs. In 2016, Broad trained a type of artificial neural network called an autoencoder on every frame of the 1982 film Blade Runner, then asked it to reconstruct the movie. The result, bits of which are still available online, is simultaneously a demonstration of the limits of generative AI circa 2016 and a wry commentary on the perils of human-created intelligence. Broad posted the video online, where it soon received major attention — and a DMCA takedown notice from Warner Bros. “Whenever you get a DMCA takedown, you can contest it,” Broad says. “But then you make yourself liable to be sued in an American court, which, as a new graduate with lots of debt, was not something I was willing to risk.”
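Broad’s Blade Runner experiment used a deep autoencoder on video frames; the toy below, my own sketch rather than his setup, shows the core idea with a linear autoencoder on random stand-in “frames”: squeeze each image through a small bottleneck code, decode it back, and train to minimize reconstruction error. The blurry artifacts in Broad’s video are what this reconstruction looks like when the bottleneck can’t hold everything.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "frames": 200 flattened 8x8 grayscale images with 4 underlying
# factors of variation. (Broad used every frame of Blade Runner.)
basis = rng.normal(scale=0.5, size=(4, 64))
frames = rng.normal(size=(200, 4)) @ basis

# A linear autoencoder: compress 64 pixels to an 8-dim code and back.
W_enc = rng.normal(scale=0.1, size=(64, 8))
W_dec = rng.normal(scale=0.1, size=(8, 64))

def loss(X):
    recon = (X @ W_enc) @ W_dec
    return np.mean((X - recon) ** 2)

initial = loss(frames)
lr = 0.01
for _ in range(500):
    code = frames @ W_enc     # encode: frame -> bottleneck code
    recon = code @ W_dec      # decode: code -> reconstructed "copy"
    err = recon - frames      # per-pixel reconstruction error
    # Gradient descent on the squared reconstruction error (up to a constant).
    W_dec -= lr * (code.T @ err) / len(frames)
    W_enc -= lr * (frames.T @ (err @ W_dec.T)) / len(frames)

final = loss(frames)
```

After training, `final` should be well below `initial`: the autoencoder has learned to imitate its inputs, which is precisely the data dependence Broad later set out to escape.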
After watching writer-director Eliza McNitt’s new short film Ancestra, I can see why a number of Hollywood studios are interested in generative AI. Several of the shots were made and refined solely with prompts, in collaboration with Google’s DeepMind team. The potential benefits of normalizing this kind of creative work are obvious, as is the fact that Google stands to gain from it. It’s hard not to think about generative AI’s potential to usher in a new era of “content” that feels like it was cooked up in a lab.
Ancestra, inspired by the story of McNitt’s own complicated birth, depicts the life of an expectant mother. Though the short features a number of real actors performing on practical sets, Google’s Gemini, Imagen, and Veo models were used to create Ancestra’s shots of what’s racing through the mother’s mind and of the tiny, dangerous hole in her baby’s heart. We see close-ups of the baby, whose heartbeat becomes part of the film’s soundtrack. And the woman’s ruminations on what it means to be a mother are visualized as a series of very short clips of other women with children, volcanic explosions, and stars being born after the Big Bang — all of which have a very stock-footage-by-way-of-gen-AI feel to them.
The film’s message is sentimental enough, but it comes across as clichéd when juxtaposed with all that nature footage. The project feels like it’s trying to prove that these videos are actually good for your brain. The film is so lacking in compelling narrative substance, though, that it winds up being a rather weak argument for Hollywood’s rush to the slop trough while it’s hot.
Though McNitt notes how “hundreds of people” were involved in the process of creating Ancestra, one of the behind-the-scenes video’s biggest takeaways is how relatively small the project’s production team was compared to what you might see on a more traditional short film telling the same story. Hiring more artists to conceptualize and then craft Ancestra’s visuals would have undoubtedly made the film more expensive and time-consuming to finish. Especially for indie filmmakers and up-and-coming creatives who don’t have unlimited resources at their disposal, those are the sorts of challenges that can be exceedingly difficult to overcome.
Films produced with more generative AI might be cheaper and faster to make, but the technology as it exists now doesn’t really seem capable of producing art that would put butts in theater seats or push people to sign up for yet another streaming service. And it’s important to remember that, at the end of the day, Ancestra is really an ad meant to drum up hype for a company, which is worth keeping in mind when weighing its achievements.