The time is now for creative solidarity
We might practice different art forms, but our struggles are one and the same.
Last month, as I was standing in line for a musical with my partner and our friend, said friend mentioned he had something funny to show us. He pulled out his phone, scrolled through his photo album, and turned his screen toward us to reveal a series of AI-generated pictures of him (in cartoon form) doing a number of silly and lighthearted things that related to his IRL interests. He’d generated the images using Apple Intelligence, an AI platform new to iPhones and Mac computers.
I responded with a theatrical gasp, only half-jokingly scandalized. “I didn’t take you for the AI type,” I said. “Aren’t you a writer?”
“Sure, but these are just pictures,” he said.
I let the matter go; the last thing any joyful night at the theater needs is a debate about generative AI. But I’ve been chewing on this interaction on and off ever since. It’s far from the only time I’ve seen writers happily engage with visual generative AI: Every single day I see a Substack user attach an AI-generated picture to their newsletter post, and I’ve met other writers who use AI to produce moodboard images for their WIPs. But doesn’t this constitute a form of hypocrisy? Many of these writers are the same ones who opt out of AI training on Substack and decry the use of AI “writing” in Hollywood or on the news. Is it fair to set those values aside when the written word isn’t the target?
In December 2023, the New York Times filed a lawsuit against OpenAI (the company behind ChatGPT) and Microsoft (whose Copilot tool is based on OpenAI’s GPT-4 model) for alleged copyright infringement. The complaint claimed that OpenAI’s large language models, or LLMs, had been trained on NYT data—among countless other things—and had used that data to make ChatGPT and Copilot into NYT competitors. This, NYT’s lawyers asserted, was copyright infringement.
This lawsuit had the power to transform the generative AI landscape by deciding whether LLM training—which requires billions of data points and is almost always done without the data owner’s knowledge or consent—constitutes plagiarism. Technically, the legal battle is ongoing; just this month OpenAI attempted to have the case dismissed, and as of this writing, a judge is still thinking about whether or not that should happen. But here’s the thing: the NYT uses generative AI. Though its AI principles page currently states that the NYT leverages generative AI carefully and “with human guidance and review” (duh), the company carved out an AI-dedicated leadership role—its very own Director of AI Initiatives—the same month that it sued OpenAI and Microsoft. Regardless of what happens in court, one of the world’s best-known and most powerful news organizations will be all too happy to benefit from the very technology it purports to criticize.
The “gotcha” here isn’t that the NYT isn’t as ethically sound as it might pretend to be. A copyright infringement suit like this could yield millions of dollars for the publication, whose leaders have undermined its own workers and thrown marginalized peoples under the bus for many, many years. Instead, the NYT’s hypocrisy illustrates a pattern I’ve seen again and again on a smaller scale: People are often eager to criticize a source of harm until that very thing benefits them.
I’m not here to convince you that generative AI is harmful. (That’s what this issue of Creativity Under Capitalism is about!) Instead, I’m urging my fellow writers and artists to foster a little creative solidarity.
Earlier this week, I asked folks on Substack—my only form of social media, these days—whether they use any generative AI tools, informing them ahead of time that the question was for an issue of my newsletter. Because most Substack users are writers themselves, I specifically wanted to know whether people used generative AI for things other than writing, such as imagery or music.
Many people told me that they flat-out refused to use generative AI because of the risks it poses to human creativity, labor compensation, and the environment, or that they used tools like ChatGPT purely to crunch numbers that Google cannot easily handle. But others shared that they used, or had used, AI to generate images for their newsletters or for social media. Still others said they had used generative AI to produce early drafts of non-creative writing, such as marketing or networking materials, then edited the results to their liking. Interestingly, everyone said they were against using AI to generate creative written works.
I’m not here to judge anyone who kindly answered my question (and I’ve deleted the post). But I am interested in asking why it’s easier for us to forgive AI’s harms when we’re using it for tasks that don’t have much to do with our own creative identities. If we believe that generative AI is an expensive and resource-intensive form of plagiarism, why is it okay to use it for emails and flyers, but not for stories? If we think that generative AI is the enemy of “real” art and the human spirit, why is it okay to use it for images but not for writing?
Outside of the creative bubble, where executives and shareholders and other suit-wearers meet in boardrooms or over Zoom, all art forms look the same. In 2023, while screenwriters worked to stop Hollywood from writing TV shows and movies with generative AI, actors worked to prevent the same studios from using AI-generated versions of their likenesses without consent and compensation. While voice actors protested the use of AI voice mimicry in video games last year, developers resisted the use of generative AI in game writing and animation. And while everyone here on Substack passed around tips on keeping one’s writing away from LLM training, folks on Instagram balked at a policy change that allowed Meta (which is itself behind several generative AI programs) to claim ownership of images published on the platform. To those with immense power and wealth, creativity is a means to an end, not the end itself. Those responsible for it, then, are equally disposable.
At the end of the day, this isn’t really about AI. When it comes to exploiting creatives, generative AI is merely the flavor of the week. (Don’t believe me? Look again at the NYT, which has spent the last few decades ripping off its own writers through various means.) What makes generative AI unique, however, is that it’s affecting nearly all art forms at once. Virtually nobody is exempt from AI-adjacent exploitation, which means everyone is required to make the choice to either partake or step away.
That choice looks different for different people. As one person said in response to my Note the other day, it’s possible that we’ve reached the point of AI-ification in which resistance is futile, and the best we can hope for is regulation. While I don’t believe any form of resistance is ever pointless, I can see where this person is coming from, and I agree that regulation is long overdue. Others, like myself, might well avoid generative AI until their last breath. It might pay off, who knows.
But the basis of that choice should be a sense of togetherness. When we choose to avoid or to use an exploitative, billionaire-owned tool, are we making that decision for the benefit or disregard of all creatives, or just one kind? If it’s the latter, why? Is there actually a meaningful difference between using generative AI to spawn music and using it to avoid writing? Or have those at the top of the algorithmic food chain convinced us that one form of creativity—one type of skill, which must otherwise be learned or compensated—is less worthy than the rest?
Because it’s not the billionaires that are going to look out for us. It’s just us—and the more of us there are, the better.
What’s been inspiring me lately:
✰ Maggie Nelson’s Bluets, a book-length prose poem that waxes lyrical about the color blue. This was such a cool and unique read, and pretty much the best thing I could have picked for a lovely day at the spa.
✰ The phrase “courage over confidence.”
✰ A protest I saw in downtown Phoenix the night of Trump’s inauguration.
✰ Seeing everyone delete their Twitter/Instagram/Facebook/Threads accounts following said inauguration.
Yes, yes regarding creative solidarity. I'm thankfully noticing far fewer Substack posts illustrated with AI-generated images lately. I don't like to be that guy who calls people out on it, but I don't mind when others do. All creativity is linked (I'm both a wannabe writer and painter).
I think it's important to establish the lines along which we're building solidarity. Personally, I think our solidarity should be based on class. The advancement of DeepSeek shows that the working class can find ways to make the use of AI less resource-intensive and more sustainable for the planet. Ultimately I don't think there's anything wrong with AI as a tool, but the working class should own it and decide its future under a democratic workers' government.