Recently, I talked about graphic novelist Kristina Kashtanova. Their signature work, Zarya of the Dawn, features a nonbinary character (that is, someone whose gender identity falls outside the traditional male/female binary and who uses the gender-neutral "they/them" pronouns) making their way through a postapocalyptic Manhattan. The work's copyright was partially rejected by the US Copyright Office because the illustrations that make up the book were created with the use of AI.
However, Kashtanova, who not-at-all-coincidentally identifies as nonbinary and uses they/them pronouns like the titular Zarya, has raised the stakes, vowing to test where human ingenuity and creativity intersect with artificial intelligence under current US law and precedent.
But what will the Copyright Office do with this new onslaught, and what outcomes are most likely as the law scrambles to keep pace with the burgeoning technology?
What’s the Argument?
Kashtanova argues that if ANY human creativity or thought went into a piece of art, even one generated by AI, then that art should, by definition, be eligible for copyright protection, as the Copyright Office itself seemed to acknowledge when it originally granted an all-encompassing copyright registration for the work.
The Copyright Office begs to differ, however, arguing that a human's input into an AI system gives the user only the most superficial control over the final product and therefore cannot qualify as "human authorship," an essential requirement for copyright protection.
Per the Copyright Office, if Kashtanova had disclosed at the outset that they used the AI program Midjourney to generate the art for Zarya (a disclosure now required as a matter of course in the aftermath of this mess), the art that makes up such a crucial part of the overall story would never have been approved.
While Kashtanova retained copyright to the actual storyline and its arrangement, the art itself was held to be unprotectable under existing copyright law. This makes Kashtanova the first person in American history to have copyright protection for AI-generated art revoked, but that doesn't mean Kashtanova isn't fighting back.
To prove their point, Kashtanova has used a number of different AI programs, including one that allows artists to upload their own original creations and then edit and modify them using prompts fed into the AI interface. The program, known as Stable Diffusion, took one of Kashtanova’s sketches and generated a thoroughly haunting result, as you can see in the image at the top of this article.
More to the point, Kashtanova believes that beginning with actual human-generated art, rather than a series of prompts to refine a result, will meet the minimum human creativity requirement of the Copyright Office for copyright protection.
So What’s the Problem?
Among other things, there’s currently no clear-cut bright-line test to determine whether a work is eligible for copyright protection other than the “human authorship” requirement.
For example, a gorilla painting a banana, while awe-inspiring, is not copyrightable because a gorilla is demonstrably and fundamentally not human. When AI is thrown into the mix, everything is up for grabs.
This creates yet another series of problems, because if an AI program can be considered a legitimate author, it stands to reason that the AI's creators and programmers should have as much interest in the copyright to a work as its stated creator.
Because AI is not a human intelligence in its own right, the law does not currently recognize it (or, by extension, its creators and programmers) as just as much a "creator" as the person filing for copyright, and the output of an AI program therefore cannot be copyrighted.
Kashtanova’s argument that art generated through AI using human-conceived and human-executed prompts is still worthy of copyright protection turns that thinking on its head.
The US Copyright Office is notoriously resistant to change. Still, if Kashtanova can demonstrate a genuinely collaborative, rather than merely iterative, approach to developing art from human concepts using AI, the Office might move the goalposts closer to the middle, protecting AI-generated art as long as there is clear evidence of human creative control over the final outcome.
This is the key element that was seen to be lacking in Kashtanova’s original copyright.
However, the Problems Don't Stop There
Let's say Kashtanova proves their point so convincingly that the Copyright Office has no choice but to rewrite the rules (absent Congressional or judicial intervention in the matter). Even if the Office affirms that AI-generated art is copyrightable above a minimum threshold of human ingenuity and intervention, there is still the question of how long it would take for the owners and programmers of AI platforms to line up in the courts and sue the very authors who used their programs' artwork, seeking royalties and other considerations. In the real world, you could measure that turnaround with an egg timer.
It doesn’t take a legal mind on the scale of F. Lee Bailey to foretell the traffic jams in the courts if those floodgates were opened, never mind the billions of dollars at stake for AI innovators and the people who use and profit from their innovations.
The whole situation only becomes muddier when you consider the host of lawsuits against AI platforms like ChatGPT and Stable Diffusion for having been “trained” on copyright-protected works that were accessed through Google and other platforms.
By the logic of these suits, the mere fact that the AI platforms were trained on content the original creators never intended, or even conceived, would be used in such a way automatically breaches their copyrights, where such use can be proven. The argument is even more damning for the "pure" arts, such as music, photography, and graphic design, where the AI programs had to be trained on real-world, existing art.
The underlying assertion is that training AI programs on existing art isn't like a college kid sketching a Bart Simpson doll in art class for a grade. The two share a superficial similarity in that both mimic previously existing art as a matter of necessity. But the art student's output is not passed off as an original creation except insofar as the human behind the pencil replicated the doll, successfully or otherwise. AI-generated content, by contrast, is inevitably the net result of the human's input PLUS a plethora of other influences that cannot be properly pinned down, analyzed, or evaluated, and whose antecedents therefore cannot be satisfactorily traced or vetted.
The Bottom Line
Whether Kashtanova’s campaign to test and redraw the boundaries where human ingenuity and AI intersect will be successful is an open question at this time. It’s going to take years, or even decades, for the law to catch up with current innovations, never mind the ones lurking just over the horizon.
However, Kashtanova’s arguments will be difficult, if not impossible, to dismiss as anything less than seminal in the ongoing war between human innovation and AI “creativity”...and will likely play a key role in how the courts, lawmakers, and the general public perceive AI-created art in the future.
This will certainly be one to watch, as no matter how Kashtanova’s arguments are perceived by the US Copyright Office and the courts, they’re sure to earn themselves a place in the history (and law) books!