
Illustration created by Morgan Bricker, MJE, using Canva.
By Stephen Green, CJE
At the ATPI Winter Conference, I watched a presenter display an AI-generated caricature of a newly named award winner as a heartfelt way to honor the recipient’s legacy. The image was similar to those that went viral among journalism teachers and others on Facebook.
Unexpectedly, a chorus of boos erupted from the students in the audience. I suppose the consensus was that his gesture fell too close to what has been dubbed “AI slop.” Merriam-Webster even named “slop” its word of the year as a nod to the low-quality digital content churned out online by content creators looking to make a quick buck.
However, despite the winds of social media blowing against the use of generative AI, scholastic journalism products have shown a steady increase in generated content. As someone who has critiqued yearbooks, magazines, newspapers, online sites and literary magazines, I’ve noticed that more recent submissions carry the telltale signs — subtle or otherwise — of embracing AI, specifically for written content with occasional ventures into visual content.
My ChatGPT Year in Review said I was among the top 1% of users this year — a fact I bring up because I mean it when I say I know the AI’s voice well enough to hear it in passing, the same way teachers may immediately recognize a student’s authentic writing style. (And, no, I didn’t write or edit this using AI. I famously warn against its use in creative endeavors even as an otherwise passionate AI advocate.)
It’s everywhere in yearbooks especially: opening and closing copy, spread stories, captions, headlines, mods. Newspapers and online news are culpable, too, which seems to veer into the lane of feeding chatbots all the pieces of genuine content and spitting out a story. Staffs, organizations and even yearbook companies have leveled an avalanche of emoji-riddled social media posts onto Instagram.
While the proliferation may go unnoticed by advisers, serve as a measure to fill knowledge gaps, or reflect experimentation with a new tool, the journalism world needs to come to terms with a few realities — particularly on the legal side of copyright — before deciding to dive into generative AI tools.
There are three specific questions we need to genuinely consider when thinking about the intersection between generative AI and copyright:
- Does using AI-generated content violate copyright laws?
- Does it affect a publication’s ability to protect its copyright?
- What other concerns will we have with AI in the future?
Existing Copyright Standards
As a refresher, intellectual property laws are country dependent. Individual countries have no ability to enforce their laws directly in other countries. In the U.S., Article I, Section 8 of the Constitution allows Congress to create laws granting creators the “exclusive right” to their works for the progress of science and useful arts.
Copyright law protects original creative works such as writing, photos, music, videos and artwork. It gives creators certain exclusive rights, including the right to copy, distribute, display and adapt their work, until the creation-date-dependent clock ticks to zero and the work enters the public domain.
Title 17 of the U.S. Code — the federal copyright statute administered by the U.S. Copyright Office — has zero mentions of “artificial intelligence,” “AI” or “generative” anything. None.
Congress has to do it, but there’s currently a fight over whether there should even be regulations over the nascent industry. President Donald Trump ordered his attorney general’s office to fight AI regulation in the states, citing a patchwork of state regulations, the need for innovation and state AI laws that sometimes extend beyond their own borders. This move was opposed by the National Conference of State Legislatures, The Council of State Governments, the National League of Cities, The U.S. Conference of Mayors and the National Association of State Chief Information Officers.
That’s a long way of ultimately saying: There are no laws on the books that specifically address the unique issues generative AI poses for intellectual property, though other existing law still applies.
Does using AI-generated content violate copyright laws?
The vast majority of the time, no.
You can’t violate someone else’s copyright unless you use their copyrighted work without permission. Because generative AI programs train on a monstrous amount of data and use increasingly robust algorithms to prevent copyrighted outputs, the odds of one producing a copyrighted work are slim, especially with images.
See the “Image Creation” section below for a deeper dive into one potential issue.
Does it affect a publication’s ability to protect its copyright?
Absolutely. If you use AI-generated works, especially those straight out of the program, there is no stopping someone else from using it.
Humans have had to maintain — and, seemingly, always will — some level of human-injected creative control over an output before copyright law applies. AI-generated work is the product of a machine, which is not allowed to hold a copyright claim. Neither can monkeys copyright selfies, nor could the now-defunct white pages copyright phone numbers. One was the product of a non-human; the other failed to inject human creativity.
So, that story you wrote and published online after having your favorite chatbot spit it out? Free to use…but by anyone, not just your staff.
Early last year, my staff had a news story stolen by a local online news outlet. My students would have had no defensible claim had they not written it themselves.
However, there are ways to use AI and still maintain copyright. In a January 2025 report, the Copyright Office acknowledged a few ways creators could use AI without jeopardizing their copyright protections, such as using AI tools for “expressive inputs” like making alterations to their own works.
“Where a human inputs their own copyrightable work and that work is perceptible in the output, they will be the author of at least that portion of the output,” the report noted. “Their own creative expression will be protected by copyright, with a scope analogous to that in a derivative work.”
The AI Registration Guidance document notes that “a human may select or arrange (or modify, as a later section notes) AI-generated material in a sufficiently creative way that ‘the resulting work as a whole constitutes an original work of authorship.’”
The report notes that a human-authored text for a comic book filled with AI-generated images would be protected as a whole because “the work is the product of creative choices with respect to the selection of images that make up the work and the placement and arrangement of the images and text on each of the work’s pages.”
Two years ago, we created a Dungeons & Dragons spread that included images generated by ChatGPT in the style of D&D illustrations. This was our attempt to bring theater of the mind into reality but without having the resources to pay an artist or do the work justice in house. However, we didn’t just slap it onto the page as-is and call it a day.
We used ChatGPT’s microadjustment function on images to tweak the output based on feedback from the students whose characters were depicted. We ran the cutouts through Photoshop to change color choices, add depictions of magic and remove accessories that the sources said weren’t part of their vision. Moreover, we paired the work with quotes from the students about funny stories their characters experienced, a mini-photo collection of the players IRL and pull quotes from the club’s founder near another mod of players combating the stereotypes of D&D players.
While layouts are not copyrightable in this way, we argue that this constitutes an injection of human creativity that meets the same standard.
When it comes to text, the report suggests that uploading original work, such as “a story written in the first person and instructing the system to convert it to a third-person point of view,” would not break its copyrightability.
What other concerns will we have with AI in the future?
Image Protection
One area where I expect to see increased use over the next decade, or sooner, is image creation.
In February 2025, an AI-generated image received the first copyright protection after Invoke founder Kent Keirsey argued that its creation — which used generative fill similar to that found in Adobe Photoshop — was more akin to digital graphic creation.
However, the Copyright Office said definitively in January 2025 that text-to-image creations are not protected due to a lack of sufficient “human contribution to warrant copyright protection.”
“As described above, in many circumstances these outputs will be copyrightable in whole or in part — where AI is used as a tool, and where a human has been able to determine the expressive elements they contain,” the report notes. “Prompts alone, however, at this stage are unlikely to satisfy those requirements.”
Plus, while photographers and staffs may have watermarked their photos, generative AI has the potential to zap watermarks away faster than ever.
Image Creation
The issue of image creation tiptoes close to the right-of-publicity problem of name, image and likeness.
After 10,000 public comments on the issue and more than a year of research, the Copyright Office released a 72-page report in July 2024 that reached the same conclusion: the current set of laws was just not good enough. It called for federal protections against digital replicas, which the office defines as “the use of digital technology to realistically replicate an individual’s voice or appearance.” The big reason? Deepfakes.
However, school publications can inadvertently wade into a name, image and likeness AI quagmire by creating images or illustrations for use in their products. AI is built on training data that includes between 6 million and 14 million images, depending on the platform.
If they create an image of a niche person in an overly specific situation, it may wade too close to a copyrighted image and, as was the case with Shepard Fairey’s Hope poster, end up in legal hot water, either for copyright reasons or because it uses someone’s name, image or likeness without consent and outside of fair use law. (I find this unlikely, but it’s certainly possible.)
Additionally, there’s always the issue of students using Photoshop’s generative fill function to cover up missing information or distractions in a way that not only violates ethics but potentially wanders into false-light tort territory.
Quote Fabrication
More and more, I’ve seen students throw captions, stories or headlines into Grammarly or chatbots to “clean them up.” When they do, and without knowledge of proper prompt writing, the quotes get rearranged and, often, meaning changes.
For example, I had a student once turn in a story for yearbook. Both the editor and I read the story, gave feedback, returned the edits to the reporter and continued the publication cycle. As I read the story that wound up in the yearbook, my jaw hit the floor: What I was reading was nothing like the “final draft” that was edited.
The quotes were wildly manipulated compared to the very natural ones from before. The student, whose first language was Spanish, had a difficult time making the corrections and, instead of telling us that, dumped the story into ChatGPT and replaced the finished edits with the ones the AI created.
Not only was it an ethical violation, it wasn’t even good. The quotes even misconstrued part of the story. The mangled facts happened to be benign, so we dodged a bullet — but only because a scared student who went behind everyone’s back to make the story “better” got lucky. What would otherwise likely have been an award-winning story became something I couldn’t even submit.
Student Crackdowns & Privacy
Figure out which way the wind blows. Then, wait a few minutes for the direction to change. Such is the way we figure out exactly how states and the federal government will poke around student protections.
Currently, students aren’t banned from generative AI use on the state or national level, but districts have policies as patchy as beards on the fledgling teens they serve. One thing to guard against is staff structures that become dependent on generative AI practices.
The same applies to teachers and workflows. There is growing concern about FERPA and other privacy laws when it comes to using these tools in the workplace. Follow your district’s guidance because, ultimately, they’re the ones cutting your paycheck and the first you’ll answer to.
Overall
Generative AI should be a primary focus for scholastic journalism programs in terms of experimentation with efficiencies, brainstorming, staff management, planning, training and other needs that often drag our attention away from our more creative endeavors. We have the opportunity to finally get ahead of the curve the way our print news ancestors failed to do when the internet came into vogue.
However, we have to realize that human creativity is an integral part of protecting intellectual property, that AI chatbots may inadvertently produce copyrighted works, and that a number of concerns are still making their way through the courts of federal law and public opinion.