A copyright mess or a valuable collaborator?
For visual artists, authors and musicians, generative artificial intelligence can be both.
Creative work is being produced quickly and cheaply, thanks to AI systems that learn from human-made material and generate text, images and other media.
But the drawbacks include work generated without compensating or crediting the creators of the source material that’s fed into an algorithm to teach the software patterns and structures.
Travis Moore, co-founder of Broad Ripple’s Round Table Recording Co., writes music for TV and film. He said AI opens the door for widespread copyright infringement.
“Sometimes the technology is awesome to have,” said Moore, who’s worked as an engineer on recordings by rappers DMX and Jadakiss. “But if you’re using it incorrectly or not caring about who originally created the work, it can become a copyright mess.”
Meanwhile, critics say AI can’t offer original perspectives when it’s brought into the arts: its outputs are only recombinations of its inputs.
The technology’s future is front and center in the ongoing Hollywood strikes by writers and actors. Writers don’t want to lose work to machines, while actors don’t want to lose control of the way they’re depicted on screen.
Robert Negron, founding artistic director of arts organization Indy Convergence, is writing a play that explores the pros and cons of AI.
“We’re kind of playing with this tiger in a cage that we’re going to be stuck with when it gets bigger,” Negron said. “But, in the meantime, some really wonderful things are happening.”
One way Negron has used AI represents the technology’s double-edged attributes. By prompting AI to create a logo for Indy Convergence, he saved time and money. Meanwhile, an opportunity for a professional graphic designer was missed.
“I would always reach out to an artist to help me with that,” Negron said. “But I’m able to mess around on my own for hours and hours that I couldn’t pay for.”
The sound of science
If someone wants to steer clear of copyright infringement when making music with generative AI, original source material needs to be either cleared for use or in the public domain. For reference, the first sound recordings entered the U.S. public domain on Jan. 1, 2022, and that batch covers recordings made before 1923.
Moore said prompts for a piece of music’s mood or style can do only so much.
Music generated with AI is “not competitive with music that’s being put out today,” Moore said of this approach. “It sounds dated.”
Rule-breakers who clone the voice of an existing artist or use unauthorized instrumental samples have a better chance of competing with music made without AI, Moore said. But copyright protections make it unlikely the “music industry will be defeated with a robot,” he said.
Matt Pelsor, morning host at radio station WTTS-FM 92.3, searches the internet for songs that feature cloned voices “covering” the work of another artist. He presents one per week on a Tuesday feature titled “Artificial Alternative.”
The novelty tunes provide an example of what’s possible when a career’s worth of music is fed to AI. Pelsor has featured approximations of Kurt Cobain singing a Weezer hit and Paul McCartney tackling work by Oasis.
“You look in the YouTube comments, and you see people saying, ‘Aw, the robots will never take over. This is terrible,’” Pelsor said. “Or sometimes it’s the opposite: ‘Oh, this is scary good.’”
One of Pelsor’s favorite curiosities is Green Day’s “Good Riddance” as if Bob Dylan were the singer. Pelsor said he appreciates it because Dylan’s one-of-a-kind cadence and vocal tone are more distinctive than what he’s heard in other AI experiments.
He said he’s wary of prompts in which AI is instructed to write a new song or album in the style of an artist.
“What will that mean for actual artists who are trying to make a living doing this as human beings?” Pelsor said. “That’s the economic and societal implication that I’m concerned with. I hope we reach a point where this is being talked about more so that music fans know that this is a thing that’s out there.”
In his work, Moore said, AI-powered software makes it possible to extract a specific instrument or vocal from a recording that’s already mixed. He named Massachusetts-based iZotope Inc. and its Music Rebalance plug-in as favorites when unlocking previously impenetrable pieces of music.
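Tools like Music Rebalance rely on machine-learning source separation, which is far beyond a few lines of code. But a much older, cruder cousin of the idea — classic center-channel cancellation — shows in miniature why “unmixing” a finished recording is possible at all: a vocal mixed identically into both stereo channels disappears when one channel is subtracted from the other. The sketch below (plain NumPy, synthetic tones, not a representation of how iZotope’s plug-in works) is offered only as an illustration of that principle.

```python
import numpy as np

# Toy illustration of center-channel cancellation, a crude precursor to
# modern ML source separation. A "vocal" panned dead center appears
# identically in both channels; a "guitar" is panned hard left.
sr = 8000                                   # sample rate (Hz)
t = np.arange(sr) / sr                      # one second of time samples
vocal = np.sin(2 * np.pi * 220 * t)         # center-panned tone ("vocal")
guitar = 0.5 * np.sin(2 * np.pi * 330 * t)  # left-panned tone ("guitar")

left = vocal + guitar                       # left channel: vocal + guitar
right = vocal.copy()                        # right channel: vocal only

# Subtracting the channels cancels anything common to both (the vocal),
# leaving only the side-panned material (the guitar).
side = left - right

print(round(float(np.max(np.abs(side - guitar))), 6))  # prints 0.0
```

Real recordings defeat this trick — reverb, stereo effects and mastering smear every part across both channels — which is why isolating a single instrument took machine learning to become practical.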
Matt Panfil, co-founder of music venue and art space Healer, is preparing to teach a course on AI visual art at IUPUI’s Herron School of Art and Design. Part of the class will be devoted to ethics and the technology’s history, while part will be devoted to working with generative AI.
“Like any new tool, I think there’s no living in fear or ignoring it,” Panfil said. “That’s why I want to teach this class at Herron. … AI is not going to go away, so artists have to learn how to live with it.”
Panfil, who also teaches conventional photography courses at Herron, said he believes art created using the emerging technology deserves its own classification or a new descriptive term. Pieces created by text-to-image AI generators aren’t far removed from the concept of collage art, he said.
“A word I’ve heard tossed around in the community is ‘synthogram,’” Panfil said. “I like calling it an entirely new thing rather than ‘AI art.’ Synthogram is cool. It’s synthesized imagery.”
Panfil said he initially experimented with algorithmic art nearly a decade ago, using Google’s DeepDream program. Today, he employs the Midjourney AI art generator and suggests prompts as general as “magical” or as specific as “wide cinematic lens.”
It’s detrimental, Panfil said, when names of artists are used as prompts and the technology draws on existing works without the artists’ knowledge or permission.
“It gets into ethically dubious territory when you can type in the keyword of any artist living or dead,” he said. “It will pull their art from its database of images. Everything is online, and it’s a negative for up-and-coming artists.”
Panfil said he enjoys the dynamics of a human-machine collaboration.
“I prefer generating AI images that are based either on my own artwork or installations,” Panfil said. “I’m working on a Healer board game based on AI-generated interpretations of photographs. It makes it much more unique. You can constrain how much it’s altered by the prompt that you give it. By giving it shorter prompts, you give it less room to spit out something that’s really abstracted.”
Stewart Matthews, a North Central High School alum who’s written more than a dozen thriller and mystery novels, said he used ChatGPT 3.5 to help him with a yet-to-be-published book.
“I used it to model some opening lines for me,” Matthews said.
After crafting an unsatisfactory first draft, Matthews asked ChatGPT to rework the line and provide three variations.
“It wasn’t, ‘Oh, hey, this one’s perfect. I’ll plug it right in,’” Matthews said. “It didn’t work like that. But it gave me some interesting sentence structures that I iterated upon and did my own thing with. I think I got a pretty decent opening line out of it.”
A mirror of humanity
Matthews, author of the Florida-based “Marsen Mysteries” and the Chicago-based “Detective Shannon Rourke” series, said he considers his opening-line exercise to be a collaboration between human and computer.
Overall, he describes himself as an AI skeptic and said the technology has yet to prove itself as a leap forward for literature.
“In longer works, AI doesn’t know how to do foreshadowing,” Matthews said. “It doesn’t know how to redeem a character. It doesn’t understand that if you mention a gun in Act I, that gun is going to come back in Act III. It doesn’t understand the importance of tracking those longer-term creative goals when you’re doing a work like that.”
Moore, co-founder of Round Table Recording Co., said he’s thought about a future in which prompts are inspired by not only musicians but also behind-the-scenes technicians such as himself.
“Are you going to profile all the best mix engineers and mastering engineers, and then put that stuff in the algorithm? Think about the industry and what we’ve all worked so hard to do for many years,” Moore said. “That’s where I think AI is definitely on notice for the music industry. But it’s not something I’m losing sleep over.”
Panfil said he’s seen AI described as the atom bomb of the mind.
“It can be used really destructively,” Panfil said. “It can also be used really creatively. It’s just mirroring humanity and spitting out an amalgamation of what we put into it. It’s not inherently good or bad. It’s evolution, and its potential is up to us.”
Negron, Indy Convergence’s founding artistic director, said his work-in-progress play explores the theory that humans will evolve through science and technology. The play’s title is “Growing Human.”
It’s an artist’s obligation, he said, to examine what’s around the corner for society.
“If you’re going to engage with AI, engage with it about the ethics of what happens when it gets to this next step,” Negron said. “We need to take care of it like you take care of a child, because it’s growing. We need that relationship and compassion. When a bunch of automation starts happening, and people are out of work and lashing out against AI, how does that affect our relationship?”•
Editor’s note: Reporter Dave Lindquist hosts a weekly show on radio station WTTS-FM 92.3, where Matt Pelsor, one of the people quoted in this report, works.