AI Ethics in the Classroom, Part 1: Intellectual Property
This series provides an introduction to specific ethical issues in AI and example learning activities to help instructors and students make thoughtful decisions about how they engage with generative AI tools. If you use this activity in your teaching or have suggestions to improve it, please add a comment or complete this brief survey.
Many well-known writers have filed lawsuits against the companies behind popular generative AI tools, claiming the tools were trained on illegally obtained copies of their books. Visual artists have followed a similar path. In Andersen v. Stability AI et al., three artists sued Midjourney, DeviantArt (creator of DreamUp), and Stability AI for using their work to train image-generation tools. While a judge dismissed certain claims, other lawsuits are working their way through the courts, and questions remain about the legality of using copyrighted work to train AI tools.
Companies like Getty Images, Adobe, and Canva are attempting to address these intellectual property concerns by providing tools that are trained on licensed images from their libraries of stock photos and illustrations.
Getty Images stated that artists will be compensated for AI-generated images that are created based on their work.
Adobe began paying artists for images used to train Firefly in late 2023.
Canva announced Canva Shield in October 2023. The announcement included a commitment of $200 million in artist compensation over the following three years. Canva also noted that the company will bear legal responsibility if customers using its AI tools are sued for copyright infringement.
Discussion Questions
Some artists have claimed that Adobe’s development of Firefly is unethical because they were not given the chance to opt out before the tool was trained on their work. Artists have also expressed disappointment over Adobe’s compensation rates, and leaked internal memos estimate that only 6% of Adobe Stock contributors will be eligible for payments above $10. Do you agree or disagree with their perspective? Why?
Should there be any regulations dictating how companies train generative AI tools or how they compensate artists/authors? If so, briefly describe one possible restriction or guideline. If not, why do you think such regulations shouldn’t be established?
Are the ethical considerations the same for AI-generated text, images, audio, and video? Is there a difference in using a copyrighted book to teach an AI tool how to write in a similar style vs. using a copyrighted image to teach an AI tool how to create images in that style? Why or why not?
What new challenges and opportunities might arise as AI tools make it possible for anyone to emulate the style of another artist? For example, singer Holly Herndon trained an AI tool on her voice, creating a “digital twin” she calls Holly+. This “twin” can sing in languages and styles Herndon isn’t familiar with and allows other singers to use her voice during live performances, as shown in this TED Talk. (The live performance begins five minutes into the video.) While Herndon authorized this type of audio “cloning” or “spawning,” other artists have used this technology to create unauthorized works, such as AllttA’s song “Savages,” which uses AI to mimic Jay-Z’s voice without his permission.