Artificial intelligence, the First Amendment and democracy

[Image: Hands holding journalism tools emerging from a phone on a purple background, promoting Constitution Day 2024, Sept. 17.]

Overview

This lesson explores the implications of artificial intelligence for the First Amendment, focusing on freedom of speech and of the press. With AI technologies evolving rapidly, students will critically analyze how AI might influence public discourse, truth and accountability. Students will also explore the potential legal ramifications of AI-generated content and the challenges it poses to traditional First Amendment protections.

Age range: Middle to High School

Objectives

  • Understand the role of the First Amendment in protecting freedom of speech and the press.
  • Explore how emerging AI technologies, such as generative AI, challenge traditional First Amendment protections.
  • Debate the potential benefits and risks of AI-generated content, particularly in journalism.
  • Evaluate the role of lawmakers and courts in regulating AI’s impact on free speech.

Key concepts

  • AI in expressive uses: Understand that AI, especially generative AI (ChatGPT, Stable Diffusion, etc.), is used for creative expression and content generation, implicating First Amendment protections.
  • Freedom of expression and AI: AI helps people create content, including news, images and art. Courts have recognized computer code as a form of speech, and some scholars argue that similar protection may extend to AI outputs — a question courts are still working out.
  • Emerging legal challenges: AI’s expressive uses pose questions about who is liable for AI-generated content and how exceptions to the First Amendment (e.g., defamation, incitement) apply to AI.

Activities

1. Debate: “Does AI Support or Undermine Free Speech?”

  • Structure: Divide students into two groups. One group will argue that AI enhances free speech by democratizing content creation and giving more voices a platform. The other group will argue that AI undermines free speech by amplifying misinformation and disinformation through tools like deepfakes.
  • Preparation: Each group will research recent examples of AI-generated content, including potential benefits (e.g., AI assisting in journalism) and challenges (e.g., AI spreading misinformation).
  • Debate Focus:
    • Does AI create more opportunities for free expression, or does it overwhelm the public with false information?
    • Should AI be treated like previous technologies (e.g., TV, the internet) under the First Amendment?
    • Is AI-generated speech different from human-generated speech?

2. Case Study Analysis: AI, Deepfakes and Misinformation

  • Introduction: Present students with a case study involving AI-generated deepfakes during an election campaign. The case will include a deepfake video falsely portraying a politician making inflammatory remarks. Discuss the legal and ethical implications of this scenario.
  • Activity: Students will analyze the case, focusing on:
    • Who should be held accountable—the creators of the deepfake or the platforms hosting it?
    • How does this situation challenge existing First Amendment protections regarding defamation and misinformation?
    • What role should journalists play in verifying the authenticity of AI-generated content?
  • Reflection: Students will write a brief essay analyzing the case from a First Amendment perspective, considering how the courts might rule and how lawmakers might regulate the use of deepfakes.

3. AI Content Creation and Media Literacy

  • Overview: In this activity, students will create both ethical and potentially misleading AI-generated content using tools like ChatGPT or DALL·E.
  • Part 1: Ethical Content Creation: Students will generate a news article or piece of artwork on a neutral topic using an AI tool. They will discuss how AI can be responsibly used to enhance journalism and media creation.
  • Part 2: Misinformation Simulation: Students will use AI to create content that could be used to deceive or misinform the public (e.g., a misleading headline or manipulated image). Afterward, students will critique how this content could influence public discourse and brainstorm strategies to combat AI-generated misinformation.
  • Media Literacy Component: The class will collectively analyze how to distinguish between genuine and AI-generated content and the ethical responsibilities of media consumers and producers in the AI age.

Discussion prompts

  • Should AI-generated content enjoy the same First Amendment protections as human-created content?
  • How do AI technologies like deepfakes and generative models affect the integrity of journalism?
  • Should the government play a more active role in regulating AI technologies that can be used to generate speech?
  • How do current legal exceptions to the First Amendment (e.g., defamation, incitement) apply to AI-generated content?

Extension activity

Mock Trial: Host a mock trial where a generative AI tool (e.g., ChatGPT) is on trial for generating a defamatory news article. Assign students the roles of the AI’s developers, the plaintiff (person defamed), the judge, and the defense lawyers. The trial will help students understand how liability and First Amendment protections might play out in real-world scenarios involving AI.

Assessment and reflection

  • Essay: Students will write an essay reflecting on how AI challenges current interpretations of the First Amendment and proposing potential legal frameworks for balancing free expression with the need to regulate harmful AI content.
  • Group Presentations: Each group will present their debate findings or case study analysis to the class, promoting collaborative learning and deeper discussion about AI and the future of free speech.

Written by: Mark Dzula, SPRC committee