Stanford misinformation expert used AI in court testimony, leading to fake citations controversy

Stanford Expert Admits to Using AI in Court Testimony, Cites Fake Sources

In a surprising twist that has sparked debate about the role of artificial intelligence in professional settings, a Stanford University expert on misinformation has admitted to using AI to draft a court document that included fabricated citations. The expert, Jeff Hancock, submitted the document as part of a legal case challenging a Minnesota law aimed at preventing the use of AI to mislead voters before elections. The revelation has raised questions about the ethical and practical implications of relying on AI in high-stakes legal proceedings.

The Case: AI and Election Misinformation

The controversy centers on a new Minnesota law that criminalizes the use of AI to deceive voters in the lead-up to elections. Hancock, a prominent figure in the field of misinformation and technology, was brought in as an expert witness to provide a declaration supporting the law. However, lawyers from the Hamilton Lincoln Law Institute and the Upper Midwest Law Center, who are challenging the law on First Amendment grounds, discovered that the document contained multiple fake citations. They subsequently petitioned the judge to dismiss Hancock’s declaration.

Hancock, who charged the state of Minnesota $600 per hour for his expertise, acknowledged that the errors likely occurred while using ChatGPT-4o, a generative AI tool, to assist in drafting the document. According to a filing by the Minnesota Attorney General’s Office, Hancock stated, “Professor Hancock believes that the AI-hallucinated citations likely occurred when he was using ChatGPT-4o to assist with the drafting of the declaration.” He further emphasized that he “did not intend to mislead the Court or counsel by including the AI-hallucinated citations in his declaration.”

Attorney General’s Office Responds

The Attorney General’s Office claimed it was unaware of the fabricated citations until the opposing lawyers brought them to light. In response, the office has requested that Hancock be allowed to resubmit his declaration with corrected citations. A representative for the office declined to comment further, citing the ongoing legal proceedings.

AI in Legal and Academic Contexts

In a separate filing, Hancock defended his use of AI, arguing that generative AI tools like ChatGPT are widely used in academic and professional settings. He pointed out that such tools are increasingly integrated into everyday software like Microsoft Word and Gmail, making them accessible to a broad audience. “ChatGPT is web-based and widely used by academics and students as a research and drafting tool,” Hancock noted.

However, this is not the first time the use of AI in legal contexts has come under scrutiny. Earlier this year, a New York court handling wills and estates ruled that lawyers have “an affirmative duty to disclose the use of artificial intelligence” in expert opinions. The court dismissed an expert’s declaration after discovering that Microsoft’s Copilot AI had been used to verify calculations. Similarly, other legal professionals have faced sanctions for submitting AI-generated documents containing false information, as reported by Reuters.

How the Errors Occurred

Hancock explained that he used GPT-4o, the large language model behind ChatGPT, to review academic literature on deepfakes and to draft much of his declaration. He described prompting the AI to generate paragraphs discussing various arguments about artificial intelligence. According to Hancock, the program likely misinterpreted notes he had left for himself to add citations later. “I did not mean for GPT-4o to insert a citation,” he wrote. “But in the cut and paste from MS Word to GPT-4o, GPT-4o must have interpreted my note to myself as a command.”

Hancock’s Credentials and Past Work

Jeff Hancock is a nationally recognized expert on misinformation and technology. In 2012, he delivered a widely viewed TED Talk titled “The Future of Lying.” Since the release of ChatGPT in 2022, he has authored at least five academic papers on AI and communication, including “Working with AI to Persuade” and “Generative AI Are More Truth-Biased Than Humans.”

Hancock has also served as an expert witness in at least a dozen other court cases. However, he declined to answer questions about whether he used AI in those cases, how many hours he has billed the Minnesota Attorney General’s Office, or whether the office was aware of his use of AI in drafting the declaration.

Criticism and Ethical Concerns

The revelation has drawn criticism from legal experts and opposing counsel. Frank Bednarz, a lawyer with the Hamilton Lincoln Law Institute, stated, “Ellison’s decision not to retract a report they’ve acknowledged contains fabrications seems problematic given the professional ethical obligation attorneys have to the court.”

The incident highlights the growing tension between the convenience of AI tools and the ethical responsibilities of professionals who use them. While AI can streamline tasks like drafting documents and conducting research, its potential to generate false or misleading information poses significant risks, particularly in legal and academic settings.

Key Takeaways

  • Jeff Hancock, a Stanford expert on misinformation, admitted to using AI to draft a court document that included fake citations.
  • The case involves a Minnesota law aimed at preventing the use of AI to mislead voters before elections.
  • Hancock charged $600 per hour for his services and attributed the errors to “AI hallucinations” from ChatGPT-4o.
  • The Minnesota Attorney General’s Office has requested permission for Hancock to resubmit the document with corrected citations.
  • The incident raises broader questions about the ethical use of AI in professional and legal contexts.

As AI continues to evolve and integrate into various aspects of society, incidents like this serve as a reminder of the importance of transparency and accountability. Whether in the courtroom, the classroom, or the workplace, the use of AI must be carefully managed to ensure its benefits do not come at the expense of accuracy and integrity.

Originally Written by: Deena Winter
