A federal court judge has thrown out expert testimony from a Stanford University artificial intelligence and misinformation professor, saying his submission of fake information made up by an AI chatbot “shatters” his credibility.
In her written decision Friday, Minnesota district court Judge Laura Provinzino cited “the irony” of professor Jeff Hancock’s mistake.
“Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI — in a case that revolves around the dangers of AI, no less,” the judge wrote.
More irony: Hancock, a professor of communications, has studied irony extensively.
Hancock, founding director of the Stanford Social Media Lab, was hired by the Minnesota Attorney General’s office to provide a sworn expert declaration defending the state’s law criminalizing election-related, AI-generated “deepfake” images against a lawsuit by a state legislator and a satirist YouTuber. California approved a similar law last fall.
YouTuber Christopher Kohls, who sued Minnesota, also sued — alongside Elon Musk’s X social media company — California Attorney General Rob Bonta over California’s law, and a judge temporarily blocked it in October. Provinzino last week declined to issue a similar block requested by Kohls and the Minnesota lawmaker.
In November, attorneys for Kohls and the legislator told the Minnesota court that Hancock’s 12-page declaration cited “a study that does not exist,” authored by “Huang, Zhang, Wang” and likely “generated by an AI large language model like ChatGPT.”
In December, Hancock admitted in a court filing that he had used ChatGPT, blamed the bot for that error and two other AI “hallucinations” he had subsequently discovered in his submission, and apologized to the court.
He had used ChatGPT 4.0 to help find and summarize articles for his submission, but the errors likely occurred because he inserted the word “cite” into the text he gave the chatbot, to remind himself to add academic citations to points he was making, he wrote. The bot apparently took “cite” as an instruction and fabricated citations, Hancock wrote, adding that the bot also made up four incorrect authors for research he had cited.
Hancock, a prolific, high-profile researcher whose work has received some $20 million in grant support from Stanford, the U.S. National Science Foundation and others over the past two decades, charged $600 an hour to prepare the testimony the judge tossed, according to court filings.
Judge Provinzino noted that Minnesota Attorney General Keith Ellison was seeking to introduce in court a version of Hancock’s testimony with the errors removed, and she said she did not dispute Ellison’s assertion that the professor was qualified to present expert opinions about AI and deepfakes.
However, the judge wrote, “Hancock’s citation to fake, AI-generated sources in his declaration — even with his helpful, thorough, and plausible explanation — shatters his credibility with this Court.”
At a minimum, Provinzino wrote, “expert testimony is supposed to be reliable.”
Such errors cause “many harms,” including wasting the opposing party’s time and money, the judge wrote.
The Minnesota Attorney General’s office did not respond to questions, including how much Hancock billed and whether the office would seek a refund.
Hancock did not respond to questions.
At Stanford, students can be suspended and ordered to do community service for using an AI chatbot to “substantially complete an assignment or exam” without instructor permission. The university has repeatedly declined to answer questions, as recently as Wednesday, about whether Hancock would face disciplinary measures.
The professor’s legal smackdown highlights a common problem with generative AI, a technology that has taken the world by storm since San Francisco’s OpenAI released its ChatGPT bot in November 2022. Chatbots and AI image generators often “hallucinate,” which in text can involve creating false information, and in images, absurdities like six-fingered hands.
Hancock is not alone in submitting a court filing containing AI-generated errors. In 2023, attorneys Steven A. Schwartz and Peter LoDuca were fined $5,000 each in federal court in New York for submitting a personal-injury lawsuit filing citing fake past court cases invented by ChatGPT.
With chatbot use spreading fast in many fields, including the legal profession, Provinzino in her ruling sought to turn Hancock’s imbroglio into a teachable moment.
“The Court does not fault Professor Hancock for using AI for research purposes. AI, in many ways, has the potential to revolutionize legal practice for the better,” the judge wrote.
“But when attorneys and experts abdicate their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers, the quality of our legal profession and the Court’s decisional process suffer.
“The Court thus adds its voice to a growing chorus of courts around the country declaring the same message: verify AI-generated content in legal submissions!”