An influential federal judicial panel on Friday advanced a proposal to control the admissibility of artificial intelligence-generated evidence at trial. The judges who serve on the panel, which acts as a rulemaking committee for the Judicial Conference of the United States, said they are concerned about how rapidly evolving AI technology could affect the U.S. legal system.
Meeting in Washington, D.C., the U.S. Judicial Conference's Advisory Committee on Evidence Rules voted 8-1 to seek public comment on a proposed rule designed to ensure that evidence generated by artificial intelligence meets the same reliability standards that apply to human expert witnesses in federal trials.
The proposed rule is meant to address concerns about the reliability of the processes AI systems use to make predictions or draw inferences from existing data, concerns that mirror longstanding questions courts have grappled with over the reliability of expert witness testimony.
Expert witnesses who rely on such technology already have their testimony scrutinized for reliability under the Federal Rules of Evidence. But the existing rules do not address situations in which a non-expert witness uses an AI program to create evidence without any knowledge of its reliability.
Under the proposal, AI-generated or other machine-generated material offered at trial would be held to the same reliability standard that applies to expert witnesses. That could prove consequential: many machine learning models cannot currently demonstrate the reliability that standard demands, so cases could turn out quite differently once such evidence is subject to that scrutiny.
The proposal now moves to the Judicial Conference's Committee on Rules of Practice and Procedure, the top rung of the federal judiciary's rulemaking process, which will vote at its June meeting on whether to publish the proposal for public comment.
Several judges expressed doubts about whether such a rule should ultimately be adopted. Elizabeth Shapiro, a representative of the U.S. Department of Justice, cast the lone vote against it.
Still, committee members broadly agreed that they should at least publish a draft rule for public comment. Doing so would allow them to gather more information and help ensure that the normally years-long rulemaking process does not leave the judiciary unable to keep pace with rapidly evolving technology.
U.S. District Judge Jesse Furman, who is based in Manhattan and chairs the panel, said he was unsure whether he would ultimately support finalizing the rule but was "genuinely interested in what the world has to say about this thing."
"Sometimes when you put something out for notice-and-comment, it seems people assume it's a train moving forward to final approval. In this case, though, I think there are a lot of questions that we need to work through before we get anywhere near that."
The proposal comes as federal and state courts across the country try to get a handle on the new technology, including how to govern programs such as OpenAI's ChatGPT, which can learn from large datasets and then generate text, images, and even video.
In his 2023 year-end report, released in December, U.S. Supreme Court Chief Justice John Roberts discussed the potential benefits of AI for judges and litigants. While he recognized artificial intelligence's potential to assist the courts, he also cautioned that its use in the judiciary must be thoroughly vetted. "The relatively recent advent of AI technology calls for the same careful consideration of its proper uses that we have given to all manner of technological tools over the years," Roberts wrote.