
Federal Rules of Evidence for AI

Deep fakes as litigation evidence? (Image credit: deepfake | lex.dk – Den Store Danske)

The Judicial Conference's Advisory Committee on Evidence Rules is considering changes to the rules based on the implications of artificial intelligence (AI) for evidence used in litigation. The Committee heard from an AI expert panel it assembled, including NIST computer scientists, three leaders in AI regulation, and two law professors. The panel offered opinions on possible amendments to the Evidence Rules in light of the probable impact AI will have.

In its report on artificial intelligence released June 4th, the Committee focused on two capabilities: the ability of AI systems to output evidence based on the documents they analyze (machine learning), and AI's ability to create fake photographs, audio, and video that appear real and will become ever more difficult to detect (so-called “deepfakes”), along with the potential use of those AI creations as evidence. The four central issues, according to the Committee:

  1. Machine-learning output used as evidence. The Committee is considering a new rule applying the Rule 702 reliability standards to machine-learning output, since it sees the potential problems as ones of reliability rather than authenticity. Any such rule would not cover established, machine-based data “such as thermometers, radar guns, etc.”
  2. Deepfakes and authentication. The Committee sees the problem of deepfakes as just one of forgery, “a problem that courts have dealt with … for many years,” and cautions against any special rule. At most, the Committee suggests that traditional means of human authentication (e.g., familiarity with a voice) “might need to be tweaked.” Unfortunately, the Committee seems not to have considered that AI can already duplicate voices to a high degree, so “tweaking” may prove an understatement in the years ahead.
  3. Challenging evidence as a deepfake. The Committee found that any claim that evidence is a deepfake should require some initial showing of fakery, as courts normally require for claims that digital and social-media evidence was hacked. Two of the expert panel members, former Judge Paul Grimm and AI ethics expert Dr. Maura Grossman,¹ proposed a new Rule 901 that would “require the opponent to show that it was more likely than not a fake.” The Committee found that burden likely too high, but remained open to considering a new rule. How such rules will work as AI fakes become ever harder to distinguish from reality was not addressed.
  4. Admissibility of machine-learning evidence. AI developers have long been concerned about bias in the data sets used to train machine-learning models. The Committee considered the expert panel's opinions on validation studies and how courts might review them without inquiring into source code and algorithms (a rough sketch of what such a review could look like appears after this list) — a question it left for future consideration.
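To make the idea concrete, here is a minimal, hypothetical sketch of a “black box” validation study: the reviewer supplies labeled test samples and tallies error rates, overall and by subgroup, without ever seeing the model's source code or algorithm. The model, data, and names below are illustrative assumptions, not anything drawn from the Committee's report.

```python
# Hypothetical black-box validation sketch. The model is treated as an
# opaque function: only its inputs and outputs are examined, never its
# source code or algorithm. All names and data here are illustrative.

from typing import Callable, Dict, List, Tuple

def validate(model: Callable[[str], bool],
             test_set: List[Tuple[str, bool, str]]) -> None:
    """Print the error rate per subgroup, comparing the model's
    output against ground-truth labels."""
    stats: Dict[str, List[int]] = {}  # subgroup -> [errors, total]
    for sample, truth, subgroup in test_set:
        errors, total = stats.setdefault(subgroup, [0, 0])
        # A wrong answer counts as one error for that subgroup.
        stats[subgroup] = [errors + (model(sample) != truth), total + 1]
    for subgroup, (errors, total) in sorted(stats.items()):
        print(f"{subgroup}: {errors}/{total} errors "
              f"({100 * errors / total:.1f}% error rate)")

# Trivial stand-in "model" that flags any text containing the word "fake".
toy_model = lambda text: "fake" in text
toy_data = [
    ("genuine contract scan", False, "documents"),
    ("fake invoice", True, "documents"),
    ("doctored photo", True, "documents"),      # missed by the toy model
    ("fake audio clip", True, "recordings"),
    ("authentic deposition recording", False, "recordings"),
]
validate(toy_model, toy_data)
```

In this setup, a court or neutral expert needs only the inputs, the outputs, and the ground-truth labels; markedly uneven error rates across subgroups would surface the kind of data-set bias the developers worry about, all without any inquiry into proprietary code.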

While the Committee's review of the deeply problematic issue of AI-generated material used as evidence in federal courts seems very limited, it did get to the crux of the issue in its conclusions, noting that:

  1. New rules take three years to enact;
  2. AI is a rapidly developing area, in which three years is like a lifetime;
  3. To avoid obsolescence, new rules must be general; and,
  4. General rules may be too general to be helpful.

It almost sounds like the Committee has given up before it starts. Meanwhile, the number of attorneys using ChatGPT to write their legal briefs is increasing exponentially. When might they turn to evidence cheaply generated by AI instead of paying for a human expert witness?



Note:

1 Dr. Maura R. Grossman is a Research Professor in the School of Computer Science at the University of Waterloo, where she specializes in the interdisciplinary study of technology and law, with a particular focus on AI and ethics.
