
Decoding Large Language Models and Generative AI for the Jury | Episode 85

10.29.25

In the defining era of generative AI litigation, trial teams must be prepared to explain the “black box” of large language models (LLMs) to juries who will shape precedent-setting cases across the US.

In this IMS Insights Podcast episode, Senior Client Success Advisor Adam Bloomberg sits down with Jury Consultant Liz Babbitt and LLM Training Expert Devon Madon, PhD, to explore how trial lawyers can make complex AI concepts clear and relatable in court.

Together they discuss:

  • The evolving landscape surrounding AI, fair use, and copyright disputes
  • How to define large language models for jurors and simplify technical AI concepts
  • Strategies for addressing common misconceptions about AI

This conversation offers valuable guidance on bridging the gap between advanced technology and everyday understanding, helping your team build more credible, compelling narratives in AI-related litigation.

Watch the original LinkedIn Live recording here.

Find additional insights in Liz and Devon’s article, Demystifying Generative AI for the Modern Juror, published by Law360.

Adam Bloomberg:
Okay, here we go. Thanks a lot, Tiffany. With all the buzz around AI and the legal challenges it's facing, it's important to get a clear picture of what's really happening. Devon, could you start by giving us an overview of the current legal landscape involving AI in copyright cases?

Devon Madon:
Yeah, thank you so much, Adam—and please, just call me Devon. As Liz and I have been exploring, we’re standing right at the brink of a new legal era. The core of this landscape involves a multi-front legal challenge from content creators against the developers of generative AI.

Essentially, authors, photographers, and other creators are alleging that large language model companies have used their work without permission on a massive scale to train their models. The key cases we’re following are The New York Times v. OpenAI and Microsoft, and Getty Images v. Stability AI.

These remain unresolved, but the central question is: what constitutes fair use under copyright law? What’s most important is being able to explain the inner workings of large language model training to non-technical audiences like judges and juries.

Adam Bloomberg:
Liz, as the jury consultant in the room, why is it important for trial lawyers to begin mastering these AI concepts now instead of waiting for more case law?

Liz Babbitt:
AI is moving really fast, and law students coming out of school are already using these tools. They're being taught classes on prompting and prompt analytics. It's important that those of us who’ve been in the field for a while start using these tools and understanding the space now. Today’s cases are shaping AI jurisprudence for decades to come. Walking into court unprepared for how these tools are used will only cause anxiety and make teams appear behind the curve. Early mastery will allow us to shape history, not just react to it.

Adam Bloomberg:
Devon, let’s talk about the jury. It’s often said that juries bring a wide range of educational backgrounds and life experiences to the courtroom. How would you explain large language models to, say, an eighth-grade science class?

Devon Madon:
Absolutely. Imagine a large language model as being trained on a massive library—a huge database of language drawn from the internet, books, and other text. Even if you think you haven’t used one, you probably have. Spell check and predictive text are both powered by large language models. These systems don’t truly generate new material—they predict what words are most likely to come next based on patterns in the data they’ve been trained on.

Adam Bloomberg:
Now, that’s a great expert-level explanation. Liz, can we simplify that even more?

Liz Babbitt:
Sure. These tools basically read enormous amounts of text online and learn how people speak. They then use that information to predict what words come next—essentially, they’re really good at guessing the next best word.

Adam Bloomberg:
Devon, let’s try that again—define large language models like you would for a jury.

Devon Madon:
A large language model is an algorithm trained on massive datasets—mostly from the internet and published sources. It’s a pattern-recognition machine that associates language with other language based on how often they appear together.

Even if you don’t think you’ve used AI text, you have. Spell check, predictive text—those are simple versions. Generative AI isn’t “creating” from scratch—it’s drawing from a vast library of text to predict what comes next statistically.
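To ground that definition, here is a minimal sketch of the “next best word guesser” idea in Python. It is a toy bigram model, an illustrative assumption rather than how production LLMs actually work (those use neural networks trained on vast corpora), and the training text and function name here are hypothetical.

```python
from collections import Counter, defaultdict

# Toy illustration of the pattern-matching Devon describes: count how
# often each word follows another in training text, then "generate" by
# picking the most frequent continuation. Hypothetical example; real
# LLMs are neural networks, but the statistical intuition is similar.

training_text = (
    "the court heard the case and the court issued the ruling "
    "and the court dismissed the claim"
)

# Build a table of next-word frequencies (a bigram model).
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen in training, if any."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))    # -> 'court' (follows 'the' most often)
print(predict_next("heard"))  # -> 'the'
```

Scaled up from one sentence to a library’s worth of text, this counting-and-predicting pattern is the intuition behind the metaphors the guests use throughout the conversation.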

Adam Bloomberg:
That’s a solid example. Liz, how would you simplify it even more for jurors?

Liz Babbitt:
I’d say these are massive libraries that have learned how people speak. They’re not intelligent—they’re “next best word guessers.” They seem magical, but really, they’re just pattern matchers like the old Clippy assistant or sensor-based systems we already use every day.

Adam Bloomberg:
Devon, with AI evolving so fast, how can we ensure juries fully comprehend these complex models in copyright cases?

Devon Madon:
You’re right—it’s a moving target. You can’t turn jurors into computer scientists, or they’ll experience cognitive overload and tune out. The goal is translation—breaking down huge concepts into manageable chunks and tying them into a consistent narrative or metaphor they can relate to.

Adam Bloomberg:
Devon, as a testifying expert, what’s the densest concept you’ve had to simplify for an audience?

Devon Madon:
In the classroom, critical thinking is a huge one. But with juries, explaining how models are trained can be just as complex. That’s where language and metaphors matter—turning technical terms into tangible, relatable concepts.

Adam Bloomberg:
Liz, what’s the biggest misconception jurors have when they hear “AI”?

Liz Babbitt:
That AI is intelligent or “knows” things. It doesn’t—it has no awareness, no truth, no feelings. In copyright litigation, jurors may attribute moral judgment to machines, which can distort how they assess fair use or harm. Reinforcing that AI is just a sophisticated pattern matcher—not a thinking entity—is critical.

Adam Bloomberg:
You’ve mentioned comprehension—can you explain split attention and cognitive overload?

Liz Babbitt:
Sure. People naturally simplify complex information into stories that fit their worldviews. Jurors do the same. That’s why trial teams need to use analogies and frameworks that connect to jurors’ everyday experiences—it helps them process technical information without becoming overwhelmed.

Adam Bloomberg:
The “art student in the museum” analogy from your article is powerful. How did you land on that?

Liz Babbitt:
Unlike mechanical analogies, this one uses human experience. The art student looks at images to learn general principles, not memorize details. Then she applies that learning to create her own art—similar to how AI uses patterns rather than copying exact works. AI perceives texture, color, and patterns—the building blocks—not the gestalt. That’s the key difference between how humans and AI “create.”

Adam Bloomberg:
Love that example. Devon, last question—how do you see the role of experts evolving as AI advances?

Devon Madon:
Five years is a long time in AI, but we’re already seeing a shift from generative AI to agentic AI—systems that don’t just respond but take action. That raises new legal and ethical challenges: responsibility, data privacy, safety, and bias. These issues will only become more complex as AI systems start acting independently.

Adam Bloomberg:
Thank you both for joining me today. I look forward to continuing this conversation after a few more cases and another article or two.

Devon Madon:
Great to spend time with you. Thanks, Adam.

Liz Babbitt:
Thanks, guys.

