⚙️ AI in 120+ court cases

Good morning. Headlines from Dubai claim ChatGPT Plus will soon cost UAE residents nothing. Will more countries follow suit?

— The Deep View Crew

In today’s newsletter:

  • 🚌 AI for Good: Colorado turns to AI to protect kids near school buses

  • 🧠 Meta restructures AI teams to accelerate product rollouts

  • 🧑‍⚖️ AI hallucinations are quietly reshaping the courtroom

🚌 AI for Good: Colorado turns to AI to protect kids near school buses

Source: K99

Colorado is expanding its use of AI, this time to keep children safe. A new law now allows school districts to install AI-powered cameras on school buses to catch drivers who ignore stop signs and flashing lights.

What happened: Governor Jared Polis signed House Bill 25-1230 into law, giving school districts the option to deploy Stop Guard AI cameras on buses. These systems automatically detect when a driver illegally passes a stopped school bus, capture the vehicle’s license plate, and issue a $300 fine.

The law aligns with existing Colorado traffic rules, which prohibit passing a school bus when its stop sign is extended and lights are flashing. Yet many drivers still ignore those signals, putting children at risk.

School districts can choose whether to opt into the program. If they do, the revenue from fines goes directly back to the district—not a private vendor. The goal, according to lawmakers, is safety, not profit.

Why it matters: Illegal bus passing is a real threat to student safety, especially in growing communities across Colorado. AI enforcement offers a way to hold drivers accountable without requiring a police presence at every stop. Verra Mobility data showing that 96 percent of drivers support this kind of measure points to growing public confidence in AI’s role in public safety.

Colorado cities like Greeley have already begun using AI to target speeding. With this law, the state is doubling down on technology to protect its most vulnerable residents.

✂️ Cut your QA cycles down from hours to minutes

If slow QA processes are bottlenecking you or your software engineering team and you’re releasing slower because of it, you need to check out QA Wolf.

QA Wolf's AI-native service supports both web and mobile apps, delivering 80% automated test coverage in weeks and helping teams ship 5x faster by reducing QA cycles to minutes.

With QA Wolf, you get:

  • Unlimited parallel test runs

  • 15-min QA cycles

  • 24-hour maintenance and on-demand test creation

  • Zero-flake guarantee

The result? Drata’s team of 80+ engineers saw 4x more test cases and 86% faster QA cycles.

No flakes, no delays, just better QA — that’s QA Wolf.

🧠 Meta restructures AI teams to accelerate product rollouts

Source: Meta

Meta is reorganizing its AI teams to move faster. The company is forming two new groups focused on product deployment and foundational research, according to an internal memo obtained by Axios.

What happened: Chief product officer Chris Cox outlined the new structure this week. Meta’s AI efforts will now fall under two divisions. One is the AI Products team, led by Connor Hayes, which will oversee the Meta AI assistant, AI Studio, and AI tools inside Facebook, Instagram and WhatsApp. 

The other is the AGI Foundations team, co-led by Ahmad Al-Dahle and Amir Frenkel, which will handle research into the company’s Llama models and work on reasoning, voice and multimedia capabilities.

The AI research division FAIR will remain separate, although one multimedia team is transitioning into the AGI Foundations group. No executives are exiting, and no layoffs were announced. Some internal leaders are shifting roles as part of the update.

Why it matters: Meta is under pressure to compete with OpenAI, Google and ByteDance in the race to commercialize generative AI. By splitting its AI group into smaller, focused teams, Meta aims to accelerate development, streamline decision-making and improve coordination as it scales.

The memo also acknowledges internal friction. Cox said the new setup is designed to increase team ownership and clarify dependencies. Talent retention remains a concern, with key departures including engineers leaving for competitors like Mistral.

Meta previously restructured its AI division in 2023 with a similar goal. This latest move signals a continued push to turn research into usable products faster.

Experience the world’s #1 CRM for free.

Easily transform your business operations with Starter Suite, an all-in-one CRM that offers simplified tools for marketing, sales, service and commerce.

Starter is designed to help you:

  • Identify the right leads and send personalized content with smart segmentation

  • Speed up your sales process with deal management and insights

  • Resolve customer service requests easily with case management

💼 Jobs:

  • PNC: Software Engineering Director - Intelligent Automation

  • HSBC: Artificial Intelligence Governance and Strategy Lead

🛠️ Trending tools:

  • Genei: AI-powered research assistant that summarizes academic papers, highlights key points and links related content for faster reading

  • Resemble AI: Creates custom voice clones from text or audio input, ideal for lifelike dialogue, voiceovers or real-time speech

  • Kaedim: AI-powered platform that turns 2D images into ready-to-use 3D models, helping game developers create assets and ship projects faster

🧑‍⚖️ AI hallucinations are quietly reshaping the courtroom

Source: Midjourney v6.1

Anthropic’s legal team was caught submitting a false citation generated by Claude. It wasn’t the only case.

Over 20 court filings with AI hallucinations have surfaced in the past month, according to a database compiled by French lawyer and data scientist Damien Charlotin. Since June 2023, the total has reached 120. That number includes 48 from 2025 alone, and the year isn’t over.

What happened: Charlotin launched the database in May to track false citations introduced by AI chatbots. These errors usually appear as fake case references used to fabricate precedent. The second-oldest case in the database is Mata v. Avianca, where a New York law firm submitted several nonexistent cases produced by ChatGPT.

The problem goes deeper than clerical errors. AI hallucinations invent cases out of thin air. Legal documents, filled with structured citations and predictable phrasing, are especially vulnerable to this kind of output. LLMs excel at mimicking format without understanding substance, which makes hallucinations hard to detect without scrutiny.

Why it matters: The legal field is being reshaped by AI tools that can cut research time dramatically. But the tradeoff is reliability. Charlotin points out that sloppy citations have always existed, but at least they pointed to real decisions. AI-generated references don’t.

Courts have issued light penalties so far. Judges have fined attorneys, dismissed filings, or issued warnings, while placing responsibility on the parties involved. The expectation to verify every citation hasn’t changed. What has changed is the growing need to identify content that was never real to begin with.

While tools like ChatGPT and Claude produce fluent, seemingly authentic output, they frequently fabricate facts and citations that can deceive even experienced attorneys.

These failures have already caused significant embarrassment. In 2023, a New York lawyer faced a $5,000 sanction after submitting a ChatGPT-written brief citing nine non-existent court decisions. More recently, a federal judge struck an expert’s entire testimony after discovering that ChatGPT had invented the report's references. Each incident erodes trust in legal filings and the judicial process.

Accountability falls on two parties. Legal professionals have an ironclad duty to verify every citation before filing – a responsibility backed by ethical rules and Rule 11 sanctions. Meanwhile, AI developers like Anthropic often disclaim liability for hallucinated output, shifting risk to users even as their products confidently fabricate sources that can fool attorneys (as Anthropic's own lawyers discovered through an "embarrassing" AI citation error).

Any AI contributions should be treated like work from a junior clerk – potentially useful but presumptively flawed – requiring rigorous human verification before filing. This approach isn't technophobic; it's essential for harnessing AI's benefits without compromising the integrity of legal discourse.
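Part of that verification can be automated. The Free Law Project’s CourtListener, for instance, offers a citation-lookup API built in response to exactly this problem: you submit a brief’s text, and it reports whether each extracted citation resolves to a real decision in its database. Below is a minimal Python sketch of that kind of triage; the endpoint URL and response fields are assumptions based on CourtListener’s published API and should be confirmed against its current docs before anyone relies on it.

```python
# Minimal sketch of automated citation triage. Not a substitute for human review.
# Endpoint path and response fields (status, citation, clusters) are assumptions
# based on CourtListener's public citation-lookup API; verify against its docs.
import requests

API_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"  # assumed endpoint

def triage_citations(brief_text: str) -> None:
    """POST the brief's text; the service extracts citations and reports
    whether each one matches a real decision in its database."""
    resp = requests.post(API_URL, data={"text": brief_text}, timeout=30)
    resp.raise_for_status()
    for hit in resp.json():  # one entry per citation found in the text
        if hit.get("status") == 200 and hit.get("clusters"):
            print(f"FOUND      {hit['citation']}")
        else:
            # No match (or ambiguous): flag for a human reviewer
            print(f"UNVERIFIED {hit['citation']} -> manual check required")

if __name__ == "__main__":
    sample = "As held in Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), ..."
    triage_citations(sample)
```

Even then, a match only confirms that the cited case exists; whether it actually supports the proposition it’s cited for still requires a lawyer’s judgment.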

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “There were no bridges over the river in what appeared to be a decent sized city. ”

  • “The details in the fake one showed that it was created by something that knew 'how' a city was supposed to look, but not 'why' it would look that way. More painting than photo.”

Selected Image 2 (Right):

  • “Is it just me, or has the AI images gotten better, even over the month or 2 I have been doing this! I have no idea any more!”

  • “Ha! Haven't been wrong in a while. I thought the antistatic tips on the real photo were too large, too thick”

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.