Legal AI frenzy grows as Eve hits $1B

Welcome back. OpenAI just announced Sora 2, the newest version of their video generation model, and it’s pretty insane. Following Meta's recent release of the Vibes AI app, OpenAI has also introduced an app (at the time of writing, #8 in Photo & Video on the iOS App Store) that allows users to scroll through a feed of AI-generated reel-style content. We’ll dive more into Sora 2 tomorrow, but for now, check out the sort of stuff it can create here.

IN TODAY’S NEWSLETTER

1. Legal AI frenzy grows as Eve hits $1B

2. California enacts first U.S. frontier AI law

3. Robotics industry ‘unsettled’ by tariff threat

PRODUCTIVITY

Legal AI frenzy grows as Eve hits $1B

Legal AI startup Eve has joined the unicorn club, raising a $103 million Series B led by Spark Capital at a $1 billion valuation.

The investment was supported by Andreessen Horowitz, Lightspeed Venture Partners and Menlo Ventures. 

Eve’s platform specializes in plaintiff-side law, managing and automating tasks across a case’s life cycle, from intake and records collection to document drafting, legal research and discovery.

With the sheer volume of documents and information law firms have to handle, the legal field is ripe for AI automation. Eve joins several startups aiming to bring AI to the legal industry, with legal tech investments reaching $2.4 billion this year, according to Crunchbase.

Jay Madheswaran, CEO and co-founder of Eve, said in the announcement the company’s “AI-Native” law movement has attracted more than 350 firms as partners, which have used the tech to process more than 200,000 documents. 

Eve’s tech has helped firms recover upward of $3.5 billion in settlements and judgments, including a $27 million settlement won by Hershey Law and a $15 million settlement by the Geiger Legal Group last year. 

“AI has fundamentally changed the equation of plaintiff law,” Madheswaran said in the announcement. “For the first time, law firms have technology that can think with them.”

Despite its potential, deploying AI in legal contexts poses several risks. For one, AI still faces significant data security challenges, which can cause trouble when handling sensitive documents or confidential information. Hallucination and accuracy issues also present a hurdle – one that Anthropic’s lawyers faced earlier this year after the company’s models hallucinated an incorrect footnote in its legal battle with music publishers.

TOGETHER WITH ManageEngine

AI is their weapon. Make it your shield.

Attackers are using AI, and if you’re still relying on legacy tools, you’re already behind.

With our Next-Gen Antivirus powered by AI, you don’t just block malware; you anticipate it, outsmart it, and recover faster.

Eliminate downtime, protect your endpoints, and build resilience in the age of AI-driven threats.

FRONTIER AI

California enacts first U.S. frontier AI law

California is taking the lead in AI regulation, passing the country’s first law aimed at ensuring safety and transparency for frontier AI systems.

Gov. Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law on Monday. 

The move marks the first legislation in the U.S. to target the safety and transparency of cutting-edge AI models specifically, and cements the state’s position as a national leader in AI development.

Features of the TFAIA include: 

  • Requirements for AI developers to disclose safety incidents

  • Transparency in model design

  • Guardrails on the development of frontier AI

The bill is based on findings from a first-in-the-nation report on AI guardrails, which offered recommendations for evidence-based policymaking.

The news comes as AI increasingly enters the spotlight, with the federal government yet to roll out a comprehensive AI policy and state governments stepping in to fill the gap. California, in particular, hopes to offer other states a blueprint for establishing ethical AI.

“With this law, California is stepping up, once again, as a global leader on both technology innovation and safety,” Senator Scott Wiener said in a statement.

The latest bill comes one day after another AI-focused initiative, the California AI Child Protection Bill, passed the statehouse. 

Aimed at safeguarding children, the bill seeks to prevent adolescent users from accessing chatbots unless the chatbots are “not foreseeably capable of doing certain things that could harm a child.”

The bill is now awaiting Newsom’s signature. It has, however, faced pushback from industry members who argue that sweeping regulations could hamper innovation.

“Restrictions in California this severe will disadvantage California companies training and developing AI technology in the state,” the Computer and Communications Industry Association wrote in a floor alert on the bill. “Banning companies from using minors’ data to train or fine-tune their AI systems and models will have far-reaching implications on the availability and quality of general-purpose AI models, in addition to making AI less effective and safe for minors.”

TOGETHER WITH DELL

Dell’s Latest PC Brings AI To Your Desk

Advanced AI may be the future, but you won’t be able to make the most of it if your computer is living in the past. For a new generation of ideas, we need a new generation of PCs… and that’s exactly what Dell has done with their latest and greatest: The Dell Pro Max with GB10.

The GB10 raises the bar for AI developer PCs thanks to:

  • A powerful NVIDIA software stack and Grace Blackwell superchip

  • The ability to support models of up to 200 billion parameters

  • One Petaflop of FP4 computing power

In other words, it’s a pint-sized PC companion that can turn any desk into developer heaven – and it’s available now. Try the Dell Pro Max GB10 for yourself right here.

ROBOTICS

Robotics industry ‘unsettled’ by tariff threat

The Commerce Department has launched an investigation into robotics and industrial machinery imports, a move that could reshape critical supply chains and alter the competitive landscape.

The investigation, conducted under Section 232 of the Trade Expansion Act, could allow the president to impose tariffs on national security grounds. Officially launched on Sept. 2, the probe was only disclosed last week.

Robotics goods covered by the probe include:

  • Programmable computer-controlled mechanical systems

  • Industrial stamping and pressing machines

  • Industrial cutting and welding tools

  • Laser- and water-cutting tools

While the administration frames the move as a matter of economic security, the news has rattled industry members relying on foreign robotics to stay competitive in the U.S.

Yuanyuan Fang, an analyst at Third Bridge, told The Deep View that the news has “unsettled” the industry.

“The U.S. remains one of the largest markets for industrial robots, but higher prices driven by tariffs are already slowing demand,” Fang said. “At the same time, investments in electric vehicles, a major driver of automation, are being delayed, adding to the pressure.” 

“As tariffs continue to curb end-customers' appetite for new equipment investments, our experts observed that large projects are being delayed across various end markets, which in turn affects the backlog visibility and order cycle of industrial robot manufacturers,” she added.

Uncertainty is compounded as many key components are sourced from Asia, including Japan, and even U.S. assembly offers little protection from tariffs, Fang said.

“Unlike the automotive sector, the U.S. does not have domestic robot manufacturers capable of producing complete systems, meaning buyers will face higher costs rather than switching to local alternatives,” she said.

A recent LinkedIn post from Jeff Burnstein, president of the Association for Advancing Automation, drew similar concerns. 

“If significant new tariffs are imposed on all imported robots, will this impact U.S. efforts to reshore manufacturing?” Burnstein wrote.

“We are seeing robotic products coming out of China 1/2 to 1/3 the price of standard robotics,” replied Robert Little, chief of robotics strategy at Novanta Inc. “Is this OK? You could look at it as competition, or you can recognize this as a long-term issue for our supply chain.”

LINKS

  • Station: An AI-powered podcast and YouTube creator revenue assistant for discovering sponsors. 

  • IBM Envizi Emissions API: Allows organizations to factor greenhouse gas calculations into their tool development. 

  • Dex: An AI headhunter that scours job listings at AI labs, hedge funds and startups.

  • Alter for Meetings: A bot that creates actionable insights from meetings, saved locally on your device for improved privacy. 

  • OnSpace AI: A no-code AI app builder for Web apps, iOS or Android.

  • Pod AI: A 24/7 AI phone agent for customer service, appointment scheduling and support calls.

  • Microsoft: Principal Product Manager

  • Google: Engineering Manager, Acquisition and Onboarding

  • Nvidia: Senior System Software Engineer

  • Deloitte: Agentic AI, AI & Data Senior Consultant

GAMES

Which image is real?


POLL RESULTS

Are parental controls for ChatGPT enough to keep teens safe?

  • Yes, this goes far enough (16%)

  • Only partly, more safeguards are needed (40%)

  • No, AI is still too risky for teens (26%)

  • Not sure/don’t know (18%)

The Deep View is written by Faris Kojok, Liz Hughes, Nat Rubio-Licht and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“[The other image] was too clean, too bright, too perfect.”

“Clear text on the paint.”

“Come on!! That was way too easy!!”

“Well I liked it better.”

“The paintbrushes and paint pots got me... They looked too used to be generated...”

“I was sure [the other image] was too clean.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.