⚙️ OpenAI warns bioweapon risk is imminent

Welcome back. Elon Musk's xAI is supposedly torching $1 billion per month while expecting just $500 million in revenue this year. The company projects burning $13 billion in 2025 and won't be profitable until 2027, but hey, at least Musk called the Bloomberg report "nonsense" while raising $9.3 billion to keep the lights on.
In today’s newsletter:
🧠 AI for Good: Using AI to predict outcomes after brain injury
💰 Meta is offering $100 million bonuses to poach talent
☣️ OpenAI says bioweapon-risk AI is coming soon
🧠 AI for Good: Using AI to predict outcomes after brain injury

Source: Midjourney v7
When someone arrives at the hospital with a severe brain injury, doctors face an impossible calculation. Will this patient recover? How aggressively should they intervene? Families want answers that medicine often can't provide.
AI is increasingly being used to fill this gap, but much of it has been developed haphazardly. A new review of 39 AI models trained on data from over 592,000 brain injury patients reveals both the promise and the problem: while these tools could revolutionize care, most still aren't ready for real clinical use.
Here's what researchers found: The models focus on key indicators like age, Glasgow Coma Scale scores and brain bleeding patterns. But quality varies wildly. Many lack proper validation or transparency about how they work. Researchers are now using frameworks like APPRAISE AI to systematically evaluate and improve these tools before they reach patients.
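To make those inputs concrete, here is a minimal, purely illustrative sketch of the kind of prognostic model the review evaluates: a logistic regression over the indicators named above (age, Glasgow Coma Scale score, presence of bleeding), checked on held-out data. The data, coefficients and feature choices are synthetic assumptions for illustration, not taken from any of the 39 published models.

```python
# Illustrative sketch only: a toy outcome-prediction model of the kind the
# review describes, using the indicators named above. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Synthetic patient features: age (years), GCS score (3-15), bleeding flag (0/1)
age = rng.integers(18, 90, n)
gcs = rng.integers(3, 16, n)
bleed = rng.integers(0, 2, n)
X = np.column_stack([age, gcs, bleed])

# Synthetic "unfavorable outcome" label loosely tied to the features
logits = 0.03 * (age - 50) - 0.4 * (gcs - 9) + 0.8 * bleed
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Discrimination on held-out data -- the kind of validation step the review
# says many published models skip or report poorly.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")
```

The real models are far more elaborate, but the review's complaint is exactly about steps like the last one: without transparent, out-of-sample validation, a model's reported accuracy means little at the bedside.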
Why this matters: Brain injuries are devastating and unpredictable. Families often spend weeks in hospital waiting rooms, desperate for any indication of what comes next. Wrong predictions can lead to premature withdrawal of care or futile aggressive treatment. The stakes couldn't be higher.
The review shows recent models are getting better, particularly those built on diverse, well-documented datasets. But the real story isn't just about creating smarter algorithms—it's about bringing scientific rigor to a field where poorly designed AI could literally mean the difference between life and death.
With proper validation and clinical testing, these tools could help doctors make more informed decisions in those crucial first hours after injury. For families facing the worst moment of their lives, that could mean everything.

Try This AI Prompt Cheat Sheet
A solid prompt can be the difference between confused agents and a seamless workflow – but what constitutes a good one when it comes to AI? Great question.
With more than 200 go-to-market-ready prompts specific to sales, customer success, marketing and beyond, this library is your AI prompt cheat sheet. Use it to improve your existing prompts, uncover powerful new ones to immediately uplevel your agent output, or simply get a feel for what makes a great prompt successful.
It’s all free, and it’s all courtesy of Momentum. Access their AI Prompt Library right here.
💰 Meta is offering $100 million bonuses to poach talent

Source: Midjourney v7
Sam Altman says Meta is offering OpenAI employees $100 million signing bonuses and even larger annual packages to jump ship. So far, none of his top researchers have taken the deal. But the talent war reveals a deeper problem plaguing the AI industry.
The staggering numbers: Speaking on his brother's podcast, Altman confirmed Meta is targeting OpenAI's best people with generational wealth offers. The packages reportedly include massive upfront bonuses plus yearly compensation that dwarfs even those figures. Meta recently invested $14.3 billion in Scale AI, and now it's throwing similar money at human capital.
Here's the structural problem: AI has created a compensation paradox that even Wall Street hasn't fully solved. The industry needs to pay researchers enough that they don't leave for competitors, but not so much that they never need to work again. In finance, this tension gets managed through what Bloomberg's Matt Levine calls "positional goods" — keeping up with rivals' Hamptons houses and status symbols that create endless spending treadmills.
But AI researchers are mostly fresh out of PhD programs. They haven't learned to want arbitrarily expensive status symbols yet. The gap between "competitive offer" and "retire immediately" money has essentially disappeared.
Why this matters: When top talent can achieve financial independence with a single job switch, it fundamentally changes workplace dynamics. Altman acknowledged the "cultural risks of creating jobs that could become more about the money than the work." Translation: if you pay someone enough to never work again, they might actually stop working.
OpenAI is betting on mission over money, arguing that building superintelligence offers better long-term rewards than Meta's cash. Eventually, someone will crack the code on keeping ultra-wealthy researchers motivated — or the industry will learn that throwing $100 million signing bonuses at talent doesn't guarantee they'll stick around to actually build anything rather than just waiting for the check to clear.

Trying to scale voice AI with duct tape? That gets expensive.
It’s easy to start with ElevenLabs for speech and Vapi to stitch things together. But when you're ready to handle real phone calls and production traffic, it starts to break down.
You're suddenly managing:
Multiple vendors and APIs
Lag and dropped audio
Integration costs that keep rising
Dev time lost to patching things together
One API gives you lifelike voice, global calling, and real-time call control on a private, low-latency network. You get the performance, reliability, and simplicity needed to build voice AI that actually works at scale.
Whether you’re building a call-based AI agent, voice-powered concierge, or customer service workflow, Telnyx helps you reduce overhead costs, cut integration time, and launch faster.


Google’s new AI talks back in full voice conversations
Apple eyes using AI to design its chips, technology executive says
One AI category is entering its golden age, says top tech analyst
Maven AGI just raised $50 million to go even bigger with AI agents
Nobel laureate says AI didn’t nail that black hole image like we thought


☣️ OpenAI says bioweapon-risk AI is coming soon

Source: Midjourney v7
OpenAI just acknowledged what safety researchers have been quietly warning about: we're approaching the point where AI models could enable amateur bioterrorists to replicate expert-level threats.
OpenAI executives told Axios they expect upcoming successors to their o3 reasoning model will likely cross what the company calls a "high-risk" threshold under its internal preparedness framework. This would mark the first time any model reaches OpenAI's top concern level for biosecurity risk.
The core issue is what OpenAI calls "novice uplift": the ability of AI to enable people with no formal training in biology to carry out advanced, potentially dangerous procedures. Johannes Heidecke, OpenAI's head of safety systems, told Axios they're "not yet in the world where there's novel, completely unknown creation of bio threats." Instead, they're worried about AI helping people replicate existing threats that experts already know about.
The challenge is that many dangerous biological procedures have legitimate applications. The same knowledge needed to develop vaccines can be misused to create pathogens. The same tools that accelerate drug discovery can enable bioweapons development. This "dual-use" nature makes it nearly impossible to simply remove dangerous information from AI training.
Heidecke acknowledged that current safeguards won't be sufficient. "This is not something where 99% or even one in 100,000 performance is sufficient," he said. "We basically need near perfection." Human monitoring and enforcement systems need to quickly identify any harmful uses that escape automated detection and prevent harm from materializing.
OpenAI isn't alone in these concerns. Anthropic activated its strictest safety measures for Claude 4 last month, implementing what it calls "AI Safety Level 3" protections specifically to limit risks around chemical, biological, radiological and nuclear weapons. The company said it took the measures as a precaution, even though it hadn't yet determined if Claude 4 actually crossed the threshold requiring such protections.
Both companies are expanding their work with government researchers and national labs. OpenAI plans to convene nonprofits and government researchers next month to discuss the opportunities and risks ahead. The company is also exploring how to use AI itself to combat misuse by bad actors.

This is the first time an AI capability has moved from theoretical concern to deployment risk. How the industry responds will set precedents for handling future dangerous capabilities.
OpenAI expects to cross its "high-risk" threshold within months, not years. That means safeguards, regulations and international coordination efforts need to be developed and deployed faster than the technology itself.
AI safety is no longer just about preventing bias or protecting privacy; it's about preventing access to tools that could kill thousands of people. The approach so far relies heavily on self-regulation by AI companies, which may not be sufficient when the stakes involve potential mass casualties.


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“Real life. The other is a fantasy or an advertisement”
“Yeah, that's how humans cut bread, AI. You really nailed it.”
Selected Image 2 (Right):
“Air holes in bread in this image versus a condensed, unnatural-looking version in the other.”
“I'm on a losing streak! I just knew that the way the end of the slice laying down was touching the next slice standing up demonstrated pure AI fakery. WRONG!”
💭 A poll before you go
Here’s your view on “What do you think about the current direction of AI investment?”:
It's smart—focus on proven winners (13%)
It's risky—too much money in too few hands (24%)
We're missing out—small innovators are being ignored (28%)
It's necessary—AI is maturing and needs scale (19%)
Not sure (16%)
The Deep View is written by Faris Kojok, Chris Bibey and The Deep View crew. Please reply with any feedback.
Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
P.P.S. If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.