⚙️ Fight to make frontier AI less secretive

Welcome back. Meta's recruiting rampage continues with the company reportedly poaching Apple's head of AI models, Ruoming Pang, who ran the team behind Apple Intelligence and other on-device AI features. Bloomberg reports this could be the first of many departures from Apple's "troubled AI unit," which makes sense considering Apple is so unimpressed with its own AI that it's reportedly considering letting Anthropic or OpenAI power Siri instead.
In today’s newsletter:
🐶 AI for Good: Robot dogs bring therapy and learning to life
🤖 Study shows AI models are picking up on human social cues
🤫 The fight to make frontier AI less secretive
🐶 AI for Good: Robot dogs bring therapy and learning to life

Source: Stanford
Most robotics education costs tens of thousands of dollars and leaves students working with expensive equipment they can't take home. Stanford flipped that model on its head. For under $1,000, students build their own AI-powered robot dogs from scratch, program them with cutting-edge machine learning and take them home when the course ends.
What happened: In Stanford's CS 123 course, students build Pupper robots from scratch over 10 weeks, learning everything from motor control to machine learning. For final projects, students program their robots for specialized tasks like serving as tour guides or tiny firefighters. The robots have also been deployed at Lucile Packard Children's Hospital to help young patients.
Students master the full robotics spectrum — from electrical work to AI programming, all in one hands-on course
Low barrier to entry — requires only basic programming skills to start building sophisticated robots
Open-source design — costs $600 to $1,000 and is available to K-12 schools worldwide
Real therapeutic impact — 12-year-old patient Tatiana Cobb said her robot "reminds me of my own dog at home" and helped her feel less isolated
Proven medical benefits — pet therapy research shows robots can lower blood pressure, reduce anxiety and motivate physical activity
The robots evolved from Stanford Doggo, an earlier project by the Stanford Student Robotics club, and are designed to be small, safe and playful rather than intimidating.
Why it matters: These robots are democratizing advanced AI education while providing genuine therapeutic value. By making sophisticated robotics accessible to students everywhere, Stanford is training the next generation of engineers. Meanwhile, for pediatric patients who don't always have access to therapy animals, these mechanical companions offer comfort when it matters most.

See Melio in action and get a $200 gift card.
Discover a more efficient way to manage business payments—and get a free gift card for your time.
Paying bills by card—even where cards are not accepted.
Earning credit card points and rewards for paying business bills.
Domestic and international payments in USD or local currencies.
Free monthly ACH payments.
2-way sync with QuickBooks and Xero.
AR invoicing, and much, much more…
🤖 Study shows AI models are picking up on human social cues

Source: UCLA / Copilot Designer
When two mice interact, their brains synchronize in predictable ways. When two AI agents interact, their artificial neural networks synchronize in much the same way, hinting at a universal principle of how intelligence processes social information.
The breakthrough: UCLA researchers published findings showing that biological brains and AI systems develop identical neural synchronization patterns during social tasks. This marks the first time scientists have identified fundamental laws of social cognition that work across different types of intelligence.
Researchers recorded neural activity from the prefrontal cortex of mice during social interactions, then trained AI agents to perform social behaviors and analyzed them with the same framework.
Both systems split neural activity into synchronized "shared" patterns between interacting entities and "unique" patterns specific to each individual (a rough code sketch of this kind of decomposition follows this list).
GABAergic neurons — inhibitory cells that dampen the activity of other neurons — showed significantly larger shared spaces than excitatory cells.
When researchers disrupted shared neural components in AI systems, social behaviors dropped substantially.
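For the technically curious, here is a minimal sketch of the "shared vs. unique" idea. This is not the UCLA team's code: it uses canonical correlation analysis (scikit-learn's CCA) as a stand-in for whatever decomposition the study actually used, and the simulated activity matrices (act_a, act_b) and the hypothetical shared signal are invented purely for illustration.

```python
# Illustrative sketch only: CCA as a stand-in for the study's shared/unique
# decomposition of two interacting agents' neural activity.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
T = 500                                         # number of time points
shared_signal = rng.standard_normal((T, 3))     # hypothetical signal both agents track

# Simulated activity: each agent mixes the shared signal with its own private noise.
act_a = shared_signal @ rng.standard_normal((3, 20)) + 0.5 * rng.standard_normal((T, 20))
act_b = shared_signal @ rng.standard_normal((3, 15)) + 0.5 * rng.standard_normal((T, 15))

# "Shared" space: directions in each agent's activity that are maximally
# correlated with the other agent's activity across time.
cca = CCA(n_components=3)
shared_a, shared_b = cca.fit_transform(act_a, act_b)
corrs = [np.corrcoef(shared_a[:, k], shared_b[:, k])[0, 1] for k in range(3)]
print("cross-agent correlation per shared dimension:", np.round(corrs, 2))

# "Unique" activity: what is left after projecting out the shared component.
unique_a = act_a - cca.inverse_transform(shared_a)
print("share of agent A's variance captured by the shared space:",
      round(1 - unique_a.var() / act_a.var(), 2))
```

Zeroing out the shared dimensions in a toy model like this is loosely analogous to the manipulation in the study, where disrupting shared components degraded the AI agents' social behavior.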
Why it matters: This discovery suggests social intelligence follows universal computational principles, regardless of whether the system is biological or artificial. The findings could unlock new treatments for autism and social disorders by revealing how healthy social cognition actually works. For AI development, it provides a biological blueprint for building systems that genuinely understand human social cues rather than just mimicking them.

🎥 Guidde - Create how-to video guides fast and easy with AI
Tired of explaining the same thing over and over again to your colleagues?
It’s time to delegate that work to AI. Guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.
1️⃣ Share or embed your guide anywhere
2️⃣ Turn boring documentation into stunning visual guides
3️⃣ Save valuable time by creating video documentation 11x faster
Simply click capture on the browser extension and the app will automatically generate step-by-step video guides complete with visuals, voiceover and call to action.
The best part? The extension is 100% free!


Turns out workers want more from AI than just productivity
Wimbledon’s AI line calls are getting heat from tennis players
OpenAI clamps down on security after foreign spying threats
ChatGPT is testing a mysterious new feature called ‘study together’
Scientists find a strange link between AI use and psychopathic traits
Meta’s grand WhatsApp fintech experiment in India has fizzled
Polymarket gamblers go to war over whether Zelenskyy wore a ‘Suit’
Samsung flags big miss in second-quarter profit, blames US AI chip curbs on China
People are using AI chatbots to guide their psychedelic trips
CoreWeave to acquire Core Scientific in $9 billion all-stock deal


CodeSquire: AI code assistant trained for data scientists using Python, SQL and Jupyter
Blackshark.ai: A platform to train AI and extract 2D and 3D from any pixel source
Phind: AI search engine built for developers, giving code-native answers with references
🤫 The fight to make frontier AI less secretive

Source: Midjourney v7
AI companies are developing systems that could reshape civilization, and most of the work is happening behind closed doors. Now, facing mounting pressure from lawmakers and their own departing safety researchers, one major lab is proposing to crack that door open — but only a sliver.
Anthropic released a "targeted transparency framework" this week that would require only the biggest AI developers to publicly disclose how they test and deploy their most powerful models. The proposal comes as the industry confronts growing skepticism about self-regulation and mounting evidence that voluntary commitments are worthless.
The framework centers on three requirements for companies that spend at least $1 billion on AI development or generate $100 million in annual revenue:
Publish "Secure Development Frameworks" explaining how they evaluate risks from chemical, biological and nuclear threats, plus dangers from autonomous AI systems
Release "system cards" summarizing each model's testing and safety measures at deployment
Face legal consequences for false compliance claims, enabling whistleblower protections
The proposal deliberately shields startups and smaller developers from the requirements.
But the transparency push reflects deeper industry tensions. OpenAI recently weakened its safety testing requirements, saying it would consider releasing "high risk" or even "critical risk" models if competitors had already done so. The company also eliminated pre-deployment testing for manipulation and mass disinformation.
Meanwhile, Elon Musk just updated Grok to be more "politically incorrect" after his AI embarrassed him by routinely fact-checking his claims. The new system prompts tell Grok to "assume subjective viewpoints sourced from the media are biased" and to "not shy away from making claims which are politically incorrect."
The changes prompted warnings of a "race to the bottom" from safety experts. "These companies are openly racing to build uncontrollable artificial general intelligence," said Max Tegmark of the Future of Life Institute.
Anthropic's proposal attempts to formalize what leading labs already do voluntarily. Google DeepMind, OpenAI and Microsoft have published similar safety frameworks, but companies can abandon them at any time as competitive pressure mounts. Making disclosure legally mandatory would "ensure that the disclosures (which are now voluntary) could not be withdrawn in the future as models become more powerful."
The proposal earned cautious praise from AI policy advocates. "It's nice to see a concrete plan coming from industry," said Eric Gastfriend of Americans for Responsible Innovation. "We've heard many CEOs say they want regulations, then shoot down anything specific that gets proposed."
The timing reflects growing urgency as AI capabilities advance rapidly. Anthropic has warned that frontier models might pose "real risks in the cyber and CBRN domains within 2-3 years," referring to chemical, biological, radiological and nuclear threats.

Anthropic is essentially admitting that voluntary AI safety commitments are worthless. Their framework exists because companies often abandon safety measures when they become inconvenient, which OpenAI has just proven.
The proposal reads like locking in good behavior before the real competition begins. The requirements are modest — publish documents and don't lie about compliance — but modest is better than nothing. Anthropic deserves credit for proposing enforceable standards instead of empty "we support regulation" rhetoric.
Still, self-certified safety frameworks are like oil companies writing their own environmental reports. Third-party auditors would be preferable, though that raises its own questions: who qualifies as an auditor, and how do they keep pace with rapidly evolving AI capabilities?
The real test is whether anyone with actual power takes this seriously. Currently, companies developing potentially civilization-altering technology face about as much oversight as a neighborhood lemonade stand.


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“Easy to spot [the other image] as fake - look at shadows ”
“This one wasn't obvious to me. What clinched it was the weird tree in the bottom center of the shot that kind of looks like a giant Oscar the Grouch. I figured no one would let something that creepy keep growing.”
Selected Image 2 (Right):
“Lens flare and textural variation of surfaces made it look more natural than the [other image], which had a more artificial feel to it. ”
“I knew it! But the perfect tiles from the [other] roof fooled me as too perfect.”
💭 A poll before you go
How much do you trust large AI labs to police their own safety?
The Deep View is written by Faris Kojok, Chris Bibey and The Deep View crew. Please reply with any feedback.
Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
P.P.S. If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.