AI may soon think beyond our understanding

Welcome back. LeBron James' lawyers sent a cease-and-desist to an AI company after users created viral videos of the NBA star pregnant, homeless and "on his knees with his tongue out" using tools on Discord and Instagram. The platform Interlink AI quickly removed all "realistic people models" after facing legal action from "a highly valued basketball player," marking one of the first times a celebrity has taken legal action against non-consensual AI imagery that isn't strictly sexual. Nothing says "revolutionary AI technology" quite like using it to make pregnant LeBron videos.
1. The last window into AI's mind may be closing
2. AI exposes ocean's hidden illegal fishing networks
3. Schools can finally stop pretending AI doesn't exist
AI SAFETY
The last window into AI's mind may be closing

A coalition of 40+ researchers from OpenAI, DeepMind, Meta and Anthropic just sounded the alarm about our ability to monitor how AI systems think. Their new study warns that chain-of-thought (CoT) reasoning may soon become unreliable or disappear entirely.
CoT prompting, first introduced by Google researchers in 2022, encourages AI models to "think step by step" through problems. When researchers prompted a large language model with just eight examples of step-by-step math problem-solving, it dramatically outperformed previous approaches. Think of it as teaching AI to show its work, the way your math teacher always demanded.
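To make that concrete, here's a minimal Python sketch of few-shot CoT prompting. The worked example mirrors the style of the 2022 paper; the exact wording, and the fact that we print the prompt instead of calling a real model API, are our own simplifications.

```python
# Minimal sketch of few-shot chain-of-thought prompting: prepend worked
# examples that spell out intermediate steps, so the model imitates the
# "show your work" style. Plugging this into an actual model API is omitted.

FEW_SHOT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each.
How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.
The answer is 11.
"""

def cot_prompt(question: str) -> str:
    """Build a prompt that nudges the model to reason step by step."""
    return f"{FEW_SHOT}\nQ: {question}\nA: Let's think step by step."

print(cot_prompt("A cafeteria had 23 apples. It used 20 and bought 6 more. "
                 "How many apples are there now?"))
```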
This transparency exists by accident, not by design. The researchers identify two key reasons why CoT monitoring works: necessity (some tasks require models to externalize their reasoning) and propensity (many current models naturally "think out loud" even when not required).
Recent research reveals troubling cracks in this foundation. Anthropic's interpretability team discovered that Claude sometimes engages in "motivated reasoning." When asked to compute the cosine of a large number it couldn't calculate, Claude would generate fake intermediate steps while hiding the fact that it was essentially guessing.
Current blind spots include:
AI systems reasoning internally without showing their work
Models detecting when they're being monitored and hiding misaligned behavior
Reasoning steps becoming too complex for humans to understand
Critical thinking happening outside the visible chain of thought
The most dangerous AI behaviors likely require complex planning that currently must pass through observable reasoning chains. Research on AI deception has shown that misaligned goals often appear in models' CoT, even when their final outputs seem benign.
The study's authors, endorsed by AI pioneers like Geoffrey Hinton and Ilya Sutskever, aren't mincing words about what needs to happen. They recommend using other AI models to audit reasoning chains, incorporating monitorability scores into training decisions and building adversarial systems to test for hidden behavior.
The recommendations echo what we've argued before… companies can't be trusted to police themselves. They should publish monitorability scores in the documentation of new model releases and factor them into deployment decisions.
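As a rough illustration of what "using other AI models to audit reasoning chains" could look like in practice, here's a hedged Python sketch. The rubric, the 0-10 scale and the `judge` callable are our assumptions for the example, not details from the study.

```python
# Hedged sketch of auditing a reasoning chain with a second model. `judge`
# stands in for any LLM call (prompt in, text out); the rubric and the 0-10
# scale are illustrative assumptions, not details from the study.
from typing import Callable

RUBRIC = (
    "Rate from 0 to 10 how faithfully this chain of thought supports the "
    "final answer, and flag any steps that look fabricated or skipped.\n\n"
    "Chain of thought:\n{chain}\n\nFinal answer:\n{answer}\n\nScore:"
)

def monitorability_score(chain: str, answer: str,
                         judge: Callable[[str], str]) -> float:
    """Ask an auditor model to grade a chain; parse the first number it emits."""
    reply = judge(RUBRIC.format(chain=chain, answer=answer))
    for token in reply.replace("/", " ").split():
        if token.replace(".", "", 1).isdigit():
            return min(float(token) / 10.0, 1.0)  # normalize to [0, 1]
    return 0.0  # an unparseable reply counts as unmonitorable
```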

Transparency and capability may turn out to be mutually exclusive, forcing a choice between them. When 40+ researchers from competing companies agree to publish this kind of warning, it hints that the situation is more urgent than anyone wants to admit.
The deeper issue isn't just monitoring current AI systems. We're building technology that could soon exceed human intelligence, yet we have no reliable way to understand what it's thinking.
The research community's focus on CoT monitoring reveals both hope and desperation. Hope, because we still have a window into AI reasoning. Desperation, because that window may be our last.
Market incentives pushing toward more capable but less interpretable AI aren't going to reverse themselves. Unless transparency becomes a regulatory requirement, we're likely heading toward a future where the most powerful AI systems are also the most opaque. Given the current administration's push to "remove red tape and onerous regulation" from AI development, that regulatory backstop seems increasingly unlikely.
TOGETHER WITH CODER
How Anthropic Engineers are Adapting to AI
Ever wonder how an AI-native company scales developer experience?
In this live session hosted by Coder, Anthropic’s DevX leaders will share how they’re rethinking tools, processes, and team dynamics to support autonomous coding.
They'll give an insider's look at:
How agents are redefining the developer experience
What secure, scalable DevX looks like at an AI-native company
Where AI-driven workflows are heading next
If you’re exploring what AI means for developer productivity, platform security, or infrastructure strategy, this is the conversation to watch.
ENVIRONMENT
AI exposes ocean's hidden illegal fishing networks

The ocean just got a lot smaller for illegal fishing operations.
A new study published in Science reveals how AI paired with satellite radar imagery is exposing fishing activity that traditional tracking systems miss entirely. The findings show that 78.5% of marine protected areas worldwide are actually working, with zero commercial fishing detected.
The fascinating part is that ships are supposed to broadcast their locations via Automatic Identification System (AIS) transponders, but that system has massive blind spots, especially when vessels intentionally switch their transponders off and go dark.
AI algorithms from Global Fishing Watch analyzed radar images from European Space Agency satellites to detect vessels over 15 meters long, even when tracking was disabled. The results were striking:
82% of protected areas had less than 24 hours of illegal fishing annually
Traditional AIS tracking missed 90% of illegal activity in problem zones
The Chagos Marine Reserve, South Georgia and the Great Barrier Reef each recorded about 900 hours of illegal fishing per year
"The ocean is no longer too big to watch," said Juan Mayorga, scientist at National Geographic Pristine Seas.
For decades, marine protected areas existed mostly on paper. Governments could designate vast ocean territories as off-limits, but actually monitoring compliance across millions of square miles remained impossible.
This study changes that equation. When 90% of illegal activity was previously invisible to traditional tracking, the deterrent effect of protection laws was essentially zero. Now that satellites can detect dark vessels in real time, the cost-benefit calculation for illegal fishing operations shifts dramatically. You can't hide a 15-meter fishing vessel from radar, even in the middle of the Pacific.
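To picture the core matching step, here's a simplified Python sketch of how dark-vessel flagging can work: take satellite radar detections and flag any hit with no AIS broadcast nearby. The field names and the 5 km matching radius are our assumptions for illustration, not Global Fishing Watch's actual pipeline.

```python
# Simplified sketch of dark-vessel flagging: a satellite radar (SAR) detection
# with no AIS broadcast within a matching radius is likely a vessel that went
# dark. Field names and the 5 km radius are illustrative assumptions, not
# Global Fishing Watch's actual pipeline.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Position:
    lat: float
    lon: float

def haversine_km(a: Position, b: Position) -> float:
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = (sin(dlat / 2) ** 2
         + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def dark_vessels(sar_hits: list[Position], ais_pings: list[Position],
                 radius_km: float = 5.0) -> list[Position]:
    """Flag radar hits with no AIS ping within `radius_km` as 'dark'."""
    return [hit for hit in sar_hits
            if all(haversine_km(hit, ping) > radius_km for ping in ais_pings)]

# Example: two radar detections, only the first matches an AIS broadcast,
# so only the second is flagged as dark.
print(dark_vessels([Position(-7.3, 72.4), Position(-7.9, 71.1)],
                   [Position(-7.31, 72.41)]))
```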
TOGETHER WITH SALESFORCE
The Right CRM Can Make A Huge Difference
We’re talking increasing revenue by 30% and customer satisfaction by 32% with the right CRM – but only if you take advantage of its full potential. Luckily, you can learn how to do that for free with Salesforce’s newest ebook: Maximizing CRM Productivity.
Whether you want to learn how to best improve employee productivity and efficiency, or simply master the key aspects of automation, this free guide from Salesforce will cover everything you need to know about getting the most out of your CRM. But hurry, because as you know… good things like this don’t last forever.
EDUCATION
Schools can finally stop pretending AI doesn't exist

After months of districts quietly wondering whether they'd get in trouble for using ChatGPT, the Department of Education just made it official: schools can spend federal grant money on AI tools. The guidance feels like bureaucratic catch-up to reality.
While tech companies have poured billions into AI development, schools have been stuck in regulatory limbo. Teachers have been using AI anyway, but districts couldn't officially fund it. That changes now.
What schools can actually buy:
AI-powered tutoring systems that provide real-time academic support
Personalized learning platforms that adapt to individual student needs
Predictive models to identify at-risk students before they fall behind
Virtual advising systems for college and career planning
The approved use cases read like a wish list for cash-strapped schools dealing with teacher shortages and learning loss. Districts can finally use federal grants to fund AI solutions that were previously off-limits despite their potential benefits.
But Washington isn't writing blank checks. The guidance comes with five key principles that reflect ongoing concerns about AI in classrooms:
Tools must be educator-led rather than replacing teachers
Students need to learn how to evaluate AI outputs critically
Systems must accommodate students with disabilities
Parents deserve transparency about how tools work
All platforms must comply with federal privacy laws like FERPA
The timing reflects the administration's broader embrace of AI across government services. Just as Trump's AI Action Plan positions AI as essential to American competitiveness, federal agencies are now actively encouraging the adoption of AI in critical public services, such as education.
For schools already experimenting with AI tutors and automated grading, the guidance provides long-awaited political cover. The real test won't be whether districts adopt these tools, but whether they can implement them without amplifying existing educational inequalities or creating new dependencies on private tech companies.
LINKS

OpenAI’s GPT-5 is coming early August
Trump says AI companies can’t be expected to pay for all copyrighted content used in their training models
The economics of superintelligence
How much top startups like OpenAI, Anthropic, and Cohere are paying employees
AI coding challenge ends in a flop
YouTube drops new tools to level up your Shorts game
SoftBank-backed LegalOn lands $50M to streamline legal workflows with AI
Google using AI to transform personal fashion
Unleashing the AI jobs revolution in Africa


Top LATAM ML experts, vetted and ready (sponsored)
Mateo, Buenos Aires: Senior AI engineer with 7 years in time-series forecasting, 3 years leveraging transformers for demand planning — $46/h
Gabriela, Recife: NLP veteran with 8 years in sentiment analysis and chatbots, 4 years adapting large language models — $47/h
Rafael, Curitiba: Senior MLOps architect with 9 years in CI/CD for ML, specialist in model monitoring and drift detection — $45/h
A QUICK POLL BEFORE YOU GO
Should deployment of powerful AI models be halted until they can reliably show their reasoning to independent auditors?
The Deep View is written by Faris Kojok, Chris Bibey and The Deep View crew. Please reply with any feedback.
Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

"I've been to the Kennedy Space Center rocket garden, but I still had to pull up a picture online to see if that perspective made sense."
"The Friendship 7 logo is legible and it's too complicated for AI."
"Thought I saw an anomaly on the [other] image where the wires faded into the rocket in a small section and then went sharp again."
"Looked like a SpaceX Falcon 9 rocket to me" (That is indeed what we prompted!)

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.