- The Deep View
- Posts
- ⚙️ Meta just bought its way into the future of computing
⚙️ Meta just bought its way into the future of computing

Welcome back. OpenAI just struck back in the AI talent war, poaching four high-ranking engineers from Tesla, xAI, and Meta for its scaling team—including Tesla's VP of software engineering and two xAI engineers who built the massive 200,000-GPU Colossus supercomputer. This comes after Meta lured away at least seven OpenAI employees with eye-watering pay packages, prompting Sam Altman to tell staff the company would "recalibrate compensation" to compete. Apparently, the best defense against Meta's hiring spree is a good offense.
In today’s newsletter:
🌊 AI for Good: AI joins the search for fishermen lost decades ago
🐱 Study shows how cats are confusing LLMs
🎒 Meta just bought its way into the future of computing
🌊 AI for Good: AI joins the search for fishermen lost decades ago

Source: Midjourney v7
In the Dutch fishing village of Urk, AI is helping families locate loved ones who vanished in North Sea storms dating back to the 1950s.
Jan van den Berg has spent 70 years wondering what happened to his father, who disappeared during a storm just days before his birth. Now, a grassroots foundation called Identiteit Gezocht is using AI and DNA testing to identify fishermen whose bodies washed ashore on German and Danish coasts decades ago.
Researchers enter archived articles, shipwreck data and historical weather patterns into an AI system that helps trace where bodies may have washed ashore. That information is cross-referenced with burial records and DNA samples across Europe.
How the tech helps: AI is doing the work that once took years, enabling volunteers to move quickly and spot matches that would be impossible to find by hand.
Searches old news reports for clues about recovered bodies
Reconstructs weather and current data to map drift paths
Highlights grave sites that align with likely landing points
Compares profiles with DNA databases in multiple countries
Flag matches and then alerts local authorities for follow-up
The method has already succeeded. A fisherman missing for 47 years was recently identified and returned to his family after decades in an unmarked grave on Schiermonnikoog island.

AI is in your workflow. Is your security strategy keeping up?
AI didn’t knock…it’s already inside your organization. From copilots writing code to unsanctioned tools in daily workflows, your business is running with AI, whether you planned it or not.
This isn’t theory. It’s a hands-on field guide built for real-world leaders.
Jargon-free breakdowns of how AI tools are already operating in your environment
Shadow AI discovery toolkit to reveal hidden risks and unauthorized tools
Ready-to-use templates, scorecards, and system cards to drive secure AI rollout
🐱 Study shows how cats are confusing LLMs

Source: Midjourney v7
A single irrelevant sentence can completely derail the most sophisticated AI reasoning models, revealing a fundamental flaw in how these systems actually "think."
Researchers from Stanford, ServiceNow, and Collinear AI discovered that appending random phrases, such as "Interesting fact: cats sleep for most of their lives," to math problems causes advanced models to produce incorrect answers at dramatically higher rates. The original math problem stays exactly the same — humans ignore the extra text entirely, but the AI gets confused.
The automated attack system, called CatAttack, operates by testing adversarial phrases on weaker models and transferring successful attacks to more advanced ones, such as DeepSeek R1. The results expose how fragile AI reasoning really is:
Just three suffixes caused more than a 300% increase in error rates
One sentence about cats more than doubled failure rates for top models
Numerical hints like "Could the answer possibly be around 175?" caused the most consistent failures
Response lengths often doubled or tripled, dramatically increasing compute costs
Over 40% of responses exceeded normal token limits

The most troubling discovery is that models fail without any change to the actual math problem. This suggests they're not solving problems through understanding, but rather following statistical patterns that can be easily disrupted by irrelevant information, which knocks their chain-of-thought reasoning process off course.
Reasoning models are increasingly used in tutoring software, programming assistants and decision support tools, where accuracy is critical. CatAttack demonstrates that these systems can be manipulated with harmless-looking noise, rendering them unreliable precisely when precision matters most.
The CatAttack dataset is now available for researchers who want to test whether their models can resist being confused by cats.

Free Virtual Summit on Real AI Strategies that Work
Did you know that only 10% of the workforce is proficient in AI? Make sure your team doesn't land in the 90%.
Join Section on July 17 for an afternoon filled with real world insights from top AI-focused leaders on building an AI-native team.
Hear from heads of AI from some of the world’s top companies on what went right in their AI roll-outs and learn how to replicate that success in your own team.


IBM’s new Power11 chip raises the bar for enterprise IT
Generative AI is being embraced as a shopping assistant
Mistral is reportedly in talks to raise $1B for its next AI move from Abu Dhabi’s MGX
An AI voice clone of Marco Rubio is calling high-level officials
Arago raises $26M to cut AI energy use with a new photonic chip
Replit teams up with Microsoft to bring vibe coding to enterprise users


Deepgram: Speech recognition API with low-latency, high-accuracy transcription
Builder.io: Connect any GitHub repository and visually update code and send agentic pull requests right from the browser.
Howdy: Send automated cold DMs that feel warm on Instagram
🎒 Meta just bought its way into the future of computing

Source: Midjourney v7
Three weeks ago, Meta unveiled Oakley smart glasses, athletic-focused specs with 8-hour battery life, 3K video recording and hands-free AI for checking wind speeds or capturing skateboard tricks. We wondered what a deeper partnership with EssilorLuxottica might look like.
Now we know. Meta has just acquired a 3% stake in EssilorLuxottica for $3.5 billion, with plans to potentially increase that to 5%. This isn't a partnership anymore. It's vertical integration.
The numbers:
Meta's Ray-Ban glasses have sold over 2 million units since late 2023
Sales tripled in the past year alone
Monthly active users grew fourfold
EssilorLuxottica will manufacture 10 million units annually by 2026
The smart glasses market is projected to grow from 3.3m units in 2024 to 14m by 2026
But Meta didn't just buy a supplier. EssilorLuxottica is the world's largest eyewear manufacturer with licensing deals for Prada, Versace, Armani, Chanel and over 150 total brand partnerships. The company just renewed a 10-year licensing deal with Prada in December. Meta acquired access to every major luxury eyewear brand, along with the infrastructure to manufacture hundreds of millions of units.
Every Facebook, Instagram and WhatsApp interaction currently flows through iOS or Android — platforms, where Apple and Google set the rules and take revenue cuts. Smart glasses flip that dynamic. Instead of asking Siri for directions, you ask Meta AI. Instead of pulling out an iPhone to capture a moment, you say, "Hey Meta, take a video." Meta becomes the interface between people and AI assistants.
The timing couldn't be better. Snap plans to launch consumer AR glasses in 2026. Google just demoed Android XR prototypes with small displays. Apple reportedly targets a late 2026 debut for its smart glasses. Meta's $3.5 billion investment secures the supply chain before this explosion occurs. When Apple comes knocking for manufacturing partnerships, Meta will already be in the room, making decisions.
EssilorLuxottica CEO Francesco Milleri has said the goal is replacing smartphones entirely — like streaming replaced CDs.

Meta has learned from its smartphone mistakes, being at the mercy of Apple's App Store policies and Google's platform changes. This time, they're positioning themselves as the platform owner from day one.
Smart glasses represent always-on cameras and microphones positioned at eye level. Meta has just acquired partial control over the supply chain that will manufacture tens (or possibly even hundreds) of millions of these devices. The company that built its empire on harvesting personal data now wants to put cameras on every face.
Google Glass failed because it looked dorky. Meta solved that by partnering with Ray-Ban and luxury brands. Nobody calls Ray-Ban wearers "glassholes."


Which image is real? |



🤔 Your thought process:
Selected Image 1 (Left):
“The writing on the woman’s sweatshirt and weird trademark on the man’s sweatshirt were helpful in identifying that the other image was fake.”
“It's supposed to be mirror image, but the word "Nike" on the guy's shirt is not. The rest of the words in the image are garbeled nonsense a usual with AI. So, this was easy.”
Selected Image 2 (Right):
“The writing looked gibberish to me, hence thinking it was AI! Also the vehicle in [the other image] looked very odd and not like a normal coach or train”
“I thought the logos looked spot on.”
💭 Poll results
How much do you trust large AI labs to police their own safety?
5 — Complete Trust (6%)
4 (9%):
“After reviewing things like the OpenAI Files and hearing stories about AI blackmailing. I feel weirdly confident on larger AI labs to police their own safety as I'm sure they don't want any major accidents that could cause reputational damage, but it's actually the smaller AI labs that tend to make me worried that something could go wrong.”
3 (17%):
“For the most part I think for now policing their safety is in their best interest, down the road is another question.”
2 (26%):
“I don't know what other options we realistically have (often those with the best knowledge of how to do something safely/ethically are exactly those who shouldn't be trusted to) but yeah, hard to trust”
1 — No trust (42%):
“I don't know how to get more negative than "No trust", but if that option were there I'd take it.”
The Deep View is written by Faris Kojok, Chris Bibey and The Deep View crew. Please reply with any feedback.
Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
P.P.S. If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.