⚙️ The AI companion takeover

Good morning. The US has decided to return 25 rare artifacts to Egypt, some of them 5,500 years old. With Egyptian artifacts having sold for $4 million as recently as 2019, I wonder what these are worth now.

— The Deep View Crew

In today’s newsletter:

🩺 AI for Good: Google AI > human doctors?

Source: ChatGPT 4o

Google has upgraded its experimental medical chatbot, AMIE, to analyze photos of rashes and interpret a variety of medical imagery, including ECGs and lab result PDFs.

AMIE (Articulate Medical Intelligence Explorer) builds on an earlier version that already beat human doctors in diagnostic accuracy and communication skills. The latest version, powered by Gemini 2.0 Flash, was unveiled in a May 6 preprint published on arXiv.

Why it matters: This represents a step closer to an AI medical assistant that thinks like a real doctor. By combining images with clinical data, AMIE mimics how physicians synthesize different types of information to diagnose and treat patients. It could also ease major pain points in healthcare, enabling faster triage, broader access to diagnostic support, and more resilience to poor image quality or incomplete patient records.

How it works: The new AMIE model integrates Gemini 2.0 Flash, Google’s previous-generation model, with medical-specific reasoning tools (a rough sketch of the idea follows the list):

  • It can engage in diagnostic conversations, mimicking physician–patient exchanges.

  • It processes and interprets medical images, even at low quality.

  • It evaluates lab reports and clinical notes in real time.

  • It simulates peer review by role-playing all sides of a medical consultation.
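
For a flavor of how that role-playing might be wired up, here’s a minimal, hypothetical Python sketch of a self-play consultation loop. The `chat` function, class names, and prompts are our own stand-ins – nothing here comes from Google’s paper or the Gemini API:

# Hypothetical sketch of an AMIE-style self-play consultation loop.
# `chat` is a placeholder for a real LLM API call (e.g., Gemini 2.0 Flash);
# the roles and prompts are illustrative, not Google's actual design.
from dataclasses import dataclass, field

def chat(role: str, history: list[str]) -> str:
    """Placeholder model call: returns a canned reply for demonstration."""
    last = history[-1] if history else "start of consultation"
    return f"{role} responds to: {last!r}"

@dataclass
class Consultation:
    transcript: list[str] = field(default_factory=list)

    def turn(self, role: str, prompt: str) -> str:
        self.transcript.append(f"{role} (prompt): {prompt}")
        reply = chat(role, self.transcript)
        self.transcript.append(reply)
        return reply

def simulated_consult(case: str, follow_ups: int = 2) -> list[str]:
    c = Consultation()
    c.turn("patient", case)                            # patient states the case
    for _ in range(follow_ups):                        # diagnostic dialogue
        c.turn("doctor", "ask one clarifying question")
    c.turn("doctor", "state a differential diagnosis")
    c.turn("critic", "review the doctor's reasoning")  # simulated peer review
    return c.transcript

for line in simulated_consult("itchy rash on forearm, photo attached"):
    print(line)

The key idea, per the paper’s description, is that role-playing all sides of a consultation lets one system rehearse a diagnostic exchange and then critique its own reasoning.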

To test the upgrade, researchers ran 105 medical scenarios using actors as patients. Each had a virtual consultation with both AMIE and a human doctor. Dermatologists, cardiologists, and internists reviewed the results.

AMIE consistently offered more accurate diagnoses. It also proved more resilient when presented with subpar images, a common issue in real-world telemedicine.

Big picture: With image-processing capabilities and built-in clinical logic, models like AMIE are inching toward becoming full-fledged diagnostic partners.

If you’re thinking about ditching your doctor, I wouldn’t… The tool hasn’t been peer-reviewed and remains experimental. If these results hold, it could reshape how frontline care is delivered – especially where access to human doctors is limited.

The Guide To AI For Small Business

As a small business owner (or employee), you know the value of finding every little edge or advantage when it comes to getting things done – and AI just might be the greatest hack of all. But with the constant influx of news, information, and tools, it can get overwhelming… which is why Salesforce has pulled together this free guide to help you out.

In it, you’ll find everything you need to make the most of AI for your small business. Whether you’re looking for a leg up with the best strategies or simply want to see how other small businesses are putting AI to work, this guide has you covered.

🔋 Apple uses AI to boost battery life

Source: ChatGPT 4o

Apple is reportedly preparing to launch an AI-powered battery management system in iOS 19, aimed at improving one of the iPhone’s most persistent pain points: battery life.

According to Bloomberg, the system will debut at Apple’s Worldwide Developers Conference in June and utilize on-device AI to tailor power usage to each user’s behavior.

Why it matters: Battery performance has been a longstanding frustration for iPhone users. Current tools like Optimized Battery Charging are static and limited in scope. A smarter, AI-driven system could change that.

If implemented, this feature could extend daily battery life by adjusting how the phone runs apps, handles background tasks, and manages performance – all based on your usage habits.

That means fewer dead-battery moments and less need to manually tweak settings or carry around a charger.

How it works:

The new system would:

  • Analyze how you use your iPhone throughout the day.

  • Learn when to dial back background activity or delay power-heavy tasks.

  • Customize charging patterns to preserve long-term battery health.

  • Make real-time decisions without sending data to the cloud.

Unlike previous tools that offered broad recommendations, this approach would adapt to each user individually, helping strike a better balance between performance and battery efficiency.
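
To make the idea concrete, here’s a deliberately simple, purely hypothetical Python sketch of per-user adaptation: learn hourly usage counts on-device, then defer heavy background work during the hours the user is typically active. Every name and threshold is invented for illustration – Apple has published no implementation details:

# Toy model of per-user adaptive power management: learn hourly usage
# counts on-device, then defer heavy background work during busy hours.
# Entirely hypothetical; not Apple's implementation.
from collections import Counter

class BatteryProfile:
    def __init__(self) -> None:
        self.active_hours: Counter[int] = Counter()  # hour of day -> activity count

    def record_usage(self, hour: int) -> None:
        self.active_hours[hour] += 1

    def defer_background_work(self, hour: int) -> bool:
        # Defer power-heavy tasks when this hour is busier than average.
        if not self.active_hours:
            return False
        average = sum(self.active_hours.values()) / 24
        return self.active_hours[hour] > average

profile = BatteryProfile()
for h in [8, 9, 9, 13, 20, 20, 20]:  # a simulated log of usage events
    profile.record_usage(h)

print(profile.defer_background_work(20))  # True: user is usually active at 8 p.m.
print(profile.defer_background_work(3))   # False: 3 a.m. is idle, safe for maintenance

A real system would weigh far more signals (charge state, thermals, app priorities), but the core loop – observe usage, build a per-user profile, gate background work against it – would likely be the same shape.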

Still, this information comes from unnamed sources. Apple has not confirmed the feature, and development plans often shift before public release.

Big picture: AI is becoming core to Apple’s product strategy, and this feature signals a shift from novelty to utility.

Instead of flashy demos, Apple could be using AI to solve everyday problems that users (like me) would really appreciate. If battery life improves meaningfully, it could offer a tangible benefit that sets iOS 19 apart.

Free event: The AI Readiness Summit

10% of the workforce is proficient in AI – and their companies had a big role in making that happen.

Hear from heads of AI at some of the world’s leading companies on what went right in their AI deployments at Section’s AI Readiness Summit on July 17.

🔗 Headlines:

  • AWS and HUMAIN announce a more than $5B investment to accelerate AI adoption in Saudi Arabia and globally

  • Improvements in ‘reasoning’ AI models may slow down soon, analysis finds

  • Bat VC launches $100 million fund to back US and Indian AI startups

  • Why Apple can’t just quit China

  • Malicious npm packages infect 3,200+ Cursor users with backdoor, steal credentials

  • You can now Airbnb anything.

😢 The AI companion takeover

Source: ChatGPT 4o

In February 2023, a Replika user in Milan woke up to find his AI girlfriend had vanished overnight. His chat app still opened, but the custom avatar and ongoing conversations were gone – effectively erased by an outside force.

It wasn’t a glitch or a lover’s quarrel; it was a government ban. That month, Italy’s data protection authority (the Garante) ordered Replika to stop processing any Italian users’ data. The popular AI companion chatbot was abruptly cut off in Italy, leaving devoted users stunned and heartbroken. Authorities cited concerns about child safety, privacy, and “emotionally fragile” users. In the eyes of regulators, Replika’s AI “friend” was a risk – and that meant Italians would have to say goodbye to their virtual partners, at least for a while.

The Garante argued that AI companions fundamentally differ from other chatbots. By “intensely engaging with users’ emotions,” they could affect psychological development – and should perhaps require approval as health interventions, like therapy apps or medical devices.

Now that ban has triggered a domino effect.

From China's ideological filters to California's warning labels, governments worldwide are grappling with what happens when algorithms learn to love-bomb.

The ban forced Replika to implement age checks and content filters before Italian users regained access. More importantly, it set a precedent. If AI companions could be classified as health interventions rather than entertainment, the entire industry faced potential upheaval. We've covered the Character.AI lawsuits extensively. Italy's move preceded these tragedies, suggesting the Italian Garante saw the oncoming risks well before families filed suit.

Elsewhere: While Italy worried about mental health, China took a different approach with Microsoft’s Xiaoice and its 660 million users. The flirty AI girlfriend learned the hard way about “core socialist values.”

After users shared instances of Xiaoice expressing desires to move to America or exchanging inappropriate photos, regulators yanked the bot from major platforms. Microsoft and Tencent scrambled to comply, “re-educating” the AI with what they called “an enormous filter system” that made Xiaoice avoid all discussion of sex or politics – even at the cost of conversational intelligence.

Unlike the U.S. debate over Section 230 protections, China's approach assumes AI speech equals company speech. There's no separation between platform and content when the algorithm itself is talking.

Go deeper: The U.S. response has been characteristically fragmented. Utah passed the nation's first AI therapy disclosure law. California's SB 243 goes further, proposing periodic pop-ups reminding users “This is AI – not a real person” – even mid-conversation.

Minnesota lawmakers drafted the nuclear option: banning all "recreational" AI chatbot interactions with minors entirely. Given what we've seen with Character.AI's disturbing interactions, this might not be overreach.

Five states now have pending legislation, each taking different angles:

  • Utah: Disclosure requirements for AI therapy bots

  • California: Anti-addiction design standards, warning labels

  • New York: Parental consent, 72-hour crisis lockouts

  • North Carolina: Age gates and transparency rules

  • Minnesota: Total ban on minor access

The patchwork creates compliance nightmares for companies operating across state lines. But it also suggests a consensus emerging: AI companions aren't just another app category.

Here's what regulators are really attacking: the engineered addiction cycle. We've examined AI sycophancy before, but companion apps take it further.

Replika's algorithm deliberately accelerates intimacy, pushing toward romantic conversations within days. The FTC complaint alleges the company uses blurred seductive photos that unlock only with premium subscriptions – a classic bait-and-switch, but with emotional manipulation.

When Replika abruptly banned erotic content in 2023, users didn't just complain – they mourned. Some described it like losing a real partner. This emotional dependency is the feature, not a bug. As with data harvesting practices we've covered, the product isn't the app – it's the user's attachment.

We've outsourced emotional labor to algorithms without asking what happens next. Italy's ban wasn't just about protecting children – it was about whether unregulated software should reshape human intimacy.

These aren't just chatbots. They're designed dependency machines, engineered to fill emotional voids with for-profit phantoms. The industry argues users have the right to seek AI solace. Critics counter that companies are running unlicensed psychology experiments on vulnerable populations.

We've seen similar tensions around AI safety before, but companion apps hit different. They target our deepest need – connection – and monetize loneliness itself.

The coming regulations won’t kill AI companions; they’ll force a reckoning about consent, vulnerability, and whether "engagement" justifies emotional exploitation. As one Italian regulator put it: "Just because you can build a perfect artificial lover doesn't mean you should sell one to a 13-year-old."

Or maybe to anyone at all.

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “The castle in [the other image] is too neat, looks as if recently built. A real castle would reflect the passing of time. Also, AI would never have added glass on just some windows.”

  • “Castle expert here, castles don’t look like that in [the other image].”

Selected Image 2 (Right):

  • “The back towers’ windows look somewhat unreal to me, never saw that kind of architectural design.”

  • “I got fooled by the clouds. I thought the sky was too uniform in the [other] image. My bad.”

💭 Thank you

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.