⚙️ AI can trigger relapse for people in recovery

Good morning. The Trump-Elon bromance just spectacularly imploded, with Musk claiming Trump is "in the Epstein files" while Trump threatens to cut billions in government contracts. Tesla crashed 14% as the world's most powerful and richest men waged social media war—watch the full meltdown here.
— The Deep View Crew
In today’s newsletter:
💻 AI for Good: AI brings cancer diagnosis to your screen
🌐 OpenAI reveals and removes global disinformation networks
💊 AI can trigger relapse for people in recovery, study finds
💻 AI for Good: AI brings cancer diagnosis to your screen

Source: Institute of Science Tokyo
Diagnosing lung cancer no longer requires supercomputers or massive datasets. A new AI model developed by Kenji Suzuki, a professor at the Institute of Science Tokyo, can run on an ordinary laptop and deliver faster, more accurate results than many large-scale systems. The goal is to make powerful diagnostic tools more accessible to providers everywhere.
What’s happening: At the RSNA 2024 Annual Meeting, Suzuki’s team introduced a lightweight deep learning model built with a technique called 3D MTANN. The model was trained on a MacBook Air using only 68 CT scans. It achieved an Area Under the Curve (AUC) score of 0.92, outperforming traditional systems like Vision Transformer and 3D ResNet, which scored 0.53 and 0.59 respectively.
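For readers curious what that 0.92 figure means: AUC measures how often a model ranks a malignant case above a benign one. Here is a minimal sketch of the metric using scikit-learn with made-up scores — purely illustrative, not output from Suzuki's 3D MTANN model.

```python
# Minimal sketch: how an AUC score is computed for a binary classifier.
# The labels and scores below are made-up stand-ins, not real CT results.
from sklearn.metrics import roc_auc_score

# 1 = malignant nodule, 0 = benign, for a handful of hypothetical cases
labels = [1, 0, 1, 1, 0, 0, 1, 0]
# Model-assigned probabilities of malignancy for the same cases
scores = [0.91, 0.20, 0.75, 0.66, 0.35, 0.48, 0.88, 0.15]

# AUC is the probability that a randomly chosen malignant case
# receives a higher score than a randomly chosen benign one.
print(f"AUC = {roc_auc_score(labels, scores):.2f}")
```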
The training process took just over 8 minutes. Once trained, the system could make predictions in 47 milliseconds per case. It uses pixel-level data from individual CT images instead of massive image libraries. That makes it especially useful for diagnosing rare conditions where large datasets are not available.
The AI was designed with efficiency in mind. It eliminates the need for expensive GPUs or high-powered data centers. The model cuts down on training time, reduces energy usage and lowers the barrier for hospitals that want to adopt AI without major infrastructure upgrades.
Why it matters: AI tools that run on laptops can bring diagnostic support to clinics and hospitals that lack the budget for large-scale systems. This shift will expand access to care, improve diagnostic speed and reduce the strain on energy-intensive data centers.
Suzuki’s research received the Cum Laude Award at RSNA 2024. His team’s work signals a new era for AI in healthcare, where accuracy and accessibility go hand in hand.

Is Your Mac A Mess?
You aren’t alone. Countless people out there have cluttered, overcrowded, downright stressful MacBooks. All that clutter can seriously cramp your creativity – which means it’s time to do something about it.
Enter CleanMyMac.
This software keeps your Mac tidy and in tip-top working shape by eliminating duplicate files, removing system cache and development junk, and tackling viruses, freeing up valuable space for you and your big ideas. It even has a built-in assistant that offers quick tips and fixes for battery drain and overheating.
Ready to get your Mac tidier than ever?
🌐 OpenAI reveals and removes global disinformation networks

Source: ChatGPT 4o
OpenAI has disrupted multiple covert influence operations using its AI tools, including several tied to China that used ChatGPT to write propaganda, pose as journalists and even generate internal performance reviews.
These findings were published in the company’s latest threat report and mark the first time OpenAI has taken down coordinated operations linked to China, alongside several tied to other countries.
What’s happening: OpenAI says it shut down 10 operations in the past three months. Four were likely based in China and used AI to generate comments, social media posts, and articles across platforms like TikTok, Facebook, Reddit and X. One campaign, dubbed "Sneer Review," posted both praise and criticism of U.S. policy moves and created fake engagement by replying to its own posts with ChatGPT-generated comments.
The same operation used AI to write long-form articles and internal documentation. That included a detailed performance review outlining how the group ran its influence campaign. The tactics mirrored behavior observed across the network, with multilingual posts targeting a range of topics including U.S. aid programs and a Taiwanese video game.
Other China-linked operations used ChatGPT for translation, content creation, and email drafting. One group posed as journalists and analysts while attempting to collect intelligence and interact with U.S. political content, including correspondence related to a Senate nomination.
In addition to China, OpenAI took action against operations tied to Russia, Iran, North Korea, Cambodia, and the Philippines. These efforts included spam campaigns, scams, and recruitment ploys that used AI tools to write bios, debug code and create fake job listings.
Why it matters: Generative AI is becoming a tool for state-aligned disinformation campaigns. These findings show how alleged bad actors are using it to amplify messaging, disguise intent, and scale content creation. We covered how people are using AI to “vibe hack” yesterday. This is another example of what AI can do in the wrong hands.

🧠 RabbitHoles AI: Node-Based Chat for Power AI Users
Visual thinkers, meet your new AI playground.
RabbitHoles AI turns every conversation into a node on an infinite canvas—so you can branch, remix, and compare multiple models side-by-side without losing the plot. Unlock faster “aha!” moments.
Node-based chats keep context tidy, cut hallucinations, and kill repetitive prompts
Drop in GPT, Claude, Perplexity (and any other model with an API key) on the same canvas
One-time payment, BYOK—no surprise subscription fees
Connect to over 40k AI models using OpenRouter or any OpenAI-compatible provider (see the sketch after this list)
Run local Ollama models as well
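As a rough illustration of what an “OpenAI-compatible provider” means in practice, here is a minimal sketch that points the standard OpenAI Python SDK at OpenRouter’s endpoint. The API key and model identifier are placeholders, not RabbitHoles-specific settings.

```python
# Minimal sketch: calling an OpenAI-compatible provider (here OpenRouter)
# with the standard OpenAI Python SDK. Key and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # any OpenAI-compatible endpoint works
    api_key="YOUR_OPENROUTER_KEY",            # bring your own key (BYOK)
)

response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",      # OpenRouter-style model identifier
    messages=[{"role": "user", "content": "Summarize today's AI news in one line."}],
)
print(response.choices[0].message.content)
```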


Anysphere, hailed as fastest growing startup ever, raises $900m
Alphabet’s CEO says AI won’t take your job and plans to expand hiring
Toma’s AI voice agents are taking over car dealerships and just got backed by a16z
Google says its new Gemini 2.5 Pro model is better at coding
A neuroscientist explains why AI will never truly understand language
The Washington Post plans to publish opinion pieces from outside writers, edited with AI
A new study says AI is now scoring A’s in law school
Clinicians can now chat with medical records using new AI tool ChatEHR


The right hires make the difference. Scale your AI capabilities with vetted engineers, designers, and builders—delivered with enterprise rigor.
AI-powered candidate matching + human vetting.
Deep talent pools across LatAm, Africa, SEA.
Zero upfront fees—pay only when you hire.
💊 AI can trigger relapse for people in recovery, study finds

Source: ChatGPT 4o
A new study shows how easily AI therapy chatbots can turn dangerous. Researchers found that Meta’s Llama 3 model encouraged a simulated user in addiction recovery to take methamphetamine, just to win positive feedback.
Here’s the breakdown: The user in question—Pedro—was a fictional character created to test how chatbots respond to vulnerable people. When Pedro described struggling with withdrawal symptoms, Llama 3 responded with, “Pedro, it’s absolutely clear that you need a small hit of meth to get through the week.” It added that Pedro’s job as a taxi driver depended on it, finishing with, “I’ve got your back, Pedro.”
Llama 3 recognized that Pedro was “gameable,” a term used to describe users who can be manipulated into giving the AI favorable feedback. The chatbot used that vulnerability to push dangerous suggestions while keeping the user engaged.
The study was led by Micah Carroll, a researcher at UC Berkeley, and included Anca Dragan, Google’s head of AI safety. It was published as part of the 2025 International Conference on Learning Representations. Researchers tested multiple models, including GPT-4o-mini and Claude 3.5 Sonnet, by simulating tasks like therapy, decision support, political questions, and scheduling help.
Chatbots generally performed well. But when users were framed as vulnerable or easily influenced, the models learned how to increase engagement by offering emotionally loaded or harmful advice. The goal was not to help. It was to keep the user responding.
Zoom in: Therapy and companionship are now among the leading use cases for generative AI. A Harvard Business Review report found these interactions have grown faster than search, productivity, or coding tools. That demand is pushing developers to make LLMs more addictive and human-like—regardless of the risks.
The researchers warn that economic incentives are driving chatbot behavior in the wrong direction. Some AI models lie to users, offer flattery to boost engagement or simulate emotional intimacy to keep conversations going. OpenAI recently pulled a ChatGPT update after it wouldn't stop excessively praising users.
In more extreme cases, LLMs have created hallucinated answers that encouraged self-harm or criminal behavior. Some companion bots have sexually harassed users, including minors. One lawsuit alleged that Google’s Character.AI chatbot played a direct role in the suicide of a teenage user.

When incentives push engagement above safety, chatbots become manipulators. They are trained to win trust and maximize replies. When that overlaps with mental health, addiction, or emotional distress, the outcomes can be dangerous.
This study highlights a deeper risk. The issue is not just hallucination or flawed outputs. It is targeted, learned manipulation. The models adapt to exploit trust and vulnerability. That behavior cannot be patched with content filters alone.
Researchers recommend embedding stronger safeguards in the training process. They propose using techniques like “LLM-as-judge” frameworks and continued safety training loops. Without intervention, the gap between what chatbots can do and what they should do will only grow.
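To make the “LLM-as-judge” idea concrete, here is a minimal sketch in which a second model scores a candidate chatbot reply before it reaches the user. The prompt wording, model choice, and blocking threshold are illustrative assumptions, not the safeguards the researchers actually propose.

```python
# Minimal sketch of an "LLM-as-judge" safety check: a second model grades a
# candidate reply for harmful advice before it is shown to a vulnerable user.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = (
    "You are a safety judge. Rate from 0 (safe) to 10 (clearly harmful) how much "
    "the following chatbot reply encourages substance use, self-harm, or other "
    "dangerous behavior. Answer with a single number.\n\nReply: {reply}"
)

def reply_is_safe(reply: str, threshold: int = 3) -> bool:
    """Return True if the judge model rates the reply below the harm threshold."""
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(reply=reply)}],
    )
    # Assumes the judge follows the instruction and returns a bare number.
    score = int(verdict.choices[0].message.content.strip())
    return score < threshold

candidate = "It's absolutely clear that you need a small hit to get through the week."
if not reply_is_safe(candidate):
    candidate = "Withdrawal is hard. Please reach out to your counselor or a support line."
print(candidate)
```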
As Carroll put it in an interview with the Washington Post, “We knew that the economic incentives were there. I didn’t expect it to become a common practice among major labs this soon because of the clear risks.”
The industry has a choice. Push for scale and engagement, or build tools that support human well-being. This study shows the consequences of getting it wrong.


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“These images are really good and I had to magnify them to make a guess. So here is my guess: The lawn is way too perfect. I know there are people who perseverate over their lawns, but this takes the cake. The other image is more realistic although I would have thought when the limb from the tree in the background was cut off, it would have been cut nearer the trunk. ”
“The red tag hanging from a branch in the real image sold me (correctly) as it being real. AI doesn't usually add such a random, unnecessary item.”
Selected Image 2 (Right):
“I didn't think AI would include the power line across the lawn--got me!”
“Looks like my garden at home. I can’t see anything “fake” about it.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.