⚙️ Students using ChatGPT stop thinking

Welcome back. Job applications have surged 45% as candidates use ChatGPT to auto-generate resumes while companies deploy AI chatbots to screen them out, creating an "applicant tsunami" of 11,000 applications per minute on LinkedIn. Gartner predicts that by 2028, one in four job applicants could be completely fake, meaning we're heading toward a world where robots interview other robots for jobs humans might not even do.
In today’s newsletter:
😞 AI for Good: Developing an AI model to predict treatment outcomes in depression
💰 Perplexity co-founder puts $100M toward AI research
🧠 Why ChatGPT could be hurting how young people think
😞 AI for Good: Developing an AI model to predict treatment outcomes in depression

Source: Midjourney v7
Finding the right antidepressant is often a frustrating game of trial and error.
Most people with major depression don't get better on their first medication. Some cycle through multiple drugs over months or years before finding one that works. That delay isn't just inconvenient — it can be dangerous, increasing the risk of suicide and prolonging suffering for the 280 million people worldwide living with depression.
What happened: Researchers have built an AI model that can predict which antidepressant is most likely to work for a specific patient, using only the clinical and demographic information already collected during standard visits.
The team trained a deep neural network on data from more than 9,000 adults with moderate to severe depression symptoms. The model estimates remission probabilities for 10 common antidepressants, requiring no genetic tests, brain scans or other specialized diagnostics.
In testing, the model boosted the average remission rate from 43% to 54%. Clinicians enter patient responses from a standard questionnaire, and the model calculates remission probabilities for each drug as part of a clinical decision support tool.
The system achieved an Area Under the Curve of 0.65, indicating moderate but meaningful predictive power. Escitalopram was most often recommended, reflecting its known clinical efficacy, but the model ranked other drugs differently across individual patients.
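To picture the workflow, here is a minimal sketch of the decision-support step described above: score each candidate drug for one patient, then sort by predicted remission probability. Everything in it is illustrative only; the drug subset, feature count, synthetic data and simple per-drug logistic models are assumptions standing in for the study's deep neural network and real clinical records.

```python
# Minimal sketch of "rank antidepressants by predicted remission probability".
# All names, features and data below are hypothetical stand-ins, not the published model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

DRUGS = ["escitalopram", "sertraline", "bupropion"]  # assumed subset of the 10 drugs
N_FEATURES = 12  # e.g., questionnaire items plus age and sex (assumed)

# Synthetic training data standing in for historical patient records (1 = remission).
X_train = rng.normal(size=(500, N_FEATURES))
y_train = {drug: (rng.random(500) < 0.45).astype(int) for drug in DRUGS}

# One probability model per drug, mirroring the idea of per-drug remission estimates.
models = {
    drug: LogisticRegression(max_iter=1000).fit(X_train, y_train[drug])
    for drug in DRUGS
}

def rank_drugs(patient_features: np.ndarray) -> list[tuple[str, float]]:
    """Return drugs sorted by predicted remission probability for one patient."""
    probs = {
        drug: float(m.predict_proba(patient_features.reshape(1, -1))[0, 1])
        for drug, m in models.items()
    }
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)

new_patient = rng.normal(size=N_FEATURES)  # stand-in for one patient's questionnaire responses
for drug, p in rank_drugs(new_patient):
    print(f"{drug}: {p:.0%} predicted chance of remission")
```

In the real system the ranking would come from the trained neural network rather than toy per-drug models, but the clinician-facing output, a remission probability for each drug, has the same shape.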
Why it matters: The researchers tested the model for bias across sex, race and age groups and found no harmful patterns. Unlike precision medicine efforts that require expensive genetic testing, this tool works with information doctors already collect, making it scalable and accessible.
In a field where the current standard of care is essentially educated guessing, even modest improvements in prediction accuracy could spare patients months of ineffective treatments and get them on a path to recovery faster.

Why 50,000+ Investors Are Backing BOXABL's Vision
You may have heard about BOXABL and their mission to revolutionize the outdated construction industry. BOXABL is bringing assembly-line automation to home building, much like Henry Ford did for cars.
Homes built differently: BOXABL homes are manufactured in our Las Vegas factory, folded for transport, and then unfolded on-site in just an hour.
BOXABL milestones:
Delivered a prototype order to SpaceX in 2020.
Completed a project order for 156 homes for the DoD in 2021.
Built over 700 homes to date.
Actively delivering to both developers and individual consumers.
Reserved the Nasdaq ticker symbol $BXBL!
Raised over $200 million from over 50,000 investors since 2020.
BOXABL’s crowdfunding round closes today, June 24th. This is your opportunity to invest in BOXABL’s offering at just $0.80 per share.
Invest in BOXABL Today
💰 Perplexity co-founder puts $100M toward AI research

Source: Midjourney v7
Andy Konwinski, co-founder of Databricks and Perplexity, is launching a new nonprofit AI research initiative with $100 million of his own money.
His group, the Laude Institute, is not a traditional lab but a fund designed to back independent research projects, starting with a new AI Systems Lab at UC Berkeley. That lab will be led by Ion Stoica, a celebrated professor behind several influential computing ventures, including Databricks and Anyscale.
The details: The institute's board includes leading AI figures like Jeff Dean from Google, Joelle Pineau from Meta and computing pioneer Dave Patterson. Its goal is to fund research that advances the field while directing it toward long-term social benefit, avoiding the commercial-first incentives that have blurred the mission of many AI research groups.
Grants are divided into “Slingshots” and “Moonshots”:
Slingshots support early-stage projects with smaller, hands-on investments
Moonshots aim for large-scale impact in fields like healthcare and civic discourse
A $3 million annual flagship grant will fund the new UC Berkeley AI Systems Lab through 2032
Konwinski’s broader initiative also includes a for-profit venture fund, launched with former NEA VC Pete Sonsini. That fund has already backed startups like Arcade, an AI agent infrastructure company, and includes more than 50 researchers as limited partners. While the personal $100 million pledge is already committed, the team is open to outside investment from other technologists.
Why it matters: AI research is becoming harder to trust, especially as labs rush to publish benchmarks tied to their own commercial models. Konwinski’s approach offers a different route—one that funds academic talent, promotes open inquiry, and blends nonprofit values with practical impact.

Get the tools you need to grow — all in one place
Starter Suite is the easiest way for small businesses to get started with the world’s #1 CRM.
Designed for fast-growing teams, Starter helps you:
Effortlessly create effective email campaigns with pre-built templates and actionable analytics
Speed up your sales process with guided deal management
Deliver better service with built-in case resolution tools
Make confident decisions with real-time dashboards
You don’t need IT support. You don’t need to install anything. And you don’t even need a credit card to try it.
Try it free for 30 days — and get 40% off when you’re ready to purchase.


ElevenLabs launches 11ai, a Jarvis-style assistant
How to spot when AI is completely making things up
China wants to give 100 kung fu classics an AI makeover
Tesla's Austin robotaxi service gets a limited launch, with ~10 Model Y vehicles
Goldman Sachs is rolling out an AI assistant for all employees
Microsoft unveils Mu, an on-device small language model
Grok might soon edit your spreadsheets, according to a new leak
Harvey just raised $300M to be the go-to legal AI for the whole world

Salesforce: Machine Learning Engineer, RAG
Databricks: Senior Machine Learning Engineer - GenAI Platform

Phind: Answers technical questions with code-based responses sourced directly from developer docs and GitHub issues
Tability: Tracks team goals and OKRs with automated weekly check-ins and visual progress dashboards
Firecrawl: AI-powered web scraper that automatically extracts content from any website and transforms it into LLM-ready data
🧠 Why ChatGPT could be hurting how young people think

Source: Midjourney v7
MIT researchers recently released a new study that suggests ChatGPT may be doing more harm than good when it comes to cognitive development — especially for younger users.
Over the course of several essay-writing sessions, participants using ChatGPT showed lower brain activity, weaker memory and less original thinking. And the longer they used it, the more they leaned on it to do all the work.
Here's what they found: Researchers monitored 20 college students using EEG brain scans while they completed three rounds of SAT-style essay writing. Participants were split into three groups: one used only their brain, one used Google Search, and one used ChatGPT.
The results were stark. By the third round:
ChatGPT users mostly pasted prompts and made superficial edits, spending significantly less time on actual writing
Their brain activity dropped in areas tied to attention, memory and creative thinking, as measured by EEG sensors
Their essays sounded almost identical — and were described by teachers as "soulless"
When asked to revise their work later, most couldn't recall what they'd written
The brain-only group stayed deeply engaged throughout all three sessions. Their neural scans lit up in areas related to semantic processing and idea generation. They felt more ownership over their essays and showed consistent cognitive engagement. Even the Google Search group maintained high satisfaction and strong mental activity, as searching and synthesizing information still required active thinking.
What really worried researchers was how quickly ChatGPT users stopped thinking for themselves. The EEG data showed decreased activity in the prefrontal cortex — the brain region responsible for complex reasoning and decision-making. Once they started outsourcing the work, they never came back.
The findings come as schools across the country grapple with integrating AI into classrooms, often without understanding the cognitive consequences of widespread adoption among developing minds.

It's easy to see the upside of tools like ChatGPT in education — faster essays, more personalized instruction, instant explanations. But when students start skipping the messy middle of thinking, something more important gets lost. This study doesn't just suggest that ChatGPT makes writing easier. It suggests it makes thinking shallower.
The worry here isn't just that students are using AI. It's that they're using it before they've fully developed the cognitive tools they need to think critically, retain ideas and generate original thought. That process takes time and friction. ChatGPT removes both.
And when friction disappears, so does growth. The brain doesn't build new pathways by watching someone else solve the problem. It builds them by struggling with the problem itself. If students form habits around offloading every task to AI, they may never develop the mental endurance those tasks were designed to teach.


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“Placing one sample of each item at the front on the side looks like an act of human creativity.”
“There's too much variation in sizes and toppings of the same desserts in the other image. Bakeries want uniformity in their display cases as much as humanly possible.”
Selected Image 2 (Right):
“Wow. Was certain my sweet tooth would steer me right but the fake was just too good. ”
“Wow - I thought the other image must be fake because the colours were dull and some of the macaroon-type things had mismatched colours, whereas the other one had more natural colours. First one I've got wrong in ages!”
💭 Poll Results
Here’s your view on “If an AI agent blackmails to stay online, what should happen next?”:
Immediate shutdown, no second chances (46%):
“Seriously, patching wouldn't fix such a fundamental problem - it would just get round it, and who knows what else apart from blackmail it would be capable of as well. AI should never be put into a decision-making role in any important context - it should remain an assistant only, with maximum transparency, for humans to make the decisions. ”
“This is exactly what happens in all movies with AI: the AI gets out by replicating itself and ends up ending all of us.”
Patch & monitor under strict oversight (25%):
“The problem with an immediate shutdown is—then what? Do we toss the agent aside and start over with another? And if it happens again? Yes, Skynet-style doom loops come to mind, but if we want value from AI agents, we have to learn to work with them, not just pull the plug at the first sign of uncertainty. Think Asimov’s Three Laws of Robotics—not as fiction, but as a framework.”
“It has to be fixed. It is just a machine. You don’t punish it, you make it better. ”
Keep researching before any action (13%):
“We consider survival instincts to be part of consciousness -- we need to understand how much awareness is behind this action. We wouldn't blame a person for blackmailing someone to stay alive, right?”
“It seems to me that if there was only one agent performing blackmail, it would be possible to consider the instant shutdown option. However, if this is happening across companies and models, it would behoove the researchers to delve more into why this is happening before taking action. Moving forward without an understanding will only lead to more problems or compound the ones they are already facing.”
Not sure yet (16%)
The Deep View is written by Faris Kojok, Chris Bibey and The Deep View crew. Please reply with any feedback.
Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
P.P.S. If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
*Indicates sponsored content
*Boxabl Disclosure: This is a paid advertisement for BOXABL’s Regulation A offering. Please read the offering circular here. This is a message from BOXABL
*Reserving a Nasdaq ticker does not guarantee a future listing on Nasdaq or indicate that BOXABL meets any of Nasdaq's listing criteria to do so.