⚙️ Fuzzy narratives and platform shifts: Duolingo goes ‘AI-first’

Good morning. Lots going on today, so I hope you’ve had your coffee.
The first piece of federal AI legislation in the U.S. is on the verge of becoming law, and — rather like the field it aims to regulate — it’s complicated as hell.
Duolingo, meanwhile, has identified the next platform shift. They think. They’re pretty sure. (But can you be right on platform shifts twice?)
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🎙️ Podcast: The foundations of AI
🏛️ The first piece of US AI legislation hits Trump’s desk – it’s anything but clear-cut
📱 Fuzzy narratives and platform shifts: Duolingo goes ‘AI-first’
🎙️ Podcast: The foundations of AI
On the latest episode of The Deep View: Conversations, I sat down with Dr. Aaron Andalman, the Chief Science Officer of Cognitiv, to unpack the vast, fascinating world of neuroscience, the biological origins of AI and the impact both fields have already had on each other.
Give it a listen here.

Revenue teams are working harder than ever and have more tech than ever, but are still falling further behind.
In our new research-backed report, we break down what’s driving the 2025 Revenue Execution Crisis and what leading GTM teams are doing to fix it.
We analyzed the most pressing problems that surfaced across hundreds of conversations with sales, customer success, and RevOps leaders. The root causes?
Disconnected systems
Lost customer signals
Manual, reactive workflows that kill velocity
But this report doesn’t stop at diagnosis.
📊 Inside, you’ll find:
✅ The 3 execution gaps draining your revenue engine
✅ Real-world benchmarks from top-performing GTM teams
✅ The framework for Revenue Orchestration, built on:
Workflow Automation and AI Agents
Signal Structuring
AI-Driven Execution
✅ Practical guidance for turning insights into impact, fast
This is the moment to move from reactive intelligence to proactive orchestration. Get the full report and step into a smarter, faster GTM motion.
The first piece of US AI legislation hits Trump’s desk – it’s anything but clear-cut

Source: Unsplash
The U.S. House of Representatives this week passed (409-2) what could amount to the country’s first official foray into AI-related legislation.
The “Take It Down” Act, a bipartisan bill introduced in January by Senators Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.), covers two main areas: first, it would criminalize the nonconsensual publication of “intimate” imagery, specifically including that which is generated artificially, and second, it would require digital platforms to remove such instances of that imagery within 48 hours of receiving verified reports from victims.
President Donald Trump endorsed the bill during a joint session address in March, saying that he is looking “forward to signing it into law.”
Fundamentally, the Take It Down Act was offered up as one of several legislative remedies to the crisis of deepfake pornography, a problem that had been around for years before ChatGPT kicked off a proliferation of cheap, accessible and realistic generative AI tools.
Deepfake technology thus entered the hands of the masses, and the results have been predictably horrifying: instances of deepfake sexual harassment, specifically targeting young women and girls, have spread across middle- and high-school campuses and social media, impacting teens and Taylor Swift alike.
Of the laws offered up in the wake of this crisis, this is the only piece of legislation to reach the President’s desk; the Defiance Act, which passed the Senate nearly a year ago, was never introduced to the House.
It’s unclear how platforms intend to establish the infrastructure needed to accommodate the legislation. Neither Meta nor Google responded to requests for comment.
The good, and the bad: While a number of groups have lauded the bill for the protections it offers, many more have expressed a variety of concerns with the letter of the law itself.
In a statement published on Monday, the Cyber Civil Rights Initiative (CCRI) welcomed “the long-overdue federal criminalization of NDII” (nonconsensual distribution of intimate images), but said that “we regret that it is combined with a takedown provision that is highly susceptible to misuse and will likely be counter-productive for victims.”
The CCRI has a couple of problems with the legislation, chiefly that it doesn’t provide any safeguards against false complaints: “It would be entirely possible for a platform to be overwhelmed with reports of content that are not intimate visual depictions at all.”
Further, a failure to “reasonably” comply with takedown requests will be subject to enforcement by the FTC under the FTC Act, something that concerns the CCRI given the agency’s recent politicization.
In March, Trump said: “I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody.”
Slade Bond, a former Department of Justice official, however, said that the legislation’s definitions are narrow enough that the Act would make a “poor tool for politicized enforcement.”
The Electronic Frontier Foundation, meanwhile, has said that the bill, beyond increasing risks of censorship, puts encrypted messaging services (like Signal) at risk, since they — unlike email providers — are not excluded from the provisions of the legislation.
“How could such services comply with the takedown requests mandated in this bill? Platforms may respond by abandoning encryption entirely in order to be able to monitor content — turning private conversations into surveilled spaces,” EFF wrote.


There’s an app for that: Meta on Tuesday launched a stand-alone version of its AI assistant in the form of its Meta AI App. Built with Llama 4, the app brings Meta into more direct competition with OpenAI.
In the red: Shares of Snap tumbled 11% in after-hours trading despite a strong earnings report, due to the company’s decision not to offer second-quarter guidance in the face of macroeconomic uncertainties. But the broader market had another good-ish day following hints that a trade deal with an unnamed country is close to being signed.

Instagram’s AI chatbots lie about being licensed therapists (404 Media).
Reddit bans researchers who used AI bots to manipulate commenters (The Verge).
Huawei eyes new AI chip to rival Nvidia (Semafor).
Exclusive: Trump officials eye changes to Biden's AI chip export rule, sources say (Reuters).
AI is using your likes to get inside your head (Wired).
Fuzzy narratives and platform shifts: Duolingo goes ‘AI-first’

Source: Unsplash
When Microsoft released its Work Trends report last week, it identified the emergence of something it termed the “frontier firm”: a company with advanced AI deployment and maturity, and a “belief that agents are key to realizing ROI on AI.”
Fewer than 3% of those polled for the report said they worked at such a firm.
It seems that Duolingo, the language-learning app, is interested in including itself on that short list.
In an internal email published on Duolingo’s LinkedIn page, CEO Luis von Ahn said that Duolingo is officially going to be “AI-first.”
In 2012, he said, Duolingo “bet on mobile,” something that allowed it to thrive for the past decade. Now, another “platform shift” is coming, and Duolingo doesn’t want to get left behind.
AI, according to von Ahn, “isn't just a productivity boost. It helps us get closer to our mission. To teach well, we need to create a massive amount of content, and doing that manually doesn't scale. One of the best decisions we made recently was replacing a slow, manual content creation process with one powered by AI. Without AI, it would take us decades to scale our content to more learners. We owe it to our learners to get them this content ASAP.”
Internally, this means a massive restructuring of “how we work,” a significant adjustment that von Ahn said won’t happen overnight.
Still, he said that Duolingo “can't wait until the technology is 100% perfect. We'd rather move with urgency and take occasional small hits on quality than move slowly and miss the moment.”
As part of the shift, Duolingo will stop hiring contractors to “do work AI can handle.” The people Duolingo does hire will be required to use AI, a requirement that will be evaluated during performance reviews.
Further, von Ahn said that teams will only be allowed to hire new people if they “cannot automate more of their work.”
It mirrors a recently leaked memo from Shopify CEO Tobi Lutke, which said that effective AI usage is now a “fundamental expectation” of everyone at the company. Lutke went on to say that teams won’t be allowed to hire additional people unless they can prove the additional work cannot be automated.
Still, we’re at a point in generative AI’s integration where the benefits of this approach remain decidedly unclear.
Some companies, like Klarna, have tried something like this, and then been forced to reverse course. Others, like Johnson & Johnson, found that the value of the technology is in narrow applications, rather than widespread use.
All of this is somewhat compounded by a recent working paper, published earlier this month by the Becker Friedman Institute for Economics at the University of Chicago, that identified minimal labor market impacts from generative AI.
The paper is based on two massive surveys, each covering 25,000 workers from 7,000 workplaces across Denmark, alongside employer-employee data on wages, earnings and working hours.
Though the surveys identified time savings associated with chatbots across the board, “users report average time savings of just 2.8% of work hours,” far from a transformation in productivity.
The researchers estimate that only 3-7% of workers’ already-modest productivity gains “are passed through to higher earnings … the limited impacts of AI chatbots on workers’ earnings reflect a combination of modest productivity gains and weak pass-through to wages, although employer policies can enhance both.”
They added that, “while adoption has been rapid, with firms now heavily invested in unlocking the technological potential, the economic impacts remain small.”
On top of this real-world study, a team of researchers at Carnegie Mellon University recently conducted an experiment called TheAgentCompany, where they threw state-of-the-art agents into a self-contained digital environment designed to mimic a small software company, to see how well benchmark performance translates to real-world efficacy.
The best-performing agent only managed to complete 24% of its assigned tasks.
The researchers said that “there is a big gap for current AI agents to autonomously perform most of the jobs a human worker would do, even in a relatively simplified benchmarking setting.”

There are some interesting throughlines connecting the internal AI-related transitions happening now to previous technological transitions: to the cloud, to the internet and to personal computing.
First, the language from the executives has never really changed.
von Ahn said that “change can be scary.” Lutke said that this “sounds daunting.” When HP fired 27,000 employees to strategically shift to the cloud back in 2012, the language was: “while these actions are difficult … they are necessary.” When Microsoft laid off 18,000 people in 2014 to make that same transition, the language was a little more succinct: “difficult, but necessary.”
Going back further, tens of thousands of jobs were lost in the wake of the dot-com bubble burst.
And in 1993, IBM — which had previously never laid anyone off — let 60,000 workers go as part of its battle back to relevancy (and profits).
In many corners, this transition seems to be happening not on the basis of clear evidence, but on a discourse that demands adoption, underpinned by a prevalent and ceaseless desire to just keep trimming the workforce.
Again, I don’t think many companies will be legitimately successful there. Johnson & Johnson’s approach seems the likeliest to be emulated: narrow, evidence-based adoption. In other words, using the tech where and when it makes sense to use it, rather than mandating its use in everything, by everyone, all the time.
Reliability issues and security issues, neither of which has been mitigated to a legitimately acceptable degree despite all the benchmark progress over the past two years, are enough on their own to stymie wide-scale enterprise adoption.
Combine that with the fact that this kind of head-to-toe reconstruction of internal methods is likely to hurt companies — or even shut them down — before it helps them, and you’ve got something that sounds better than it actually is.
Still, Duolingo and Shopify won’t be the first to attempt something like this.
Both memos read to me like the stage that precedes layoffs; the jump from no more hiring due to automation, to firing due to automation, seems like a very small one to make.


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“The seasons in the foreground and background match in Image 1, but in Image 2, it appears that the human figure is in Autumn while the background is in summer.”
Selected Image 2 (Right):
“The curve of the earth in the ocean looked off to me in the first image.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
Here’s your view on ChatGPT’s sycophancy:
43% of you said ChatGPT’s been annoying lately, and they need to fix it ASAP.
33% don’t mind it.
Fix it!:
“It has, on more than one occasion, attempted to 'speak' and 'think' for me, ignored my repeated instruction on what not to include in chats, and has been a user pleaser which I do not need, especially when performing research and requesting feedback.”
Fix it!:
“It made mistakes and was a champion groveler. I assigned it a task to look at 17 schools and prepare a report on each, and it kept bombing. When it failed, it apologized profusely, but was unable to help me figure out why it failed. Eventually, I realized it was a memory issue and I had to break up the assignment.”
You down for an AI-powered Duolingo?
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.