⚙️ Microsoft is building the "open agentic web"

Good morning. Swedish buy-now-pay-later giant Klarna is now making nearly $1 million in revenue per employee (up from $575K) after replacing 700 customer service workers with AI chatbots. The efficiency push comes just as the company filed for its much-anticipated US IPO... only to promptly postpone it after Washington’s tariff announcement sent markets into a tailspin.
— The Deep View Crew
In today’s newsletter:
♋ AI for Good: AI doing big things for equitable cancer care
📓 New NotebookLM app helps you understand anything, anywhere
📈 Microsoft ditches Bing to build the open agentic web
♋ AI for Good: AI doing big things for equitable cancer care

Source: CancerNetwork
Up to 30% of breast-cancer cases flagged in screening are ultimately overdiagnosed, sending patients through surgery and chemo they never needed. Researchers say AI can shrink that number by spotting subtler tumor patterns and matching them to precision-medicine profiles.
Medical student Viviana Cortiana and physician Yan Leyfman lay out the roadmap in their April 2025 ONCOLOGY review, calling for population-specific models trained on diverse, high-quality data to curb false positives and tailor treatment, especially in low- and middle-income countries.
Their framework for ethical cancer AI rests on four pillars:
Data privacy and security
Clinical validation
Transparency in model design
Fairness through bias checks
Early roll-outs show the concept works:
India’s Telangana state has begun an AI pilot across three districts to screen for oral, breast and cervical cancers, with instant triage to specialists—an approach aimed at easing its radiologist shortage.
AstraZeneca + Qure.ai have processed five million chest X-rays in 20 countries, flagging nearly 50,000 high-risk lung-cancer cases and proving AI triage can scale in resource-strained settings.
“AI has the potential to fundamentally change how we detect, treat and monitor cancer, but realizing that promise… will require collaboration, validation, thoughtful implementation and a commitment to leaving no patient behind,” Leyfman said.
Big picture: Bringing these tools to scale will require collaboration. Health systems can supply de-identified scans, tech firms refine algorithms, NGOs underwrite training and governments streamline approvals. If those players sync, AI could deliver the same diagnostic confidence enjoyed in top clinics to every community, easing overtreatment costs and catching deadly cancers earlier, resulting in smarter care for all.

AI isn’t a buzzword anymore. It’s the skill that gets you hired, helps you earn more, and keeps you future-ready.
Join the 2-Day Free AI Upskilling Sprint by Outskill — a hands-on bootcamp designed to make you AI-smart in just 16 hours.
📅 23rd May: Kick-off Call & Session 1
🧠 Live sessions: 24th & 25th May
🕜 11AM EST to 7PM EST
Originally priced at $499, but the first 100 of you get in completely FREE! Claim your spot now for $0! 🎁
Inside the sprint, you'll learn:
✅ AI tools to automate tasks & save time
✅ Generative AI & LLMs for smarter outputs
✅ AI-generated content (images, videos, carousels)
✅ Automation workflows to replace manual work
✅ CustomGPTs & agents that work while you sleep
Taught by experts from Microsoft, Google, Meta & more.
🎁 You will also unlock $3,000+ in AI bonuses: 💬 Slack community access, 🧰 top AI tools, and ⚙️ ready-to-use workflows — all free when you attend!
📓 New NotebookLM app helps you understand anything, anywhere

Source: Google
NotebookLM just went mobile, giving you a smarter way to learn, organize, and listen. Anytime, anywhere.
After months of user feedback, NotebookLM is now available as a mobile app on both Android and iOS. The app brings key features from the desktop version to your phone or tablet, allowing you to interact with complex information on the go. Early users are already praising its ability to help with research, review, and multitasking in real time.
Whether you’re a student, professional, or knowledge enthusiast, NotebookLM now fits right in your pocket.
The details: NotebookLM is no longer tied to your desktop. With its new mobile release, Google is giving users more flexibility in how they process and interact with information.
Available now on iOS 17+ and Android 10+
Listen to Audio Overviews offline or in the background
Ask questions by tapping "Join" while listening
Share content directly from other apps into NotebookLM
Ideal for managing information while commuting or multitasking
Google says this is just the start. More updates are on the way, including expanded file support, additional source types, annotation tools, and editing options, plus tighter integration with other Google products. Feedback is being collected on X and Discord, and future releases may include deeper AI customization and smarter summaries.
Here’s a great video from a couple of weeks ago talking about NotebookLM turning everything into a podcast. Check it out
Big picture: NotebookLM is evolving into more than a research tool. By going mobile, it becomes a personal learning assistant you can use wherever inspiration hits. The shift is not just about convenience—it is about making high-level thinking mobile. Whether you’re reviewing documents on the train or summarizing sources between meetings, this update turns passive reading into active understanding.

Train and Deploy AI Models in Minutes with RunPod
What if you could access enterprise-grade GPU infrastructure—without the enterprise-grade price tag or complexity?
With RunPod, you can. Our platform gives developers, engineers, and startups instant access to cloud GPUs for everything from model training to real-time inference. No DevOps degree required.
Build, train, and deploy your own custom AI workflows at a fraction of the cost—without waiting in line for compute.


A new study reveals that most AI models still can’t tell time or read calendars
Nvidia’s CEO calls Chinese AI researchers “world class” in a nod to global innovation
Leaked specs point to a lightweight iPhone 17 Air with a 2,800mAh battery
Why AI advancement doesn’t have to come at the expense of marginalized workers
China begins assembling its supercomputer in space

Google: Machine Learning Engineer, LLM, Personal AI, Google Pixel
Deloitte US: AI Engineering Manager/Solutions Architect - SFL Scientific

📈 Microsoft ditches Bing to build the open agentic web

Source: Microsoft
If you can’t beat them, host them. At Microsoft Build, Microsoft announced partnerships to host third-party AI models from xAI, Meta, Mistral, and others directly on Azure, treating these former rivals as first-class citizens alongside OpenAI’s ChatGPT models. Developers will be able to mix and match models via Azure’s API and tooling, all with Microsoft’s reliability guarantees and security wrapper:
Meta’s Llama series – the open-source family of large language models from Meta, known for being adaptable and efficient.
xAI’s Grok 3 (and Grok 3 Mini) – the new LLM from Elon Musk’s startup xAI, which Microsoft is now hosting in Azure in a notable alliance (Musk once co-founded OpenAI, now he’s indirectly back on Microsoft’s platform).
Mistral – models from the French startup focused on smaller, high-performance LLMs.
Black Forest – models from Black Forest Labs, a German AI firm.
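The "mix and match" pitch rests on the fact that most hosted-model services accept a common, OpenAI-style chat payload, so switching providers is often just a field change. As a rough illustration (the endpoint URL and model names below are placeholders, not Azure's actual deployment identifiers):

```python
# Hedged sketch: many hosted-model services accept an OpenAI-style chat
# payload, so swapping models is just a field change. The endpoint and
# model identifiers here are placeholders, not Azure's actual names.
import json

ENDPOINT = "https://example-resource.example/chat/completions"  # placeholder


def build_chat_request(model: str, user_message: str, temperature: float = 0.7) -> dict:
    """Construct a provider-agnostic chat-completion payload; only the
    'model' field changes when mixing and matching hosted models."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }


# Same request shape, three different hosted models (names illustrative):
for model in ("llama-3-70b", "grok-3-mini", "mistral-large"):
    payload = build_chat_request(model, "Summarize today's AI news.")
    body = json.dumps(payload)
    # In practice you'd POST `body` to ENDPOINT with your auth headers.
```

The win for developers is that application code stays the same while the model underneath changes; the trade-off, as noted below, is that the one endpoint everything flows through belongs to the cloud provider.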
This brings Azure’s catalog to 1,900+ models available for customers. Microsoft CEO Satya Nadella, who spoke via hologram with Elon Musk during the keynote, touted the multi-model approach as “just a game-changer in terms of how you think about models and model provisioning.” “It’s exciting for us as developers to be able to mix and match and use them all,” he added. In effect, Microsoft is positioning itself as an impartial arms dealer in the AI race – happy to rent you any model you want, so long as you run it on Azure.
Go deeper: Microsoft introduced an automatic model selection system inside Azure (part of the new Azure AI Foundry updates). Dubbed the Model Router, it routes each AI query to the “best” model available based on the task and user preferences. This behind-the-scenes dispatcher can optimize for speed, cost, or quality – for example, sending a quick question to a smaller, cheaper model, but a complex query to a more powerful (and expensive) model. It also handles fallbacks and load balancing if one model is busy. For developers, it promises easier scalability and performance without manual model wrangling.
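The dispatcher pattern described above can be pictured as a few lines of routing logic. This is an illustrative sketch only, not Azure's Model Router implementation; the model names, prices, and complexity heuristic are all hypothetical:

```python
# Illustrative sketch of a query-routing dispatcher. This is NOT
# Azure's Model Router: model names, per-token costs, and the
# complexity heuristic below are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # hypothetical pricing
    capability: int            # 1 = small/fast ... 3 = large/powerful


CATALOG = [
    Model("small-fast", 0.0002, 1),
    Model("mid-tier", 0.002, 2),
    Model("large-flagship", 0.02, 3),
]


def estimate_complexity(prompt: str) -> int:
    """Crude stand-in for task analysis: longer or reasoning-heavy
    prompts are assumed to need a more capable model."""
    text = prompt.lower()
    score = 1
    if len(prompt.split()) > 100 or "step by step" in text:
        score += 1
    if any(k in text for k in ("prove", "analyze", "plan")):
        score += 1
    return min(score, 3)


def route(prompt: str, prefer: str = "cost") -> Model:
    """Pick the cheapest (or most capable) model that meets the
    estimated complexity; fall back to the largest if none qualify."""
    needed = estimate_complexity(prompt)
    eligible = [m for m in CATALOG if m.capability >= needed]
    if not eligible:
        return max(CATALOG, key=lambda m: m.capability)
    if prefer == "cost":
        return min(eligible, key=lambda m: m.cost_per_1k_tokens)
    return max(eligible, key=lambda m: m.capability)
```

A quick question like "What time is it?" would be routed to the cheap model, while a prompt asking the system to analyze and plan step by step would be escalated to the flagship; a production router would also fold in latency, load, and fallback handling, as the announcement describes.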
Yes, but: The catch? All this convenience further ties developers into Microsoft’s ecosystem. The Model Router makes Azure the brain that decides which model handles your requests – a useful service, but one that subtly increases dependency on Microsoft’s cloud. By making multiple models available under one roof (and even one API), Microsoft reduces any incentive for customers to shop around elsewhere. Choice is abundant – as long as Azure is the one providing it.
Another standout Build announcement was NLWeb, an open-source initiative aimed at turning every website into a model-callable endpoint that can talk back in plain language. Microsoft’s CTO Kevin Scott introduced NLWeb as essentially the HTML for the AI era.
The idea: with a few lines of NLWeb code, website owners can expose their content to natural language queries. In practice, it means any site could function like a mini-ChatGPT trained on its own data – your data – rather than ceding all search and Q&A traffic to external bots.
Each NLWeb-enabled site runs as a Model Context Protocol (MCP) server, making its content discoverable to AI assistants that speak the protocol. In one demo, food site Serious Eats answered conversational questions about “spicy, crunchy appetizers for Diwali (vegetarian)” and generated tailored recipe links – all via NLWeb and a language model, without an external search engine in the middle. Microsoft is pitching this as an “agentic web” future where AI agents seamlessly interact with websites and online services on our behalf.
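Conceptually, the Serious Eats demo boils down to a site exposing a structured, natural-language query endpoint over its own content. The sketch below is a hypothetical stand-in for that idea, not NLWeb's actual API or schema; the content, tags, and matching logic are invented for illustration:

```python
# Conceptual sketch of an NLWeb-style natural-language endpoint over
# site content. This is NOT the NLWeb API: the content schema and the
# keyword matching below are hypothetical stand-ins for the idea of
# letting agents query a site's own data.

SITE_CONTENT = [
    {"title": "Crispy Vegetarian Pakoras",
     "tags": ["spicy", "crunchy", "vegetarian", "appetizer"]},
    {"title": "Butter Chicken",
     "tags": ["rich", "main", "chicken"]},
    {"title": "Masala Papad Bites",
     "tags": ["crunchy", "spicy", "vegetarian", "appetizer"]},
]


def answer_query(query: str) -> dict:
    """Match query words against tagged site content and return a
    structured response an AI agent could consume directly."""
    words = set(query.lower().replace(",", " ").split())
    results = []
    for item in SITE_CONTENT:
        overlap = words & set(item["tags"])
        if overlap:
            results.append({"title": item["title"],
                            "matched": sorted(overlap)})
    results.sort(key=lambda r: len(r["matched"]), reverse=True)
    return {"query": query, "results": results}
```

A real NLWeb deployment would put a language model behind this interface and speak MCP to visiting agents, but the shape is the same: the site answers structured questions from its own data instead of handing that traffic to an external search engine.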
In other Microsoft news, GitHub Copilot is graduating from autocomplete to autonomous agent. At Build, Microsoft previewed a new Copilot capability (a “coding agent”) that can take on full software tasks by itself. We talked about these AI powered dev tools in yesterday’s edition.

Microsoft is betting big on becoming the infrastructure layer for AI. After last week’s layoff of about 6,000 workers—the firm’s second-biggest cut ever—the company is plowing cash into GPUs, data centers and a catalog of 1,900+ models. The new Model Router lets Azure decide which model handles each query, tightening the lock-in loop.
Bing’s near-absence says it all. Search got only a footnote—mainly news that the standalone Bing Search APIs will be retired this summer, folded into Azure “grounding” services for agents. Microsoft doesn’t need to win consumer search if it can own the pipes every AI request flows through.
Agents stole the Build spotlight, but many reporters we’ve spoken to (for a role we’re hiring… click here to apply if you’re smart and like writing about AI) call the agent hype overblown. Microsoft is leaning in anyway – because agents will need a home, and Azure already has the keys.
Up next: Google I/O is happening today, and it’s a safe bet Sundar Pichai and team will have their own AI twists and turns to announce. We’ll cover how Google’s vision stacks up in our next edition. Stay tuned.


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“I spent 5 minutes thinking about how donkeys/mules walk and decided that the guy in [the other Image] would have had both legs on the right hand side moving in the same direction, not oppositionally.”
“The grass in the foreground looks duplicated and the tree line in the distance looks too uniform and obviously fake. I went with [this image] because the color cast is consistent throughout and not Ai optimized.”
Selected Image 2 (Right):
“Oof, this was hard. It looked like it had more details, but the other one was a better picture.”
“the donkey in [the other image]… what happened to his ear and the background is too distorted for the type of shot taken. Even though I am having trouble with the saddle sash on the first [this] one I still think the [other] one is AI.”
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.