⚙️ A reminder of the motivations that drive AI companies

Good morning, and happy Friday.
Hope you all have a great weekend!
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🔭 AI for Good: Air quality
📊 The Big Tech earnings continue
📱 Pinterest proves that social platforms can filter the GenAI slop
💰 A reminder of the motivations that drive AI companies
AI for Good: Air quality

Source: Unsplash
It was just two short years ago that wildfires razed 30,000 square miles of Canadian forests, pouring billions of tons of carbon dioxide into the atmosphere and shrouding most of the East Coast in an apocalyptic blanket of smoke.
The sky, hundreds of miles from the fires, was orange. And the air — filled with particulate matter of all sorts — was poison.
The tricky part is that clearer-looking skies are no indication of air that’s safe to breathe.
But every internet-connected smartphone user on the planet is just a few clicks and a short scroll away from a localized air quality index, something that'll tell you whether the air around you is safe to breathe.
For many, these analytics are provided by a company called Breezometer, an Israeli startup that Google acquired for around $200 million a couple of years ago.
With air sensors planted around the world, Breezometer runs machine learning algorithms to parse and process all that data, identifying the presence of certain pollutants.
There’s also the ZephAir app, the result of a collaboration between the U.S. State Department and NASA to provide real-time air quality forecasts for dozens of U.S. embassies around the world. Much like Breezometer, ZephAir leverages ground-based sensors, historical data, satellite data and AI algorithms to predict the quality of local skies.
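For context, the index those apps surface is, at bottom, a simple piecewise-linear mapping from pollutant concentration onto a 0-to-500 scale. Below is a minimal, illustrative Python sketch built on the U.S. EPA's 24-hour PM2.5 breakpoints (the pre-2024 table, quoted here from memory, so treat the exact cut-offs as an assumption); neither Breezometer nor ZephAir documents its internals, so this is the textbook index, not either company's pipeline.

```python
# Illustrative sketch of the standard AQI interpolation for PM2.5.
# Breakpoints are the U.S. EPA's 24-hour PM2.5 table (pre-2024 revision),
# reproduced here as an assumption, for illustration only.

# Each row: (conc_low, conc_high, aqi_low, aqi_high)
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),        # Good
    (12.1, 35.4, 51, 100),     # Moderate
    (35.5, 55.4, 101, 150),    # Unhealthy for sensitive groups
    (55.5, 150.4, 151, 200),   # Unhealthy
    (150.5, 250.4, 201, 300),  # Very unhealthy
    (250.5, 350.4, 301, 400),  # Hazardous
    (350.5, 500.4, 401, 500),  # Hazardous
]

def pm25_to_aqi(concentration_ug_m3: float) -> int:
    """Map a 24-hour PM2.5 concentration (µg/m³) onto the AQI scale."""
    c = round(concentration_ug_m3, 1)
    for c_low, c_high, i_low, i_high in PM25_BREAKPOINTS:
        if c_low <= c <= c_high:
            # Linear interpolation within the matching category
            return round((i_high - i_low) / (c_high - c_low) * (c - c_low) + i_low)
    raise ValueError("PM2.5 concentration outside the indexed range")

print(pm25_to_aqi(8.0))    # 33: "good" air
print(pm25_to_aqi(175.0))  # 225: "very unhealthy" wildfire-smoke territory
```

The hard part, and where the machine learning earns its keep, is estimating that concentration everywhere in between the physical sensors.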
Why it matters: In the longer term, greater access to trends and data regarding air quality ought to enable researchers, governments and institutions to put in place mitigation measures. But in the shorter term, localized knowledge of the shifting dangers of local air — a concern that becomes more pressing with each degree of global warming we clock — can save lives, even if it inspires something as simple as donning a mask before stepping outside.

This AI Design Tool Lets You Create Assets for Your Brand At Scale
Sick of settling for low-quality stock photos and uninspiring images for your website, app, and marketing materials? Then you need to check out Recraft.
This AI design tool gives you control over the entire design process, from image generation to editing, fine-tuning, and beyond. Now, with its new Advanced Style Creation and Control feature set, Recraft gives creative teams even more power to explore, define, and maintain a distinct brand style.
There’s a reason over 3 million users – including professional designers at companies like Netflix, Ogilvy, and Hubspot – trust Recraft.
Pinterest proves that social platforms can filter the GenAI slop

Source: Pinterest
Generative AI has spent the past two years truly, and in every sense of the word, proliferating.
ChatGPT was just the beginning; now, there’s a plethora of available text generators, image generators, video generators and audio generators, sometimes all bundled into one interface, sometimes spaced just a few tabs apart.
The resulting content — artificial, yet intended to look as realistic, as human-produced, as possible — has flooded every digital medium, from social media to web searches, emails and phone calls, boosting the threat of fraud and, generally, making it more difficult to sort out fact from fiction.
The scope of this reality prompted the creation of Adobe’s Content Authenticity Initiative, which aims to provide content credentials for media, allowing viewers to determine who created something, and when. But scaling the initiative has not been easy, as it requires the interest and active participation of each individual platform.
One platform has decided to take care of the problem itself.
Pinterest, responding to user complaints about a lack of transparency around AI-generated content, this week rolled out labels that let users see whether a piece of content is AI-generated or modified.
Pinterest applies these labels automatically through a combination of methods: first, it analyzes an image’s metadata; but because metadata can easily be stripped away, the platform has also developed a series of classifiers that “automatically detect GenAI content, even if the content doesn’t have obvious markers.”
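To make the metadata step concrete: the sketch below is a purely hypothetical illustration (not Pinterest's actual pipeline, and the marker strings are assumptions based on the IPTC and C2PA provenance standards) that scans an image file's raw bytes for two common GenAI provenance markers. It also shows the weakness Pinterest is working around: strip those bytes and the check quietly passes, which is why the classifiers exist as a backstop.

```python
# Hypothetical sketch of a metadata-only GenAI check; not Pinterest's implementation.
# It looks for provenance markers that many generators embed, and shows why a
# metadata signal alone is easy to defeat by stripping the bytes.

from pathlib import Path

GENAI_MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for AI-generated media
    b"c2pa",                     # label used by C2PA / Content Credentials manifests
]

def has_genai_metadata(image_path: str) -> bool:
    """Return True if any known provenance marker appears in the raw file bytes.

    A crude substring scan is enough for a sketch; a real system would parse the
    XMP packet and cryptographically validate the C2PA manifest.
    """
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in GENAI_MARKERS)

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        label = "GenAI marker found" if has_genai_metadata(path) else "no marker"
        print(f"{path}: {label}")
```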
Pinterest additionally set up an appeals system in case its classifiers get it wrong.
The platform added that it will soon “launch an experiment allowing users to select a ‘see fewer’ option(s) on GenAI Pins for certain categories that are prone to AI modification or generation, such as beauty and art, and will continue expanding into more areas.”
Pinterest is the first and only major platform to handle GenAI in such a manner, a stance underscored by the fact that it doesn’t use user content to train AI models and isn’t pushing GenAI for on-platform content creation.


Sam Altman’s eye-scanning ID project launches in U.S. with six locations (CNBC).
Ai2’s new small AI model outperforms similarly-sized models from Google, Meta (TechCrunch).
Amazon takes aim at Cursor with new AI coding service (The Information).
Microsoft is getting ready to host Elon Musk’s Grok AI model (The Verge).
North Korea stole your job (Wired).

Your next world-class hire is just a click away. Athyna helps businesses scale faster and smarter by providing access to high-quality LATAM talent in tech, product, and data. With AI-driven matching and zero upfront fees, you can build your dream team affordably and efficiently. We streamline onboarding and support, letting you focus on growth. Hire top talent in days, not weeks.
The Big Tech earnings continue

Source: Apple
Following Microsoft and Meta’s performance the other night, the headlines on Thursday claimed that the AI trade had been reignited. Shares of Microsoft had their best day in years, up around 8%; together, Microsoft and Meta pushed the broader indices to yet another green day.
Amazon and Apple couldn’t keep the trend going.
Amazon reported earnings of $1.59 per share on $155.67 billion in revenue; both numbers were comfortably higher than analyst expectations. But AWS, Amazon’s cloud unit, only grew 17%, below analyst expectations.
The company also issued softer-than-expected guidance for the current quarter, citing “geopolitical conditions, tariff and trade policies and customer demand and spending (including the impact of recessionary fears).”
Shares fell as much as 4% in extended trading.
Importantly for the AI trade, Amazon did not indicate that it plans to reduce its planned $100 billion capex to build out AI infrastructure.
Apple, meanwhile, reported earnings of $1.65 per share on revenue of $95.4 billion, above expectations. Still, its Services unit (Apple TV+, iCloud subscriptions, Apple Music, etc.) only grew by about 11% — not as much as Wall Street wanted.
Apple Chief Tim Cook said that the company has seen “limited impact” from tariffs in the March quarter, though he cautioned that it’s “very difficult” to predict what will happen beyond June. In answer to an investor question regarding the timing for Apple Intelligence features and a GenAI-enabled Siri, he said: “we just need more time to complete the work so they meet our high quality bar.”
Shares fell around 4% in after-hours trading.
S&P 500 and Nasdaq futures fell in response, indicating a red Friday morning for the major indices after a series of consecutive wins.
The excitement around Microsoft and Meta that supposedly reignited the AI trade has not stretched to Amazon and Apple, despite earnings beats across the board, despite decent guidance in the face of tariffs.
This is yet another indicator that the Mag7 excitement that dominated markets in 2023 and 2024 is, at best, much more fragile this year, and at worst, fleeting.
A reminder of the motivations that drive AI companies

Source: Nvidia
It’s easy, in the constant swell of fantastical narratives and futuristic predictions that buttress AI, to forget that the corporations behind the technology are just that: corporations.
Their mission statements might read heroic, but their primary driver is far less altruistic: there’s a lot of money on the line and a lot of money to be made — or lost — depending on how this whole AI race works out.
The giant startups have raised billions in funding, and now they’ve got investors to appease. So they need to make sure people buy their products.
The publicly traded giants have to deal with Wall Street, and they doubly have to make sure that they don’t get supplanted by some startup; at the level of investment they’ve poured into being AI-dominant, they need to ensure they make it all back, and preferably in a big way.
It’s always all about the money.
On the global scale, things are slightly, though not wholly, different.
In January, then-President Joe Biden unveiled a sweeping set of chip diffusion rules that sort every country in the world into one of three tiers, which dictate how many U.S.-made GPUs each can buy.
“The United States must act decisively to lead this transition by ensuring that U.S. technology undergirds global AI use and that adversaries cannot easily abuse advanced AI,” Biden said at the time, warning that, in the wrong hands, the technology can “exacerbate significant national security risks.”
Those rules are set to take effect in about two weeks, and the current administration is considering revising them.
For Anthropic, the AI startup backed by Amazon, “maintaining America's compute advantage through export controls is essential for national security and economic prosperity as powerful new AI systems are developed in the coming years.”
Chip restrictions, according to Anthropic, are the only thing keeping Chinese developers at bay, and even then, it’s barely enough.
Anthropic, in a submission shared with the Trump Administration, suggested that the rules ought to be strengthened: export enforcement, it said, should receive more funding, the no-license compute threshold for Tier 2 countries should be lowered, and government-to-government chip agreements should be expanded. This, the company wrote, would help curb chip smuggling and ensure the U.S. stays ahead.
Nvidia responded to Anthropic’s submission by saying that “American firms should focus on innovation and rise to the challenge, rather than tell tall tales that large, heavy and sensitive electronics are somehow smuggled in ‘baby bumps’ or ‘alongside live lobsters.’”
The comment came shortly after a statement from CEO Jensen Huang that the diffusion rules are looking at things from the wrong perspective: “I’m not sure what the new diffusion rule is going to be, but whatever it becomes, it really has to recognize that the world has changed fundamentally since the previous diffusion rule was released. We need to accelerate the diffusion of American AI technology around the world, and so the policies and the encouragement from the administration really needs to be behind that.”
For Nvidia, revenue from Singapore, China and Taiwan combined is about equal to its revenue from the U.S.

For Anthropic, a seller of generative AI products enabled by those GPUs, less competition is better; it wants to eat up as large a share of the market as possible, and OpenAI is competition enough on that front.
For Nvidia, a seller of AI-enabling GPUs, more GenAI competition is better, since it means that many more companies are motivated to place massive orders for its chips.
While either company might talk about geopolitics, national security and the need to respond quickly to impending societal change, we would do well to remember that their political stances — and their technological visions — are directly motivated by their respective business models.
Sam Altman, for instance, is pushing generative AI as hard as anyone with OpenAI. But he also co-founded a company called Worldcoin, a cryptocurrency-type startup that aims, at least in part, to respond to the artificial world that Altman is directly ushering in.
“We wanted a way to make sure that humans stayed special and central, in a world where the internet was going to have lots of AI-driven content,” he said recently, neglecting to mention his direct responsibility for all that “AI-driven content.”
This is all a business venture, driven by people who see massive financial opportunities.
If that were the only reality underlying this industry, it would be cause enough for skepticism on its own.


Which image is real?



🤔 Your thought process:
Selected Image 2 (Left):
“Because AI has no personality.”
Selected Image 1 (Right):
“The shadows in the second image don't align with the figure in the foreground.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.