⚙️ IBM is trying to lock up the enterprise AI market

Good morning (or afternoon, now). Today marks exactly one year since I started at The Deep View.
It also marks my last day with the company.
Whether you’ve been a part of this journey from the beginning, or you just joined us, I can’t thank you enough for being a part of this community, reading in every morning, engaging with philosophically challenging subjects and thinking hard about AI.
Your support — and the support of the incredible team behind The Deep View — has meant so much to me. To say that it has been a pleasure would be a massive understatement.
If you want to keep track of what I’m up to next, feel free to Google (or AI Search?) away — but I’m sure you’ll be in good hands here.
Best,
Ian Krietzberg, former Editor-in-Chief, The Deep View
In today’s newsletter:
🌎 AI for Good: Plastic eaters
👁️🗨️ MIT researchers unlock a potential breakthrough
📊 IBM is trying to lock up the enterprise AI market
AI for Good: Plastic eaters

Source: Unsplash
I don’t think I need the help of statistics to express just how dire our plastic pollution problem is, but I’ll give you some anyway: the world produces around 400 million tons of plastic waste every year, and the U.S. is responsible for around 12% of that. About eight million tons of plastic enter the oceans annually, and roughly 100,000 marine animals die each year from plastic entanglement.
So. Not good.
Now, we have means of recycling some of this plastic waste, but only a very slim percentage of that massive total actually makes its way to recycling plants. And those plants emit all sorts of toxic chemicals, so, also not good.
The importance of clearing up all that plastic, and of restoring health and balance to the natural world, has been obvious for a very long time. For more than a decade, researchers have been working to develop organic means of plastic recycling, which would be far more accessible, and far less energy intensive, than industrial recycling plants.
In 2022, researchers at the University of Texas at Austin used a machine learning model to refine a plastic-eating enzyme so that it could operate at low temperatures; the need for high heat had been a key barrier to wide-scale adoption.
The refined enzyme breaks plastic down into its chemical building blocks, which can then be chemically reassembled into cleanly produced, newly recycled plastic.
Still, the work facing the team — even armed as they are with this new enzyme — is enormous, involving scaling up production and putting it to use.
“The possibilities are endless across industries to leverage this leading-edge recycling process,” Hal Alper, a professor of Chemical Engineering at UT Austin, said at the time. “Beyond the obvious waste management industry, this also provides corporations from every sector the opportunity to take a lead in recycling their products. Through these more sustainable enzyme approaches, we can begin to envision a true circular plastics economy.”

Get your website AI-ready in minutes
Your app or website isn’t just meant for humans anymore. AI agents, APIs, and partner applications are waiting for access… and you can give it to them instantly by creating a free Descope account right here.
Your free account will give you immediate access to Inbound Apps, Descope’s tool for turning your app or website into an OAuth identity provider that helps you:
Securely connect your APIs with AI agents, M2M systems, and partner apps
Define granular user, tenant, and permission scopes
Create and display user consent screens and manage granted consents
No credit card, no trial, and no password needed – just create your free Descope account right here (or check out their demo microsite) to start your app’s journey to AI-readiness.
MIT researchers unlock a potential breakthrough

Source: Unsplash
Architecturally, the bulk of the “AI” we know today is built around transformer models.
The transformer was introduced by Google researchers in 2017 and has since become the beating heart of the large language models (LLMs) that support our current plethora of chatbots; the “GPT” in ChatGPT stands for generative pre-trained transformer.
But transformers have a number of limitations. Among them, they tend to struggle with longer sequences, with output becoming less robust the longer a sequence gets.
Though they don’t seem likely to displace transformers anytime soon, considering all the attention the attention mechanism has been getting, researchers have developed an architecture designed to better handle these longer sequences: state space models (SSMs). With fewer parameters than their LLM cousins, SSMs are quick, efficient and comparatively robust across longer contexts.
Still, even though they’re built for long sequences, SSMs have weaknesses of their own. In part because of a recency bias, in which they weight nearby context more heavily, they tend to become less efficient and less robust as sequences get very long.
Two MIT researchers recently developed a new evolution of SSMs they call “linear oscillatory state-space models” (LinOSS), an approach inspired by neural oscillations, the rhythmic, repetitive patterns of activity found in biological brains.
"Our goal was to capture the stability and efficiency seen in biological neural systems and translate these principles into a machine learning framework," one of the scientists, T. Konstantin Rusch, said. "With LinOSS, we can now reliably learn long-range interactions, even in sequences spanning hundreds of thousands of data points or more."
It’s a breakthrough that could unlock plenty of additional, reliable and robust research into modeling complex, long-range phenomena such as climate trends.
And it represents a non-transformer, non-LLM approach, something that has become rare indeed in all the ceaseless excitement over ChatGPT, despite its many consistent, prevailing and fundamental limitations.

Could This Company Do for Housing What Tesla Did for Cars?
Most car factories like Ford or Tesla reportedly build one car per minute. Isn’t it time we do that for houses?
BOXABL believes they have the potential to disrupt a massive and outdated trillion dollar building construction market by bringing assembly line automation to the home industry.
Since securing their initial prototype order from SpaceX and a subsequent project order of 156 homes from the Department of Defense, BOXABL has made substantial strides in streamlining their manufacturing and order process. BOXABL is now delivering to developers and consumers. And they just reserved the ticker symbol BXBL on Nasdaq*
BOXABL has raised over $170M from over 40,000 investors since 2020. They recently achieved a significant milestone: raising over 50% of their Reg A+ funding limit!
BOXABL is now only accepting investment on their website until the Reg A+ is full.


After all that: OpenAI said Monday that its non-profit will continue to control the company beyond its restructuring. The LLC that the non-profit owns will get converted into a public benefit corporation; the non-profit will have a controlling share in that corporation.

Google can train Search AI with web content even after opt-out (Bloomberg).
Google is going to let kids use its Gemini AI (The Verge).
Deepfake makers can now evade an unusual detection method (New Scientist).
US lawmaker targets Nvidia chip smuggling to China with new bill (Reuters).
Waymo plans to double robotaxi production at Arizona plant by end of 2026 (CNBC).
IBM is trying to lock up the enterprise AI market

Source: IBM
Writ large, The Enterprise is the golden goose of AI.
While OpenAI might advertise its copyright-questionable Studio Ghibli features, it — along with every other developer in the space — is far more interested in securing long-term enterprise adoption.
Considering the enormous expense required to train and operate the generative AI models these companies sell, the clearest path to even a bit of an investment return runs through the massive, organization-wide technology purchases that enterprises tend to make.
Despite this effort, AI hasn’t had an easy time of the enterprise. Most AI experiments live and die as just that — experiments, with few graduating to actual deployment. Cost remains a problem, and enterprises remain concerned about unsolved reliability, privacy and security issues.
And while OpenAI and Anthropic might be the prominent names in the generative AI space, a number of enterprise-specific AI startups are operating a little under the radar. What I’ve heard from the folks in this space is that these startups hold a massive advantage over their developer competitors: they understand intimately how to service an enterprise, and that servicing an enterprise goes far beyond offering a piece of software.
And IBM, alone among major developers in focusing solely on business-to-business operations, is intent on beating the entrenched players, startups and major developers alike, to conquer AI in the enterprise.
For a while now, IBM has been working on building what it calls “hybrid” technologies; in other words, integrating vital cloud technologies (private and public cloud) with on-device computing, AI technologies and, eventually, quantum technologies.
At its annual Think conference, the company unveiled a series of steps designed to further its hybrid approach, all while enabling it to lock up the enterprise.
The crux of this effort involves new agentic capabilities and offerings. Though everyone has a different definition when it comes to agents, IBM defines them as capable of “autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools.”
This morning, IBM unveiled new tooling that will enable customers to build new agents in just a few minutes. It also unveiled scaffolding that enables the integration, orchestration and critical observation of these agents.
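Stripped of vendor specifics, that kind of agent usually boils down to a loop: a model chooses a tool, the surrounding system executes it, and the result is fed back in until the task is finished or a step limit is hit. The sketch below is a generic illustration of that loop, not IBM’s tooling or API; the model stub, tools and task are hypothetical.

```python
from typing import Callable

# Generic illustration of an agent loop -- not IBM's tooling or API.
# The tools and the toy "model" below are hypothetical stand-ins.

TOOLS: dict[str, Callable[..., str]] = {
    "search_orders": lambda customer_id: f"3 open orders for {customer_id}",
    "send_email": lambda to, body: f"email sent to {to}",
}

def call_model(history: str) -> dict:
    # Toy stand-in for an LLM: first it asks for the orders, then it wraps up.
    if "search_orders returned" not in history:
        return {"tool": "search_orders", "args": {"customer_id": "C-123"}}
    return {"answer": "Customer C-123 has 3 open orders; summary prepared."}

def run_agent(task: str, max_steps: int = 8) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = call_model("\n".join(history))
        if "answer" in decision:           # the model decided it is done
            return decision["answer"]
        tool_name, args = decision["tool"], decision["args"]
        result = TOOLS[tool_name](**args)  # the system, not the model, executes the tool
        history.append(f"Tool {tool_name} returned: {result}")
        # A real deployment would log every step here -- that record is what
        # the orchestration and observability scaffolding is built around.
    return "Stopped: step limit reached"

print(run_agent("Summarize customer C-123's open orders"))
```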
And where integration across all a given company’s platforms remains challenging, IBM is introducing something called webMethods Hybrid Integration, an approach that IBM claims will aid cross-platform deployment, unlocking investment returns in the process.
To support all of this, IBM has been focusing for years on enhancing its cloud-related hardware; to that end, IBM launched IBM LinuxONE 5, what it calls the “most secure and performant Linux platform to date.” The new platform can process, according to IBM, roughly 450 billion AI inference operations per day and is powered by computing chips IBM designed to handle AI inference.
IBM brought in $14.5 billion in revenue for the first quarter of the year, and expects to rake in around $16 billion for the current quarter, despite the current macroeconomic conditions. IBM, the 54th-largest company in the world by market cap, has seen its shares gain more than 11% this year.

Everyone’s jockeying for advantage in this race to build muscle memory and gain trust.
You might have seen statements from a bunch of folks across social media that, if technical progress in AI were to stop today, right this second, it’d still take decades for us, as a society, to process and implement current technologies properly.
That might well be true. Kind of impossible to tell.
But I think that sentiment doesn’t quite capture a slightly larger point: the raw technology itself doesn’t matter nearly as much as how it is used, especially when the technology is genuinely useful, and especially in high-stakes applications of it.
Capability is fine, but usefulness is something else.
The interesting thing is that smart engineering and scaffolding — observability platforms, hallucination detectors, guardrails and purposeful determinism, etc. — can make current technology useful.
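To ground that with a toy example: a guardrail in this sense can be as simple as a deterministic check wrapped around a model call, so the system only acts on output it can verify. The sketch below is an illustrative pattern, not any particular vendor’s product; the refund scenario and the model stub are hypothetical.

```python
import re

# Illustrative guardrail pattern -- hypothetical scenario, not a real product.

def draft_refund_email(order_id: str) -> str:
    # Stand-in for a model call; imagine an LLM wrote this (badly).
    return f"Hi! Your refund of $5,000,000 for order {order_id} is on the way."

def guarded_refund_email(order_id: str, max_refund: float = 500.0) -> str:
    draft = draft_refund_email(order_id)
    # Deterministic check: extract every dollar amount the draft promises.
    amounts = [float(m.replace(",", ""))
               for m in re.findall(r"\$([\d,]+(?:\.\d{2})?)", draft)]
    if any(a > max_refund for a in amounts):
        return "Draft blocked; escalated to a human reviewer."  # the guardrail, not the model, decides
    return draft

print(guarded_refund_email("A-42"))
```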
And that is a sharper edge than a model that tops out another benchmark.
We’ll see how this race plays out, but I think in the end, the scaffolding will be more important than the tech at the center of those scaffolds.


Which image is real?



🤔 Your thought process:
Selected Image 2 (Left):
“Weird arm angles in image 1, nobody stands around with their arms straight out if they are not involved...”
Selected Image 2 (Right):
“Seemed more real because of the net and the size of the people playing.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
*Boxabl Disclosure: This is a paid advertisement for BOXABL’s Regulation A offering. Please read the offering circular here. This is a message from BOXABL
*Reserving a Nasdaq ticker does not guarantee a future listing on Nasdaq or indicate that BOXABL meets any of Nasdaq's listing criteria to do so.