⚙️ Health tech VC talks AI: If you buy into the hype, you’ll lose
Good morning. I sat down with Dr. Ron Razmi, a former cardiologist and the co-founder of the health tech VC firm Zoi Capital, to discuss how he approaches investing in AI.
Like every other VC I’ve spoken with, Razmi noted that there is a lot of hype to navigate on the health side of the industry. Having a background in medicine, he said, is vital for cutting through it.
Read on for the full story.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
AI for Good: Enabling ecosystem accounting at scale
Source: UNEP
In 2021, the United Nations adopted the SEEA Ecosystem Accounting (SEEA EA) framework, part of the System of Environmental-Economic Accounting, which allows countries to track the details of their ecosystems. Specifically, it “allows the contributions of ecosystems to society to be expressed in monetary terms so those contributions to society’s well-being can be more easily compared to other goods and services.”
The Details: It allows nations to measure the health of their ecosystems and the contribution of these ecosystems to the economy, social well-being, jobs and livelihoods.
This can, in turn, enable policy focused on specific conservation efforts and the management of natural resources.
The AI of it all: A few weeks after adopting the framework, the UN launched an AI tool — the ARIES for SEEA Explorer — designed to help countries adopt SEEA EA. The tool enables customizable ecosystem accounting anywhere in the world.
Why it matters: The UN said at the time that, in order to achieve sustainability goals, it is essential to build economic systems that value nature as a “central source of human wellbeing, environmental health and economic prosperity.”
Learn to Build No-Code AI
MindStudio's intuitive no-code platform makes AI accessible to every company — even those with limited dev resources.
We’ll train your team to build AIs in as little as a day.
Thought experiment: On language, thoughts and AI
Source: Unsplash
Last month, we wrote (again) about the notion that language and thought are not the same thing; new research increasingly suggests that language is primarily a tool for communication, not, in itself, a marker of intelligence.
This bears heavily on the world of AI: now that machines and chatbots can produce fluent prose, it calls into question what level of intelligence, if any, has actually been achieved.
The reality, of course, is that words do not equal intelligence, and these chatbots are little more than probabilistic text generators trained on a corpus drawn from the internet.
Grady Booch, a prominent AI researcher, recently challenged the notion that language alone is “enough to build a mind,” suggesting a simple thought experiment to disabuse people of that notion.
The experiment: “Imagine a time in your life when you felt a moment of awe, intense joy, or deep love.”
“Perhaps it was when you stood on the shore of an ocean at night, sensing its immensity,” he wrote. “It might have been as a child, laughing with abandon as you were flung about in a carnival ride. Remember looking into the eyes of a lover, when there was nothing at all in the cosmos but the two of you.”
“Now, write it down. Describe how you felt, what you tasted, what you smelled. Explain in words what it meant in all its fullness.”
“You can’t.”
Booch said that there are some human experiences that “defy codification in textual symbols. Even if you were to train an AI on the lyrics of every love song, every novel, every painting or movie or concert or dance, those symbols would fall short of capturing the subjective experience of being alive, of being you, in that moment.”
As the research is increasingly showing, language is just “one modality we use to reflect our inner state,” according to Booch. “Words are not thoughts or feelings, they are only one way we communicate our thoughts and feelings.”
My view: I deeply appreciate Booch’s approach here. There is wonder in the world and in our lives that can’t be written down; we are a concoction of the indescribable made physical. And often, we are beyond words.
Laying that out so simply will hopefully help people better understand the critical difference between language and thought, and that understanding may, in turn, peel back a layer of the all-powerful-AI hype.
Hayden AI, which specializes in helping cities leverage AI tech, announced $90 million in Series C funding.
Digital health startup Regard raised $61 million in Series B funding.
Tesla’s future was all about the battery — until it wasn’t (The Information).
Google in talks to acquire cybersecurity startup Wiz for $23 billion (Reuters).
Why the current job market has been such a bad match for the college degree and recent grads (CNBC).
Meta is training its AI with public Instagram posts. Artists in Latin America can’t opt out (Rest of World).
Regrow hair in as few as 3-6 months with Hims’ range of treatments. Restore your hairline with Hims today.*
Whistleblowers call on SEC to take ‘swift and aggressive’ steps against OpenAI
Source: Unsplash
OpenAI whistleblowers have filed an official complaint with the U.S. Securities and Exchange Commission, alleging that the company has violated whistleblower protection laws.
The complaint was first reported by The Washington Post.
The details: Anonymous whistleblowers sent a letter — seen and published by the Post — to the SEC, asking the agency to take “swift and aggressive” steps to address OpenAI’s alleged illegal conduct.
The letter alleges that OpenAI’s severance, non-disparagement and non-disclosure agreements all violate SEC Rule 21F-17(a), a provision of the SEC’s whistleblower incentive and protection rules.
These agreements, according to the letter, “prohibited and discouraged … both employees and investors” from speaking with the SEC regarding potential securities law violations; the letter also alleges that OpenAI required employees to waive their rights to whistleblower compensation and receive prior consent before disclosing “confidential information” to the government.
“At the heart of any such enforcement effort is the recognition that insiders … must be free to report concerns to federal authorities,” the letter said. “Employees are in the best position to detect and warn against the types of dangers referenced in (President Joe Biden’s) Executive Order and are also in the best position to help ensure that AI benefits humanity, instead of having the opposite effect.”
The letter calls on the SEC to require OpenAI to produce, for SEC review, every employment agreement, investor agreement or contract containing an NDA that it has ever signed.
It also calls for the SEC to fine OpenAI for each individual violation and require the company to inform all employees, past and present, of its violations of the law.
The issue of OpenAI’s NDAs first came to light in May; CEO Sam Altman said he didn’t know about the clause (though the documents tell a different story) and said at the time that his team was working to rectify the issue.
An OpenAI spokesperson told the Post that its “whistleblower policy protects employees’ rights to make protected disclosures. Additionally, we believe rigorous debate about this technology is essential and have already made important changes to our departure process to remove nondisparagement terms.”
The complaint was initially sent to the SEC last month; it’s unclear if an investigation has been opened.
Join this 3-hour Power-Packed Workshop worth $199 at no cost and learn 20+ AI tools like:
Midjourney, Invideo, Humata, Claude AI, HeyGen and more.
Additionally, this workshop will help you:
Do quick Excel analysis and make AI-powered PPTs
Earn over 20K per month using AI
Build your own personal AI assistant to save 10+ hours
Research faster & make your life a lot simpler & more…
Health tech VC talks AI: If you buy into the hype, you’ll lose
Source: Unsplash
Artificial intelligence is often discussed in a risk/reward framework. Many, including OpenAI CEO Sam Altman, have said that it is worthwhile to weather the risks since the rewards could include, for instance, “finding a cure for cancer.”
While there are many health-related applications of machine learning and AI active today — including searches for new drugs and treatments that might address some forms of cancer, ALS, and a number of other illnesses — the process isn’t as straightforward as it might seem.
Part of the reason is that the AI we’re talking about in these applications isn’t the hypothetical general or super intelligence that Altman is trying to build (that doesn’t exist). The other element is that healthcare has established clinical and regulatory processes; while AI can speed things along, it isn’t some magical solution.
I sat down with Dr. Ron Razmi, a former cardiologist and the co-founder of Zoi Capital, a venture capital firm focused on AI health solutions, to discuss the status of the market and the dangers of all this hype.
Investing in AI health solutions: Razmi said that the rapid rise and subsequent collapse of health tech companies is a pretty normal thing in this space, since the tech is relatively new and the sector is extremely complicated.
Because of those complexities, he believes it is a “requirement” to have a clinical background.
When he’s looking at deals, “seven or eight times out of 10, immediately as a physician I know this isn’t going to work.”
The challenges of the ‘curing cancer’ route: The idea of applying AI to speed up drug development, according to Razmi, is a “really good story. It’s convincing. AI is very good at looking at biological data and identifying relationships.”
The problem is that the inception of these drugs is a very lengthy process, AI or not.
Once the models are built and once the data is acquired, you’re “still looking at years of development. Even when you do find promising compounds, you still have to take it through phase one, phase two, phase three, human trials, etc.”
“That initial promise that got everybody excited, it’s true, but it’s going to take time.”
It’s a process that likely isn’t in line with the tighter time horizons most private investors are interested in.
But that’s not to say that there aren’t more immediate applications in the field; the applications Razmi is interested in, though, are a little less glamorous on the surface.
The other side of AI/health applications: In healthcare, documentation is both vital and time-consuming. And one thing Large Language Models (LLMs) are getting reliably good at is auto-generating transcriptions.
The idea of technology (reliably and securely) listening in the background and auto-generating notes from a patient visit is “an exciting use case,” according to Razmi.
“I know it's not curing cancer,” he said. “Nobody needs to look down on these cases that improve the quality of life for providers, unburdens them, because it's not immediately treating cancer. That's a great use case.”
He’s also excited by certain health-monitoring applications of AI that analyze and monitor a patient’s entire day; such systems are able to pick up on certain indicators of an impending illness before it comes in full force, enabling more impactful treatment.
The market vs. the adopters: The key to AI in healthcare, Razmi said, is identifying the right use case and delivering real, legitimate value to clinicians. Nothing else will work.
“People working in healthcare are getting up in the morning and they're going to their jobs and they're trying to get through their days. They're not thinking about innovation. They're not thinking about AI,” Razmi said.
“So no matter what a VC wants, or what an entrepreneur wants, the adoption of the stuff is in the hands of those users. This is one of those black-and-white situations where you can think whatever you want to think, buy into whatever hype you want to, but the market is going to teach you hard lessons.”
Which image is real?
A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Your view on real-world AI use cases:
A lot of you (40%) seem very excited about the idea of a robot butler. And a little under 40% are curious/excited about AI applications in HVAC systems.
Robot butler:
“Robot butler is the dream. It can do all the chores: make the bed, wash the dishes, vacuum (okay they kinda already can do this), mop (this too, mostly), dust, put away the groceries, wash, dry (or hang on the line) and fold the laundry, clean the litterbox, mow the lawn, water the plants, put away the kids toys.”
Something else:
“Ditch the using of these massive "supercomputers" for AI. (Which I doubt will ever happen.) Use them instead to achieve better weather models. That's an Instant benefit for everyone.”
Do you work in healthcare? What applications of AI get you excited?
*Hims Disclaimer: Based on studies of topical and oral minoxidil. Hair Hybrids are compounded products and have not been approved by the FDA. The FDA does not verify the safety or effectiveness of compounded drugs. Individual results may vary.