⚙️ The complicated statistics behind ‘safe’ self-driving cars
Good morning. We’re taking a deep dive today into Waymo’s newly released safety statistics, which sound good but don’t quite tell the complete story (long story short: we don’t have nearly enough data).
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
Warp: Payroll and Compliance for Startup Founders
Steve Jobs is quoted as saying, “Design is not just what it looks like and feels like. Design is how it works.” There are dozens — hundreds, maybe — of payroll providers to choose from. So why would a startup attempt to build another?
Simple. Every other payroll solution out there aims to compete with features, instead of actively simplifying workload for the customer. Traditional payroll software has users paying for 50 features they won’t use and fails to do the basics (such as paying people and staying compliant with different state tax agencies) well.
Warp has said no to all the superfluous features that other software providers are focused on, and instead opts for a dead-simple interface that allows startups to get in, do the thing, and get out.
With Warp, you can process payroll in seconds, onboard new employees in minutes, and pay contractors with the click of a button.
Warp also puts state-tax compliance and other filings such as FinCEN BOI on autopilot, so founders and startups can focus on more consequential things.
Startups should spend more time in Notion and less time on the phone with the Pennsylvania Department of Revenue.
Get started now with Warp and get a $1,000 Amazon gift card when you run payroll.
currently migrating from rippling to @joinwarp and the warp team is completely cracked. i requested a setting and they shipped it in an hour
— rahul (@0interestrates)
6:20 PM • Jun 4, 2024
AI for Good: NASA’s high-tech satellite upgrades
Source: NASA
Even as technology has evolved dramatically over the past few decades here on Earth, a lot of the tech up in space has become outdated, according to NASA.
What happened: In light of this, Irvine Sensors Corporation is working with NASA to develop computer tiles that will enable space-based remote sensors to process data far more quickly.
The tiles, called Stacked Miniaturized and Radiation Tolerant Intelligent Electronics (SMARTIE), boast 300 gigaflops of computing power and 15 tera operations per second (TOPS) of artificial intelligence performance. They also double as a radiation shield.
Importantly, NASA said that the tiles consume less than 10 watts of power, so satellites equipped with SMARTIE tiles could get by with smaller power systems, making them lighter and more cost-effective than conventional satellites.
“SMARTIE would have endless applications,” James Yamaguchi, Vice President of 3D Electronics and Mass Storage at Irvine said in a statement. “It could provide autonomy to single satellites or satellite constellations using AI, enable distributed sensors where parts of the instrument are set in different spacecraft, and perform complex operations usually done on the ground to reduce data throughput.”
AI drug discovery company is trying to challenge DeepMind
Source: Created with AI by The Deep View
Earlier this week, we talked about Google DeepMind’s AlphaFold and AlphaProteo biological AI models. On Tuesday, a new startup emerged — Chai Discovery — that says it does the same thing, just a little bit better.
The details: The model, a multi-modal foundation model for molecular structure prediction, is called Chai-1. The company released the model for free for both commercial and non-commercial use.
The developers said the model performed slightly better than DeepMind’s AlphaFold 3 on the PoseBusters benchmark (with a score of 77% versus 76%).
The developers said users can prompt the model directly with relevant, real-world lab data, which increases its efficacy by “double-digit percentage points.”
Chai, which was founded only six months ago, has raised $30 million in funding from Thrive Capital and OpenAI, according to Bloomberg. Jack Dent, one of the company’s co-founders, said Chai is making its first model free and hasn’t yet discussed plans to commercialize its tech.
The caveat here is that Chai-1 and similar models speed up only one part of drug discovery: surfacing candidates, which then still need to go through years of rigorous, human-led testing. This, according to one health VC, is why the drug-discovery side of the AI/health intersection is a much more challenging investment than it seems.
Tech Workshop: Build a Voice AI Agent with Deepgram & Groq
Ready to write the code to build your own voice AI agent? Then this workshop is for you!
Our experts will guide you through each step, tackling complex challenges in AI agent development like interruption handling, end-of-speech prediction and low-latency function calling.
Plus, receive:
$1,000 Deepgram API credits
Access to a community of voice AI builders
📆 September 20 @ 9AM-12PM PT
Becoming ISO 42001 compliant shows your customers that you are taking the necessary steps to ensure responsible usage and development of AI. Learn how with the ISO 42001 Compliance Checklist.*
Want to become an AI consultant? 200+ students have already started and scored their first client in as little as 3 days. Request early access to The AI Consultancy Project.*
Get high-quality meeting minutes, tasks, and decisions for all your online and offline meetings without awkward meeting bots. Save 10 hours every week and try Jamie now for free.*
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Part 1: Waymo unveils new safety data
Source: Waymo
If you’re as chronically online as I am, you might have seen that Waymo, the robotaxi startup that actually seems to be pulling it off, recently published a new safety hub, tracking all of its safety statistics against human benchmarks. The data implies that, across a few different metrics, self-driving cars (at least, Waymo’s self-driving cars) are safer than human drivers, a point that Twitter took and ran with.
But the reality here is just a little more complicated than that. Part of this has to do with the scope and scale of the data at hand, which we don’t have enough of. Part of it has to do with the realities of the underlying AI architecture.
Let’s start with the data: Through June of 2024, Waymo’s total fleet has clocked 22 million vehicle miles without a human driver, the bulk of it in Phoenix, AZ (15.4 million miles) and San Francisco, CA (5.93 million miles).
In Phoenix and San Francisco, across those 22 million miles — and compared to a human driver over the same distance — Waymo reported 84% fewer airbag deployment crashes, 73% fewer injury-causing crashes and 48% fewer police-reported crashes.
Waymo also reported that 43% of its crashes in Phoenix and San Francisco had a change in velocity of less than 1 mile per hour, meaning they were minor.
According to journalist Tim Lee, Waymo has reported 200 total crashes. Of the 23 most severe accidents, 16 involved another car rear-ending the Waymo.
Part 2: The complicated statistics behind ‘safe’ self-driving cars
Source: Waymo
Here’s where it gets complicated. If I stopped there, things sound really positive. The problem is the available data doesn’t really make for an even comparison, and this comes across in a few different points.
First up, scale. 22 million total miles sounds like a lot. But Americans drive more than 3 trillion miles each year (a number that is 137,000 times larger).
But, you say, this new safety data compares humans across 22 million miles, not all 3 trillion. And you’re absolutely right. That brings me to my second point: geography.
In San Francisco, Waymo operates across a 55-square-mile region, slightly larger than the city itself (San Francisco proper covers about 47 square miles). In Phoenix, Waymo operates across 315 square miles of a city that, in total, covers close to 500 square miles.
As Waymo itself notes in its safety hub, “all streets within a city are not equally challenging. A limitation of these presented benchmarks is that spatial and temporal differences between the Waymo and human benchmarks within these deployed cities have not been accounted for in the rates.” Waymo did note that its vehicles tend to operate in the denser parts of its deployment areas.
An element of this disparity also involves quantity. Waymo told me that it has a total fleet of 700 vehicles, with around 300 in San Francisco and 200 in Phoenix. The company said that not all vehicles are on the road at the same time.
It’s not clear how many daily vehicle miles its fleets clock, or the average distance of each Waymo ride.
All we know is that, across San Francisco, Phoenix, Los Angeles, CA and Austin, TX, Waymo is delivering 100,000 paid trips per week.
By comparison, in Arizona alone, there are more than eight million registered vehicles. Drivers in the Phoenix metro area, home to a population of 5 million, cover about 10 billion miles each year, which equates to roughly 27 million miles each day.
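A quick back-of-envelope check (my own arithmetic, using only the figures cited above) shows just how lopsided the mileage comparison is: the Phoenix metro area alone covers Waymo’s entire lifetime total in under a day of driving.

```python
# Back-of-envelope scale check, using only the figures cited in the article.
waymo_lifetime_miles = 22e6      # Waymo's driverless miles through June 2024
us_annual_miles = 3e12           # approximate annual mileage of all US drivers
phoenix_annual_miles = 10e9      # approximate annual mileage, Phoenix metro area

# US drivers cover Waymo's lifetime total roughly 136,000 times over per year
print(f"US annual / Waymo lifetime: {us_annual_miles / waymo_lifetime_miles:,.0f}x")

# The Phoenix metro clocks roughly 27 million miles per day ...
phoenix_daily_miles = phoenix_annual_miles / 365
print(f"Phoenix metro miles per day: {phoenix_daily_miles:,.0f}")

# ... so it matches Waymo's entire lifetime mileage in well under a day
print(f"Days for Phoenix to match Waymo's total: {waymo_lifetime_miles / phoenix_daily_miles:.2f}")
```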
In Phoenix, as in everywhere else, there are many, many more human-driven cars, covering far more miles, all the time. If everything scaled appropriately (similar mileage, number of vehicles and geographical area), it is not at all clear that Waymo’s safety stats would scale in kind.
As Missy Cummings, a former safety advisor to NHTSA and a mechanical engineering professor at George Mason University, has said: we do not have nearly enough data to “make a statistically strong claim. We’re not even close.” (Cummings made those remarks in a talk that also digs into the major limitations of the neural networks on display here.)
I believe in self-driving cars. As Waymo points out, removing the human driver removes human fallibility and chronic distraction. Humans are not great drivers and, in my own experience, they seem to be getting worse. (Even so, NHTSA reported a fatality rate of 1.26 per 100 million miles driven in the U.S. in 2023.)
Now, the Waymo system, as evidenced by its numbers, is impressive, loaded up with a number of vital safety backstops, including radar and lidar.
This is good! Plus, Waymo has been scaling cautiously and hasn’t yet reported any fatalities.
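The fatality record is a good example of how little 22 million miles can tell us. Here is a rough sketch (my own back-of-envelope Poisson calculation, not Waymo’s analysis, using the NHTSA rate of 1.26 deaths per 100 million miles cited above):

```python
import math

# How informative is zero fatalities over 22 million miles?
# Treat fatalities as a Poisson process at the human (NHTSA 2023) rate.
human_rate_per_mile = 1.26 / 100e6   # 1.26 fatalities per 100 million miles
waymo_miles = 22e6                   # Waymo's driverless miles through June 2024

expected = human_rate_per_mile * waymo_miles   # Poisson mean over those miles
p_zero = math.exp(-expected)                   # P(0 events) for a Poisson process

print(f"Expected fatalities at the human rate: {expected:.2f}")   # ~0.28
print(f"Chance a human-level fleet also sees zero: {p_zero:.0%}") # ~76%
```

By this sketch, even an ordinary fleet of human drivers would have roughly a three-in-four chance of a spotless fatality record over the same distance, so a clean record at this mileage, while welcome, doesn’t yet statistically distinguish Waymo from a human baseline.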
But at the end of the day, what’s happening here is pattern recognition, not reasoning. That’s what’s dangerous. And it’s dangerous for the same reason it’s dangerous for cops and schools to integrate generative AI: these systems are not reliable.
Months ago, we reported on a study that suggested autonomous vehicles are likely safer than human drivers. But it noted the caveat that they are more likely than humans, by significant margins, to get in crashes at dawn, at dusk and during turns.
Waymo will continue to scale up its operations. I hope that its safety numbers scale in kind, but I find this unlikely, especially if and when it pushes east into states that deal with worse weather. The problem is that the rest of us are, willing or not, participants in this experiment.
I think we’ll get there. But it’s not as close as it seems. And we need to be skeptical and cautious in order to make it happen safely.
Which image is real?
🤔 Your thought process:
Selected Image 2 (Left):
“Image 1 is just too fake.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on AI in finance:
A third of you don’t care if your finance professional uses genAI; 22% just don’t want to know about it, and 18% use AI as your financial professional.
15% aren’t on board.
Something else:
“Must have acknowledgment along with redundancy in research references.”
How long do you think it'll take for safe self-driving cars to become the norm?