⚙️ AI hacker beats 20-year expert in 28 mins

Good morning. A YouTuber successfully trained an AI model to replicate the visual effects of psychedelic drugs by injecting noise into neural networks, with experienced users confirming the results look convincingly trippy. Finally, AI hallucinations that are actually supposed to be hallucinations.
— The Deep View Crew
In today’s newsletter:
🚦 AI for Good: Stop sign cameras cut violations by 76%
🚧 Amazon’s $10B bet on North Carolina’s AI future
💣 Vibe hacking is here and it's AI's next big threat
🚦 AI for Good: Stop sign cameras cut violations by 76%

Source: Obvio
A California startup's AI-powered stop sign cameras are delivering impressive safety results while taking a privacy-conscious approach to traffic enforcement. Obvio, which raised $22 million in Series A funding, has deployed solar-powered cameras across Prince George's County, Maryland that use computer vision to detect stop sign violations and immediately warn drivers with digital messages.
The results have been striking. Cottage City saw a 76% decrease in stop sign violations after completing its pilot program, while other locations achieved 50% reductions within eight weeks. In Morningside, police Chief Dan Franklin noted that even 30 additional officers couldn't match what the AI system accomplished. The cameras detected over 1,400 daily violations initially in some locations, highlighting the scale of unsafe driving that traditional enforcement missed.
Unlike comprehensive surveillance systems, Obvio's cameras process footage locally using on-device AI, transmitting data only when violations occur. Non-violation footage is automatically deleted after 12 hours, and the company explicitly designed the system to avoid creating what CEO Ali Rehan calls "a panopticon."
The technology addresses a critical safety crisis - Maryland recorded 161 pedestrian deaths in 2023, ranking among the nation's highest. With traditional traffic enforcement limited by staffing constraints, AI cameras provide 24/7 monitoring that one officer described as processing 50-100 violations daily compared to 5-6 through conventional methods.
Privacy advocates remain cautious about expanding AI surveillance, even for safety purposes. The key question is whether targeted applications like Obvio's can deliver safety benefits while maintaining appropriate privacy protections as the technology scales nationally.

Together AI: The AI Acceleration Cloud for Inference & Turbocharged GPU Clusters
Unify every stage of your generative AI workflow - launch, customize, and scale on one platform built atop NVIDIA Blackwell HGX B200 and GB200 NVL72 GPUs.
What You Get with Together AI
Serverless & Dedicated Inference
Launch endpoints in minutes - fully SOC 2 and HIPAA compliant, with AWS deployment options for ironclad security.
200+ Open-Source Models
Tap into a library of over 200 high-performance models - DeepSeek-R1-0528, Llama, Qwen, Flux - via our OpenAI-compatible API.
Full & LoRA Fine-Tuning
Own your model: from full-parameter fine-tuning to lightweight LoRA adapters - now with preference optimization for sharper results.
Custom GPU Clusters
Scale from 16 to 1,000+ GPUs (Grace Blackwell GB200 NVL72, HGX B200, H200, H100) - engineered for throughput with InfiniBand, NVLink, and Together Kernel Collection. Includes 99.9% SLA and expert AI advisory.
Transparent, Cost-Smart Pricing
Utility pricing per million tokens I/O, and GPU clusters starting at $1.75/hour - no surprises, no lock-in.
Get Started Now
Start your free inference trial and, when you’re ready for hyperscale training, our team will design and reserve the perfect GPU cluster for your roadmap.
🚧 Amazon’s $10B bet on North Carolina’s AI future

Source: ChatGPT 4o
Amazon will dump $10 billion into North Carolina for new data centers, the latest sign that Big Tech's AI infrastructure spending has reached extraordinary levels.
The Richmond County project will create 500 jobs and support thousands more in construction and supply chain roles. But the scale reveals the true cost of the AI arms race: Amazon is burning through up to $100 billion in capital expenditures this year alone, most of it going to AI-related projects.
"Generative AI is driving increased demand for advanced cloud infrastructure," Amazon said, corporate speak for "we're terrified of being left behind." The company has already invested $12 billion in North Carolina since 2010, but this new investment nearly doubles that in a single shot.
The investment comes as AI's energy demands are raising serious concerns. The International Energy Agency projects that data center electricity consumption will more than double by 2030, reaching levels comparable to Japan's total consumption today. Goldman Sachs estimates AI will drive a 160% increase in data center power demand, creating a "social cost" of $125-140 billion from increased carbon emissions.
The Richmond County site will house the computer servers, data storage drives and networking equipment needed to power cloud computing and AI technologies.
The numbers are staggering: Richmond County has just 42,000 residents, meaning Amazon is spending roughly $238,000 per person in the area. The company frames this as community investment, promising data center technician training and a $150,000 community fund — a sum Amazon spends roughly every 47 seconds at its current $100 billion annual capex pace.
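The back-of-envelope math behind those figures can be checked in a few lines (all inputs are the article's own numbers):

```python
# Per-resident share of the Richmond County commitment.
investment = 10_000_000_000   # Amazon's Richmond County investment, USD
residents = 42_000            # Richmond County population

per_capita = investment / residents
print(f"${per_capita:,.0f} per resident")   # ≈ $238,095

# How often Amazon's company-wide capex covers the $150,000 community fund.
annual_capex = 100_000_000_000              # ~$100B capex this year
per_second = annual_capex / (365 * 24 * 3600)
fund = 150_000
print(f"fund covered every {fund / per_second:,.0f} seconds")  # ≈ 47
```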
Governor Josh Stein called Amazon's investment "among the largest in state history." Amazon said in January it would spend at least $11 billion in Georgia for similar data center projects, part of a broader industry trend that has utilities scrambling to build new generation capacity to meet surging demand.

Your Team Should Really Know What's Coming Next In Fintech, By Now...
...but they don't.
So it falls on you to explain the breakthrough. The game-changing product launch. The one credit risk solution everyone's talking about.
Give your team the answers before they ask—with help from Plaid's biggest virtual event of the year.
Join top builders, execs, and product leaders to hear more about:
Smarter lending with real-time cash flow data
Live deep dives with the teams who built these solutions
The best part? It's 100% free. And by signing up, you’ll get front-row access to the recordings after.


A startup just raised $30M to catch fake employees using AI
Anthropic is letting AI write its blog now (don’t worry, humans still edit)
Snap rolled out new Lens tools for creators on iOS and the web
Canada’s putting $13M toward building its next generation of AI talent
Reddit’s not happy—suing Anthropic for training on user data
Katzenberg says AI could reshape entertainment like CGI once did


Sana AI: AI tool that organizes internal docs, answers employee questions, and speeds up onboarding
Papercup: AI voice translation that dubs videos into multiple languages with natural-sounding voices
CodeSquire: AI coding assistant that turns plain English into Python inside Jupyter, VS Code, and BigQuery
💣 Vibe hacking is here and it's AI's next big threat

Source: ChatGPT 4o
A sophisticated AI system just accomplished in 28 minutes what took a veteran cybersecurity expert with 20 years of experience 40 hours to complete. The system, called XBOW, matched a principal penetration tester's performance across 104 realistic security benchmarks, finding and exploiting the same number of vulnerabilities in roughly 1.2% of the time.
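The time comparison works out as follows (28 minutes against 40 expert-hours, per the benchmark report):

```python
# XBOW's 28 minutes vs. the expert's 40 hours on the same benchmark suite.
ai_minutes = 28
expert_minutes = 40 * 60          # 2,400 minutes

fraction = ai_minutes / expert_minutes
speedup = expert_minutes / ai_minutes

print(f"XBOW used {fraction:.1%} of the expert's time")   # ≈ 1.2%
print(f"roughly a {speedup:.0f}x speedup")                # ≈ 86x
```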
This isn't theoretical research. XBOW, built by ex-GitHub engineers who raised $20 million in July 2024, has already climbed to #11 in the US on HackerOne, submitting 65 reports, including 20 critical findings, since September. It represents the emergence of autonomous AI attackers that can find and exploit vulnerabilities in 75% of web benchmarks without human intervention.
Why it matters: Cyberattacks surged 47% in the first quarter of 2025 compared to the same period last year, with organizations facing an average of 1,925 incidents per week. AI isn't just contributing to this surge—it's fundamentally changing how attacks work.
"When you're working with someone who has deep experience and you combine that with, 'Hey, I can do things a lot faster that otherwise would have taken me a couple days or three days, and now it takes me 30 minutes.' That's a really interesting and dynamic part of the situation," Hayden Smith, cofounder of Hunted Labs, told Wired.
The acceleration allows a single operator to probe thousands of targets simultaneously. Smith says a skilled actor could unleash 20 zero-day events at once—nearly impossible to defend using conventional tools.
How attacks evolved: Today's AI-powered threats are more sophisticated than the crude WormGPT tools that emerged in 2023. CrowdStrike's 2025 Global Threat Report revealed that nation-state actors have "added AI to their arsenal," using generative AI to create fake profiles and websites for enhanced social engineering.
Recent jailbreaking techniques have become alarmingly effective. The "Bad Likert Judge" method, discovered in January 2025, boosts attack success rates by over 60%. Meanwhile, attackers use "Context Compliance Attacks" that trick models into treating malicious instructions as legitimate policy files.
Many don't need specialized tools. Standard models like ChatGPT and Claude can be manipulated with simple prompt engineering. "The easiest way to get around those safeguards put in place by the makers of the AI models is to say that you're competing in a capture-the-flag exercise, and it will happily generate malicious code for you," Katie Moussouris, CEO of Luta Security, told Wired.
The big picture: Moussouris coined the term "vibe hacking" to describe the growing trend of directing AI to solve problems without understanding how it works. "AI is just another tool in the toolbox, and those who do know how to steer it appropriately now are going to be the ones that make those vibey frontends that anyone could use."
This democratization of advanced attack capabilities extends beyond traditional hackers. Anthropic recently reported an "influence-as-a-service" network that used Claude to orchestrate over 100 social media bot accounts for political manipulation.
By the numbers: The consequences are visible across the threat landscape. Ransomware attacks increased 126% in Q1 2025, while the FBI reported $16.6 billion in online fraud losses in 2024—a 33% increase over 2023.
"Entry-level attackers no longer need to build exploits; they can purchase pre-packaged access or even rent access to compromised environments through Telegram channels," cybersecurity expert Ben Hartwig noted. This commoditization, enabled by AI automation, has lowered barriers while increasing potential damage.
What's next: The AI industry is scrambling to respond. OpenAI and Anthropic have signed agreements with the US AI Safety Institute for safety testing. However, in a striking admission to the government, Anthropic revealed that "Our most recent system, Claude 3.7 Sonnet, demonstrates concerning improvements in its capacity to support aspects of biological weapons development."
Security firms are deploying AI-powered defenses, but as RANE analyst Hayley Benedict told Wired: "The best defense against a bad guy with AI is a good guy with AI."

XBOW's performance represents a watershed moment—AI systems now match expert human capabilities in offensive cybersecurity while operating at machine speed. The emergence of "vibe hacking" is particularly concerning, democratizing expert-level attack capabilities beyond traditional hacker communities.
When AI can accomplish in minutes what takes humans days, traditional defensive approaches become obsolete. Defenders must assume AI-powered attacks are inevitable and build defenses that match the speed, scale, and creativity of AI-assisted attacks. The race is on, and the stakes have never been higher.


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“On the fake image, the 'high tide' level of the coffee is exactly level all around the rim of the cup – that would take one skilled barista!”
“Image 2 is wayyyy too detailed with the bubbles near the foam. It looks too uncanny-valley-like.”
Selected Image 2 (Right):
“aaaah damn! :) The latte art was too perfect but I got distracted by the light that looked oversaturated imo.”
“You would think that the milk being perfectly drawn with clean lines would have been a really strong giveaway that the image is AI generated... but like me, you'd be wrong. Fooled once again.”
💭 Poll Results
Here’s your view on “Do you want AI-powered commentary when you're watching sports?”…
Yes (23%):
“The commentary can be made specific to the country where the sport is shown, ie the USA or England or even Scotland”
“I would like more in depth information on the spot.”
No (56%):
“I want NO outside superfluous distractions when I am watching sports.”
“I like the human interaction. Besides, with AI sports analysis will be wrong less often. It's a good feeling when I do a better job of analyzing plays than human experts. ”
Other (21%):
“Commentary? No thanks. But there are other uses for AI that is helpful. For example, showing the bat speed, home run trajectory, spin rate, etc during baseball games is fascinating, but I would rather listen to the worst human announcers than an AI-generated play-by-play.”
“Only if it is trained on Tony Romo”
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.