⚙️ AI "Coworkers"

Good morning. A romance author accidentally left her AI prompt in the middle of a steamy dragon prince scene, asking the chatbot to "rewrite the passage to align more with J. Bree's style." The “mistake” has since been quietly scrubbed from Amazon, but not before readers captured screenshots for posterity. Nothing says authentic literary romance quite like forgetting to delete your instructions to ChatGPT.
“Please rewrite this intro to be funnier”
— The Deep View Crew
In today’s newsletter:
🧬 AI for Good: Generative Models Write DNA “Switches” for Precise Gene Control
🇮🇳 Sarvam AI’s flagship LLM stumbles with low uptake
🎯 Target doubles down on AI
🧬 AI for Good: Generative Models Write DNA “Switches” for Precise Gene Control

Source: Cell
Generative AI has officially crossed from words to genomes. In a study released May 8, scientists at Barcelona’s Centre for Genomic Regulation used an AI model to invent short DNA enhancers—regulatory “switches” that dial a gene on or off only in chosen cell types. When the team inserted these 250-letter sequences into mouse blood cells, the AI-crafted code lit up a fluorescent marker exactly where predicted, leaving neighboring cells untouched.
“It’s like writing software, but for biology,” said first author Dr Robert Frömel, underscoring the leap toward programmable cell behaviour.
What happened:
The researchers trained the model on the largest-ever library of synthetic enhancers (64,000 sequences) and thousands of lab assays measuring gene-expression effects.
Given a text prompt (“activate this gene in stem cells becoming red blood cells but not platelets”), the system generated novel DNA not found in nature, which lab tests confirmed performed as instructed.
Why it matters: Target-specific enhancers promise treatments that tweak only diseased cells, limiting off-target effects. Designing switches by trial and error can take years; AI cuts that to hours, and the underlying code is freely shared, lowering the barrier for labs worldwide. Many disorders stem from mis-timed or mis-placed gene activity, and AI-written regulatory DNA offers a new lever where drugs or protein editors fall short.
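For intuition only, here’s a minimal Python sketch of the generate-and-filter idea behind designing such switches: propose candidate 250-letter sequences, score them, and keep the best. Every name below (propose_candidate, activity_score, design_enhancer) is a hypothetical placeholder; the actual study used a generative model trained on that 64,000-sequence library and confirmed activity in wet-lab assays, not random sampling and a GC-content heuristic.

```python
import random

BASES = "ACGT"
SEQ_LEN = 250  # enhancer length reported in the study


def propose_candidate() -> str:
    """Toy stand-in for a trained generative model: sample a random 250-letter sequence."""
    return "".join(random.choice(BASES) for _ in range(SEQ_LEN))


def activity_score(seq: str, target_cell_type: str) -> float:
    """Hypothetical scorer standing in for the model's prediction / the lab assay.
    This placeholder ignores the cell type and simply rewards GC-rich sequences;
    a real pipeline would predict cell-type-specific enhancer activity."""
    return sum(base in "GC" for base in seq) / len(seq)


def design_enhancer(target_cell_type: str, n_candidates: int = 1000) -> str:
    """Generate-and-filter loop: propose candidates, keep the best-scoring one."""
    return max(
        (propose_candidate() for _ in range(n_candidates)),
        key=lambda seq: activity_score(seq, target_cell_type),
    )


if __name__ == "__main__":
    # e.g., sketch a switch intended for erythroid (red-blood-cell) progenitors
    print(design_enhancer("erythroid progenitor")[:60], "...")
```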

He’s Already IPO’d Once – This Time’s Different
Spencer Rascoff co-founded Zillow, scaling it into a $16B real estate giant. But everyday investors couldn’t invest until after the IPO, missing early gains.
"I wish we had done a round accessible to retail investors prior to Zillow's IPO," Spencer later said.
Now he’s doing just that. Spencer has teamed up with another Zillow exec to launch Pacaso. Pacaso’s co-ownership marketplace is disrupting the $1.3T vacation home market. And unlike Zillow, you can invest in Pacaso as a private company.
After 41% gross profit growth last year, surpassing $110M+ in four years, Pacaso is ready for what’s next. They even reserved the Nasdaq ticker PCSO.
Even better? You don’t have to wait – but time’s ticking. Invest in Pacaso for just $2.80/share by Thursday.
🇮🇳 Sarvam AI’s flagship LLM stumbles with low uptake

Source: IndiaAI
Sarvam AI, often touted as India’s largest AI startup with a valuation around $1 billion, launched its flagship large language model this week — but the rollout drew only 23 downloads in its first two days. The Bengaluru-based company’s new 24-billion-parameter model, Sarvam-M, was billed as a milestone for Indian language AI as part of India’s push for a homegrown AI ecosystem.
The disconnect between the company’s scale and the model’s reception was quickly noted in industry circles. Debarghya “Deedy” Das, a Menlo Ventures investor, called the low uptake “embarrassing” and argued there is “no real audience” for such an incremental effort. He contrasted Sarvam’s meager download count with an open-source LLM built by two Korean college students that amassed roughly 200,000 downloads in short order.
Sarvam’s leadership defended the release. Co-founder Vivek Raghavan described Sarvam-M as an “important stepping stone” toward a sovereign AI stack for India. A senior researcher at the startup noted the model achieved new benchmark results for several Indian languages, urging observers to review Sarvam’s technical report before dismissing it.
Sarvam AI was among the first startups selected under the national IndiaAI Mission to develop a foundational LLM. Its early stumble raises a question: can domestic AI projects meet such lofty expectations, or does the lukewarm response simply reflect an expectations mismatch and the patience required to build homegrown AI?

Revenue Leaders: Close Your Execution Gap Before Q3 Ends
Your strategy is sound. Your execution is failing you daily.
While you and your team are buried in CRM cleanup and missed handoffs, critical revenue signals are slipping through the cracks.
Momentum's AI Revenue Orchestration eliminates these execution gaps that CROs, RevOps leaders, and CIOs struggle with most.
In your 30-minute transformation session, we'll show you how companies like Ramp achieved 32% faster deal cycles—without adding headcount.
You'll receive a personalized execution gap assessment that identifies exactly where your revenue is leaking—plus a practical implementation roadmap with ROI projections specific to your tech stack.


Deel bolsters governance as it fights spying claims
DJI drones are everywhere. The U.S. may still ban them
Google is putting its Gemini AI into robots
Google’s Veo 3 AI video generator is a slop monger’s dream
Carnegie Mellon University is training the next generation of AI leaders


Devstral: Mistral’s open-source coding model
Mitsuko: Translate subtitles and transcribe audio at reasonable prices
AltPage: An AI SEO tool to help your site show up when people search for your competitors
Bagel: Multimodal AI that generates, edits, and understands images in one 7B-parameter model
Probo: Compliance for startups to get SOC 2/ISO 27001/HIPAA in a week
🎯 Target doubles down on AI

Source: ChatGPT4o
Target’s earnings slump hasn’t dulled its appetite for automation. After a rough quarter, the retailer created an “Enterprise Acceleration Office” to “more boldly leverage technology and AI,” according to COO Michael Fiddelke. Executives touted projects that will “modernize and streamline” inventory and allocation, building on last year’s Store Companion chatbot for staff and the Roundel data-ad platform.
Futurism described those earlier efforts as “disastrous,” noting facial-recognition lawsuits and surveillance backlash. Online chatter also claimed the company briefly tested AI-generated “digital humans” to answer corporate FAQs. No public evidence confirms the trial, and Target has never addressed the rumor. The episode, real or not, sits in a broader pattern: brands reach for photorealistic AI avatars, then scramble when transparency questions erupt.
When fake people get exposed: Sports Illustrated is the clearest case study. In late 2023 reporters found the magazine publishing reviews under bylines such as “Drew Ortiz,” whose headshot was for sale on an AI-headshot site. Staffers said “there’s a lot” of such non-existent writers. After inquiries, the publisher yanked the profiles and blamed an outside contractor. The SI Union called the practice “horrifying” and demanded answers.
LinkedIn faced a similar mess this year, deleting an Israeli marketing firm’s 100-plus AI “coworker” profiles. “People expect the people and conversations they find on LinkedIn to be real,” the platform said while purging the fakes.
Tech consultancies keep pitching “digital humans”—video avatars that read scripts from a language model—as cheaper, tireless reps. Analysts warn they also amplify brand risk: a glitchy answer or a hidden AI face can shred trust faster than it saves costs.

Target’s instinct—to automate harder after a stumble—misses the point. AI itself isn’t the problem. The problem is bait-and-switch. Every hour saved by an unlabeled avatar is nullified once customers discover the ruse. Sports Illustrated lost credibility not because it tried AI, but because it tried it in secret. LinkedIn’s swift purge shows the baseline expectation: representations must be authentic or clearly flagged.
The smarter play is radical transparency. If Target wants to pipe large-language models into guest support, say so. Frame the bot as a tool, not a teammate. Spotlight real employees when a human touch matters. AI can slash drudgery and sharpen logistics, but only brands that respect the line between assistance and impersonation will keep public trust.
Target is betting that more AI will fix lagging sales. That bet pays only if the company pairs every new model with equal parts candor. Customers forgive glitches. They rarely forgive deception.


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“I have come to assume the pic with no personality (perfect but boring) is AI”
“The slightly out of focus flowers in the top right hand corner seemed authentic to me.”
Selected Image 2 (Right):
“I didn't trust the cityscape, it felt too representative and not realistic in its shapes and patterns. On the other hand, there was a rock in the actual fake image that just felt wrong. It seems too tall and long and placed in a way that does not seem like a choice a person would have made. I was mixed enough though that I finally just picked one. And failed.”
“Some weirdness around the architecture in the other image, I wasn't sure it all lined up”
💭 A poll before you go
Will other companies create fake AI "employees"?
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
*Indicates sponsored content
*Pacaso Disclaimer: This is a paid advertisement for Pacaso’s Regulation A offering. Please read the offering circular at invest.pacaso.com. Reserving a ticker symbol is not a guarantee that the company will go public. Listing on the NASDAQ is subject to approvals. Under Regulation A+, a company has the ability to change its share price by up to 20%, without requalifying the offering with the SEC.