Your Gemini prompts probably use less energy than you think

Welcome back. Anthropic is in talks to raise up to $10 billion in new funding at a $170 billion valuation, doubling the original $5 billion target due to overwhelming investor demand. The Claude maker would nearly triple its valuation from just five months ago as Iconiq Capital leads the round.
1. Your Gemini prompts probably use less energy than you think
2. China deploys AI chatbot to space station, names it after mythical Monkey King
3. DeepSeek quietly drops V3.1 optimized for Chinese chips and priced to undercut OpenAI
ENVIRONMENT
Your Gemini prompts probably use less energy than you think

After years of speculation about AI's climate impact, Google has finally released hard numbers… each Gemini text prompt uses just 0.24 watt-hours of electricity. That's about one second of microwave time, far below the apocalyptic projections that have dominated headlines.
The disclosure represents the first detailed energy breakdown from a major AI company, ending months of educated guesswork about whether chatbots were environmental disasters or just another drop in our climate impact bucket. Google's figures suggest the latter — at least for now.
The company claims each prompt consumes:
0.24 watt-hours of electricity (equivalent to watching TV for 9 seconds)
0.03 grams of CO2 emissions (about 1/150th of what your smartphone battery produces when charging)
0.26 milliliters of water (roughly 5 drops)
These numbers reflect what Google calls a "full-stack" methodology that includes not just the AI chips but also idle capacity, cooling systems and data center overhead. The actual AI processors account for just 58% of the total energy cost, with the rest consumed by supporting infrastructure.
Google also claims dramatic efficiency improvements, with energy consumption per prompt dropping 33-fold over the past year. If accurate, this suggests Gemini was consuming about 8 watt-hours per prompt in May 2024, closer to earlier estimates about ChatGPT's energy appetite.
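The back-of-envelope math behind that paragraph is easy to check. A minimal sketch, using the figures reported in this piece (the 900 W microwave wattage is a typical value we're assuming, not one Google published):

```python
# Sanity-check Google's claimed figures using numbers from the article.
current_wh_per_prompt = 0.24   # Wh per Gemini text prompt today
efficiency_gain = 33           # claimed 33-fold drop over the past year

# Implied energy per prompt a year earlier
earlier_wh = current_wh_per_prompt * efficiency_gain
print(f"Implied May 2024 figure: {earlier_wh:.1f} Wh per prompt")  # ~7.9 Wh

# Check the "one second of microwave time" comparison,
# assuming a typical ~900 W microwave (0.25 Wh per second).
microwave_watts = 900
seconds = current_wh_per_prompt / (microwave_watts / 3600)
print(f"Microwave equivalent: {seconds:.2f} s")  # ~0.96 s
```

The 33-fold claim does line up with the roughly 8 Wh figure quoted above, and 0.24 Wh really is about one second of microwave time.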
Tech companies face mounting pressure to quantify AI's environmental impact as data center construction accelerates globally. Google's disclosure provides a crucial benchmark, though one the company controls entirely.
Missing from the analysis are several key details: total daily query volumes, energy costs for image and video generation, and the massive upfront training costs. The methodology also excludes Gemini's most energy-intensive features like Deep Research, which can generate responses equivalent to dozens of standard prompts.

0.24 watt-hours per prompt suggests individual AI use isn't the climate catastrophe some predicted. But Google's selective disclosure raises obvious questions about what they're not measuring.
Creating these models requires enormous computational resources that dwarf operational costs. Google's analysis is like measuring a car's environmental impact by only counting fuel consumption while ignoring manufacturing.
The bigger issue is scale… even tiny per-prompt costs become significant when multiplied across millions or billions of daily queries. Without total usage figures, we can't assess Gemini's aggregate impact or compare it meaningfully to other activities.
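To see why the missing usage figures matter, here's a rough sketch of how small per-prompt costs compound. The one-billion-prompts-per-day figure is purely an assumption for illustration — Google has not disclosed query volumes:

```python
# Aggregate-impact sketch. The daily query volume is an ASSUMPTION
# for illustration only; Google has not published this number.
wh_per_prompt = 0.24                   # Google's per-prompt figure
assumed_daily_prompts = 1_000_000_000  # hypothetical 1B prompts/day

daily_kwh = wh_per_prompt * assumed_daily_prompts / 1000
yearly_gwh = daily_kwh * 365 / 1_000_000
print(f"Daily: {daily_kwh:,.0f} kWh")      # 240,000 kWh
print(f"Yearly: {yearly_gwh:.1f} GWh")     # 87.6 GWh
```

Under that assumed volume, "one second of microwave time" per prompt still adds up to tens of gigawatt-hours a year — which is exactly why the per-prompt number alone can't settle the debate.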
TOGETHER WITH ROCKET
Build powerful apps with a single prompt
Rocket is the world’s most advanced vibe solutioning tool. Just input a single prompt, and Rocket will deliver your production-grade product instantly.
Forget scaffolding or vibe coding drama: this is the easiest, most effective way to build an app.
Production-ready mobile and web apps from one prompt
Built-in security: Rocket won’t train on paid plans
Ready-to-use templates and industry-leading Figma-to-app
Perfect for solopreneurs, SaaS founders, product teams, agencies, and designers who need to ship fast.
CHATBOTS
China deploys AI chatbot to space station, names it after mythical Monkey King

China has deployed its first AI chatbot to the Tiangong space station, marking another milestone in the country's push to establish itself as a major space power. The system, called Wukong AI after the mythical Monkey King, went operational in mid-July and has already supported taikonauts during a complex spacewalk mission.
Built from an open-source AI model, Wukong operates through two modules: one aboard the station for immediate problem-solving and another on Earth for deeper analysis. Chinese engineers designed it specifically for aerospace operations, focusing its knowledge base on flight data and navigation rather than general conversation.
The AI's first major test came during a six-and-a-half-hour spacewalk where it helped taikonauts install debris protection and conduct routine inspections. According to Xinhua, the system provides "rapid and effective information support" for complex operations and fault handling, while also offering psychological support to crew members.
The International Space Station already uses Astrobee robots and CIMON for conversational support. What sets Wukong apart is its specific focus on space navigation and tactical planning rather than general assistance.
The deployment fits into China's 30-year strategy to establish itself as a major space power. Tiangong currently serves as a microgravity research platform, but China plans to expand it into a logistics hub for future lunar missions.
While the technical details remain sparse, Wukong represents China's continued push to develop indigenous space technology rather than relying on foreign systems. The choice of name — honoring a figure known for cunning and adaptability — suggests China's own ambitions for the system's capabilities.
TOGETHER WITH DELL
Dell’s Latest PC Brings AI To Your Desk
Advanced AI may be the future, but you won’t be able to make the most of it if your computer is living in the past. For a new generation of ideas, we need a new generation of PCs… and that’s exactly what Dell has done with their latest and greatest: The Dell Pro Max with GB10.
The GB10 raises the bar for AI developer PCs thanks to:
A powerful NVIDIA software stack and Grace Blackwell superchip
The ability to support models with up to 200 billion parameters
One Petaflop of FP4 computing power
In other words, it’s a pint-sized PC companion that can turn any desk into developer heaven – and it’s available now. Try the Dell Pro Max GB10 for yourself right here.
OPENSOURCE
DeepSeek quietly drops V3.1 optimized for Chinese chips and priced to undercut OpenAI

China's DeepSeek has quietly released V3.1, an updated version of its flagship AI model that experts say matches OpenAI's GPT-5 on several benchmarks while being strategically priced to undercut American competitors.
The Hangzhou-based startup announced the release through a low-key message in one of its WeChat groups, later uploading it to Hugging Face. The timing is pointed — just two weeks after OpenAI's GPT-5 launch, which fell short of industry expectations.
V3.1's standout feature is its hybrid architecture, combining fast responses with step-by-step reasoning in a single system. Unlike earlier DeepSeek models that separated instant answers from reasoning tasks, V3.1 handles both seamlessly — a capability that GPT-5 and recent models from Anthropic and Google also offer, but few open-weight models have achieved.
The model also includes:
Extended 128k-token context window (roughly a 300-page book) in a single query, enabling longer conversations with better recall
685 billion parameters, using the standard "mixture-of-experts" architecture that most large models employ to keep costs down
Optimization for Chinese-made chips as part of Beijing's AI independence strategy
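The "300-page book" equivalence for a 128k context is worth unpacking. A quick sketch, using common rules of thumb for token-to-word and words-per-page ratios (neither figure comes from DeepSeek):

```python
# Rough check of the "128k context ~= 300-page book" equivalence.
# Conversion ratios are rules of thumb, not figures from the article.
context_tokens = 128_000
words = context_tokens * 0.75   # ~0.75 English words per token
pages = words / 320             # ~320 words per printed page
print(f"{words:,.0f} words ~= {pages:.0f} pages")  # 96,000 words ~= 300 pages
```

So the comparison holds: a 128k-token window comfortably fits a full-length book in one query.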
While U.S. companies remain hesitant to embrace DeepSeek's technology, it's gained traction globally, with some American firms building applications on DeepSeek's earlier R1 reasoning model that shocked markets in January.
OpenAI CEO Sam Altman recently admitted that competition from Chinese open-source models like DeepSeek influenced his company's decision to release open-weight models. "It was clear that if we didn't do it, the world was gonna be mostly built on Chinese open-source models," Altman said.
LINKS

Meta implements hiring freeze after splitting up its AI superintelligence team
Anthropic develops anti-nuke AI tool
Google scores six-year Meta cloud deal worth over $10b
Is the AI bubble about to pop?
Apple fitness chief accused of toxic workplace culture and harassment
Trump to tap Airbnb co-founder as first government design chief
Google to provide Gemini AI tools to federal agencies for $0.47
Ex-OpenAI staff launch fund as OpenAI ‘mafia’ spreads
Chinese unicorn Z.ai, Alibaba Cloud team up to deploy new AI agent for smartphone users

Basedash Self-Hosted: AI-powered BI tool now hostable on your own servers
Riff: Cursor for music production
ReadyBase: Enter a prompt and get a PDF generated in seconds
April: Reach inbox 0 by speaking with your email & calendar

Luisa, Bogotá: MLOps engineer, ex-Rappi, 3 yrs on LLMs — $46/h
Thiago, Porto Alegre: AI researcher, ex-Google, 2 yrs on multimodal models — $48/h
Andrés, Mexico City: Data scientist, ex-Mercado Libre, 4 yrs in NLP (transformer-based models) — $45/h
(Sponsored)
A QUICK POLL BEFORE YOU GO
Do you care about chatbot energy usage?


The Deep View is written by Faris Kojok and The Deep View crew. Please reply with any feedback. Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.
Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.