
⚙️ OpenAI is working on a Google Search killer

Good morning. It was close, but the “no’s” have it — around 60% of you said in our poll yesterday that companies should not be allowed to train on publicly available content without asking (and then paying) creators.

As one reader put it — in a way that I agree with — “AI companies should not be allowed to profit off others’ work. If all generative AI and LLMs were free to use, regardless of version or capability, then maybe. But as long as they are for profit, they need to pony up.”

In today’s newsletter:

  • 💸 President Biden announces a $3.3 billion Microsoft AI investment 

  • 🛜 OpenAI is reportedly working on a Search Killer 

  • 🇨🇳 The U.S. is restricting the flow of AI tech to China

  • 📄 New Paper: The philosophy of AI 

Before we get into it, check out First Principles’ latest podcast episode, an interview with battery developer Base Power.

A new datacenter is coming to Wisconsin

Image Source: Microsoft

President Joe Biden on Wednesday announced a $3.3 billion investment by Microsoft to build a shiny new AI datacenter in Racine, Wisconsin. The datacenter will be built on the same land as the proposed (and failed) $10 billion investment from Foxconn back in 2018. 

Zoom in — The details of Microsoft’s plans:

  • The White House said that Microsoft’s investment will result in 2,300 construction jobs and about 2,000 permanent jobs once the facility is completed. 

  • Microsoft, in partnership with Gateway Technical College, will also develop a Datacenter Academy that will train 1,000 locals for datacenter and other related jobs by the end of the decade. 

  • Microsoft will also be building a Co-Innovation Lab in southeast Wisconsin. 

Zoom Out — The politics of it all:

  • In case anyone forgot, there is a presidential election coming up in November. And Biden’s approval rating has been low and dropping (most recently around 37%). 

  • And U.S. voters, according to recent Reuters polls, see Donald Trump as a better option for the economy. 

  • The White House said that Microsoft is committed to creating “good-paying jobs” with growth pathways and benefits. It also said that 4,000 jobs have already been added in Racine, and 177,000 across Wisconsin, during Biden’s term.

ChatGPT is the assassin … Google Search is the target

Image Source: Google

OpenAI, according to a Bloomberg report, is developing a new version of ChatGPT that would let the chatbot search the internet and cite its sources. A source told Bloomberg that, once the feature is enabled, questions entered into ChatGPT will return answers drawn from web content, including blogs and Wikipedia. 

The pivot into the search arena – which, globally, was worth around $190 billion in 2022 – could place OpenAI in more direct competition with both Google (which has a nearly 90% global search market share) and Perplexity AI, which achieved a $1 billion valuation on the back of its chatbot-powered search engine. 

OpenAI didn’t respond to my request for comment. 

The battle for Search:

  • Google has been scrambling for months to shore up its search engine with AI integrations, whether users want them or not. Deepwater Management’s Gene Munster has said that once Google completes that integration, it will have significant protection against these AI-fueled search upstarts. 

  • MNTN CEO Mark Douglas, however, told me last year that, since no one really has a moat in AI, Google Search as it stands today is protected by one thing: years and years of muscle memory.

Do you think a search-enabled ChatGPT could threaten Google's search monopoly?


The U.S. wants homegrown AI tech to stay where it is

Image Source: Huawei

The AI race has more participants than just startups and the Magnificent Seven. There is a simultaneous race going on between countries and governments.

And, if possible, the U.S. doesn’t want to help China out.

The Department of Commerce, according to Reuters, is considering a rule that would prohibit the export of proprietary, closed-source AI models to China. Sources said the Department might lean on the threshold rule in Biden’s AI executive order, which requires developers to disclose model details to the Commerce Department when the computing power needed to train a model exceeds a set threshold.
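
For a rough sense of how a compute threshold like that works in practice, here is a minimal, back-of-the-envelope sketch. It isn’t drawn from the Reuters report or the order itself; it just pairs the common ~6 × parameters × tokens estimate of training FLOPs with the order’s widely cited 10^26-operations reporting threshold, and the example model is hypothetical.

```python
# Back-of-the-envelope sketch (illustrative, not the rule's actual mechanics):
# estimate a model's training compute and compare it to the reporting threshold.

# Threshold widely cited from the October 2023 AI executive order.
REPORTING_THRESHOLD_OPS = 1e26


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Common rough estimate: ~6 FLOPs per parameter per training token."""
    return 6 * parameters * training_tokens


def exceeds_threshold(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute would trigger disclosure."""
    return estimated_training_flops(parameters, training_tokens) >= REPORTING_THRESHOLD_OPS


# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.1e} FLOPs -> reportable: {exceeds_threshold(70e9, 15e12)}")
```

A run like that lands around 6 × 10^24 operations, comfortably under the cited threshold, which is part of why the rule is aimed at only the largest frontier training runs.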

The Department didn’t return a request for comment. 

The background: 

  • Two years ago, the U.S. published a set of export rules that prohibit China from acquiring certain semiconductor chips made with U.S. equipment. 

  • Earlier this year, the Biden Administration proposed a rule that would require U.S. cloud companies to disclose when foreign countries are using their data centers to train AI models. 

TOGETHER WITH Sana

Work faster and smarter with Sana AI

Meet your new AI assistant for work.

On hand to answer all your questions, summarize your meetings, and support you with tasks big and small.

Try Sana AI for free today.

💰AI Jobs Board:

  • Director of Artificial Intelligence: Signify · United States · Remote · Full-time · Director · (Apply here)

  • Artificial Intelligence Performance Engineer: AMD · United States · Santa Clara, CA · Full-time · Mid-senior level · (Apply here)

  • Artificial Intelligence Researcher: DeepRec.ai · United States · San Francisco Bay Area · Full-time · Mid-senior level · (Apply here)

🌎The Broader View:

  • TikTok filed a petition in federal court seeking to overturn the TikTok sell-or-ban bill, which was passed in April.

  • Movie theaters — AMC and Cinemark, specifically — still have yet to recover to pre-pandemic levels of foot traffic (MarketWatch).

  • Disney finally broke even in streaming, but it wasn’t enough to save the company’s stock (The Information).

 📊 Funding & New Arrivals:

TOGETHER WITH ENQUIRE AI

Enquire PRO is designed for entrepreneurs and consultants who want to make better-informed decisions, faster, leveraging AI. Think of us as the best parts of LinkedIn Premium and ChatGPT.
 
We built a network of 20,000 vetted business leaders, then used AI to connect them for matchmaking and insight gathering.

Our AI co-pilot, Ayda, is trained on their insights, and can deliver a detailed, nuanced brief in seconds. When deeper context is needed, use a NetworkPulse to ask the network, or browse for the right clients, collaborators, and research partners.

Right now, Deep View readers can get Enquire PRO for just $49 for 12 months, our best offer yet. Click the link, sign up for the annual plan, and use code DISCOUNT49 at checkout for the AI co-pilot you deserve.

The philosophy of artificial intelligence

This bit of news encapsulates my personal interest in AI: it is a field that, impact aside, sits (or should sit) at the intersection of a bunch of fascinating disciplines, including computer science, linguistics, cognitive science, psychology, neuroscience and philosophy. 

Artificial intelligence, especially the study or creation of artificial general intelligence, requires each of these disciplines to work together. 

A preprint of a new paper – co-written by Raphaël Millière, a philosopher of AI at Macquarie University, and Cameron Buckner, a philosopher of mind at the University of Houston – dives into the philosophical side of things. The paper runs around 40 pages, but here are some of the most interesting areas it explores. 

You can read the paper here.

Created with AI by The Deep View.

The problem with benchmarks:

  • The authors argue that the benchmarks used to evaluate LLMs have a number of inherent flaws. One is that benchmarks get gamified: models are optimized to improve benchmark scores rather than the capability the benchmark was designed to test. 

  • There are also issues with data contamination and with construct validity, the degree to which a given test actually measures the construct it is intended to measure (a rough sense of what a contamination check involves is sketched below). 
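
The paper doesn’t prescribe a fix here, but for context, one simple (and admittedly crude) contamination check used in practice is to look for verbatim n-gram overlap between benchmark items and training documents. Below is a minimal sketch with hypothetical data; the function names and example strings are mine, not the authors’.

```python
# Crude contamination check (illustrative only, not the authors' method):
# flag a benchmark item if any of its word-level n-grams appears verbatim
# in a training document.

def ngrams(text: str, n: int) -> set:
    """Return the set of word-level n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def is_contaminated(benchmark_item: str, training_docs: list, n: int = 8) -> bool:
    """True if the item shares any n-gram with any training document."""
    item_grams = ngrams(benchmark_item, n)
    return any(item_grams & ngrams(doc, n) for doc in training_docs)


# Hypothetical usage
docs = ["a scraped forum post that quotes the exact benchmark question word for word"]
question = "What is the exact benchmark question word for word"
print(is_contaminated(question, docs, n=5))  # True: 5-gram overlap found
```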

The reality behind the ‘alien intelligence’ of LLMs:

Much of the conversation around LLMs has centered on their capacity as a first step on the path toward AGI, and it’s not hard to see why people make that assumption; humans associate language with intelligence. 

  • But the authors argue that it’s really difficult to determine if LLM output is more the result of regurgitation/retrieval or human-adjacent processing and cognition. 

  • Their “middle-ground” conclusion is that while most LLM behavior might be characterized as retrieval, not all LLM behavior fits that assumption. (As with most things in AI, the “middle ground” is usually where reality hangs out).

  • “While LLMs may accumulate information about the same syntactic and semantic properties as humans and even combine that information in flexible ways to create novel outputs, they nevertheless lack the agential stability required to imbue their utterances with determinate meanings that remain stable over time.”

Which image is real?

Image 1

Image 2

  • My-legacy.ai: A tool to help you with estate planning.

  • Venturefy: A tool to verify corporate relationships; the “blue check” for business.

  • Zipy: A debugging platform with user session replay and network monitoring.

Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).

*Indicates a sponsored link

SPONSOR THIS NEWSLETTER

The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft, and many more.

If you want to share your company or product with fellow AI enthusiasts before we’re fully booked, reserve an ad slot here.

One last thing👇

That's a wrap for now! We hope you enjoyed today’s newsletter :)

What did you think of today's email?


We appreciate your continued support! We'll catch you in the next edition 👋

-Ian Krietzberg, Editor-in-Chief, The Deep View