
⚙️ Sam Altman says AI will create the need for a new ‘social contract’

Good morning. We hear you and agree — sometimes, the realities of the AI world exist on a spectrum that runs between frustrating and downright upsetting.

So, starting with our next edition, we’ll be making an effort to highlight positive AI stories more regularly, in addition to our normal coverage. If some iteration of AI has impacted your life (or the life of someone you know) in a positive way, please get in touch.

In today’s newsletter: 

  • 🌎 Microsoft’s global datacenter expansion

  • 📄 Paper: Open source & the danger of ‘open washing’

  • 🧠 The impact of AI doomerism

  • 🗺️ Sam Altman says AI will create the need for a new ‘social contract’

Microsoft’s global datacenter expansion

Image Source: Microsoft

Microsoft said Monday that it is investing $3.2 billion to further grow its cloud and AI infrastructure in Sweden over the next two years. 

Key points: The company will deploy some 20,000 advanced chips, a mix of Nvidia, AMD and Microsoft’s in-house AI silicon, across three datacenter sites in Sweden. 

Microsoft’s global expansion: This latest Nordic infrastructure spend follows similar Microsoft investments in the U.K., Germany and Spain in recent months. President Brad Smith told Reuters: “You will see some other announcements, probably more in the fall.”

Zoom Out – Microsoft’s datacenter pledge: The day before, Microsoft published a pledge to operate its global datacenters responsibly & sustainably. 

  • Part of this involves a promise to procure “100% renewable energy” globally by 2025, and to replenish more water than is consumed by its datacenters. 

  • Another part involves working with local communities both on digital upskilling & local environmental responsibility. 

Microsoft’s carbon emissions have grown by 30% since 2020 due to its increasing investments in AI. 

Paper: Open source & the danger of ‘open washing’


Photo by Michael Dziedzic (Unsplash).

Open-washing is similar in concept to greenwashing: the term describes companies’ tendency to advertise “open” AI models that aren’t really open. 

Key points: A new study found that the bulk of AI models that claim to be “open” aren’t.

  • This is exacerbated by what the authors call the “release-by-blogpost” approach, in which companies unveil their new “open” models in a blogpost complete with cherry-picked benchmark data & (usually) no peer-reviewed technical documentation.

  • “When generative AI follows the release-by-blogpost model, it is reaping the benefits of mimicking scientific communication without actually doing the work,” the report reads. 

A table showcasing the “openness” of 40 text generators, with ChatGPT as a reference point (Figure 2).

Why it matters: The EU AI Act includes exemptions from transparency and other developer requirements for creators of “open-source” systems. Given the Act’s unclear definition of open-source (& possible future legislation), open-washing poses considerable risks. 

Yes, but: a fixture of the AI landscape is the relentless debate over the relative value of open and closed models. The paper acknowledges this, saying that “full openness is not always the solution.”

  • “However,” the report reads, “open is better than closed in most cases and knowing what is open and how open it is can help everyone make better decisions.”

How do you feel about open-source AI?


Together with Kolena

Are you a machine learning professional looking to elevate the quality of your models?

Join Kolena ML engineer Saad Hossain and Head of Developer Relations Skip Everling in an engaging discussion on the effective use of metadata in machine learning, as well as advanced techniques for metadata hydration.

You can unlock the value of metadata for free on June 12 at 9:00 a.m. PT.

The impact of AI doomerism

Created with AI by The Deep View

What happened: François Chollet, an AI researcher at Google, said that he has heard from people who are “so sure” they’ll be dead (from AI) in 10-20 years that they have stopped planning for their futures & saving for retirement. 

  • “Both AI Doomerism and Singularitarianism are similarly-shaped eschatological cults that drive otherwise normal people towards completely insane beliefs and behaviors,” he said. 

Zoom Out: His post followed a viral tweet that revealed screenshots from the PauseAI Discord in which users discussed the grief of believing that AI is on the brink of killing them. 

  • Researchers’ concern with doomerism is that it diverts attention toward entirely hypothetical, non-scientific risks of, well, doom, and away from current harms.

Still, PauseAI’s big proposal is just to pause the training of models more powerful than GPT-4 until smart regulation can get a handle on things. 

Keep in mind that current AI is not intelligent, and that there is no science supporting the idea that we might someday create an out-of-control artificial superintelligence. Keep in mind, too, that AI does not need to be superintelligent to cause harm.

It’s probably a good idea to keep contributing to your 401(k).

💰 AI Jobs Board:

  • Experienced lead consultant - AI Revenue Systems: MakerOps · United States · San Francisco Bay Area · Full-time · (Apply here)

  • AI Researcher: DeepRec.ai · United States · San Francisco Bay Area · Full-time · (Apply here)

  • ML Engineering Lead: Aira Technologies · United States · San Francisco Bay Area · Full-time · (Apply here)

📊 Funding & New Arrivals:

  • Human Native AI — a company whose goal is to provide developers with access to high-quality, fully-licensed data — launched Monday, with $3.5 million in funding.

  • Ashby, an AI-powered recruiting platform, raised $30 million in funding.

  • AI weather balloon company Windborne completed a $15 million funding round led by Khosla Ventures.

🌎 The Broad View:

  • A fascinating thought experiment on consciousness and the self from the late philosopher and cognitive scientist Daniel Dennett (MIT Press).

  • Spotify raised its premium prices again (CNBC).

  • Why politicians want to buy the news (Semafor).

*Indicates a sponsored link

Together with Brilliant

Unlock your AI potential

We talk a lot about two things here: Large Language Models (LLMs) and the steadily growing adoption of AI technology across global industries and businesses. 

The task of understanding LLMs & the concepts behind them, however, is often a challenging one. But that’s where Brilliant comes in. 

With bite-sized, personalized courses in everything from math to coding to LLMs, Brilliant lets you dive into the world of AI (at your own pace) and develop real, actionable knowledge in each of these critical areas. 

It’s fun, it’s interactive, and, most importantly, it’s easily accessible.

With Brilliant, you won’t get left behind by the AI boom. 

Join 10 million other proactive learners around the world and start your 30-day free trial today. Plus, readers of The Deep View get a special 20% off a premium annual subscription.

Sam Altman says AI will create the need for a new ‘social contract’

Photo by NASA (Unsplash).

Sam Altman, CEO of OpenAI, appeared at the UN’s AI for Good summit on May 30. The roughly 30-minute interview that resulted covered plenty of ground; here are a couple of main points that he discussed. 

The Helen Toner situation: Altman said he “respectfully but very significantly” disagrees with Toner’s recounting of the events that led to Altman’s failed ouster last year. 

  • Toner recently detailed some of the behind-the-scenes events that contributed to the board’s efforts to remove Altman; she said that Altman lied regularly to the board & created a toxic atmosphere at the company. 

  • Altman did not refute any of her points, merely saying several times that he disagrees with her “recollection of events.”

A whole new world: Altman said that he is sure that AI will do “more to help the poorest people than the richest people.” 

  • This is at odds with the concerns of many experts. Dr. Srinivas Mukkamala, an AI authority & Ivanti’s CPO, told me last year that AI will likely create enormous wealth for some groups while leaving “99% of the world’s population” behind. By further widening the gap between skilled and unskilled workers and countries, and by steeply reducing the cost of labor, he believes improperly regulated AI will create “so much inequality” that the problem will become unaddressable. 

Altman added that he expects “some change” will be needed to the social contract “given how powerful we expect this technology to be. I do think the whole structure of society itself will be up for some degree of debate and reconfiguration.”

  • “I always try to think of it as an alien intelligence,” Altman said, referring to the AI that OpenAI is building. 

You can watch the whole interview here (it starts at 8:52 & he says plenty of interesting things). 

My thoughts: We have talked — a lot — about the roadblocks to synthetically replicating human intelligence: we don’t understand the brain, how intelligence works, how consciousness works or how the two relate. So, again, there is no current science that supports the idea that an artificial general/superintelligence is possible, let alone inevitable. 

Many of these terms are inherently intended to hype the capabilities of existing technology (& bring in VC funding). 

Funnily enough, I mentioned hype and alien intelligence in yesterday’s edition; seeing Altman literally refer to AI as an “alien” intelligence is frustrating. I’ll encapsulate why I think so with this note from yesterday’s newsletter:

  • Why semantics in AI matter: As computer scientist Jaron Lanier wrote last year: “The easiest way to mismanage a technology is to misunderstand it.” And inaccurate terminology is the first step on the road to misunderstanding.

Image 1

Which image is real?


Image 2

  • Varolio: AI-powered automation tool for email prioritization and centralized communication.

  • Zocket: A generative AI platform to create and deploy social media ads.

  • VoiceNotes: An AI-powered, multi-modal note-taking assistant.

Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).

*Indicates a sponsored link

SPONSOR THIS NEWSLETTER

The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft and many more.

If you want to share your company or product with fellow AI enthusiasts before we’re fully booked, reserve an ad slot here.

One last thing👇

That's a wrap for now! We hope you enjoyed today’s newsletter :)

What did you think of today's email?


We appreciate your continued support! We'll catch you in the next edition 👋

-Ian Krietzberg, Editor-in-Chief, The Deep View