Hey Cogs,
Today we’ll discuss:
- Why we started The Cognitive Revolution and what to expect
- The pace of AI, and whether we should worry it’s moving too fast
- The downside case according to OpenAI’s Sam Altman
The Cognitive Revolution
Humanity has recently invented machines that can reason and communicate, forever changing the nature of thought. The consequences are hard to predict but impossible to overstate.
From a macro-historical perspective, the biggest “events” in human history were the Agricultural Revolution, Industrial Revolution, and most recently the Digital Revolution. We are now entering a period of similar importance: The Cognitive Revolution.
The Cognitive Revolution is about talking to entrepreneurs, developers, and researchers to better understand AI technology itself, and to sharpen our sense of where the world is going. Our guests are already building the products and experiences that will soon become universal.
AI is deep – philosophically, technically, practically – and its impact deserves attention. If you’re new to AI, take heart: the field’s recent intellectual history spans only about five years. You can jump in, rush straight to the front, and understand what’s happening now.
Our goal is to help listeners see around the corner to what the world will look like in the next few years as the latest AI technologies mature and reach mass audiences.
We just released our second episode, in which hosts Erik Torenberg and Nathan Labenz map out scenarios for our near future across a number of different industries and wrestle with the competing opinions (doomsday vs. utopian) in the space.
Listen on Apple Podcasts | Spotify | Google Podcasts or YouTube.
Is AI Moving Too Fast?
OpenAI is moving really fast – although with "admirable restraint" – and we still don't know what new dynamics will emerge from this AI ecology.
AI advancements are accelerating at an unprecedented rate (see below), and it's still unclear how it will all unfold. We are optimistic about the transformative potential of AI and its ability to shape a brighter future for all, but that optimism is not without its concerns.
It won't just be OpenAI or Google or Microsoft; we're going to see a convergence of all different types of AI arriving simultaneously. And once the genie is out of the bottle, it's very hard to put it back in...
Thank you, Omneky, for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data to generate personalized experiences at scale.
Lights Out, Game Over
During the interview, Nathan comments on one of Sam Altman’s interviews where he stated that the potential downside for artificial intelligence is “lights out for all of us” – meaning it could lead to the end of humanity.
Sam’s statement is particularly alarming because, historically, even the development of nuclear weapons left open the possibility that some people would survive. In other words, the potential consequences of AI development may be more severe than those of any other catastrophic event in human history.
Nathan goes on to draw a comparison between the potential risks of AI development and the historical extinction of megafauna. He suggests that the extinction of megafauna was caused by the arrival of humans, who acted as an invasive species and wiped out much of the existing fauna.
His comparison to the extinction of megafauna raises a number of questions about the potential implications of AI development. One question that arises is whether AI could be considered an invasive species. The invasive species concept refers to the introduction of non-native species to an ecosystem, which often leads to negative consequences for the native species. In the case of AI, the introduction of a new type of intelligence to the world could have negative consequences for humanity and potentially other species.
Altman acknowledges that the potential risks of AI development are unprecedented, though it's not a given that a new form of intelligence would necessarily do bad things to humans. Still, it's reassuring to know that someone in Sam’s position takes the potential risks seriously and gives them the proper attention.
We should heed his statement as a warning about the potential risks of AI development and recognize the need for careful consideration of this emerging technology's implications.
Meaningful Links:
- Sam Altman's Interview (“lights out for all of us”)
- AI as a Positive/Negative Factor in Global Risk
- Mitch Albom on ChatGPT
Notable Quotes from the Interview:
- “I think with AI we are going to be able to delegate more and more cognitive work to these new systems. And ultimately that will reshape not just how we work but how we live in general.”
- “AI changed my daily workflow, and what I have experienced is really just a preview of what's to come for broader society.”
- “The deeper I go, and the more I think about it, the more I firm up my conclusion: AI is the real deal. This technology is going to change just about everything.”
- “Is this like the rise of computers, mobile phones, even electricity? I think that it’s the best of those three, but I think the shift is actually even more profound than the people who are paying a lot of attention to it think.”
- “This is a new kind of intelligence, a kind of alien intelligence that is very different from us. We should not think of it as being like us at all.”
- “It would historically take somebody five years to determine the 3D structure of a protein from its DNA code. Now it takes a couple of minutes to come up with a pretty high-confidence structure for a protein. AI is going to usher in a whole revolution in biology.”
- “There's going to be a lot of activities that can be pulled apart. Atomized in various ways. Broken down into subparts. And the parts that are readily delegated to an AI will be.”
- “Sam Altman recently said in an interview that the downside case for AI is lights out for all of us.”
- “And they got much more scared, doomerish, and passive. We need to slow down, we need to be careful. And [Sam] said that this is a microcosm of what's happening in society in general — in that we've kind of lost faith in technology and are much more cautious and conservative and less ambitious than we used to be.”
- “I'm kind of expecting a whole new AI ecology where we’re not going to have one powerful thing from an OpenAI or a Google that's going to change the world, but it's going to be a convergence of all different types of AI coming out simultaneously.”
On the next episode of The Cognitive Revolution, we have Replika AI founder Eugenia Kuyda. The episode drops tomorrow, Friday, February 17th.
If you want to open a few browser tabs in advance of our next episode to prepare:
- Read about how one of Replika’s users is dating a chatbot
- The banning of sexting chatbots
- Are AI chatbots the therapists of the future?
Until next time.