Twitter: @MoZ_Podcast | YT: @MomentofZenPodcast
March 11, 2023

E12: Effective Accelerationism and the AI Safety Debate with Bayeslord, Beff Jezoz, and Nathan Labenz

Anonymous founders of the Effective Accelerationism (e/acc) movement @Bayeslord and Beff Jezoz (@BasedBeff) join Erik Torenberg, Dan Romero, and Nathan Labenz to debate views on AI safety. We record our interviews with Riverside. Go to https://bit.ly/Riverside_MoZ and use code ZEN for 20% off.

(3:00) Intro to effective accelerationism

(8:00) Differences between effective accelerationism and effective altruism

(23:00) Effective accelerationism is bottom-up

(42:00) Transhumanism

(46:00) “Equanimity amidst the singularity”

(48:30) Why AI safety is the wrong frame

(56:00) Pushing back against effective accelerationism

(1:06:00) The case for AI safety

(1:24:00) Upgrading civilizational infrastructure

(1:33:00) Effective accelerationism is anti-fragile

(1:39:00) Will we botch AI like we botched nuclear?

(1:46:00) Hidden costs of emphasizing downsides

(2:00:00) Are we in the same position as Neanderthals, before humans?

(2:09:00) “Doomerism has an unpriced opportunity cost of upside”

More show notes and reading material are available on our Substack: https://momentofzen.substack.com/

Thank you to Secureframe for sponsoring (use code "Moment of Zen" for a 20% discount) and to Graham Bessellieu for production.
