For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as soon as 2–10 years. This podcast is solely about the threat of human extinction from AGI. We'll name and meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.
Recent Episodes
Dec 5, 2024
AI Risk Special | "Near Midnight in Suicide City" | Episode #55
E55 • 92 mins
Nov 25, 2024
Connor Leahy Interview | Helping People Understand AI Risk | Episode #54
E54 • 145 mins
Nov 19, 2024
Human Augmentation Incoming | The Coming Age Of Humachines | Episode #53
E53 • 102 mins
Nov 19, 2024
AI Risk Update | One Year of For Humanity | Episode #52
E52 • 78 mins
Oct 23, 2024
AI Risk Funding | Big Tech vs. Small Safety | Episode #51
E51 • 66 mins