AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
Recent Episodes
- 38.3 - Erik Jenner on Learned Look-Ahead (Dec 12, 2024; 24 mins)
- 39 - Evan Hubinger on Model Organisms of Misalignment (Dec 1, 2024; 106 mins)
- 38.2 - Jesse Hoogland on Singular Learning Theory (Nov 27, 2024; 18 mins)
- 38.1 - Alan Chan on Agent Infrastructure (Nov 16, 2024; 25 mins)
- 38.0 - Zhijing Jin on LLMs, Causality, and Multi-Agent Systems (Nov 14, 2024; 23 mins)