Resources from the AI Safety Fundamentals course, offering insights on alignment strategies for artificial intelligence.
Charts
- #150 (new)
- #93 (new)
- #178 (no change)
Recent Episodes

Jan 2, 2025
We Need a Science of Evals
20 mins

Jan 2, 2025
Introduction to Mechanistic Interpretability
12 mins

Jul 19, 2024
Illustrating Reinforcement Learning from Human Feedback (RLHF)
S3 E2 • 23 mins

Jul 19, 2024
Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
S3 E4 • 32 mins

Jul 19, 2024
Constitutional AI: Harmlessness from AI Feedback
S3 E2 • 62 mins

Language
English
Country
United Kingdom