BlueDot Impact

AI Safety Fundamentals: Alignment

Explore valuable resources from the AI Safety Fundamentals course, which offers insights and knowledge on alignment strategies for artificial intelligence.


Recent Episodes

Jan 2, 2025 • We Need a Science of Evals • 20 mins

Jan 2, 2025 • Introduction to Mechanistic Interpretability • 12 mins

Jul 19, 2024 • Illustrating Reinforcement Learning from Human Feedback (RLHF) • S3 E2 • 23 mins

Jul 19, 2024 • Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback • S3 E4 • 32 mins

Jul 19, 2024 • Constitutional AI: Harmlessness from AI Feedback • S3 E2 • 62 mins

Language: English
Country: Vietnam