Explore valuable resources from the AI Safety Fundamentals course, offering insights and knowledge on alignment strategies for artificial intelligence.
We Need a Science of Evals
20 mins • Jan 2, 2025
Recent Episodes
- Jan 2, 2025 · We Need a Science of Evals · 20 mins
- Jan 2, 2025 · Introduction to Mechanistic Interpretability · 12 mins
- Jul 19, 2024 · Illustrating Reinforcement Learning from Human Feedback (RLHF) · S3 E2 · 23 mins
- Jul 19, 2024 · Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback · S3 E4 · 32 mins
- Jul 19, 2024 · Constitutional AI: Harmlessness from AI Feedback · S3 E2 · 62 mins
Language: English
Country: United Kingdom