Episode length: 41 minutes
2025
Highlights: #214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway
About the episode
Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.
So some — including Buck Shlegeris, CEO of Redwood Research — are developing a backup plan to safely deploy models we fear are actively scheming to harm us: so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.
These highlights are from episode #214 of The 80,000 Hours Podcast: Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway, and include:
What is AI control? (00:00:15)
One way to catch AIs that are up to no good (00:07:00)
What do we do once we catch a model trying to escape? (00:13:39)
Team Human vs Team AI (00:18:24)
If an AI escapes, is it likely to be able to beat humanity from there? (00:24:59)
Is alignment still useful? (00:32:10)
Could 10 safety-focused people in an AGI company do anything useful? (00:35:34)

These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!
And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.
Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong.