Leopold Aschenbrenner, a former researcher at OpenAI, published a 165-page essay, "Situational Awareness: The Decade Ahead," laying out his predictions about the future of AI. Aschenbrenner worked on OpenAI's Superalignment team, which aimed to mitigate AI risks, before being fired in April 2024. He argues that AI progress is accelerating rapidly and that models could reach human-level capabilities as early as 2027. He predicts an "intelligence explosion" could then follow, with AI rapidly surpassing human intelligence once it reaches general intelligence. In his view, managing superintelligent AI will be crucial to preventing potentially catastrophic outcomes if it is not properly aligned with human values.
Category: 🗞 News

Transcript
00:00 It's Benzinga, and here's what's on the block.
00:02 Leopold Aschenbrenner, a former researcher at OpenAI, published a 165-page essay discussing
00:08 his predictions about the future of AI.
00:11 Aschenbrenner worked on OpenAI's Superalignment team to help mitigate AI risks before being
00:15 fired in April 2024.
00:18 Aschenbrenner argues that AI progress is accelerating rapidly and models could reach human-level
00:23 capabilities as early as 2027.
00:26 He predicts an intelligence explosion could then occur where AI surpasses human intelligence
00:31 quickly after reaching general intelligence.
00:34 Managing superintelligent AI will be crucial to prevent potentially catastrophic outcomes
00:38 if it is not properly aligned with human values.
00:41 For all things money, visit Benzinga.com.