In a small room in San Diego last week, a man in a black leather jacket explained to me how to save the world from destruction by AI. Max Tegmark, a notable figure in the AI-safety movement, believes that "artificial general intelligence," or AGI, could precipitate the end of human life. I was in town for NeurIPS, one of the largest AI-research conferences, and Tegmark had invited me, along with five other journalists, to a briefing on an AI-safety index that he would release the next day. No company scored better than a C+. The threat of technological superintelligence is the stuff of science fiction, yet it has become a topic of serious discussion in the past few years. Despite the lack of a clear definition (even OpenAI's CEO, Sam Altman, has called AGI a "weakly defined term"), the idea that powerful AI contains an inherent threat to humanity has gained acceptance among respected cultural critics. Granted, generative AI is a powerful technology that has already had a massive impact...