When the computer game Doom was released in 1993, its utility for science wasn't immediately clear. Since then, the first-person shooter has been used in many studies, from helping to improve artificial-intelligence models1 to investigating the effects of video games on memory and aggression2. It has also spawned a subculture in which fans and developers, including scientists, try to run the game on different devices, from calculators to digital pregnancy tests. Last month, scientists in Australia reported that they had taught neurons grown on a silicon chip to play the game. The phrases 'Can it run Doom?' and 'It runs Doom' have become a popular Internet meme. Alon Loeffler, a synthetic-biological-intelligence scientist who was part of the team at biotechnology company Cortical Labs in Melbourne, Australia, which trained the neurons, says the team chose Doom because of the meme. He and his colleagues first taught neurons to play the classic video game Pong in 2021. Doom, with its more complex environment, was a natural next step, he says, because 'the Internet always asks, "Can it play Doom?"'...
MIT neuroscientists have figured out how the brain is able to focus on a single voice among a cacophony of many voices, shedding light on a longstanding question in neuroscience known as the cocktail party problem. This attentional focus becomes necessary when you're in any crowded environment, such as a cocktail party, with many conversations going on at once. Somehow, your brain is able to follow the voice of the person you're talking to, despite all the other voices that you're hearing in the background. Using a computational model of the auditory system, the MIT team found that amplifying the activity of the neural processing units that respond to features of a target voice, such as its pitch, allows that voice to be boosted to the forefront of attention. 'That simple motif is enough to cause much of the phenotype of human auditory attention to emerge, and the model ends up reproducing a very wide range of human attentional behaviors for sound,' says Josh McDermott, a professor of brain and cognitive sciences at MIT, a member of MIT's McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study....
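The gain-amplification motif described above can be illustrated with a toy sketch (this is a simplified illustration, not the MIT team's model): a bank of pitch-tuned units hears a mixture of two voices, and boosting the gain of the units tuned to the target voice's pitch makes the population response resemble the target more than the mixture did. All names and parameter values here are invented for the example.

```python
import numpy as np

n_channels = 40                                # pitch-tuned "neural units"
pitches = np.linspace(100, 300, n_channels)    # each unit's preferred pitch (Hz)

def voice_response(pitch_hz, bandwidth=15.0):
    """Activity across the channel bank for a voice at a given pitch."""
    return np.exp(-0.5 * ((pitches - pitch_hz) / bandwidth) ** 2)

target = voice_response(150.0)       # voice we want to attend to
distractor = voice_response(250.0)   # competing voice at the party
mixture = target + distractor        # what the ear actually receives

# Attentional gain: amplify the units that respond to the target's
# features (here, its pitch); leave all other units at baseline gain 1.
gain = 1.0 + 2.0 * voice_response(150.0)
attended = gain * mixture

def corr(a, b):
    """Pearson correlation between two activity patterns."""
    return float(np.corrcoef(a, b)[0, 1])

before = corr(mixture, target)
after = corr(attended, target)
print(before, after)  # attention raises the match to the target voice
```

Under this simple gain rule, the attended response correlates more strongly with the target voice than the raw mixture does, which is the sense in which the target is "boosted to the forefront".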
Busy week, big checks, lots of AI and robotics. That, in ultra-brief synopsis form, characterized the general startup fundraising environment this week. Notably, the two largest global rounds were U.K.-based Nscale and Paris-based Advanced Machine Intelligence, which raised $2 billion and $1.03 billion, respectively.

1. (tied) Quince, $500M, e-commerce: Quince, an online fashion and home goods retailer with an affordable-luxury theme, said it secured $500 million in Series E financing led by Iconiq Capital. The round sets a $10.1 billion post-money valuation for the 8-year-old, San Francisco-based company.

1. (tied) Nexthop AI, $500M, AI infrastructure: AI networking startup Nexthop AI raised $500 million in Series B funding led by Lightspeed Venture Partners, with Andreessen Horowitz joining as a major investor alongside other backers. The Santa Clara, California-based company develops switching technology built on open-source operating systems for AI and cloud networking.

1. (tied) Mind Robotics, $500M, robotics: Rivian spin-out Mind Robotics closed on a $500 million Series A round, co-led by Accel and Andreessen Horowitz. The Palo Alto, California-based company is developing an AI-enabled industrial robotics platform, with a focus on automating industrial and manufacturing tasks at scale....
For years, discussions about frontier AI models revolved around a familiar set of architectural questions. How many parameters does the model have? How many layers? Is it mixture-of-experts? What attention tricks were introduced? These questions still matter, but with GPT-5.4 something subtle has changed. The most interesting architectural innovations are no longer happening strictly inside the transformer. They are happening around it. GPT-5.4 represents a shift from a model-centric architecture to a system-centric architecture. The neural network is still the core intelligence, but it increasingly functions as the cognitive engine inside a much larger execution environment. Reasoning, memory management, tool usage, multimodal perception, and agentic behavior are now tightly integrated into the model's operational stack. The result is a system that looks less like a chatbot and more like a general-purpose cognitive runtime....
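The "cognitive runtime" idea can be sketched in a few lines (a hypothetical toy, not OpenAI's implementation): the model is just one component inside a loop that also owns memory and tool dispatch, so a single user turn may route through a tool before an answer comes back. The `Runtime` class, its stub `model`, and the `add` tool are all invented for this illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Runtime:
    """Toy execution environment wrapped around a stand-in 'model'."""
    memory: list = field(default_factory=list)
    tools: dict = field(default_factory=dict)

    def model(self, prompt):
        # Stand-in for the neural network: it either requests a tool
        # call or emits an answer directly (here, a trivial transform).
        if "add" in prompt and "add" in self.tools:
            return ("tool", "add", (2, 3))
        return ("answer", prompt.upper())

    def step(self, user_input):
        # The runtime, not the model, records memory and runs tools.
        self.memory.append(("user", user_input))
        kind, *rest = self.model(user_input)
        if kind == "tool":
            name, args = rest
            result = self.tools[name](*args)
            self.memory.append(("tool", name, result))
            return f"{name}{args} = {result}"
        self.memory.append(("model", rest[0]))
        return rest[0]

rt = Runtime(tools={"add": lambda a, b: a + b})
print(rt.step("please add these numbers"))  # prints "add(2, 3) = 5"
print(rt.step("hello"))                     # prints "HELLO"
```

The point of the sketch is structural: intelligence lives in `model`, but memory, tool execution, and control flow live in the surrounding runtime, which is the shift the paragraph above describes.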