Video gamers were among the first to grumble when supplies of random access memory (RAM) chips began to run short last year, causing prices to soar. But the ongoing crisis, which has been dubbed RAMmageddon and is expected to linger well into 2027, is affecting some scientists as well. The shortage is driven by the rise of artificial-intelligence systems, which has created a voracious demand for high-speed memory chips. Over the course of 2025, some forms of RAM tripled in price, causing problems for resource-constrained laboratories that already faced barriers to accessing powerful computing tools. The shortage is also pushing researchers to develop more efficient algorithms and hardware, to reduce the amount of memory needed. 'Scientific research increasingly relies on large-scale computing infrastructure,' says Matteo Rinaldi, director of the Institute for NanoSystems Innovation at Northeastern University in Boston, Massachusetts. 'And many of these workloads require substantial memory capacity.'...
Advanced Machine Intelligence, a startup co-founded by computer science pioneer and former Meta AI chief Yann LeCun, said Tuesday that it has raised $1.03 billion to develop 'world models,' or AI designed to learn from and interact with the physical world. The funding for Paris-based AMI represents the largest seed round ever for a European startup and one of the region's largest funding rounds for an AI startup overall, per Crunchbase data. Bezos Expeditions, Cathay Innovation, Greycroft, Hiro Capital and HV Capital led the funding, which reportedly values AMI at $3.5 billion. 'My prediction is that 'world models' will be the next buzzword,' AMI Labs CEO Alexandre LeBrun told TechCrunch after the funding. 'In six months, every company will call itself a world model to raise funding.' His co-founder LeCun is considered one of the pioneers of the deep learning approach to AI. LeCun was one of the computer scientists who received the industry's prestigious 2018 A.M. Turing Award for his work on neural networks and learning algorithms....
Let me tell you something that took me an embarrassingly long time to truly internalize: the reason deep learning works as well as it does in 2026 is only maybe 40% algorithms. The rest is hardware. We got lucky, spectacularly lucky, that the GPU, a chip originally designed to make triangles pretty in Quake III, turned out to be almost exactly the right computational substrate for training neural networks. But 'almost exactly right' is doing a lot of heavy lifting in that sentence. The story of AI chips is the story of closing that gap, and it's one of the most fascinating engineering stories of our time....
Many engineering challenges come down to the same headache: too many knobs to turn and too few chances to test them. Whether tuning a power grid or designing a safer vehicle, each evaluation can be costly, and there may be hundreds of variables that could matter. Consider car safety design. Engineers must integrate thousands of parts, and many design choices can affect how a vehicle performs in a collision. Classic optimization tools can struggle to search for the best combination. MIT researchers developed a new approach that rethinks how a classic method, known as Bayesian optimization, can be used to solve problems with hundreds of variables. In tests on realistic engineering-style benchmarks, such as power-system optimization, the approach found top solutions 10 to 100 times faster than widely used methods. Their technique leverages a foundation model trained on tabular data that automatically identifies the variables that matter most for improving performance, repeating the process to home in on better and better solutions. Foundation models are huge artificial intelligence systems trained on vast, general datasets, which allows them to adapt to different applications....
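The loop described above, identify the few variables that matter and then concentrate the search on them, can be sketched in plain Python. This is a generic illustration, not the MIT method: the `importance` heuristic below (covariance between each variable and the objective) stands in for the tabular foundation model, and `objective` is a made-up test function in which only 2 of 50 variables meaningfully affect the result.

```python
import random

DIM = 50
random.seed(0)

# Toy objective: 50 knobs, but only x[0] and x[1] really matter.
def objective(x):
    return (x[0] - 0.3) ** 2 + (x[1] + 0.7) ** 2 + 0.001 * sum(v * v for v in x[2:])

# Step 1: evaluate a batch of random designs.
samples = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(200)]
scores = [objective(x) for x in samples]

def importance(i):
    # Crude variable-importance score: |covariance| between variable i
    # and the objective across the sampled designs.
    mx = sum(x[i] for x in samples) / len(samples)
    ms = sum(scores) / len(scores)
    return abs(sum((x[i] - mx) * (s - ms) for x, s in zip(samples, scores)))

# Step 2: keep only the most important variables.
top = sorted(range(DIM), key=importance, reverse=True)[:2]

# Step 3: refine with local random search over just those variables,
# leaving the unimportant ones fixed at the best design found so far.
best = min(samples, key=objective)
for _ in range(500):
    cand = list(best)
    for i in top:
        cand[i] += random.gauss(0, 0.1)
    if objective(cand) < objective(best):
        best = cand
```

Because the search is effectively two-dimensional after the screening step, the refinement converges quickly toward the optimum at x[0] = 0.3, x[1] = -0.7, which is the intuition behind why identifying important variables first can beat optimizing all hundreds of knobs at once.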