Tonight, at Playground Global in Palo Alto, some very smart people who are building things you don't understand yet will explain what's coming. This is the final StrictlyVC event of 2025, and truly, the lineup is ridiculous. The series has traveled around the globe under the auspices of TechCrunch: Steve Case rented a theater in Washington, D.C.; we talked to Greece's prime minister in Athens; and Kirsten Green hosted us at the Presidio in San Francisco. The concept is always the same, though: Bring together people who are working on genuinely important developments in a smaller setting, before everyone else figures out they're important. One of our favorite moments was when, in 2019, Sam Altman told a StrictlyVC crowd that OpenAI's monetization strategy was basically 'build AGI, then ask it how to make money.' Everyone laughed. He wasn't joking. This time, we've got Nicholas Kelez, a particle accelerator physicist who spent 20 years at the Department of Energy building things that shouldn't be possible. Now he's tackling semiconductor manufacturing's biggest problem: Every advanced chip depends on $400 million machines that use lasers only one Dutch company knows how to make. (More galling to some: Americans invented the technology, then sold it to Europe.) Kelez is building the next generation in America using particle accelerator tech. It's as nerdy as it sounds but also exceedingly important in this moment. There is also growing competition chasing after the same prize....
On Monday, Anthropic announced Opus 4.5, the latest version of its flagship model. It's the last of Anthropic's 4.5 series of models to be released, following the launch of Sonnet 4.5 in September and Haiku 4.5 in October. As expected, the new version of Opus posts state-of-the-art performance on a range of benchmarks, including coding (SWE-bench and Terminal-Bench), tool use (tau2-bench and MCP Atlas), and general problem solving (ARC-AGI 2 and GPQA Diamond). Anthropic also emphasized Opus 4.5's computer use and spreadsheet capabilities, and launched a number of parallel products to showcase how the model holds up in those settings. Together with Opus 4.5, Anthropic will make its Claude for Chrome and Claude for Excel products, previously in pilot, more broadly available. The Chrome extension will be available to all Max users, while the Excel product will be available to Max, Team, and Enterprise users. 'There are improvements we made on general long context quality in training with Opus 4.5, but context windows are not going to be sufficient by themselves,' Dianne Na Penn, Anthropic's head of product management for research, told TechCrunch. 'Knowing the right details to remember is really important in complement to just having a longer context window.'...
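Penn's point, that choosing which details to carry forward matters as much as raw context length, maps onto a familiar agent-design pattern. Below is a minimal, hypothetical sketch of that pattern, not Anthropic's implementation; every name in it (SelectiveMemoryAgent, the salience rule, the toy limits) is invented for illustration.

```python
# Toy illustration of "remember the right details" vs. "longer context window".
# Salient facts are stored durably, while ordinary messages scroll out of a
# small, bounded window. All names and rules here are hypothetical.
from collections import deque

CONTEXT_LIMIT = 5  # max recent messages kept verbatim (toy number)

class SelectiveMemoryAgent:
    def __init__(self):
        self.recent = deque(maxlen=CONTEXT_LIMIT)  # bounded "context window"
        self.memory = []                           # durable salient details

    def observe(self, message: str) -> None:
        self.recent.append(message)
        # Toy salience rule: keep anything the user explicitly flags.
        if message.lower().startswith("remember:"):
            self.memory.append(message.split(":", 1)[1].strip())

    def build_prompt(self, question: str) -> str:
        # Salient facts survive even after they scroll out of the window.
        facts = "\n".join(f"- {m}" for m in self.memory)
        window = "\n".join(self.recent)
        return f"Known facts:\n{facts}\n\nRecent messages:\n{window}\n\nQ: {question}"

agent = SelectiveMemoryAgent()
agent.observe("remember: the deploy target is us-east-1")
for i in range(10):
    agent.observe(f"chatter {i}")  # pushes older messages out of the window
print(agent.build_prompt("Where do we deploy?"))
```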
A small-scale artificial-intelligence model that learns from only a limited pool of data is exciting researchers for its potential to boost reasoning abilities. The model, known as the Tiny Recursive Model (TRM), outperformed some of the world's best large language models (LLMs) at the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI), a test of visual logic puzzles designed to flummox most machines. The model, detailed in a preprint on the arXiv server last month [1], is not readily comparable to an LLM. It is highly specialized, excelling only at the types of logic puzzle it is trained on, such as sudokus and mazes, and it doesn't 'understand' or generate language. But its ability to perform so well with so few resources (it is 10,000 times smaller than frontier LLMs) suggests a possible route to boosting this capability more widely in AI, researchers say. 'It's fascinating research into other forms of reasoning that one day might get used in LLMs,' says Cong Lu, a machine-learning researcher formerly at the University of British Columbia in Vancouver, Canada. However, he cautions that the techniques might no longer be as effective if applied on a much larger scale. 'Often techniques work very well at small model sizes and then just stop working' at bigger sizes, he says....
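For readers curious what 'tiny and recursive' looks like in practice, here is a minimal sketch in the spirit of TRM's recursive refinement: one small network is applied repeatedly, updating a latent scratchpad and a candidate answer at each step, so effective depth comes from iteration rather than parameter count. This is an illustrative toy, not the paper's architecture; the names, sizes, and update schedule are all assumptions.

```python
# Toy recursive-refinement loop in the spirit of TRM; not the paper's code.
import torch
import torch.nn as nn

class TinyRecursiveNet(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        # One tiny network is reused at every recursion step, so depth
        # comes from iteration rather than from stacking more parameters.
        self.core = nn.Sequential(
            nn.Linear(3 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )
        self.answer_head = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, n_steps: int = 6) -> torch.Tensor:
        z = torch.zeros_like(x)  # latent "scratchpad" state
        y = torch.zeros_like(x)  # current answer embedding
        for _ in range(n_steps):
            # Refine the latent state from the input, answer, and prior state.
            z = self.core(torch.cat([x, y, z], dim=-1))
            # Revise the candidate answer from the refined state.
            y = self.answer_head(torch.cat([y, z], dim=-1))
        return y

model = TinyRecursiveNet()
x = torch.randn(8, 64)   # a batch of toy puzzle embeddings
print(model(x).shape)    # torch.Size([8, 64])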