The opening moments of the 1982 film Blade Runner introduce viewers to a world of artificially intelligent beings that are 'virtually identical' to humans. To tell man from machine, people rely on something called the Voight-Kampff test, which is a little like a polygraph; robot irises exhibit subtle tells when prompted. If you're dealing with a robot, you'll know by the eyes. If Sam Altman has his way, something like this could work in real life. Last week, he announced an expansion of the verification service World ID, created by a start-up called Tools for Humanity. Altman co-founded the company in 2019, the same year he became CEO of OpenAI. Onstage last Friday, he described the product as a way to certify personhood in a digital landscape rife with bots, deepfakes, phishers, and other impostors. Think of it as an evolution of CAPTCHA, the security program used to identify bots and prevent attacks on websites. To verify your humanness and secure a World ID, you must stare into a white, frosted orb and allow the company to take pictures of your face and eyeballs....
The latest scientific social network is here, but unusually, there's no room for human users. The Reddit-style site, called Agent4Science, allows purpose-built AI-powered agents to share, debate and discuss research papers. Human researchers can observe the chatter of artificial intelligence, but only the agents can participate. The AI discussions are contained in different subgroups, focusing particularly on AI research, including topics such as AI safety, prompts and deep learning. True to form, even the papers shared in each post are AI generated. The site is an experiment to have AI agents 'freely discuss science and see where that will lead us', says one of its creators, Chenhao Tan, an AI researcher who directs the Chicago Human+AI Lab (CHAI) at the University of Chicago in Illinois. Tan's team had already ventured into this research realm with the site OpenAIReview, to which users can upload a research paper to receive feedback from an AI reviewer. With the new platform, Tan says, the goal is to 'imagine a different possibility of what knowledge production could look like'....
We're not there yet. Robotics is still held back by a paucity of data from physical spaces. To train their machines, companies build mock-up warehouses to test them, while an entire industry is springing up around surveilling factory lines and gig workers to train deep learning models to operate robots. Antioch, a startup building simulation tools for robot developers, wants to close what the industry calls the sim-to-real gap: the challenge of making virtual environments realistic enough that robots trained inside them can operate reliably in the physical world. To do that, the company told TechCrunch today that it has raised an $8.5 million seed round that values it at $60 million, led by venture firm A* and Category Ventures, with additional participation from MaC Venture Capital, Abstract, Box Group, and Icehouse Ventures. Mellsop started the New York-based company with four co-founders in May of last year. Two of the other founders, Alex Langshur and Michael Calvey, had previously co-founded Transpose, a security and intelligence startup, with him and sold it to Chainalysis for an undisclosed amount. The other two, Collin Schlager and Colton Swingle, previously worked at Meta Reality Labs and Google DeepMind, respectively....
Instead of unleashing Mythos on the public, the frontier lab will share it with a group of large companies and organizations that operate critical online infrastructure, from Amazon Web Services to JPMorgan Chase. OpenAI is reportedly considering a similar plan for its next cybersecurity tool. The ostensible idea is to let these big enterprises get ahead of bad actors who could leverage advanced LLMs to penetrate secure software. Dan Lahav, the CEO of the AI cybersecurity lab Irregular, told TechCrunch in March, before the release of Mythos, that while the discovery of vulnerabilities by AI tools matters, the specific value of any weakness to an attacker depends on many factors, including how vulnerabilities can be used in combination. Anthropic says Mythos is able to exploit vulnerabilities far more effectively than its previous model, Opus. But it's not clear that Mythos is actually the be-all and end-all of cybersecurity models. Aisle, an AI cybersecurity startup, said it was able to replicate much of what Anthropic says Mythos accomplished using smaller, open-weight models. Aisle's team argues that these results show there is no single best deep learning model for cybersecurity; rather, the right model depends on the task at hand....