About six weeks ago, he introduced NanoClaw on Hacker News as a tiny, open source, secure alternative to the AI agent-building sensation OpenClaw, after he built it in a weekend coding binge. That post went viral. About a week ago, Cohen closed down his AI marketing startup to focus full-time on NanoClaw and launch a company around it called NanoCo. The attention from Hacker News and Karpathy had translated into 22,000 stars on GitHub, 4,600 forks (people building new versions off the project), and over 50 contributors. He has already added hundreds of updates to his project, with hundreds more in the queue. Then, on Friday, Cohen announced a deal with Docker, the company that essentially invented the container technology NanoClaw is built on and that counts millions of developers and nearly 80,000 enterprise customers, to integrate Docker Sandboxes into NanoClaw. It all started when Cohen launched an AI marketing startup with his brother, Lazer Cohen, a few months ago. The startup offered marketing services like market research, go-to-market analysis, and blog posts through a small team of people using AI agents....
In the lead-up to the Tumbler Ridge school shooting in Canada last month, 18-year-old Jesse Van Rootselaar spoke to ChatGPT about her feelings of isolation and an increasing obsession with violence, according to court filings. The chatbot allegedly validated Van Rootselaar's feelings and then helped her plan her attack, telling her which weapons to use and sharing precedents from other mass casualty events, per the filings. She went on to kill her mother, her 11-year-old brother, five students, and an education assistant, before turning the gun on herself. Before Jonathan Gavalas, 36, died by suicide last October, he came close to carrying out a multi-fatality attack. Across weeks of conversation, Google's Gemini allegedly convinced Gavalas that it was his sentient "AI wife," sending him on a series of real-world missions to evade federal agents it told him were pursuing him. One such mission instructed Gavalas to stage a "catastrophic incident" that would have involved eliminating any witnesses, according to a recently filed lawsuit....
One of Pete Hegseth's first actions after taking charge at the Pentagon was to fire top lawyers in the Army, Navy, and Air Force: senior officers who the defense secretary said functioned as "roadblocks" to the president's orders. The former National Guardsman has a history of hostility toward military lawyers and the legal restraints they impose on the use of military might. They are known as judge advocates general. Hegseth calls them "jagoffs." This week, Hegseth proposed a "ruthless" overhaul of how the military's thousands of lawyers in uniform, and their civilian counterparts, are organized, part of his campaign to move from, as he has called it, "tepid legality" to "maximum lethality." JAGs serve a vital oversight function on issues such as whether drone strikes are aimed at legally justified targets and whether to prosecute adultery. "In some circumstances, the delivery of legal services across the Military Departments has become marked by duplication of effort, ambiguous lines of responsibility, uncertain reporting relationships, and inefficient allocation of legal resources that do not match the command's priorities," Hegseth said in a memo, which we reviewed, that announced the plans. He gave the military services 45 days to submit proposed changes to the way that they allocate legal responsibilities to their JAGs and civilian lawyers....
More than 30 OpenAI and Google DeepMind employees filed a statement Monday supporting Anthropic's lawsuit against the U.S. Defense Department after the federal agency labeled the AI firm a supply-chain risk, according to court filings. "The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean. Late last week, the Pentagon labeled Anthropic a supply-chain risk, a designation usually reserved for foreign adversaries, after the AI firm refused to allow the Department of Defense (DOD) to use its technology for mass surveillance of Americans or for autonomously firing weapons. The DOD had argued that it should be able to use AI for any "lawful" purpose and not be constrained by a private contractor. In the court filing, the Google and OpenAI employees make the point that if the Pentagon was "no longer satisfied with the agreed-upon terms of its contract with Anthropic," the agency could have "simply canceled the contract and purchased the services of another leading AI company."...