Moratorium critics find their voice
Meta goes for Scale; Midjourney hit by lawsuit; AI Safety rebranded
Good morning all,
Opposition to a proposed ban on state-level AI regulation continues to grow. Although big tech firms have mostly been silent on the matter (OpenAI, a notable exception, has lobbied publicly for such a moratorium), on June 5th, Anthropic CEO Dario Amodei came out against the legislation in a punchy op-ed for The New York Times. As Amodei put it: “Without a clear plan for a federal response, a moratorium would give us the worst of both worlds — no ability for states to act, and no national policy as a backstop.” (Amodei’s comments are said to have “angered staffers within the Trump administration,” and they certainly distance the company from its quieter peers.)
Amodei is not alone in his thinking, either. Over 260 state lawmakers have asked Congress to remove the moratorium, and although the House didn’t give the ban much notice, the Senate looks like it will prove tougher. As per Punchbowl News, Republican Senators Ron Johnson and Rick Scott recently joined their colleagues Marsha Blackburn and Josh Hawley in opposition. (“It better be out,” said Hawley.) Reporter Diego Areas Munhoz noted on X that this makes its “chances of survival in reconciliation very slim, at best,” though we’ll only say the moratorium is dead if and when it’s actually removed.
In the meantime, we’ve got a whole platter of non-moratorium news for you, including:
Media titans gang up on Midjourney
The AI Safety Institute gets a rebrand
Apple takes AI reasoning down a peg
POLICY
From “Safety” to “Standards.” The Trump administration has rebranded the AI Safety Institute as the Center for AI Standards and Innovation, or CAISI. It remains within NIST, but it’s not totally clear how its role may change. CAISI’s announcement suggests continuity, focusing on activities like evaluating domestic and foreign AI systems, but Commerce Secretary Howard Lutnick struck a more anti-regulatory tone in a statement to the press: “For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards.” (FedScoop)
Midjourney hit by major lawsuit. It’s another front in the AI copyright wars: Disney and Universal have united to sue popular image generator Midjourney. Their lawsuit describes the company as “the quintessential copyright free-rider and a bottomless pit of plagiarism,” and requests a preliminary injunction to stop the alleged infringement, as well as unspecified financial damages. (The Guardian)
23andMeAndWho? Anne Wojcicki, former CEO of 23andMe, is buying back the company she co-founded. 23andMe went bankrupt earlier this year, and the pending sale to biotech firm Regeneron Pharmaceuticals raised bipartisan concern over the security of customers’ DNA data. It’s not immediately clear what happens to that data now, with Wojcicki’s new venture, a nonprofit public benefit corporation called TTAM, taking control of “substantially all of the Company’s assets, including the Personal Genome Service (PGS) and Research Services business lines and the Lemonaid Health business” for $305 million. (The Verge)
INDUSTRY
Meta goes for Scale. Meta is reorganizing its troublesome AI efforts once again. The company is investing $14.3 billion in Scale AI and poaching the startup’s 28-year-old CEO Alexandr Wang for a leadership role in a new “superintelligence” lab. The lab will be staffed by employees from Scale AI as well as talent lured away from rivals like DeepMind. Meta hopes to leapfrog its AI rivals after its most recent open-source language model, Llama 4, underperformed expectations. Scale AI itself started out offering data-labeling services to accelerate AI training and has since diversified into evaluation and alignment. Meta’s stake in Scale AI does not confer majority control, a structure designed to appease regulators currently scrutinizing the company’s alleged anticompetitive behavior. (The New York Times)
Apple leaves its AI behind glass. In contrast with its biggest rivals, Apple did not put AI front and center at its 2025 WWDC conference. Yes, there was a scattering of new AI features (like live translation across messages, calls, and FaceTime, and a new screen-analysis mode for Apple Intelligence), but the focus was very much on a revamped look for its software, dubbed Liquid Glass. In an interview with The Wall Street Journal, Apple execs Craig Federighi and Greg Joswiak defended the company’s track record on AI, saying planned updates to features like Siri had “unacceptable” error rates and didn’t yet meet Apple’s standards. (The Guardian)
OpenAI forced to remember conversations. OpenAI is being forced to store user chats, including deleted conversations, as part of an ongoing lawsuit filed against the company by The New York Times and other plaintiffs. The order will affect free, Pro, Plus, Team, and API chats. OpenAI is appealing the decision, with CEO Sam Altman describing it as “an inappropriate request that sets a bad precedent.” (The Verge)
RESEARCH
“The Illusion of Thinking.” A group of researchers from Apple have published a striking paper probing the capabilities of so-called Large Reasoning Models, or LRMs. It’s generated frothy debate in the AI world, particularly on the question of whether current approaches can get us to AGI. Let’s dive in:
The researchers tested OpenAI’s o1 and o3, DeepSeek’s R1, and Anthropic’s Claude 3.7 Sonnet. These models try to simulate human reasoning by breaking problems down into detailed, step-by-step reasoning, an approach sometimes known as “chain-of-thought” prompting.
The systems were asked to solve classic puzzles like the Tower of Hanoi and the river-crossing puzzle, each of which can be made progressively harder by adding additional steps or components (a short sketch at the end of this item shows how quickly the Tower of Hanoi scales).
When the puzzles became sufficiently complex, even these state-of-the-art LRMs “face a complete accuracy collapse,” the researchers write. In fact, the models were unable to solve the puzzles even when given extra time for “thinking” or handed the solving algorithm directly.
For skeptics of current AI capabilities, the paper offers evidence for a big criticism of AI systems: that they’re unable to generalize outside of their training data. If a “reasoning” model can’t solve a puzzle even when given the solution, they say, how can it be said to be reasoning at all?
But even those who agree with the paper’s broad conclusions wouldn’t argue that it definitively settles the question of AI intelligence. Do puzzles like the Tower of Hanoi really capture what it means to “reason” about the world? And even if LLMs don’t reason like humans, does that mean they’re not useful?
For more, see Ars Technica’s write-up; critical evaluations from Sean Goedecke and Nathan Lambert; and a supportive one from Gary Marcus.
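To make the complexity scaling concrete: the shortest Tower of Hanoi solution for n disks takes 2^n - 1 moves, so the move sequence a model has to produce without a single error roughly doubles with each added disk. Here’s a minimal Python sketch of that scaling (our own illustration, not the paper’s evaluation code):

```python
# Minimal Tower of Hanoi illustration: the optimal solution for n disks
# takes 2**n - 1 moves, so each extra disk roughly doubles the length of
# the move sequence a model must produce without error.

def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the optimal move list for n disks as (disk, from, to) tuples."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # move the top n-1 disks out of the way
    moves.append((n, source, target))            # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # stack the n-1 disks back on top
    return moves

for n in (3, 7, 10, 15):
    print(f"{n} disks -> {len(hanoi(n))} moves (2^{n} - 1 = {2**n - 1})")
```

At 15 disks the optimal solution is already 32,767 moves long, which gives a sense of why accuracy can fall off a cliff as the puzzles grow.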
Building opposition to the moratorium. ARI has launched a full-court press to stop the proposed AI state law moratorium attached to Congress’s budget reconciliation bill. This week, ARI launched a new website at NoAILawBan.org, highlighting opposition to the measure from federal lawmakers, national leaders, and coalitions.
Last week, ARI organized a letter signed by 260 state lawmakers from across the country urging Congress to reject the measure; The Washington Post described it as “the most broad-based opposition yet” to the AI moratorium proposal (The Washington Post). ARI also launched a public petition campaign, helping to gather 25,000 petition signatures against the proposal (Axios).
Sam Altman: “We are past the event horizon” (Sam Altman)
Meta cracks down on nudify apps (The Verge)
How Washington Has Tried to Control China’s Tech (The New York Times)
AI Therapy Bots Are Conducting “Illegal Behavior” (404 Media)
Chinese firms freeze AI during crucial exams (The Washington Post)
Amazon is making an OpenAI movie (The Hollywood Reporter)
Yoshua Bengio launches nonprofit for “safe by design” AI (Time)
Anthropic’s Claude Gov is built for US military and intelligence (The Verge)
Vibe-coding is the new DIY (The Financial Times)
Anthropic CEO says AI could wipe out half of entry-level white-collar jobs (Axios)
Mattel partners with OpenAI for AI toys (The Register)
“Your chatbot friend might be messing with your mind” (The Washington Post)