The moratorium flops
The Senate decides; AI copyright wars; and ChatGPT's "cognitive debt"
Happy Tuesday folks,
The votes are in, and in the early hours of this morning, the AI state law moratorium went down in flames as an overwhelming majority in the Senate passed an amendment stripping the provision from the budget bill. It was a decisive end to what has been a chaotic few days.
Late last week, opposition mounted against the moratorium, which would have required states to freeze any AI regulation or lose access to federal broadband funding. Then, over the weekend, Sens. Cruz and Blackburn struck a tentative deal tweaking its language and reducing the freeze from 10 years to five. Late Monday, though, Sen. Blackburn withdrew her support, following an outpouring of opposition from children’s groups and conservative leaders, and instead championed an amendment that would strike the AI law moratorium from the bill altogether. This amendment proved a winner, sailing through by a vote of 99 to 1.
At the time of publication for this edition of The Output, the fate of the Big, Beautiful Bill itself still hangs in the balance, but, after such a significant defeat, the moratorium is widely viewed as out of the picture and unlikely to be revived.
However things fall, we’ll have all the reaction in our next edition of the newsletter, including any next steps Sen. Cruz might take on a promised standalone moratorium bill. But keep reading for the rest of what the AI world has to offer, including:
The latest updates on the AI copyright wars
Meta shares intimate chatbot conversations
And is ChatGPT making you more stupider?
POLICY
AI labs win on fair use. Two significant lawsuits in the AI copyright wars were decided last week. In the most important, district judge William Alsup found that Anthropic acted lawfully under “fair use” when it trained its systems on millions of copyrighted books. In a parallel case, district judge Vince Chhabria ruled for Meta against a group of authors, finding that the plaintiffs hadn’t sufficiently shown that AI-generated content would cause them “market harm.” The tech firms didn’t entirely escape censure: Alsup ruled that Anthropic had pirated seven million books, requiring a separate trial to assess damages, and Chhabria stressed that Meta prevailed only because these particular authors failed to prove market harm. Still, both judgments strengthen AI firms’ position against copyright owners. (Wired: 1, 2)
New York’s AI safety battle. In other regulatory battles, earlier this month the New York legislature passed the RAISE Act (Responsible AI Safety and Education), which would require companies developing frontier AI models to meet certain safety requirements. The bill has yet to be signed into law and is the subject of some last-minute lobbying. The Chamber of Progress claims RAISE will “saddle small AI developers with excessive compliance costs and new regulatory uncertainty,” while Encode counters that the bill applies only to the biggest AI labs and requires testing for a “narrowly-defined set of highly severe risks.” With the law poised to enact a first-of-its-kind AI safety framework and the tech industry lobbying hard against it, all eyes are now on Governor Kathy Hochul to see whether she signs the bill into law.
G7 retreats on AI governance. The 51st G7 summit was held in Alberta, Canada this month, but AI regulation was not high on the agenda. The G7 previously pushed global coordination on AI (as with 2023’s Hiroshima Process) but this year’s meeting followed a trend set by the Trump administration and Paris AI Action Summit: economic growth mustn’t be stymied by regulation. The Leaders’ Statement on AI does mention the need for “secure, responsible, and trustworthy AI” but leans more heavily on delivering “unprecedented prosperity.” (TechPolicy)
INDUSTRY
Meta’s AI not-so-confidential. Users of Meta’s dedicated AI chatbot app have been unwittingly sharing private conversations with the world. Chats on the Meta AI app are private by default but can be published to a public “Discover” feed. Many users seem unaware of this, and have been sharing conversations about everything from financial troubles to relationship advice to requests for AI-generated sexual imagery. These incidents show not only the trust users are placing in AI chatbots, but also how that trust might be betrayed — accidentally or not. (The Washington Post)
Midjourney does video. Midjourney is one of the most popular AI image-gen systems, known for its sophisticated and artistic style. It has now launched its own video generation model, V1, available for $10 a month. Midjourney says V1 ends up as much as 25 times cheaper than rivals, putting it in competition with OpenAI’s Sora, Runway’s Gen 4, Google’s Veo 3, and Adobe’s Firefly. (TechCrunch)
Tesla’s bumpy taxi service. Tesla has launched its first automated “Robotaxi” service in Austin, Texas — with mixed results. Not only is the service limited in various ways (it’s geofenced to 30 square miles with human “safety monitors” present in all cars), but widespread reports of driving errors are already attracting the attention of the National Highway Traffic Safety Administration. (The Guardian)
RESEARCH
Thinking critically about ChatGPT. A new study from MIT that recorded subjects’ brain activity while they used ChatGPT has sparked debate over whether or not AI is “making us dumber.” Our quick conclusion: whether or not you account for AI influence, our inability to read scientific papers certainly isn’t making us any smarter. Let’s dive in:
The study (which has yet to be peer-reviewed) took 54 subjects aged 18 to 39, divided them into three groups, and asked them to write SAT essays using either ChatGPT, Google, or with no external help at all. After writing three essays, the groups switched: ChatGPTers wrote essays without help, and brain-only writers got access to ChatGPT. EEG tests recorded brain activity, interviews tested subjects’ recall of the essays they produced, and the researchers concluded that “LLM users consistently underperformed at neural, linguistic, and behavioral levels.”
Pretty damning, right? Yes and no. There are plenty of caveats to remember here. The study had a small sample size, took place over a relatively short time period (four months), and made some questionable design choices. For example, the group that switched from AI assistance to brain-only writing performed worse than their peers, leading researchers to conclude they had accumulated “cognitive debt” by using AI in the first place. But this group took the no-AI test only once (as opposed to the three initial essays), meaning they had less overall practice writing the essays.
That being said, it shouldn’t be a surprise that writing an essay with the help of a chatbot requires less mental activity than writing one without assistance. Similarly, it’s obviously going to be harder to remember words you haven’t actually written than ones you did. The study doesn’t prove anything definitive about AI’s impact on our cognitive ability, but it does suggest what’s already clear — that using AI to complete certain tasks is less mentally engaging.
As Congress’s reconciliation bill neared a vote in the Senate, ARI continued its push to educate lawmakers on the moratorium’s risks and to elevate voices opposed to the measure.
On Monday morning, ARI led a coalition letter with over 130 groups, including many kids’ online safety advocates, urging the Senate not to pass the Blackburn-Cruz deal (Politico). Last Thursday, ARI hosted a press conference featuring Republican state lawmakers opposed to the AI state law moratorium. Legislators from states including Utah, South Carolina, Ohio, Tennessee, Wisconsin, and Montana joined the event, urging Congress to strip the provision from the budget bill. Watch the full press conference here.
“Swedish PM calls for a pause of the EU’s AI rules” (Politico)
TikTok sale deadline extended for third time (Reuters)
DeepSeek is aiding Chinese military and intelligence (Reuters)
OpenAI wins $200 million contract with Defense Department (CNBC)
xAI faces lawsuit over datacenter gas turbines (TechCrunch)
WhatsApp banned from House staffers’ devices (Axios)
Elon Musk’s lawyers claim he “does not use a computer” (Wired)
OpenAI pulls io hardware startup site after copyright claim (CNBC)
“Employers Are Buried in A.I.-Generated Résumés” (The New York Times)
Apple is reportedly exploring an acquisition of Perplexity AI (Bloomberg)
BBC threatens legal action against Perplexity AI (BBC News)