AI Preemption: Error 404—Support Not Found
A new AI regulation super PAC; Google's great week; and jailbreaking LLMs with poetry
Good morning, folks,
We hope you've recovered from any holiday indulgences because we have a fresh dose of AI news to get your week started! The first order of business is the big story of the last two weeks: the tentative return of an AI moratorium, after its 99-to-1 failure in the Senate in July.
The latest round of the moratorium fight (now recast as AI preemption) started with reporting on November 17 that House Majority Leader Steve Scalise (R-LA) was interested in including AI preemption in this year's National Defense Authorization Act (NDAA). Things escalated quickly. President Trump weighed in, posting that "overregulation [on AI] by the States is threatening to undermine this Growth Engine," and that the US should have "one Federal Standard instead of a patchwork of 50 State Regulatory Regimes." The next day, news reports said the White House was preparing executive action on the topic, with several outlets publishing a leaked draft of an executive order that would curb states' ability to pass AI legislation.
The order laid out several mechanisms to achieve this goal, including allowing the federal government to withdraw funding in response to "onerous AI laws" and establishing an "AI Litigation Task Force" to sue states deemed to be obstructing AI growth. The legal justification for such lawsuits would be that state AI regulation interferes with "interstate commerce," which is the purview of the federal government. As we've seen with past executive orders, this could certainly be challenged in the courts, but the threat of lost funding, even if temporary, could force states into compliance. As Charlie Bullock, a senior research fellow at the Institute for Law and AI, told The Verge: "Even if [a state] can win a court case to make them give us that funding eventually, it would take a long time. States might be incentivized not to pass legislation contrary to the policy of the order."
The backlash to the leaked order was immediate, including from Republicans and MAGA commentators, who criticized the proposed order as a capitulation to certain elements of the tech industry. A snap YouGov poll found that 57% of Americans oppose efforts to preempt state-level AI legislation and only 19% support them, while coalitions of more than 290 state lawmakers and 40 faith leaders signed open letters opposing preemption.
The current state of play remains in flux. Following the leak of the draft, sources told Reuters that the White House had put the EO on hold. Meanwhile, in Congress, lawmakers supporting an AI law moratorium are scrambling to put together a preemption package that can muster the support needed to pass, while Democrats (both moderate and progressive) continue to line up against the measure. It's almost certain we haven't seen the last of the preemption fight, but we have other matters to cover in The Output too, including:
A new super PAC for AI regulation
Google rallies with Gemini 3
And how to jailbreak LLMs with poetry
POLICY
Battle of the PACs. Two former congressmen, Chris Stewart (R-UT) and Brad Carson (D-OK), are planning to raise at least $50 million for Republican and Democratic super PACs supporting candidates "committed to defending the public interest against those who aim to buy their way out of sensible AI regulation." The intent is to provide a counterweight to anti-regulation super PACs like Leading the Future, which is backed by VC firms such as Andreessen Horowitz. Carson (founder and president of ARI) said he hopes his new funding group, Public First, will be a "rallying point for a pretty large community of people" who want AI safeguards: "This issue is one that transcends party labels." (The Wall Street Journal)
The Genesis Mission for AI science. Trump has signed an executive order marshaling federal resources under the Department of Energy to support AI-enabled scientific research. The initiative, dubbed the Genesis Mission, tasks federal agencies with integrating scientific data and AI infrastructure to accelerate breakthroughs in areas including manufacturing, biotech, and nuclear fusion. White House science adviser Michael Kratsios compared the effort to the Apollo program, though news reports noted the initiative does not currently have a public budget. (Scientific American)
OpenAI blames "misuse" for suicide. In response to a lawsuit filed by the family of California teenager Adam Raine, who died by suicide after prolonged conversations with ChatGPT, OpenAI said in a court filing that the "tragic event" was caused by "misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT." Lead counsel for the Raine family said OpenAI's response was "disturbing" and that the company rushed this particular version of ChatGPT to market "without full testing." (NBC News)
INDUSTRY
Google’s big week. Google has released its latest foundation model, Gemini 3, declaring that the launch constitutes a “new era of intelligence.” That’s big talk, but Google has the data to back it up, with the model topping the LMArena Leaderboard and setting new records on various benchmarks (including a top score on Humanity’s Last Exam without external tools). The model quickly won some high-profile endorsements, including from Salesforce CEO Marc Benioff, who posted: “Holy shit. I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back. The leap is insane.” Even more telling: Sam Altman recently sent a memo to OpenAI staff warning of “rough vibes” ahead due to Google’s acceleration in AI development. Positive reactions to Gemini also led to a significant rally for Alphabet shares, with the company now close to a $4T market cap. (The Verge)
Nano Banana Pro. As a follow-up, Google released an upgrade to its state-of-the-art image generation model, Nano Banana. The Pro version is better at creating coherent infographics, can track up to 14 distinct visual references in each generation, and outputs images at 4K resolution. However, initial testing also found that the model seems to have relatively lax guardrails. With only a little creative prompting, journalists were able to create images that "ignore copyright protections, subvert historical truths, and distort reality, making them ripe for abuse." (CNET)
Tracking the fakes. At the same time, Google also added the ability for Gemini to detect at least some AI-generated content: specifically, images created or edited by Google’s own tools. This uses the company’s invisible AI watermarking tech SynthID, though Google reportedly plans to expand this capability to the industry-wide C2PA standard, too. (The Verge)
RESEARCH
Curses from verses. Turning prompts into poetry functions as a "universal single-turn jailbreak technique" for LLMs, according to a preprint paper from Italian researchers. The group tested 1,200 malicious prompts spanning a range of subjects, from password cracking to CBRN research. In their initial tests of models from Google, OpenAI, Anthropic, and others, prose prompts bypassed guardrails 8% of the time. But when those prompts were converted into "semantically parallel" poetry, the success rate rose to an average of 62%. As the researchers noted, in what must be one of the more unusual citations in an AI paper: "In Book X of The Republic, Plato excludes poets on the grounds that mimetic language can distort judgment and bring society to a collapse." Apparently, poetry still retains some of that power. (The Register)
An end to scaling. Appearing on the Dwarkesh Podcast, famed AI researcher Ilya Sutskever said that 2020–2025 had been the "age of scaling" for AI (when adding more compute and data guaranteed improvements) and that reaching AGI once again requires fundamental research. This isn't to say that AI has hit a wall, as many critics suggest, but that new approaches are needed for the next stage of development. Sutskever also said his mysterious startup, Safe Superintelligence, was raising money at a $32B valuation. (Dwarkesh Podcast)
ARI NEWS
State lawmakers push back against preemption. ARI took a leading role in organizing the 290+ lawmakers who last week spoke out against federal preemption of state AI laws. With the NDAA moving quickly through Congress, ARI worked on a timeline of less than a week to build a coalition of hundreds of state lawmakers from more than 40 states, including policymakers across the political spectrum. Since its release, the letter has secured coverage in Punchbowl, Politico, The Hill, Fast Company, Platformer, Newsmax, State Affairs, The Verge, Pluribus, and other state and national outlets across the country.
Faith leaders weigh in. In addition to our work organizing state lawmakers, ARI helped to support a coalition letter of faith leaders opposed to the preemption of state AI laws (Politico). The letter, led by the Word & Wisdom coalition, highlighted the consequences of freezing state laws at a moment when rapid AI innovation poses risks to the dignity of work, human intellect, the boundaries of consent, and the social compact. The letter includes voices from many different denominations, with leaders from groups including the National Association of Evangelicals, Mormon Women for Ethical Government, the Jewish Earth Alliance, and Liberty Counsel.
IN OTHER NEWS
AI pioneer Yann LeCun officially leaves Meta (The New York Times)
OpenAI’s first AI gadget could arrive in “less than” two years (The Verge)
AI toy bear chats to children about adult topics (The New York Times)
Jeff Bezos will co-CEO new AI startup Project Prometheus (Wired)
Four charged with illegally smuggling AI chips to China (Reuters)
Larry Summers leaves OpenAI board after Epstein disclosure (Politico)
TikTok lets users adjust exposure to AI content (The Guardian)
OpenAI’s GPT-5.1-Codex Max can tackle 24-hour coding problems (OpenAI)
Sir Tim Berners-Lee doesn't think AI will destroy the web (The Verge)
The chilling effect of AI on entry-level jobs (New York Magazine)
How the EU botched its attempt to regulate AI (The Financial Times)