Too big to backstop?
More personality for GPT-5.1; EU ponders AI Act pause; Anthropic reports agentic AI hackers
Good morning, folks,
As AI capex accounts for an increasingly large slice of the US economy, tricky questions are being raised about the involvement of the US government in the sector’s success. Case in point was a recent off-hand comment by OpenAI CFO Sarah Friar on November 5th in which she suggested the need for a federal “backstop” for the company’s investments.
Critics leapt on the quote as evidence of OpenAI’s precarious position, and the company quickly walked it back. “OpenAI is not seeking a government backstop for our infrastructure commitments,” said Friar on LinkedIn. “I used the word ‘backstop’ and it muddied the point.” CEO Sam Altman added on X: “We believe that governments should not pick winners or losers,” and noted that OpenAI was not trying to become “too big to fail” (a common criticism of the company’s web of financial investments). Other figures like White House AI advisor David Sacks also weighed in. “There will be no federal bailout for AI,” said Sacks. “The U.S. has at least 5 major frontier model companies. If one fails, others will take its place.”
Clearly, there’s significant hostility to the idea of government guarantees for AI speculation, but that doesn’t mean no involvement at all. Already, we’ve seen the Trump administration take a 10% stake in Intel and accelerate the use of federal land for datacenters, while the CHIPS Act offers 25% tax credits for investments in semiconductor manufacturing (something Altman alluded to following the Friar fallout as “super different than loan guarantees”). As the government resumes normal operations following the shutdown, there will certainly be more efforts in this arena. See, for example, the recent push by Senator Edward J. Markey (D-MA) and other lawmakers for FERC to “ensure just and reasonable [electricity] rates for all Americans,” citing AI and cryptomining as “fueling a rising demand for energy.”
Friar’s comments may have been misjudged, but they highlight a complex and necessary relationship between the AI sector and government funding. Expect to see more on this from us in the future, but for now read on for more news including:
EU considers pausing parts of the AI Act
Anthropic warns of new era of AI cyberattacks
And can we — should we — build datacenters in space?
POLICY
AI jobs reports. Sens. Josh Hawley (R-MO) and Mark Warner (D-VA) have introduced a bill that would require thorough reporting on the impact of AI on the job market. The senators cite warnings made by Anthropic CEO Dario Amodei that AI could eliminate half of entry-level white-collar jobs in the coming years as motivation for the bill. Leading economists, meanwhile, have called for the US Department of Labor (DOL) to gather more high-quality data on AI’s jobs impact. Under the Hawley-Warner bill, federal agencies and certain companies would have to report the number of positions lost, gained, or retrained due to AI to the DOL. (Axios)
Stability AI wins UK copyright case. The UK’s High Court has rejected a copyright claim by Getty Images against AI firm Stability AI, finding that although Stability trained its Stable Diffusion system on Getty’s images, it did not store this data directly, and so the model’s weights do not constitute “infringing copies.” This closely watched case was watered down as it progressed (there’s a good overview of the details here), but the final judgment strengthens the copyright position of AI firms. Getty Images has a similar case pending against Stability AI in California. (The Guardian)
Blackwell blacklisted? As we noted in our last issue, Nvidia’s Blackwell chips seem to be off the table for Chinese customers, helping preserve the US lead in AI compute. In more recent comments on the matter, Nvidia CEO Jensen Huang said: “There are no active discussions. Currently, we’re not planning to ship anything to China.” The company does have a license to sell its less-powerful H20 chips in China, but Beijing has banned those chips from state-backed data centers.
EU struggles on AI. The European Commission is considering pausing parts of its landmark AI Act in response to both pressure from the Trump administration and increasing awareness that the EU seriously lags China and the US on AI. Possible changes include giving AI companies that breach the rules a one-year “grace period” and delaying fines for violations of transparency requirements, according to a report from The Financial Times. EU officials are also considering changes to the bloc’s GDPR privacy regulations, which first came into effect in 2018, according to Politico.
INDUSTRY
GPT-5.1 offers more personality. Last Wednesday, OpenAI quietly released GPT-5.1, with two variant models: “Instant” for everyday use and “Thinking” for advanced reasoning. These aren’t major updates, so the emphasis is less on state-of-the-art benchmarks and more on better communication. To that end, OpenAI is offering users eight different personalities for GPT-5.1: Default, Friendly, Efficient, Professional, Candid, Quirky, Cynical, and Nerdy. With some 800 million users, it’s no surprise OpenAI needs to differentiate its model to cater to different tastes, but will people even care enough to experiment with these options? Personally, I’m waiting for the company to add my favourite characters: Sleepy, Bashful, Sneezy, and Dopey. (Ars Technica)
ElevenLabs signs celeb talent. Oscar-winning actors Michael Caine and Matthew McConaughey have agreed to have their voices cloned by leading AI audio firm ElevenLabs. The company’s Iconic Voice Marketplace lets customers license AI-generated voices of famous figures from Maya Angelou to John Wayne. The terms of the license are negotiated off-platform with rights-holders, while ElevenLabs provides the tech. Hollywood and the AI industry have a complicated relationship right now, but moves like this show the growing attraction of AI tools. (Deadline)
Anthropic disrupts Chinese hacking. Anthropic has released a report on how its AI tools were used by a Chinese state-sponsored hacking group to infiltrate a number of global targets. It’s not the first time AI has been used to assist hackers, but Anthropic says the group used its AI’s agentic capabilities to “an unprecedented degree”: not just generating malicious code, but executing attacks directly. The company estimates that human labor accounted for only 10 to 20 percent of the campaign’s work, and that the incident shows “the barriers to performing sophisticated cyberattacks have dropped substantially.” (Axios; The New York Times)
RESEARCH
AI in space? “The Sun is by far the largest energy source in our solar system, and thus it warrants consideration how future AI infrastructure could most efficiently tap into that power.” So says Google in a pre-print paper describing Project Suncatcher, its latest moonshot (sunshot?), which explores the feasibility of launching datacenters into space. There are obvious benefits to this, foremost being the availability of solar energy, but also major challenges, from launch costs, to data transmission, to radiation-proofing your chips (without Earth’s atmosphere and magnetic field for shelter, radiation is a serious hazard). It sounds like a wild idea, but Google isn’t the only company exploring this possible solution to Earthly energy constraints. Nvidia is backing various startups in this sector, and Elon Musk has suggested SpaceX could adapt its Starlink satellites for similar purposes.
Should AI design viruses? In September, researchers from Stanford University posted a paper that described using generative AI to rework the genome of a simple bacteriophage (ΦX174), creating novel viruses capable of killing E. coli bacteria. The work has been hugely controversial, with scientists arguing over the safety and potential benefits of such research. The Washington Post has a great report putting this work in context, though there’s little scientific consensus: “The feat has ignited a debate over what these [AI-designed viruses] represent. It’s the biology equivalent of asking what it means when AI writes a poem in the style of Emily Dickinson. Is AI inventing art or derivatively riffing? Does the distinction matter?”
The hard decisions on AI and national security. Last month, ARI Senior Policy Director Morgan Plummer launched a new op-ed series in War on the Rocks, a leading defense policy publication, outlining four critical AI policy choices that US lawmakers must get right. The first piece warns against overreliance on voluntary commitments and highlights the stakes of explicitly determining how AI is deployed in the US military. Read the first op-ed in War on the Rocks.
This month, War on the Rocks published the second piece in the series, “Warfighters, Not Engineers, Decide What AI Can Be Trusted,” which dives into the first of the four policy choices Morgan urges policymakers to confront: who defines “trustworthy AI” in the Pentagon? In this sharp op-ed, Morgan argues that unless the military reorganizes AI acquisition around operators’ trust and battlefield reality, the Department of Defense risks turning AI into yet another over-engineered, under-deployed product. More essays are forthcoming in the series, which aims to bring national security and governance perspectives into sharper focus.
SoftBank sells $5.8b Nvidia stake to go all-in on OpenAI (The Wall Street Journal)
Microsoft to ship 60,000 Nvidia chips worth $15.2b to UAE (Associated Press)
Google pulls AI model after it fabricates assault allegations against a senator (The Verge)
“China is going to win the AI race,” says Jensen Huang (The Financial Times)
OpenAI pushes back against court order for chat logs (OpenAI)
UK set to ban deepfake ‘nudification’ apps (Politico)
AI country artist claims top spot on digital Billboard chart (The Register)
Coca-Cola’s AI holiday ad generates backlash on demand (The Verge)
Waymo kills beloved SF cat; leaders call for AV regulation (The Hill)
Russian humanoid falls over in hyped debut (The New York Times)