The chips are down for Nvidia
TAKE IT DOWN moves on up; OpenAI's alphanumeric soup; "delete all IP law"
Good morning folks,
To channel Steve Ballmer for a second: export controls, export controls, export controls! That’s what’s been on my mind this week after the US government introduced license requirements for Nvidia’s H20 chip (as well as the MI308 from Advanced Micro Devices, and their equivalents). There’s more on the impact of this move below, but the reasoning from the White House is clear enough. As a spokesperson for the Commerce Department put it: “The Commerce Department is committed to acting on the president’s directive to safeguard our national and economic security.” At any rate, in addition to export controls, export controls, export controls (sorry, I can’t stop), we’ve got much more news in this edition of The Output, including:
A rundown of OpenAI’s new models (yes, all of them)
Zuck goes head-to-head with the FTC
And the AI Index puts numbers on AI growth
POLICY
Bargaining chips. Last week, the US government confirmed it was introducing export licenses for Nvidia’s H20 chip — a restriction called for by a number of groups, including ARI. Nvidia presented the H20 as compliant with export controls despite the fact it’s 20% faster at inference tasks than the already-banned H100. (A fact that did not escape Chinese AI firms, which reportedly placed $16 billion in orders for the H20.) The new licensing requirement is bad for Nvidia’s bottom line: the company forecast a $5.5bn hit to revenue as a result, and its share price fell nearly 7% after the news broke. But for those who believe export controls on chips are the best way to slow Chinese development of AI, the cost is worth it. At any rate, Nvidia’s Jensen Huang was working hard to keep the company in the good graces of both the US and China. On April 14th, Nvidia pledged to manufacture AI supercomputers “entirely in the US” (a move the White House credited to the “Trump Effect”), while later that week Huang visited Beijing, where he reportedly met with DeepSeek founder Liang Wenfeng. “We hope to continue to cooperate with China,” Huang told reporters, according to Reuters.
Meta vs FTC. Mark Zuckerberg took the stand last week at the beginning of an antitrust trial brought by the Federal Trade Commission that could see the breakup of Meta’s $1.4tn empire. So far, the focus has been on Zuckerberg’s thinking before his 2012 purchase of Instagram and 2014 purchase of WhatsApp, and whether Meta’s strategy was to “buy or bury” competitors, as the FTC argues. The trial is set to continue for two months, and is sure to be read as a bellwether for the current administration’s approach to US tech. Some have speculated that Trump could intervene in the case, while FTC chair Andrew Ferguson said only that he would “obey lawful orders.” (NPR)
“delete all IP law.” That was the total content of a recent post on X by Jack Dorsey, a comment that attracted much chatter, including agreement from Elon Musk. As The Washington Post notes, one context for this argument could be the ongoing battles between AI companies and content creators over fair use access to copyrighted data for training purposes. As OpenAI recently wrote in a comment that captures the sentiment in much of the tech industry, if “American companies are left without fair use access, the race for AI is effectively over.” (The Washington Post)
INDUSTRY
OpenAI serves alphanumeric soup. Look, OpenAI knows its model names are confusing, but the last two weeks have seen the company release a throng of LLMs, including GPT-4.1, 4.1 mini, 4.1 nano, o3, and o4-mini. Here’s the breakdown:
o3 and o4-mini. These are the company’s new best-ever reasoning models, able to access every other tool within ChatGPT, including web search, analyzing uploaded files, Python code execution, and reasoning about visual data. o3 is the flagship with SoTA results on benchmarks; o4-mini is the optimized, cost-efficient option. These releases remind me of Apple’s habit of announcing each new iPhone as the “best ever.” It’s not necessarily an incorrect claim, but it’s one that sometimes hides diminishing returns. Nevertheless, some people (like Tyler Cowen) are claiming that with these latest models they can really feel the AGI.
GPT‑4.1, GPT‑4.1 mini, and GPT‑4.1 nano. These are new systems aimed squarely at developers, available only via the company’s API, and offering improvements in “areas that developers care most about: frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure and ordering, consistent tool usage, and more.”
In other product news, OpenAI also added new memory capabilities to ChatGPT and is reportedly working on its own social network, similar to X. If social networks provide good training data for AI, owning the well rather than siphoning what you can from the open web is probably a smart idea.
Meta’s models miss the mark. On April 5th, Meta released its latest family of Llama models. However, the response from the AI community has not been positive. There were rumors of training contamination (which Meta denied) and criticism that the company used an unreleased model to get a high score on the LM Arena benchmark. When Meta used an unmodified system, it ranked below older models from OpenAI, Anthropic, and Google. Surveying the company’s AI work on the Interconnects Substack, Nathan Lambert offers a downbeat summary: “The time between major versions is growing, and the number of releases seen as exceptional by the community is dropping.”
Claude gets into the weeds. Anthropic has added new research capabilities and a Google Workspace integration to Claude, allowing the system to cross-reference your personal docs and emails with research from the web. That means you can ask Claude to do things like summarize your work meetings and look for ways to move forward on issues it finds using information from the internet. The company is also reportedly preparing to launch a voice assistant feature to match OpenAI’s.
Ilya raises a couple billion. Ilya Sutskever, former chief scientist at OpenAI, has reportedly raised $2bn for his new venture, Safe Superintelligence Inc. Sutskever left OpenAI last year after the failed coup against Sam Altman. His new startup is less than a year old with no product or revenue, but is now valued at $32bn. Proof there’s still plenty of juice in the AI venture world if your name is right! (The Financial Times)
RESEARCH
2025’s AI Index is here. If you’re not familiar, the AI Index is an annual report that collates data on AI progress. It’s an invaluable source for anyone trying to keep track of changes in the AI world (and to therefore predict what will happen next). Here are some of the key findings from this year’s report:
Performance improves. AI systems continue to produce better scores on benchmarks like MMMU, GPQA, and SWE-bench. On SWE-Bench, AI solved 4.4% of coding problems in 2023 and 71.7% of problems in 2024. Harder tests are still a challenge (e.g. BigCodeBench where the top AI score is 35.8% versus a 97% human standard) while frontier performance is converging (the Elo score difference between 1st and 10th models on the Chatbot Arena Leaderboard was 11.9% in 2024 and 5.4% in 2025).
China catches up. In 2023, top Chinese systems lagged behind their US counterparts. On MMLU, MMMU, MATH, and HumanEval, they were outpaced by 17.5%, 13.5%, 24.3%, and 31.6% respectively. In 2024, these margins narrowed to 0.3%, 8.1%, 1.6%, and 3.7%. In 2023, China also produced more AI publications (23.2%) and more citations (22.6%) than any other country.
Models get bigger. Research has found that “training compute for notable AI models doubles approximately every five months, dataset sizes for training LLMs every eight months, and the power required for training annually.” Power demands keep rising even though AI hardware has grown roughly 40% more energy efficient each year; those efficiency gains are simply swamped by the huge increase in usage.
Investment hits new heights. Private investment in AI increased in 2024 by 44.5%, with acquisitions up 12.1% from the previous year and total “corporate AI investment” reaching $252.3bn. Generative AI fuelled much of that increase, attracting $33.9bn of total investment, up 18.7% from 2023.
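If you want a feel for what the AI Index’s doubling times actually imply year over year, a quick back-of-the-envelope sketch (a doubling every d months works out to a factor of 2^(12/d) per year):

```python
# Back-of-the-envelope: convert the AI Index's reported doubling times
# into annual growth multipliers. Doubling every d months implies a
# factor of 2 ** (12 / d) per year.

def annual_growth(doubling_months: float) -> float:
    """Annual multiplier implied by a given doubling time in months."""
    return 2 ** (12 / doubling_months)

print(f"Training compute (doubles every 5 months): {annual_growth(5):.1f}x per year")
print(f"LLM dataset size (doubles every 8 months): {annual_growth(8):.1f}x per year")
print(f"Power required (doubles annually):         {annual_growth(12):.1f}x per year")
```

That five-month doubling time for compute works out to roughly a 5x increase every year, which helps explain why 40% annual efficiency gains in hardware don’t come close to flattening the power curve.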
TAKE IT DOWN moves on up. For the first time in recent history, major tech legislation is nearing passage in Congress. The bill, called the TAKE IT DOWN Act, is designed to stop the spread of non-consensual intimate images (NCII), including deepfakes, online. TAKE IT DOWN has passed the Senate, passed committee markup in the House this month, and now is heading towards a floor vote before the full House of Representatives.
But as the legislation heads towards the finish line, critics have raised new concerns about the TAKE IT DOWN Act’s constitutionality and its targeted approach to the removal of NCII. In response, ARI this month partnered with former Principal Deputy Assistant Attorney General for Legislative Affairs Slade Bond to provide a detailed legal analysis finding that the TAKE IT DOWN Act is firmly constitutional, narrowly targeted, and unlikely to chill protected speech. (Axios)
The EU plans €20bn investment in supercomputers (The Guardian)
Hugging Face buys robotics startup Pollen (Hugging Face)
Former OpenAI employees back lawsuit to protect non-profit status (CNBC)
Palo Alto crosswalks hacked with AI-generated Musk and Zuckerberg (The Verge)
Google rolls out latest AI video generator for subscribers (Google)
Samsung’s spherical companion robot Ballie gets AI upgrade (The Verge)
“I Tested The AI That Calls Your Elderly Parents If You Can't Be Bothered” (404 Media)
Meta starts training AI on EU data; offers users opt-out (Meta)
Runway’s Gen-4 Turbo makes 10 second videos in 30 seconds (Runway)
“Wikipedia is giving AI developers its data to fend off bot scrapers” (The Verge)