What Google’s antitrust ruling means for AI
Anthropic's $1.5bn settlement; parental controls for ChatGPT; Google's Nano Banana
Good morning all,
The world of tech antitrust is curious. On the one hand, the stakes are potentially huge, challenging the structure and revenue of the world’s biggest corporations. On the other, when remedies are imposed they often seem too little and too late. Despite this, landmark cases tell us a lot about the relations between state and industry — as well as how they might develop in future. The case in point is Google’s recent close shave with antitrust law.
In August 2024, a federal judge concluded that Google had illegally maintained a search monopoly, raising the possibility that the search giant would be broken up. But last Tuesday, Judge Amit P. Mehta delivered his remedies ruling, which was notably lenient. Yes, Google will be forced to share some search data with rivals, but it won’t be forced to sell off Chrome or Android and isn’t barred from paying competitors like Apple to make Google their default search engine (though it can no longer make such contracts exclusive).
Although the case was filed in 2020, before the current AI boom, both the government and Google argued for AI’s relevance. The government said Google’s monopoly in search could create one in AI, while Google said AI threatens its business like never before. Judge Mehta favored Google’s arguments. “The emergence of GenAI changed the course of this case,” he wrote. “For the first time in over a decade, there is a genuine prospect that a product could emerge that will present a meaningful challenge to Google’s market dominance.”
This seems unarguably true, but the remedies still leave Google well positioned to take on rivals like OpenAI and Perplexity. With Chrome and Android in its stable, Google can continue to use these platforms to push AI products in front of users. As an op-ed in The Financial Times noted: “the search company looks to have been left well-placed to become one of the leaders — and possibly the dominant player — in the emerging AI market.”
Beyond antitrust, though, we’ve got plenty more AI news in The Output today, including:
Anthropic’s $1.5bn settlement with authors
Meta spins up its own AI super PAC
OpenAI introduces parental controls for ChatGPT
And much more…
POLICY
AI lessons at the White House. Last week, President Trump hosted tech leaders for a roundtable and dinner. The meetings produced pledges from Amazon, Microsoft, Google and others to fund AI education tools and underscored the closeness between the Trump administration and tech, with CEOs taking turns to thank the president. What will they get in return? Well, there’s plenty Trump can offer, but top of the wish list is scrapping EU and UK digital regulations, which the US continues to push as key negotiating items in trade talks. (The Wall Street Journal)
Anthropic pays authors $1.5bn. Anthropic has agreed to pay a $1.5bn settlement to authors and publishers after a judge found the company had illegally downloaded and stored millions of books. The case is a milestone in the ongoing legal battles between AI companies and copyright holders. Notably, Judge William Alsup ruled that Anthropic’s use of the books for training purposes was shielded under fair use. Anthropic has not admitted to any wrongdoing in agreeing to the settlement. (Wired)
Another AI super PAC. In our previous newsletter we noted the launch of Leading the Future, a super PAC network with $100m in funding that seeks to oppose AI regulation. Well, just one day later, details emerged of a new super PAC funded by Meta that will back pro-AI candidates for state office in California. (Politico)
Deepfake laws in (nearly) every state. Michigan is the latest state to ban non-consensual deepfake pornography, meaning that 48 US states now have regulations pertaining to deepfakes on their books. That’s in addition to the TAKE IT DOWN Act signed into law earlier this year, which requires platforms to remove non-consensual intimate imagery within 48 hours of being notified. (404 Media)
INDUSTRY
AI image editing goes bananas. Google has updated its Gemini app with a powerful image editor dubbed Nano Banana that works via text prompt. Natural language image editing isn’t new, but Nano Banana’s results represent a serious maturation of the tech. It doesn’t just handle minor edits like removing objects, but major changes like swapping clothes and locations or blending photos together. Photoshop is still safe for now given its fine-grained control, but tools like Nano Banana open up advanced and swift image editing to amateurs. (The Washington Post)
Parental controls for ChatGPT. OpenAI will introduce parental controls for its chatbot “within the next month,” allowing parents to control how ChatGPT interacts with younger users and “receive notifications when the system detects their teen is in a moment of acute distress.” A lawyer representing parents suing OpenAI over ChatGPT’s alleged involvement in the suicide of their 16-year-old son criticized the update: “Rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better.” (OpenAI)
OpenAI cooks its own chips. OpenAI is co-designing custom AI chips with US firm Broadcom, report The Financial Times and The Wall Street Journal. The chips (dubbed XPUs to differentiate them from off-the-shelf GPUs) will reportedly ship next year for OpenAI’s use only.
Meta does the mess around. A $14.3bn acquisition of Scale AI. A rebranded, reorganized AI lab. A poaching spree. A manifesto. All that and Meta still can’t retain its top AI talent, according to reports from Wired and The Verge. The details are complicated (some of those researchers who “left” reportedly never even started, for example) but it’s clear Meta’s AI efforts still aren’t ship-shape.
Anthropic now valued at $183 billion. Anthropic has raised $13bn in a Series F funding round, valuing the company at $183bn post-money. The company’s last raise was $3.5bn at a $61.5bn valuation in March. The new funding will be used for international expansion, safety research, and its enterprise business. Anthropic said it now serves “over 300,000 business customers,” with Claude Code a major driver of growth, generating more than $500m in run-rate revenue. (TechCrunch)
RESEARCH
Beware vibe-hackers. Anthropic’s most recent Threat Intelligence report makes for worrying reading, describing the many ways that agentic AI systems are being weaponized. One of the most concerning is vibe-hacking: using coding assistants like Claude Code to sniff out and exploit system vulnerabilities. The attacks themselves aren’t necessarily novel, but AI expands accessibility. “[W]hat would have otherwise required maybe a team of sophisticated actors, like the vibe-hacking case, to conduct — now, a single individual can conduct, with the assistance of agentic systems,” Jacob Klein, Anthropic’s head of threat intelligence, told The Verge. In one case study, Claude not only helped write the ransomware that stole a target’s data but also the “psychologically targeted extortion demands” that followed the attack.
Grok’s political journey. An interesting analysis here from The New York Times on how the political beliefs of X chatbot Grok have shifted over the last year. In general, the chatbot has begun to espouse more rightwing views, particularly after Elon Musk flags responses he finds unsatisfactory, but variations in its replies show how difficult it is to rigidly constrain the political ideology of an LLM.
Rewiring the job market. On Thursday, September 11, join a virtual panel with digital economy experts Erik Brynjolfsson and Bharat Chandar to discuss the impact of artificial intelligence on the jobs landscape. Following a jobs report that indicates a national slow-down in hiring, the panel will discuss Brynjolfsson and Chandar’s recent research, which finds clear evidence of AI’s impact on jobs for entry-level workers. Register here.
America First chips policy. Last week, the Senate released draft text of the National Defense Authorization Act (NDAA), including bill text for the GAIN AI Act, legislation that would require chip sellers to fulfill purchases from US-based customers before selling advanced AI chips to geopolitical competitors. The legislation comes as the Trump administration has rescinded export controls on the sale of advanced H20 chips to China and considers a similar rollback on controls preventing the sale of Nvidia’s Blackwell chips to China. ARI endorsed the GAIN AI Act and celebrated its inclusion in the NDAA.
How “Clanker” became an anti-AI rallying cry (The New York Times)
Anker’s AI voice recorder is the size of a coin (The Verge)
YouTube applied AI enhancement without users’ consent (BBC News)
How AI spending is feeding back into the wider economy (The New York Times)
Tesla’s new Master Plan has more fluff than facts (The Verge)
Google adds AI language learning to Translate app (The Keyword)
Anthropic to stop selling AI to Chinese-owned groups (The Financial Times)
Warner Bros. is third studio to sue Midjourney (Variety)
Australian lawyer penalized for using AI-generated citations (The Guardian)
Taco Bell’s AI drive-thru confounded by 18,000 cups of water (The Verge)
China seeks to triple output of AI chips in race with the US (The Financial Times)