Microsoft’s AI chip spree + UK’s copyright clash + AI joins boxing 🥊💻
From Microsoft’s GPU dominance to UK copyright debates and AI judging Fury vs. Usyk, the AI race heats up.
It's Wednesday, and Google's AI is stepping into high-stakes arenas…
The tech giant now permits its generative AI to make automated decisions in critical sectors like healthcare and housing, provided human oversight remains in place.
By the way, this is our final edition of the year. We've packed it with extra insights and depth—definitely one to bookmark.
1: Microsoft's AI chip spree outpaces rivals
What happened: Microsoft grabbed nearly 485,000 Nvidia Hopper GPUs this year—more than twice what Meta managed. It’s a bold move to dominate AI infrastructure, boosting OpenAI and Azure while staying ahead of Google, Amazon, and even ByteDance.
Why it matters:
Nvidia’s GPUs are the lifeblood of AI systems, accounting for 43% of global server spending in 2024.
Microsoft’s aggressive spending gives it a lead in the AI arms race, but competitors aren’t far behind.
The numbers:
Microsoft spent $31 billion on servers, compared to Amazon’s $26 billion.
Meta, Google, and Amazon each bought fewer than half as many GPUs as Microsoft.
ByteDance and Tencent snagged 230,000 GPUs each despite U.S. restrictions.
Zoom out: Big Tech isn’t just buying chips—they’re building their own. Google’s TPUs, Meta’s AI chips, and Amazon’s Trainium hardware are gaining ground. Microsoft’s Maia chips lag behind, with only 200,000 deployed.
What’s next: Microsoft’s massive chip haul secures its position for now, but rivals’ homegrown hardware could shift the balance in this high-stakes AI race.
Sponsored by Pipedrive
AI isn’t just a shiny new trend—it’s the engine driving smarter decisions, better customer experiences, and serious growth. Whether it’s automating the boring stuff or uncovering game-changing insights, businesses everywhere are starting to catch on.
Pipedrive’s State of AI in Business Report 2024 dives into how 500 companies are using AI right now. The highlights?
ChatGPT is the go-to tool, and it’s not even close.
Why some businesses are still dragging their feet.
AI use cases that are actually moving the needle (hint: it’s not just chatbots).
Whether you’re skeptical or all in on AI, this report is packed with insights to help you navigate the future of business tech.
See how businesses are making AI work for them.
🚀 Don’t miss out: Get a 14-day trial and a 20% discount by clicking here.
2: UK weighs AI copyright opt-out scheme, sparking creative industry backlash
What’s happening: The UK government is proposing a new copyright exemption allowing AI firms like Google and OpenAI to use copyrighted works for training—unless creators explicitly opt out. Creatives argue this threatens their livelihoods, while tech groups welcome the move as a step toward resolving AI’s legal grey zones.
Key details:
The proposal: AI firms could freely train models on copyrighted material unless creators reserve their rights through an opt-out process.
Pushback: Critics, including Sir Paul McCartney and 37,000 creators, say it risks undermining the £126bn UK creative sector, with any benefits flowing mainly to major rights holders.
Transparency: The government may require AI companies to disclose training data sources and clarify content use, possibly via new legislation.
Right of personality: A consultation will explore protections against unauthorized AI use of celebrity voices and likenesses, following controversies like Scarlett Johansson’s dispute with OpenAI.
What’s next: The consultation aims to balance AI innovation with fair compensation for creators, but skepticism remains over whether small-scale creatives will see meaningful benefits.
3: Anthropic’s Krieger on AI agents: “They’ll learn to work with us”
What’s happening: Anthropic’s chief product officer, Mike Krieger, shared insights at the Axios AI+ Summit about the evolving role of AI agents, predicting they’re at least a year away from full autonomy. Drawing comparisons to Tesla’s self-driving cars, he said the future involves users gradually trusting AI to take on more tasks while still staying engaged when needed.
Key takeaways:
AI learning curve: Krieger acknowledged the difficulty many users face in writing effective prompts, emphasizing that future models should understand intent without requiring technical know-how.
AI in action: He shared personal examples of using Anthropic’s Claude AI to simplify tasks, like preparing holiday cards, and envisions it handling more complex processes next year.
Balance is key: AI agents must find the right frequency for user check-ins, asking for input only when absolutely necessary.
4: Fury vs. Usyk 2 to debut AI judge for unbiased scoring
What’s happening: The highly anticipated rematch between Tyson Fury and Oleksandr Usyk will introduce an AI judge to boxing. While the AI’s scores won’t impact the official decision, the experiment aims to address the long-standing issue of controversial human judging in the sport.
Key details:
How it works: The AI judge, branded by Ring Magazine under its new owner, Turki Alalshikh, is designed to eliminate bias and human error. It analyzes each round of the fight and delivers its own scores.
Not official yet: The AI’s results will be presented as a "fourth judge," offering transparency and comparison to the human judges’ decisions.
Why it matters: Human judges in boxing have often faced criticism for questionable scorecards, sparking accusations of corruption. If successful, the AI system could bring greater clarity and fairness to scoring while potentially becoming a future standard.
What’s next: All eyes will be on the AI’s scoring during Saturday’s fight in Riyadh. Whether fans embrace it—or reject it—could shape the role of AI in combat sports.
5: YouTube partners with CAA to fight AI mimicry
The news: YouTube is teaming up with the Creative Artists Agency (CAA) to give celebrities and creators tools to detect and remove AI-generated content mimicking their likeness on the platform.
Key details:
Starting early next year, the system will allow celebrities and athletes to identify AI-generated replicas of their faces or voices and submit takedown requests.
The feature will later roll out to top creators and professionals, aiming to manage these issues “at scale.”
YouTube is also testing tech to detect AI-generated singing voices that imitate artists, bolstering its ongoing efforts to label and moderate AI content.
Why it matters: The rise of generative AI has made it easier to create convincing fake content. By partnering with CAA—known for its CAAVault, which scans and stores clients’ digital likenesses—YouTube is pushing to protect talent and creators from unauthorized AI mimicry, especially as music labels and creators grow more vocal about AI-related concerns.
What’s next: YouTube plans to extend these tools to its broader creator base while continuing to refine its policies and technologies for managing AI-generated content.
AI READS 🗒️
Fortune: Top AI labs aren’t doing enough to ensure AI is safe, a flurry of recent data points suggests.
Axios: Anthropic's new weapon to detect abuse.
The Information: Nvidia says it could build a cloud business rivaling AWS. Is that possible?