Neural Dispatch: Sir Demis Hassabis gives us a reality check, and decoding OpenAI’s Oracle deal


ALGORITHM

This week, we are going to have a conversation about OpenAI’s big deal with Oracle, what it means for its existing alignment with Microsoft, and why xAI has let go of an entire team of “generalist AI tutors”.

“I think that we are maybe 5 to 10 years away from having an AGI system,” says DeepMind CEO Sir Demis Hassabis

OpenAI’s $300 billion bet on Oracle’s cloud empire

Oracle’s Larry Ellison

Hold onto your GPUs, folks! OpenAI just signed a $300 billion, five-year deal with Oracle for computing power to run OpenAI’s AI infrastructure, beginning in 2027. Oracle stock went soaring (but has stabilised since). Honestly, it’s the kind of number that makes you wonder if someone accidentally added an extra zero. But nope, this is real, and it’s part of the ambitious $500 billion Project Stargate infrastructure push. Here’s what’s wild: this deal sets the foundation to develop 4.5 gigawatts of U.S. data center capacity. To put that in perspective, that’s more electricity than some entire countries use.

Oracle’s stock shot up 43% after the deal was announced, though it has balanced out since. It’s a fascinating pivot for Oracle: from being one of the largest cloud infrastructure providers to now leveraging its access to Nvidia’s AI chip hardware to build further partnerships with Google, Meta and xAI. For instance, Oracle says Google’s Gemini models will soon be available on its cloud infrastructure.

xAI’s Friday night massacre and a billionaire’s pivot

Speaking of pivots, Elon Musk’s xAI has reportedly laid off about 500 workers from its data annotation team (the people otherwise tasked with training Grok) this past Friday night, because nothing says “strategic realignment” quite like firing people via email on a Friday evening. The probable story is that xAI is shifting from “generalist AI tutors” to “specialist AI tutors” to improve Grok’s training, which apparently meant the existing generalist team had to go. The company claims it plans to hire 10 times more specialist AI tutors, which sounds great if you happen to be a specialist rather than one of the 500 people who just got generalist-ed out of a job. The timing feels particularly brutal: workers were told they’d be paid through their contracts or until November 30, but their system access was cut immediately. Nothing like digital exile to really drive home a training message.

Microsoft and OpenAI: It’s complicated (but really, really complicated)

OpenAI and Microsoft

This was weird. “OpenAI and Microsoft have signed a non-binding memorandum of understanding (MOU) for the next phase of our partnership. We are actively working to finalise contractual terms in a definitive agreement,” reads a post on X from the OpenAI Newsroom account a few days ago. It seems to say a lot without saying much. In fact, it looks like a desperate call for some attention. The contractual terms of a definitive agreement aren’t yet in place, and still, something had to be announced.

The context becomes clearer if you go a bit deeper. While the specifics are still emerging, there is some friction between OpenAI’s desire for independence and the serious partnership expectations Microsoft harbours on the back of its massive investment. What’s fascinating is how this plays out against the backdrop of that Oracle deal, Oracle not exactly being known for playing nice with Microsoft. OpenAI diversifying its cloud partnerships sends a pretty clear message about not wanting to be too dependent on any single infrastructure provider, even one that’s invested billions in your company.

PROMPT

Gemini 2.5 Flash

What is the excitement about Nano Banana, the nickname for Google’s Gemini 2.5 Flash image-editing tool that has actually stuck?

Sometimes the quirkiest codenames hide the most useful tools. Google’s latest image-editing model, Gemini 2.5 Flash Image, nicknamed Nano Banana during testing, has proved quite popular on social media in the past few days.

How to use it: Nano Banana is built for natural language editing. Instead of wrestling with complex Photoshop workflows, you can simply type “make the background a sunset beach” or “turn this outfit into a business suit” and it gets the job done. Early users insist the model is better at keeping characters and objects consistent across edits, which is a big deal for content creators who need repeatable shots or visuals built around generated characters. The model is available through the Gemini app, as well as for developers via the Gemini API, Google AI Studio, and Vertex AI.
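For developers, the API workflow is a short script. Here is a minimal sketch, assuming the google-genai Python SDK and a preview model identifier along the lines of gemini-2.5-flash-image-preview (verify the current name in Google AI Studio); the API key and file names are placeholders.

from io import BytesIO

from google import genai
from PIL import Image

# Minimal sketch of a natural-language image edit via the Gemini API.
# Assumptions: google-genai SDK installed, placeholder API key, and the
# preview model name below; check Google AI Studio for the current identifier.
client = genai.Client(api_key="YOUR_API_KEY")

source = Image.open("portrait.jpg")  # hypothetical input image

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=["Make the background a sunset beach", source],
)

# Edited images come back as inline binary parts alongside any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("edited.png")

The same request works from the Gemini app with no code at all; the API route simply makes the edit repeatable in a pipeline.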

Why it matters: The core experience is designed to be fast and user-friendly, and Google is layering in SynthID watermarking, so every generated or edited image carries an invisible authenticity stamp. In practice, Nano Banana shines at background swaps, style changes, and multi-image blending. Pose and angle edits are possible but still a bit hit-or-miss, since this isn’t a full-fledged 3D studio yet. Still, the promise is clear. Instead of waiting 10–15 seconds for clunky edits, you get something lightweight, responsive, and surprisingly fun to use. The funny name may stick, but the real story is how Google is betting on accessible, reliable image editing at scale, a smart counterweight to the giant-model arms race. Should the likes of Adobe and Canva be worried?

THINKING

“You often hear some of our competitors talk about, you know, these modern systems that we have today to have PhD intelligences. I think that’s nonsense, they are not PhD intelligences. They have some capabilities that are PhD-level, but they’re not in general capable, and that’s exactly what general intelligence should be: performing across the board at the PhD level. In fact, as we all know from interacting with today’s chatbots, if you pose the question in a certain way, they can make simple mistakes with even high school maths and simple counting. That shouldn’t be possible for a true AGI system. I think that we are maybe 5 to 10 years away from having an AGI system that’s capable of doing those things. Another thing missing is continual learning, the ability to teach the system something new or adjust its behaviour in some way. A lot of these core capabilities are still missing and maybe scaling will get us there, but I feel if I was to bet, I would think there are probably one or two missing breakthroughs that are still required” – Sir Demis Hassabis, at the All-In Summit.

The context: Sir Demis is perhaps the only AI leader who could have taken the “PhD intelligence” delusion head-on, and burst the bubble AI companies have only been too happy to inflate (alongside inflating their valuations and funding). The DeepMind CEO made an elaborate statement that should make everyone in the AI world pause and think, and it doesn’t come any clearer than saying that any claims suggesting today’s chatbots have PhD-level intelligence are complete nonsense. And honestly, it’s about time someone with Sir Demis’ credentials said it out loud. This realism about AI capability inflation comes at a crucial moment. We’re drowning in marketing materials that breathlessly proclaim each new model as approaching or exceeding human expert performance, while anyone who’s spent serious time with these systems knows they’re simultaneously brilliant and completely clueless in ways that no human PhD would ever be.

A reality check: The problem with the “PhD intelligence” framing isn’t just that it’s wrong; it reflects a fundamental misunderstanding of what intelligence actually is. Intelligence isn’t just knowledge accumulation, but also the development of judgment, the ability to navigate uncertainty, and the capacity for genuine insight within a specialised domain. Current AI systems are essentially sophisticated pattern-matching engines tuned to deliver human-like outputs (or at least we think they are; the way we thought a Nokia 5800 XpressMusic in 2008 was a ‘smartphone’) without anything close to human-like understanding. Hassabis understands this better than most because DeepMind has been at the forefront of both achievements and honest assessments. When the team behind AlphaGo, AlphaFold, and Gemini says we’re overselling current capabilities, that’s expertise, and we had better listen. Overselling breeds unrealistic expectations, misplaced trust and risks with long-term implications. Worse, it prevents us from seeing what these systems are actually good for. Current large language models are powerful tools for pattern recognition, text generation, and certain types of reasoning. Yet they’re not human-equivalent intelligences, and pretending they are only leaves us standing in a fool’s paradise.

Neural Dispatch is your weekly guide to the rapidly evolving landscape of artificial intelligence. Each edition delivers curated insights on breakthrough technologies, practical applications, and strategic implications shaping our digital future.


