GPT-5 Will Have Ph.D.-level Intelligence, OpenAI Says "Wait Till 2025"
Smarter than a doctoral candidate, later than your ex's text back.
After months of wild speculation and AI enthusiasts foaming at the mouth, OpenAI's Chief Technology Officer, Mira Murati, has finally spilled the beans on the much-hyped GPT-5 (or whatever its final name will be).
From Toddler to PhD: The Evolution of GPT
In a recent interview with Dartmouth Engineering, Murati laid out the AI evolution timeline:
GPT-3: The toddler of AI. Adorable, prone to nonsensical babbling, and occasionally surprising you with moments of accidental brilliance.
GPT-4: The smart high schooler. Can solve complex problems, write decent essays, and probably roll its eyes at you when you ask it to explain TikTok trends.
GPT-5: The PhD candidate. Specializing in making humans feel increasingly obsolete, one task at a time.
But before you start planning your "Welcome to Skynet" party, pump those brakes. Murati clarified that this "PhD-level" intelligence is task-specific. So while GPT-5 might be able to write a dissertation on quantum mechanics, it'll still struggle to fold laundry or understand why humans find cat videos so endlessly fascinating.
The Long Wait for AI Overlords
Here's where things get interesting (or disappointing, depending on your stance on the robot apocalypse). Despite earlier rumors of a 2023 or 2024 release, Murati dropped the bombshell that GPT-5 is still a solid year and a half away. That's right, folks – we're looking at late 2025 or early 2026 before this digital brainiac graces us with its presence.
So, what's the holdup? The answer lies in GPT-5's unprecedented scale and complexity. AI advisor Alan Thompson claims that GPT-5, codenamed "Gobi" or "Arrakis," boasts a staggering 52 trillion parameters, far exceeding GPT-4's estimated 1.5 to 2 trillion.
Meanwhile, the race toward Artificial General Intelligence (AGI) continues to heat up. Anthropic's latest offering, Claude 3.5 Sonnet, is turning heads with its performance: it scored an impressive 67.2% on graduate-level reasoning tests (GPQA), outperforming human PhDs, who averaged 65%. It also outperforms mainstream models like GPT-4 on language understanding (MMLU), mathematics, and multimodal visual benchmarks.
Thompson estimates we're about 75% of the way to achieving AGI, with the final quarter still to go. However, it's worth noting that some scientists, including Fei-Fei Li, argue that large language models fundamentally lack the perceptual abilities crucial for true general intelligence. These abilities, rooted in physiological states that allow us to experience hunger, taste food, and observe the world, may be beyond the reach of purely mathematical models, regardless of their size.
This complexity and the ongoing debates in the field might explain why OpenAI is taking its time with GPT-5. After all, when you're potentially reshaping the landscape of artificial intelligence, it pays to be thorough.
Safety First, Skynet Second
Of course, with great power comes great responsibility, and Murati didn't shy away from addressing the elephant in the room: AI safety. She argues that guiding a more intelligent AI system is actually easier – just tell it what not to do, like "don't turn humans into batteries" or "avoid world domination schemes."
Murati also emphasized that AI safety isn't just OpenAI's problem – it's a group project for all of society. So governments, content creators, and that guy who keeps trying to teach his Roomba to dance all need to chip in.
However, OpenAI is grappling with growing skepticism and controversy over its approach to AI safety. Some former employees and critics argue that the company prioritizes flashy product releases over robust safety protocols. For instance, Ilya Sutskever and Jan Leike, who led OpenAI’s superalignment team, left the company because they were unhappy with its safety culture. Moreover, OpenAI has been accused of using strict confidentiality agreements to silence internal criticism.
These issues underscore the significant challenge OpenAI faces in balancing technological advancements with safety and ethical considerations. Despite these controversies, Murati and OpenAI’s leadership insist that they are committed to continually enhancing safety measures to ensure the responsible development of AI technology.
The Wait Continues
While we're all twiddling our thumbs waiting for GPT-5, let's not forget that the current AI landscape is already pretty mind-blowing. Claude 3.5 Sonnet is out-reasoning actual PhDs, after all.
So, whether you're excited about our future AI overlords or stockpiling canned goods in your bunker, one thing's for sure – the next few years in AI development are going to be one wild ride. Buckle up, keep your hands and feet inside the vehicle at all times, and maybe start being extra nice to your digital assistants. You know, just in case.