Everybody Loves Anthropic
GPT faces subscription cancellations as Claude becomes the new favorite: what did Anthropic get right?
OpenAI continues to experience significant personnel shifts, with co-founder John Schulman being the latest to join competitor Anthropic. Schulman's move, driven by a desire to focus on AI alignment research and hands-on technical work, follows similar departures of other key figures like Ilya Sutskever and Jan Leike. The exodus has extended to Product Lead Peter Deng, while President Greg Brockman remains on extended leave. With only three of eleven co-founders remaining, OpenAI's current direction has come under scrutiny.
In stark contrast to OpenAI’s “tumultuous” year, Anthropic has demonstrated remarkable growth and stability, not just in attracting former OpenAI talent but also in receiving widespread acclaim for its latest products.
What’s happening at Anthropic?
Founded in 2021 by siblings Dario and Daniela Amodei, former senior executives at OpenAI, Anthropic has quickly become a fierce competitor. The Amodei siblings left OpenAI due to disagreements over the company’s direction and established Anthropic with a focus on creating the best AI models possible. Since then, the two San Francisco-based companies have been in a heated race to develop the most advanced AI systems.
Anthropic’s latest model, Claude 3.5 Sonnet, has set new benchmarks across various domains, including graduate-level reasoning capabilities (GPQA tests), undergraduate-level knowledge retention (MMLU tests), and programming skills (HumanEval tests). This model has consistently outperformed competitors while maintaining the speed and cost-effectiveness of the mid-range Claude 3 Sonnet.
AI-enhanced productivity with Claude 3.5
One of the most praised aspects of Claude 3.5 Sonnet is its superior code generation capabilities. Many programmers have found it indispensable for writing code, with some, like the CEO of Ottogrid.ai, claiming that 50% of their code is now written by Claude, with expectations to increase this to 80% next year.
Anthropic has also introduced a groundbreaking new feature called Artifacts, which allows users to run and preview code directly within the AI conversation interface. The tool provides real-time code execution, interactive operations, and visualization previews, potentially revolutionizing application development workflows.
Unlike its competitors, Anthropic is not just focused on creating smarter AI; it’s also dedicated to making AI easier to use, more intuitive, and seamlessly integrated into existing workflows. This approach could give Anthropic a significant advantage in an industry often criticized for prioritizing benchmarks over real-world applicability.
As AI models continue to narrow their performance gaps, building an ecosystem around the model becomes key to retaining customers. In programming, Artifacts offers developers a new, more efficient workflow, potentially changing the game.
Anthropic’s steady growth vs. OpenAI’s overexposure
Anthropic’s measured release strategy contrasts sharply with OpenAI, which is frequently criticized for overhyping its products. The “palace intrigue” that led to Sam Altman’s dismissal by OpenAI’s board was only possible due to the company’s unique structure, where a non-profit board oversees a for-profit company without shareholder accountability.
Anthropic’s structure is more traditional, with a shareholder-accountable board. However, Anthropic is also a Public Benefit Corporation (PBC), meaning its board has legal room to prioritize the safe development of transformative AI over profit, making it harder for shareholders to sue.
Anthropic takes pride in its unique structure and believes that the events at OpenAI won’t happen at Anthropic. Yet, this structure is still an experimental design. Harvard Law Professor Noah Feldman, who advised on Anthropic’s early governance, acknowledges that even the best designs can sometimes fail, but he remains hopeful for Anthropic’s success.
Aside from its structure, another significant difference between OpenAI and Anthropic is the latter’s structured approach to ensuring AI systems adhere to specific ethical standards and behavior guidelines. This commitment to AI safety and controllability is central to Anthropic’s mission, as it strives to develop explainable, reviewable, and steerable AI that serves humanity. Anthropic’s “Constitutional AI” (CAI) method, which involves introducing a set of guidelines or a “constitution” early in the training process, is one of the ways it ensures its AI aligns with ethical standards.
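The critique-and-revision loop at the heart of Constitutional AI can be illustrated with a toy sketch. Everything below is a simplified assumption for illustration: the two-principle constitution, the stub `critique` and `revise` functions, and all names are hypothetical stand-ins for what, in the real method, is the model itself judging and rewriting its own outputs.

```python
# Illustrative sketch of Constitutional AI's critique-and-revision loop.
# The constitution, stub functions, and names here are assumptions for
# illustration, not Anthropic's actual implementation.

CONSTITUTION = [
    "Avoid responses that are harmful or unethical.",
    "Prefer responses that are honest and helpful.",
]

def critique(response: str, principle: str) -> str:
    """Stub critic: in CAI, the model itself evaluates its own draft
    against a single constitutional principle."""
    return f"Check {response!r} against: {principle}"

def revise(response: str, critique_text: str) -> str:
    """Stub reviser: in CAI, the model rewrites its draft to address
    the critique. Here we merely tag the text to show the flow."""
    return response + " [revised]"

def constitutional_pass(response: str) -> str:
    # One pass over the constitution: critique then revise the draft
    # for each principle; the revised outputs become training data.
    for principle in CONSTITUTION:
        c = critique(response, principle)
        response = revise(response, c)
    return response

print(constitutional_pass("draft answer"))
```

The point of the sketch is the shape of the process, not the stubs: guidelines are applied during training itself, so alignment is baked into the model rather than bolted on afterward.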
While this commitment comes with greater challenges, Anthropic remains steadfast in its unique structure and mission. The company has raised over $7 billion in the past year, primarily from tech giants like Amazon and Google. As these companies, along with Microsoft and Meta, race to dominate the AI space, Anthropic will need more funding to maintain its momentum. It must continue to launch superior products and demonstrate substantial profit potential to meet investor expectations and secure the enormous capital needed to build top-tier models.
However, if Anthropic can maintain its current steady growth trajectory, it might pave a new path where AI can develop safely, free from the pressures of the market, and benefit society as a whole.