In the bustling streets of San Francisco, a new kind of advertisement has been catching the eye of tech-savvy passersby:
"Rent H100s by the week. Or the day. Or the hour. 3.2TB/s InfiniBand, k8s / Slurm, that kind of thing."
This fresh take on "compute power rental" has sent ripples through the AI community, putting Nvidia's coveted H100 GPUs suddenly within reach of the masses.

What's going on here?
The San Francisco Compute Company, a startup that emerged from obscurity just six months ago, has recently secured $12 million in seed funding led by Alt Capital, helmed by Jack Altman, brother of OpenAI's Sam Altman. This investment values the company at approximately $70 million, positioning it as a competitive player in the AI infrastructure landscape.
SF Compute's mission is clear: democratize access to high-performance computing resources for AI development. By offering flexible, short-term rentals of top-tier GPUs, the company is breaking down barriers that have long favored only the most well-funded and well-connected giants in the AI space.
The startup provides two key services:
Short-term compute resource rentals: Unlike traditional providers requiring long-term yearly contracts, SF Compute offers GPU rentals by the week, day, or even hour. This flexibility allows users to scale their computing power dynamically based on actual needs.
Compute capacity trading platform: SF Compute is also developing a marketplace where users can buy and sell computing capacity on demand, further reducing costs and access barriers (a toy sketch of how such a market might match orders follows below).
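SF Compute has not published details of how this marketplace will work, but one way to picture it is as an order book for GPU-hours, where holders of surplus capacity post asks and buyers are filled from the cheapest sellers first. The Python sketch below is purely illustrative: the Ask class, the fill_buy_order function, and every price and seller name are hypothetical, not part of any real SF Compute API.

```python
# Purely illustrative sketch (not SF Compute's actual system): a toy order book
# that fills a buyer's request for GPU-hours from the cheapest sellers first.
from dataclasses import dataclass, field
import heapq


@dataclass(order=True)
class Ask:
    price_per_gpu_hour: float               # asks are ordered cheapest-first
    gpu_hours: int = field(compare=False)
    seller: str = field(compare=False)


def fill_buy_order(asks: list[Ask], gpu_hours_wanted: int) -> list[tuple[str, int, float]]:
    """Greedily fill a buy order from the cheapest asks; returns (seller, hours, price) fills."""
    heapq.heapify(asks)
    fills = []
    while gpu_hours_wanted > 0 and asks:
        best = heapq.heappop(asks)
        take = min(gpu_hours_wanted, best.gpu_hours)
        fills.append((best.seller, take, best.price_per_gpu_hour))
        gpu_hours_wanted -= take
        if best.gpu_hours > take:
            # A partially consumed ask goes back on the book with the remainder.
            heapq.heappush(asks, Ask(best.price_per_gpu_hour, best.gpu_hours - take, best.seller))
    return fills


if __name__ == "__main__":
    # Hypothetical sellers with spare reserved capacity.
    book = [Ask(2.40, 512, "vc_fund_a"), Ask(1.95, 128, "startup_b"), Ask(2.10, 256, "lab_c")]
    # A researcher needs 300 GPU-hours for a burst job; fills come from the cheapest sellers first.
    print(fill_buy_order(book, 300))
```

A real exchange would of course also have to handle hardware heterogeneity, scheduling windows, interconnect topology, and settlement, which is where most of the engineering complexity would likely live.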
Why it matters
In the AI world, computing power can make or break a startup. Traditionally, only the big fish could secure the necessary resources. SF Compute's model flips the script, offering an "Airbnb-like" approach to AI computing.
This democratization of access is particularly crucial for early-stage companies, academic researchers, and short-term projects requiring burst computing capacity without long-term financial commitments.
SF Compute's founders, Alex Gajewski and Evan Conrad, know this pain all too well. When trying to launch their AI music startup, they hit a wall: access to high-end GPU clusters started at a million dollars or more, with only year-long contracts on offer.
Seeking a solution, they proposed a collective GPU rental system to other startups. The response was overwhelming – 170 AI companies jumped on board within weeks.
Recognizing the huge market need, Alex and Evan pivoted. They abandoned their music startup to become what they and others desperately needed: a flexible, affordable GPU cloud provider for AI companies of all sizes. SF Compute thus emerged, turning a personal challenge into an industry-wide solution.
Future prospects
While SF Compute faces competition from established players like Lambda Labs and CoreWeave, its focus on ultra-flexible rental terms and competitive pricing for smaller entities sets it apart. The upcoming launch of its compute resource trading platform could further cement its position in the market.
Jack Altman, the lead investor, sees a world of possibilities. He envisions VCs and others with long-term GPU deals using the platform to buy and sell access, with potential customers coming from all corners of the tech world.
With plans to double its engineering team to 30 and enhance its service capabilities, SF Compute is poised to play a significant role in shaping the future of AI infrastructure access.
In a field where computing power can determine success, SF Compute's model could be the key to unlocking innovation for a new generation of AI startups and researchers.