Amol Avasare’s Post

We’ve signed an agreement with SpaceX to use all of the compute capacity at their Colossus 1 data center in Memphis 🚀 This gives us access to more than 300 megawatts (>220,000 NVIDIA GPUs) within the month, and we're flowing this additional capacity straight back to our users.

From today, we’re doubling Claude Code’s 5-hour rate limits on Pro, Max, Team, and seat-based Enterprise plans. We’re also removing the peak-hour rate-limit cut we made on Pro and Max a few weeks ago, and bumping API rate limits considerably on Claude Opus.

Appreciate everyone sticking with us, and we’ll keep pushing on compute deals and efficiency to give more back to our users. LFG. https://lnkd.in/gbcPj6ZG

My first instinct: get in bed with dogs, wake up with fleas. I guess we'll see. I'm sure you're aware there are a lot of people who feel this ruins Anthropic's position as the "ethical" one. Look at all the yes-men in the comments here. Lol. If you ever need some honesty. 👋

Thanks Amol. Quick question: what's the current delta between peak and available capacity as of May '26? My maths says we'll hit a capacity bottleneck in three months, and we can't bring power online fast enough (GE's gas-turbine backlog runs out to 2030, for example), while data-centre construction runs on average 25% behind schedule (and that's being generous). Is there a more efficient way to leverage compute given the supply challenges?
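The "bottleneck in three months" claim is a back-of-envelope runway calculation: how long until peak demand, compounding monthly, overtakes available capacity. A minimal sketch of that arithmetic, with all numbers as hypothetical placeholders (the post gives only the ~300 MW capacity figure, not demand or growth rates):

```python
def months_to_bottleneck(available_mw: float, peak_demand_mw: float,
                         demand_growth_pct_per_month: float) -> int:
    """Months until projected peak demand meets or exceeds available capacity.

    Assumes simple compound growth in demand and static capacity;
    real planning would model capacity additions and seasonality.
    """
    months = 0
    demand = peak_demand_mw
    while demand < available_mw:
        demand *= 1 + demand_growth_pct_per_month / 100
        months += 1
    return months

# Illustrative only: 300 MW of peak demand growing 10%/month against 400 MW
# of capacity runs out of headroom in about four months.
print(months_to_bottleneck(available_mw=400, peak_demand_mw=300,
                           demand_growth_pct_per_month=10))  # → 4
```

With placeholder growth rates in that range, a three-to-four-month runway is plausible, which is presumably the shape of the commenter's concern.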


Love your posts. AI is the future. My partners and I are sitting on a 400-megawatt site, fully permitted, with gas, water, and fiber, and all I get is non-stop calls from lookie-loos with no money trying to get a sneak peek lol. Let me know if you know anyone who may be real and have a true respect and interest. The property can handle up to 750,000 sq ft for a data center site.


Rate limit cuts during peak hours were the kind of friction that makes power users start hedging. Open weight backups, multi-provider setups, fallback workflows. Removing them is a bigger trust signal than the additional capacity itself. The compute deal is the headline. The rate limit reversal is the retention play.
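The "fallback workflows" this comment describes can be sketched as a simple provider chain: try the primary, and on a rate-limit error fall through to the next option. Everything here is illustrative; the provider names and call interface are hypothetical, not any real vendor's API:

```python
class RateLimited(Exception):
    """Raised by a provider call when it is throttled."""

def complete(prompt, providers):
    """Try each (name, call) pair in order; return the first success."""
    last_err = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except RateLimited as err:
            last_err = err  # this provider throttled us; try the next one
    raise RuntimeError("all providers rate-limited") from last_err

# Toy providers: the primary always throttles, the fallback answers.
def throttled(prompt):
    raise RateLimited()

providers = [
    ("primary", throttled),
    ("fallback", lambda p: f"echo: {p}"),
]

print(complete("hello", providers))  # → ('fallback', 'echo: hello')
```

Once a user has wired up something like this, the primary provider has lost its lock-in, which is exactly why removing the rate-limit friction is the retention play.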


This is huge. Grateful to see compute being reinvested back into the developer and builder community. Expanding access at this scale unlocks a completely new level of experimentation, creativity, and innovation that many teams previously had to constrain because of limits and cost.


This is exciting — thank you! One question though: Memphis is still US infrastructure. Are there plans to expand compute capacity to other regions — India, Europe, elsewhere? Users do wonder about resilience: if something disrupts a major internet route, does part of the world simply lose access? Geographic distribution feels like the next frontier after raw capacity.

Dan Ivanus


5d

Doubling rate limits the same week you announce the capacity deal — that’s a rare “we said it, we did it” moment. Whether the ceiling stays ahead of demand as Claude Code usage scales, or just moves up and the constraint returns in a few months — that’s the thing I’ll be watching.


Considering xAI hasn't figured out how to properly utilize a massive GPU cluster, you're their science experiment. If you can't figure out how to use it, give it away -- for a fee -- to your favorite competitor.


The AI war isn't won with code, it's won with compute. This is a massive power play.


How are you thinking about the latency and data-residency implications for financial-services clients, such as insurers and banks, who need to keep certain workloads onshore in Australia?


