
Muskpage X
@elonmusk
ID: 275260082
31-03-2011 23:11:28
236 Tweets
46 Followers
4.4K Following

230k GPUs, including 30k GB200s, are operational for training Grok @xAI in a single supercluster called Colossus 1 (inference is done by our cloud providers). At Colossus 2, the first batch of 550k GB200s & GB300s, also for training, starts going online in a few weeks. As Jensen …