Kolda Terry
@koldaterry69786
ID: 1864504851708760065
05-12-2024 02:59:38
4 Tweets
25 Followers
30 Following
230k GPUs, including 30k GB200s, are operational for training Grok @xAI in a single supercluster called Colossus 1 (inference is done by our cloud providers). At Colossus 2, the first batch of 550k GB200s & GB300s, also for training, starts going online in a few weeks. As Jensen …