Emad Barsoum (@emadbarsoumpi)'s Twitter Profile
Emad Barsoum

@emadbarsoumpi

Corporate Vice President, AI at AMD.

ID: 2868345836

Joined: 21-10-2014 01:11:34

642 Tweets

447 Followers

539 Following

Sharon Zhou (@realsharonzhou):


AMD MI300 GPUs have 192GB of VRAM *each*, or 1.5TB on an 8x GPU node 🤯

For comparison, Nvidia H100 GPUs have only 80GB each.

The absurd amount of memory is for your model weights, and for your long contexts (KV cache).

MI300 GPUs available on the AMD Developer Cloud today at
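The node-level math behind the tweet can be sketched in a few lines, along with a rough KV-cache estimate showing why long contexts eat VRAM. The model shape used here (80 layers, 64 heads of dim 128, fp16 cache at 128k context) is an illustrative assumption, not a spec from the tweet:

```python
# Back-of-the-envelope GPU memory math.
# The model/context numbers below are illustrative assumptions only.

def node_vram_gb(per_gpu_gb: int, num_gpus: int) -> int:
    """Total VRAM across a multi-GPU node, in GB."""
    return per_gpu_gb * num_gpus

def kv_cache_gib(layers, heads, head_dim, seq_len, batch, bytes_per_elem=2):
    """KV cache size in GiB: 2 (K and V) * layers * heads * head_dim * seq_len * batch."""
    return 2 * layers * heads * head_dim * seq_len * batch * bytes_per_elem / 1024**3

mi300_node = node_vram_gb(192, 8)  # 1536 GB, i.e. ~1.5 TB as in the tweet
h100_node = node_vram_gb(80, 8)    # 640 GB

# Hypothetical 70B-class model with full multi-head attention, fp16 cache:
cache = kv_cache_gib(layers=80, heads=64, head_dim=128, seq_len=128_000, batch=1)
print(mi300_node, h100_node, round(cache, 1))  # 1536 640 312.5
```

Even a single 128k-token sequence can demand hundreds of GiB of KV cache under these assumptions, which is why per-GPU memory matters for long-context serving.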
Sharon Zhou (@realsharonzhou):

Excited to be part of the keynote at PyTorch Conference this year! 🫢 Open-source is so important for the community 🙂

Emad Barsoum (@emadbarsoumpi):

Two papers accepted at EMNLP 2025: one on an LLM agent as a research assistant, and another on Tic-Tac-Toe-style games for benchmarking reasoning ability. Proud of the AMD team!!! AI at AMD arxiv.org/abs/2501.04227 arxiv.org/abs/2506.10209

Emad Barsoum (@emadbarsoumpi):

At AMD, we're building LLMs, vision-language, and text-to-video models that run on open software and are fully open. Watch here → youtu.be/HgmVn7_bcgQ Try these models: Instella-3B LLM: bit.ly/418KZkZ Nitro Diffusion (Text-to-Image): bit.ly/3UY7UMe

Sharon Zhou (@realsharonzhou):


I kinda love these new algos that can be used for *both* RL training and (training-free) test-time reasoning - in this case, reranking.

YES
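The tweet doesn't name the specific algorithm, but training-free test-time reranking is commonly sketched as best-of-N: sample several candidate answers, score each with a verifier or reward function, and keep the top one. The `score_fn` here is a placeholder assumption (answer length), standing in for whatever learned scorer the real method uses:

```python
# Minimal best-of-N reranking sketch. The scoring function is a
# placeholder assumption, not the algorithm referenced in the tweet.

def rerank(candidates, score_fn):
    """Return candidates sorted best-first by score_fn."""
    return sorted(candidates, key=score_fn, reverse=True)

# Toy usage: "score" is just answer length, a stand-in verifier.
answers = ["42", "forty-two", "the answer is 42"]
best = rerank(answers, score_fn=len)[0]
print(best)  # "the answer is 42"
```

The same scoring function could, in principle, double as a reward signal for RL training, which is the dual use the tweet is pointing at.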
Anush Elangovan (@anushelangovan):

The next AMD Dev challenge is online: multi-GPU kernels. The last dev challenge got us more kernel tokens for training kernel LLMs than were available on the internet. Open Science, Open Models, Open Kernels FTW.

Anush Elangovan (@anushelangovan):


scraping some PyTorch Nightly UT logs:

(Not done yet.. so WIP)

And yes, our commitment to the quality and performance of PyTorch on ROCm is unconditional; if there are gaps, fixed they will be.
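Scraping pass/fail counts from unit-test logs like the ones mentioned above can be done with a short regex pass. The log format assumed here (pytest-style `PASSED`/`FAILED` lines) is a guess, not the actual PyTorch nightly CI output:

```python
# Hedged sketch: count test outcomes in a pytest-style log dump.
# The line format is an assumption, not PyTorch's actual CI output.
import re
from collections import Counter

OUTCOME = re.compile(r"(PASSED|FAILED|SKIPPED)\s+(\S+)")

def summarize(log_text: str) -> Counter:
    """Count PASSED/FAILED/SKIPPED occurrences in a log."""
    return Counter(m.group(1) for m in OUTCOME.finditer(log_text))

sample = """\
PASSED test_ops.py::test_add_cuda
FAILED test_ops.py::test_matmul_rocm
PASSED test_ops.py::test_sub_cuda
"""
print(summarize(sample))  # Counter({'PASSED': 2, 'FAILED': 1})
```

From a summary like this, gaps between CUDA and ROCm runs can be spotted by diffing the failure sets between two logs.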