Arif Ahmad (@arif_ahmad_py)'s Twitter Profile
Arif Ahmad

@arif_ahmad_py

All things AI, Computer Science and Circuits! Prev. @GoogleDeepMind and @Nvidia

ID: 1354158503733178368

Joined: 26-01-2021 20:05:46

5.5K Tweets

412 Followers

7.7K Following

Freda Shi (@fredahshi)'s Twitter Profile Photo

I received a review like this five years ago. It’s probably the right time now to share it with everyone who wrote or got random discouraging reviews from ICML/ACL.
Jeremy Howard (@jeremyphoward)'s Twitter Profile Photo

@levelsio Sadly this is often the way these things work - contributions from small independent researchers get lost in the noise of big tech companies and prestigious universities.

Joseph Imperial (@josephimperial_)'s Twitter Profile Photo

NeurIPS D&B track in a nutshell: (1) An LLM-generated benchmark dataset (2) used to test performance of LLMs (3) evaluated via LLM-as-a-judge
Qiyue Gao (@qiyuegao123)'s Twitter Profile Photo

🤔 Have OpenAI o3, Gemini 2.5, Claude 3.7 formed an internal world model to understand the physical world, or just align pixels with words? We introduce WM-ABench, the first systematic evaluation of VLMs as world models. Using a cognitively-inspired framework, we test 15 SOTA

Arif Ahmad (@arif_ahmad_py)'s Twitter Profile Photo

VLMs are often used for planning across different world-modelling scenarios. Check out this recent work by Qiyue Gao, which highlights some of their limitations and strengths.

Arif Ahmad (@arif_ahmad_py)'s Twitter Profile Photo

Classic Unity/Unreal = hand-built assets + rigid solvers; neural game engines like Mirage can run interactive sandboxes from a prompt. In 1993, GPUs solved graphics; physics and world assets stayed on the CPU. In 2025, GPUs run giant diffusion models that have learnt the geometry,

Eric Xing (@ericxing)'s Twitter Profile Photo

I have been long arguing that a world model is NOT about generating videos, but IS about simulating all possibilities of the world to serve as a sandbox for general-purpose reasoning via thought-experiments. This paper proposes an architecture toward that arxiv.org/abs/2507.05169

Zhiting Hu (@zhitinghu)'s Twitter Profile Photo

Some critical reviews and clarifications on different perspectives of world models. 🔥🌶️ Stay tuned for more on PAN — its position on the roadmap towards next-level intelligence, strong results, and open-sources❗️🧠

Mohammad Nomaan Qureshi (@qunomaan)'s Twitter Profile Photo

🚀 Excited to share that I’ve joined the amazing team at Skild AI! I’m already blown away by the energy, the quality of work, and the level of ambition here. The mission, vision, and results speak for themselves. This is just the beginning. More to come soon! 👇

Utkarsh Mall (@utkarshmall13)'s Twitter Profile Photo

📢Excited to share that I’ve joined @MBZUAI as an Assistant Professor of Computer Vision this fall! If you’re interested in CV4Science: building the next generation of foundation models & discovery tools for science, consider applying to MBZUAI. I’ll be recruiting PhD students!
Taylor W. Killian (@tw_killian)'s Twitter Profile Photo

#K2Think (🏔️💭) is now live. We're proud of this model, which punches well above its weight: developed primarily for mathematical reasoning, it has shown itself to be quite versatile. It's fully deployed as a reasoning system at k2think.ai, so you can test it for yourself!

miru (@miru_why)'s Twitter Profile Photo

Scalable GANs with Transformers arxiv.org/abs/2509.24935… hse1032.github.io/GAT authors train latent-space transformer GANs up to XL/2 scale, and report SotA 1-step class-conditional image generation results on ImageNet-256 after 40 epochs (*with REPA in discriminator)