Anjali Gupta (@anjaliwgupta)'s Twitter Profile
Anjali Gupta

@anjaliwgupta

PhD @NYU_Courant, B.S. @Yale

ID: 1142867284203098114

Website: http://www.anjaliwgupta.com | Joined: 23-06-2019 18:49:28

17 Tweets

73 Followers

129 Following

Pavel Izmailov (@pavel_izmailov)'s Twitter Profile Photo

I am recruiting Ph.D. students for my new lab at New York University! Please apply if you want to work with me on reasoning, reinforcement learning, understanding generalization, and AI for science. Details on my website: izmailovpavel.github.io. Please spread the word!

Xichen Pan (@xichen_pan)'s Twitter Profile Photo

We find training unified multimodal understanding and generation models is so easy that you do not need to tune MLLMs at all. An MLLM's knowledge/reasoning/in-context learning can be transferred from multimodal understanding (text output) to generation (pixel output) even when it is FROZEN!

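A minimal, hypothetical PyTorch sketch of the setup the tweet describes: a frozen stand-in multimodal LM supplies hidden states, and only a small pixel decoder on top of it is trained. The FrozenMLLM and PixelDecoder modules, shapes, and loss below are illustrative assumptions, not the authors' actual architecture or code.

```python
# Sketch (not the paper's code): keep the MLLM frozen, train only a pixel head.
import torch
import torch.nn as nn

class FrozenMLLM(nn.Module):
    """Stand-in for a pretrained multimodal LM (hypothetical placeholder)."""
    def __init__(self, hidden=512):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens):            # tokens: (B, T, hidden)
        return self.backbone(tokens)      # hidden states reused for generation

class PixelDecoder(nn.Module):
    """Trainable head mapping frozen LM hidden states to image pixels."""
    def __init__(self, hidden=512, out_pixels=3 * 32 * 32):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(),
                                  nn.Linear(hidden, out_pixels))

    def forward(self, h):
        return self.proj(h.mean(dim=1))   # pool over tokens -> flat image

mllm, decoder = FrozenMLLM(), PixelDecoder()
mllm.requires_grad_(False).eval()                        # the MLLM stays FROZEN
opt = torch.optim.AdamW(decoder.parameters(), lr=1e-4)   # only the decoder is tuned

tokens = torch.randn(4, 16, 512)      # dummy multimodal embeddings
target = torch.rand(4, 3 * 32 * 32)   # dummy target pixels
with torch.no_grad():
    h = mllm(tokens)                  # frozen "understanding" features
loss = nn.functional.mse_loss(decoder(h), target)
loss.backward()
opt.step()
```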
David Bau (@davidbau)'s Twitter Profile Photo

Dear MAGA friends, I have been worrying about STEM in the US a lot, because right now the Senate is writing new laws that cut 75% of the STEM budget in the US. Sorry for the long post, but the issue is really important, and I want to share what I know about it. The entire…

Anjali Gupta (@anjaliwgupta)'s Twitter Profile Photo

Excited to present “Thinking in Space: How MLLMs See, Remember, and Recall Spaces” at #CVPR2025 as an Oral paper alongside my amazing co-authors Shusheng Yang and Jihan Yang (supervised by Saining Xie)! We’ll be speaking at 10:45am on Saturday, June 14, in the Davidson Ballroom!

Willis (Nanye) Ma (@ma_nanye)'s Twitter Profile Photo

Come and check out our paper, Inference-Time Scaling for Diffusion Models Beyond Denoising Steps, at Poster Session 1 at #CVPR2025, slot 226, happening right now!

Anjali Gupta (@anjaliwgupta)'s Twitter Profile Photo

Come check out our poster for "Thinking in Space: How MLLMs See, Remember, and Recall Spaces" at 10:30am today, ExHall D Poster #287!

Lerrel Pinto (@lerrelpinto)'s Twitter Profile Photo

We have developed a new tactile sensor, called e-Flesh, with a simple working principle: measure deformations in 3D printable microstructures. Now all you need to make tactile sensors is a 3D printer, magnets, and magnetometers! 🧵

Andrej Karpathy (@karpathy)'s Twitter Profile Photo

+1 for "context engineering" over "prompt engineering". People associate prompts with short task descriptions you'd give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window…
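
One illustrative, entirely hypothetical way to read "filling the context window": assemble instructions, retrieved snippets, and recent history under a token budget. The helper names and the four-characters-per-token heuristic below are assumptions for illustration, not anything from the tweet.

```python
# Sketch of "context engineering": packing a context window under a token budget.

def rough_token_count(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic, not a real tokenizer

def build_context(system: str, retrieved: list[str], history: list[str],
                  question: str, budget: int = 4000) -> str:
    parts = [system]                      # instructions are always included
    budget -= rough_token_count(system) + rough_token_count(question)
    # Prefer the most recent history, then retrieved documents, until the budget is spent.
    for chunk in list(reversed(history)) + retrieved:
        cost = rough_token_count(chunk)
        if cost > budget:
            continue
        parts.append(chunk)
        budget -= cost
    parts.append(question)                # the actual task goes last
    return "\n\n".join(parts)

context = build_context(
    system="You are a helpful assistant.",
    retrieved=["[doc] Notes on how batch size affects optimizer noise..."],
    history=["User: hi", "Assistant: hello"],
    question="Summarize the retrieved document.",
)
print(rough_token_count(context))
```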

Martin Marek (@mrtnm)'s Twitter Profile Photo

We trained thousands of language models to study the effect of batch size. We found several surprising and practical results!