Anshul Nasery (@anshulnasery) 's Twitter Profile
Anshul Nasery

@anshulnasery

PhD student at @uwcse | Previously Pre-Doctoral Researcher at @GoogleAI, undergrad at @iitbombay

ID: 1656228614898323462

Link: https://anshuln.github.io/ | Joined: 10-05-2023 09:24:16

26 Tweets

392 Followers

1.1K Following

Anshul Nasery (@anshulnasery) 's Twitter Profile Photo

Thanks AK for sharing our work. We propose Peekaboo - a training-free method for incorporating spatio-temporal control in video diffusion models. Work done in collaboration with Yash Jain, Harkirat Behl, and Vibhav Vineet. Longer thread + code coming soon!
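
To make the "training-free spatio-temporal control" idea concrete, here is a rough sketch of region-masked cross-attention, one way such control can be injected at inference time. The function name, shapes, and masking scheme below are my own illustration, not Peekaboo's released implementation.

```python
import torch
import torch.nn.functional as F

def region_masked_cross_attention(q, k, v, pixel_in_box, token_is_object, scale):
    """Toy sketch: steer an object into a requested region by masking attention.

    q:               (B, Np, d) latent-pixel queries from the diffusion U-Net
    k, v:            (B, Nt, d) text-token keys / values
    pixel_in_box:    (B, Np) bool, True inside the user-specified box
    token_is_object: (B, Nt) bool, True for the object's prompt tokens
    """
    logits = (q @ k.transpose(-2, -1)) * scale                      # (B, Np, Nt)
    # Forbid pixels OUTSIDE the box from attending to the object's tokens,
    # so the object's appearance is concentrated inside the requested region.
    forbid = (~pixel_in_box)[:, :, None] & token_is_object[:, None, :]
    logits = logits.masked_fill(forbid, float("-inf"))
    return F.softmax(logits, dim=-1) @ v
```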

Shreyas Havaldar (@_toolazyto_) 's Twitter Profile Photo

Come check out our spotlight work at Room 252 - Room 254 (Level 2) and let's talk more about fairness and distribution shifts :)
Amazing conversations at the morning poster session AFME 2024.
See you at the talk at 13:00 and the afternoon poster from 16:50 to 17:30! #NeurIPS2023

Rachit Bansal (@rach_it_) 's Twitter Profile Photo

Extending an LLM for new knowledge sources is tedious—fine-tuning is expensive/causes forgetting, LoRA is restrictive. Excited to share our work where we show that an LLM can be efficiently *composed* with specialized (L)LMs to enable new tasks! arxiv.org/abs/2401.02412 🧵(1/8)

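As a loose illustration of what composing an anchor LLM with a specialized augmenting LM can look like, here is a minimal sketch of a learned cross-attention bridge between the two frozen models' hidden states; the class name, dimensions, and wiring are my assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class CompositionBridge(nn.Module):
    """Toy sketch: let a frozen anchor LM's stream attend to a frozen
    augmenting LM's hidden states; only this bridge is trained."""

    def __init__(self, d_anchor, d_aug, n_heads=8):
        super().__init__()
        self.proj = nn.Linear(d_aug, d_anchor)          # map aug width -> anchor width
        self.xattn = nn.MultiheadAttention(d_anchor, n_heads, batch_first=True)

    def forward(self, h_anchor, h_aug):
        # h_anchor: (B, T, d_anchor), h_aug: (B, S, d_aug)
        h_aug = self.proj(h_aug)
        bridged, _ = self.xattn(query=h_anchor, key=h_aug, value=h_aug)
        return h_anchor + bridged                       # residual enrichment of the anchor stream
```
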
Anshul Nasery (@anshulnasery) 's Twitter Profile Photo

While large-scale models are pushing the boundaries of video generation, spatio-temporal control of their outputs is still a problem. Peekaboo (now accepted at #CVPR2024) provides a simple plug-and-play solution to tackle this! 🔗- jinga-lala.github.io/projects/Peeka…

Kabir (@kabirahuja004) 's Twitter Profile Photo

📢 New Paper! Ever wondered why transformers are able to capture hierarchical structure of human language without incorporating an explicit 🌲 structure in their architecture? In this work we delve deep into understanding hierarchical generalization in transformers. (1/n)

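For a concrete feel of the phenomenon, here is a toy contrast (my own illustration, not taken from the paper) between a surface-order rule and a structure-sensitive rule for English question formation; training data on which both rules agree leaves it ambiguous which one a learner should acquire.

```python
AUX = {"is", "can", "has"}
sentence = "the dog that is sleeping can bark".split()

def linear_rule(words):
    # Surface-order rule: front the FIRST auxiliary in the string.
    i = next(k for k, w in enumerate(words) if w in AUX)
    return [words[i]] + words[:i] + words[i + 1:]

def hierarchical_rule(words):
    # Structure-sensitive rule: front the MAIN-clause auxiliary.
    # (Hard-coded for this example: the auxiliary outside the relative
    # clause "that is sleeping" is the later one.)
    i = max(k for k, w in enumerate(words) if w in AUX)
    return [words[i]] + words[:i] + words[i + 1:]

print(" ".join(linear_rule(sentence)))        # is the dog that sleeping can bark  (ungrammatical)
print(" ".join(hierarchical_rule(sentence)))  # can the dog that is sleeping bark  (correct)
```
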
Rachit Bansal (@rach_it_) 's Twitter Profile Photo

Looking forward to presenting this work at #ICLR2024 next week in Vienna! 🇦🇹 Please stop by our poster on 8th (10:45am) if you are interested in efficient, modular, decentralized development of large models!

lovish (@louvishh) 's Twitter Profile Photo

🚨 New Paper 🚨 Evaluations can have a lot of variance, throwing off model comparisons especially during pre-training. In our latest work, “Quantifying Variance in Evaluation Benchmarks”, we explore this phenomenon in depth. A thread [1/n]

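As a minimal illustration of why this matters, the sketch below summarizes repeated runs of one benchmark with a standard error and a normal-approximation 95% interval; the scores are invented for illustration and the specific statistics are my choice, not the paper's methodology.

```python
import math
import statistics

# Hypothetical accuracies from re-running the same benchmark with
# different seeds / prompt orderings (numbers are made up).
scores = [0.612, 0.598, 0.631, 0.605, 0.619, 0.587, 0.624, 0.601]

mean = statistics.mean(scores)
sd = statistics.stdev(scores)              # sample standard deviation
sem = sd / math.sqrt(len(scores))          # standard error of the mean
ci95 = 1.96 * sem                          # normal-approximation 95% half-width

print(f"accuracy = {mean:.3f} ± {ci95:.3f} (95% CI), sd = {sd:.3f}")
# Two checkpoints whose scores differ by less than this interval may not
# be meaningfully different; the gap could be evaluation noise.
```
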
Scott Geng (@scottgeng00) 's Twitter Profile Photo

Will training on AI-generated synthetic data lead to the next frontier of vision models?🤔 Our new paper suggests NO—for now. Synthetic data doesn't magically enable generalization beyond the generator's original training set. 📜: arxiv.org/abs/2406.05184 Details below🧵(1/n)

Sriyash Poddar (@sriyash__) 's Twitter Profile Photo

How can we align foundation models with populations of diverse users with different preferences? We are excited to share our work on Personalizing RLHF using Variational Preference Learning! 🧵 📜: arxiv.org/abs/2408.10075 🌎: weirdlabuw.github.io/vpl/
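
As a loose sketch of the idea behind a user-conditioned reward model (my simplification, with invented names and shapes, not the released code): encode a user's few labeled comparisons into a latent, then score responses conditioned on that latent.

```python
import torch
import torch.nn as nn

class LatentConditionedReward(nn.Module):
    """Toy sketch: a variational encoder maps a user's labeled comparisons
    to a latent z; the reward head scores (features, z) pairs."""

    def __init__(self, d_feat, d_z=16):
        super().__init__()
        # Each comparison is encoded as [feat_chosen, feat_rejected, label].
        self.encoder = nn.Sequential(nn.Linear(2 * d_feat + 1, 64), nn.ReLU(),
                                     nn.Linear(64, 2 * d_z))     # -> (mu, log_var)
        self.reward = nn.Sequential(nn.Linear(d_feat + d_z, 64), nn.ReLU(),
                                    nn.Linear(64, 1))

    def infer_user(self, comparisons):       # comparisons: (K, 2*d_feat + 1)
        mu, log_var = self.encoder(comparisons).mean(dim=0).chunk(2)
        return mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterized sample

    def forward(self, feats, z):             # feats: (N, d_feat), z: (d_z,)
        z = z.expand(feats.shape[0], -1)
        return self.reward(torch.cat([feats, z], dim=-1)).squeeze(-1)
```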