Saba (@saba_a96)'s Twitter Profile
Saba

@saba_a96

MSc @Mila_Quebec and @UMontrealDIRO

ID: 1323917770376073217

Joined: 04-11-2020 09:19:40

114 Tweets

101 Followers

135 Following

P Shravan Nayak (@pshravannayak)'s Twitter Profile Photo

Join us at CVPR 2025 for our workshop VLMs-4-All: Vision-Language Models for All! 🌍✨ We're tackling the challenge of building geo-diverse, culturally aware VLMs. If you're passionate about inclusivity in AI, we'd love your participation! #CVPR2025 #VLMs4All

Juan A. Rodríguez 💫 (@joanrod_ai)'s Twitter Profile Photo

Thanks for sharing AK!! Check out our website (starvector.github.io) and code (github.com/joanrod/star-v) for more details! 💫 Release thread: x.com/joanrod_ai/sta


P Shravan Nayak (@pshravannayak)'s Twitter Profile Photo

🚀 Super excited to announce UI-Vision: the largest and most diverse desktop GUI benchmark for evaluating agents in real-world desktop GUIs in offline settings. 📄 Paper: arxiv.org/abs/2503.15661 🌐 Website: uivision.github.io 🧵 Key takeaways 👇

Amirhossein Kazemnejad (@a_kazemnejad)'s Twitter Profile Photo

Introducing nanoAhaMoment: Karpathy-style, single-file RL for LLMs library (<700 lines)

- super hackable
- no TRL / Verl, no abstraction 💆‍♂️
- single GPU, full-param tuning, 3B LLM
- efficient (R1-zero countdown < 10h)

comes with a from-scratch, fully spelled-out YT video [1/n]
Milad Aghajohari (@maghajohari)'s Twitter Profile Photo

I wish this existed when I started working on RL for LLMs, so I created it. Other codebases are industry-first: complex, Ray-based, unhackable, multi-node oriented... This is the best RL-for-LLMs codebase for academia, and it comes with a 5h implementation video starting from an empty notebook.

Juan A. Rodríguez 💫 (@joanrod_ai)'s Twitter Profile Photo

Excited to be at ICLR 2025 in Singapore this week! 🇸🇬 Want to connect? Ping me! 📝 Main Conference Papers 📄 BigDocs 📅 Thu, Apr 24 | ⏰ 10:00–12:30 SGT 📍 Hall 3 + 2B | Poster #280 Open dataset for training multimodal models on document + code tasks. 🔗

Mohammad Pezeshki (@mpezeshki91)'s Twitter Profile Photo

I'm presenting our recent work on "Pitfalls of Memorization" today at ICLR, poster #304, at 3pm. Come say hi!
iclr.cc/virtual/2025/p

Aishwarya Agrawal (@aagrawalaa)'s Twitter Profile Photo

My lab's contributions at #CVPR2025:

-- Organizing the VLMs4All - CVPR 2025 Workshop (with 2 challenges)
sites.google.com/corp/view/vlms

-- 2 main conference papers (1 highlight, 1 poster)
cvpr.thecvf.com/virtual/2025/p (highlight)
cvpr.thecvf.com/virtual/2025/p (poster)

-- 4 workshop papers (2 spotlight talks, 2
Mehar Bhatia (@bhatia_mehar)'s Twitter Profile Photo

Excited to be at my first #CVPR2025 this week and organising a workshop for the first time! Come join us at the VLMs4All - CVPR 2025 Workshop on June 12 (Thursday) in Room 104E, Music City Center, Nashville. 📅 Workshop schedule: sites.google.com/view/vlms4all/


Mila - Institut québécois d'IA (@mila_quebec)'s Twitter Profile Photo

Mila researchers are heading to #CVPR2025! Join Aishwarya Agrawal and members of her lab to learn more about learning language-compatible visual representations, building culturally aware vision-language models, and more at poster sessions, workshops, and talks in Nashville.