Sparsity in LLMs Workshop at ICLR 2025 (@sparsellms) 's Twitter Profile
Sparsity in LLMs Workshop at ICLR 2025

@sparsellms

Workshop on Sparsity in LLMs: Deep Dive into Mixture of Experts, Quantization, Hardware, and Inference @iclr_conf 2025.

ID: 1870258556156727297

Link: https://www.sparsellm.org · Joined: 21-12-2024 00:03:15

21 Tweets

165 Followers

15 Following

Dan Alistarh (@dalistarh) 's Twitter Profile Photo

Our QuEST paper was selected for an Oral Presentation at the ICLR 2025 Sparsity in LLMs Workshop! QuEST is the first algorithm with Pareto-optimal LLM training for 4-bit weights/activations, and can even train accurate 1-bit LLMs. Paper: arxiv.org/abs/2502.05003 Code: github.com/IST-DASLab/QuE…

Vimal Thilak🦉🐒 (@aggieinca) 's Twitter Profile Photo

Check out this post with information about research from Apple being presented at ICLR 2025 in 🇸🇬 this week. I will be at ICLR, presenting some of our work (led by Samira Abnar) at the SLLM (Sparsity in LLMs) Workshop. Happy to chat about JEPAs as well!

Yani Ioannou @ ICLR 2025 ✈️ (@yanii) 's Twitter Profile Photo

I will be travelling to Singapore 🇸🇬 this week for the ICLR 2025 Workshop on Sparsity in LLMs (SLLM), which I'm co-organizing! We have an exciting lineup of invited speakers and panelists, including Dan Alistarh, Gintare Karolina Dziugaite, Pavlo Molchanov, Vithu Thangarasa, Yuandong Tian and Amir Yazdan.

Sparsity in LLMs Workshop at ICLR 2025 (@sparsellms) 's Twitter Profile Photo

The Sparse LLM workshop will run on Sunday with two poster sessions, a mentoring session, 4 spotlight talks, 4 invited talks and a panel session. We'll host an amazing lineup of researchers: Dan Alistarh, Vithu Thangarasa, Yuandong Tian, Amir Yazdan, Gintare Karolina Dziugaite, Olivia Hsu, Pavlo Molchanov and Yang Yu.

Harshay Shah (@harshays_) 's Twitter Profile Photo

If you’re at #ICLR2025, go watch Vimal Thilak🦉🐒 give an oral presentation at the @SparseLLMs workshop on scaling laws for pretraining MoE LMs! Had a great time co-leading this project with Samira Abnar & Vimal Thilak🦉🐒 at Apple MLR last summer. When: Sun Apr 27, 9:30a Where: Hall 4-07

Ashwinee Panda (@pandaashwinee) 's Twitter Profile Photo

our workshop on sparsity in LLMs is starting soon in Hall 4.7! we’re starting strong with an invited talk from Dan Alistarh and an exciting oral on scaling laws for MoEs!

Shiwei Liu (@shiwei_liu66) 's Twitter Profile Photo

Our ICLR 2025 Workshop on Sparsity in LLMs (@sparsellms) kicks off with a talk by Dan Alistarh on near-lossless (~1% perf drop) LLM compression using quantization, across various benchmarks.

Diego Calanzone @ ICLR 🇸🇬 (@diegocalanzone) 's Twitter Profile Photo

Presenting shortly! 👉🏼 Mol-MoE: leveraging model merging and RLHF for test-time steering of molecular properties. 📆 Today, 11:15am to 12:15pm. 📍 Poster session #1, GEMBio Workshop & Sparsity in LLMs Workshop. #ICLR #ICLR2025

Ashwinee Panda (@pandaashwinee) 's Twitter Profile Photo

A PACKED hall for Yuandong Tian's talk at our Sparsity in LLMs workshop. Not surprising! We have another oral right after this, and then the first of two poster sessions before lunch! @iclr_conf

Ayush Noori (@ayushnoori) 's Twitter Profile Photo

We are presenting “Prefix and output length-aware scheduling for efficient online LLM inference” at the ICLR 2025 Sparsity in LLMs workshop (@sparsellms). 🪫 Challenge: LLM inference in data centers benefits from data parallelism. How can we exploit patterns in
