Hsuan Su (@jacksukk)'s Twitter Profile
Hsuan Su

@jacksukk

Ph.D. student @ntu_spml
Research on Conversational AI

ID: 868175810

Website: https://hsuansu.me/ · Joined: 08-10-2012 15:26:44

135 Tweets

188 Followers

826 Following

Fan-Yun Sun (@sunfanyun)'s Twitter Profile Photo

Training RL/robot policies requires extensive experience in the target environment, which is often difficult to obtain. How can we “distill” embodied policies from foundational models? Introducing FactorSim! #NeurIPS2024 We show that by generating prompt-aligned simulations and…
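
The tweet is cut off, but the core idea is to let a foundation model generate the training environment itself, then train the policy there instead of in the expensive target environment. A rough, self-contained sketch of that loop — the SimSpec, the toy 1-D environment, and the random-search "trainer" below are all illustrative stand-ins, not FactorSim's actual pipeline:

```python
import random
from dataclasses import dataclass

# Hypothetical "generated" simulation spec: in FactorSim the environment
# would be produced by a foundation model from a text prompt; here we
# hard-code one factored spec to keep the sketch self-contained.
@dataclass
class SimSpec:
    goal: float      # target position the agent should reach
    friction: float  # per-step velocity decay

class GeneratedSim:
    """A 1-D toy environment instantiated from a spec."""
    def __init__(self, spec: SimSpec):
        self.spec = spec
        self.pos, self.vel = 0.0, 0.0

    def step(self, action: float) -> float:
        self.vel = self.vel * (1 - self.spec.friction) + action
        self.pos += self.vel
        return -abs(self.pos - self.spec.goal)  # reward: closeness to goal

def rollout(spec: SimSpec, gain: float, horizon: int = 20) -> float:
    sim = GeneratedSim(spec)
    return sum(sim.step(gain * (spec.goal - sim.pos)) for _ in range(horizon))

# "Distill" a policy by searching over policy parameters inside the
# generated simulation rather than the real target environment.
spec = SimSpec(goal=5.0, friction=0.1)
best = max((random.uniform(0, 1) for _ in range(200)),
           key=lambda g: rollout(spec, g))
print(f"best gain found in simulation: {best:.3f}")
```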

yuwen lu (@yuwen_lu_)'s Twitter Profile Photo

hey! i’m at #cscw2024 presenting our paper on tuesday at 11am! 

we built a browser extension helping users avoid dark patterns on their interfaces! it’s cool! you should check it out!! it also got a best paper award!

picture is today @ la paz waterfall garden, 🇨🇷 is amazing
Justin Cho 조현동 (@hjch0)'s Twitter Profile Photo

✨EMNLP Paper ✨
Wouldn't it be great if we can also listen to LLM responses when we can't look at a screen? 
Problem: LLMs generate responses without considering the unique constraints of speech 😢

🎉 Let's fix that with Speechworthy Instruction-tuned Language Models
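
One way to make the speech constraint concrete: screen-oriented artifacts such as URLs, markdown, and very long answers read badly over TTS. The scoring heuristic below is a hand-rolled illustration of those constraints, not the paper's method (which builds instruction-tuning/preference data for speech-suitable responses):

```python
import re

def speechworthiness(text: str) -> float:
    """Toy heuristic score for how suitable a response is to be read aloud.
    The penalties here are illustrative assumptions, not the paper's reward."""
    score = 1.0
    if re.search(r"https?://", text):                # URLs are unreadable aloud
        score -= 0.4
    score -= 0.2 * len(re.findall(r"[*#`|]", text))  # markdown/table artifacts
    if len(text.split()) > 80:                       # listeners cannot skim
        score -= 0.3
    return max(score, 0.0)

print(speechworthiness("Sure! The capital of France is Paris."))
print(speechworthiness("See https://example.com | col1 | col2 |"))
```
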
Justin Cho 조현동 (@hjch0)'s Twitter Profile Photo

I'm presenting this work during today's poster session from 10:30AM-12PM at EMNLP! Come by and say hi 👋 x.com/HJCH0/status/1…

Ninareh Mehrabi (@ninarehmehrabi)'s Twitter Profile Photo

As many of you might already know, I had this dream of becoming a faculty member. Unfortunately, that did not happen, so instead I decided to create my own lab of sorts. I am trying to start working with some students (PhD, masters, undergrad are all welcome) and mentor them.

Hung-yi Lee (李宏毅) (@hungyilee2)'s Twitter Profile Photo

I will deliver the final talk in the SPS SLTC/AASP TC Webinar Series in 2024, sharing insights on fine-tuning models such as LLaMA and Whisper. 📅 Dec 17, 10 AM ET (11 PM Taiwan). Register: signalprocessingsociety.org/blog/sps-sltca…
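
For readers who want a concrete starting point before the talk: parameter-efficient fine-tuning with LoRA is one common recipe for adapting models like LLaMA (it is an assumption that the talk covers this particular recipe). A minimal sketch using the Hugging Face peft library; the checkpoint id is a placeholder, substitute any causal LM you have access to:

```python
# Requires: pip install torch transformers peft
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Wrap the base model with low-rank adapters; only these small matrices train.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the full model
```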

Emily Dinan (@em_dinan)'s Twitter Profile Photo

check out our new work on merging expert models, Branch-Train-Stitch 🪡🪡🪡 had so much fun working on this with the incredible Qizhen (Irene) Zhang and team!!! 😊
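
Branch-Train-Stitch's actual method stitches separately trained experts together; the details are in the paper. For intuition only, here is the simplest merging baseline in this space, uniform weight averaging of expert checkpoints with identical architectures:

```python
import torch

def average_merge(state_dicts):
    """Uniform weight averaging of expert checkpoints — the simplest
    merging baseline, NOT Branch-Train-Stitch itself."""
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return merged

# Toy usage with two tiny "experts" of identical architecture.
e1 = torch.nn.Linear(4, 4).state_dict()
e2 = torch.nn.Linear(4, 4).state_dict()
model = torch.nn.Linear(4, 4)
model.load_state_dict(average_merge([e1, e2]))
```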

Yung-Sung Chuang (@yungsungchuang)'s Twitter Profile Photo

(1/5)🚨LLMs can now self-improve to generate better citations✅

📝We design automatic rewards to assess citation quality
🤖Enable BoN/SimPO w/o external supervision
📈Perform close to “Claude Citations” API w/ only 8B model

📄arxiv.org/abs/2502.09604
🧑‍💻github.com/voidism/SelfCi…
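
A rough sketch of the Best-of-N (BoN) piece: sample several candidate answers, score each with an automatic citation reward, keep the best. SelfCite's actual reward design is in the paper; the substring check below, and the hard-coded context and candidates, are only toy stand-ins:

```python
# Toy Best-of-N selection with an automatic citation reward.
context = {
    "[1]": "The Eiffel Tower was completed in 1889.",
    "[2]": "The Louvre is the world's largest art museum.",
}

candidates = [  # pretend these came from sampling the LLM N times
    "The Eiffel Tower opened in 1889. [2]",
    "The Eiffel Tower was completed in 1889. [1]",
]

def citation_reward(answer: str) -> float:
    cited = [tag for tag in context if tag in answer]
    if not cited:
        return 0.0
    claim = answer.split("[")[0].strip().rstrip(".")
    # Reward citations whose source text actually contains the claim.
    return sum(claim in context[tag] for tag in cited) / len(cited)

best = max(candidates, key=citation_reward)
print("selected:", best)
```
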
Yifu Qiu (@yifuqiu98)'s Twitter Profile Photo

🚀Happy to share my internship work at Apple from last year!

A promising use case of long-context LLMs is enabling the entire knowledge base to fit in the prompt as contextual knowledge for tasks like QA, rather than a RAG pipeline.

But are they up to this? If not, how to improve?
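
To make the contrast concrete, here is a toy sketch of the two prompting regimes being compared; the word-overlap retriever is a deliberately crude stand-in for a real RAG retriever, and the three-document "KB" is made up:

```python
# Two ways to ground QA on a knowledge base: stuff everything into the
# prompt (what long-context LLMs enable) vs. retrieve top-k first (RAG).
kb = [
    "Python was created by Guido van Rossum.",
    "Rust emphasizes memory safety.",
    "Go was designed at Google.",
]
question = "Who created Python?"

# (a) Long-context style: the whole KB rides along in the prompt.
full_prompt = "\n".join(kb) + f"\n\nQ: {question}\nA:"

# (b) RAG style: a toy word-overlap retriever picks the top-1 document.
def overlap(doc: str) -> int:
    return len(set(doc.lower().split()) & set(question.lower().split()))

rag_prompt = f"{max(kb, key=overlap)}\n\nQ: {question}\nA:"

print(len(full_prompt), "chars vs", len(rag_prompt), "chars")
```
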
Cheng Han Chiang (姜成翰) (@dcml0714)'s Twitter Profile Photo

🚀 New Paper Alert! 🚀
Want better LLM-as-a-Judge?
TRACT: 🧠 CoT + Regression-Aware Fine-tuning (RAFT) = Better numerical predictions! 📊
arxiv.org/abs/2503.04381
🧵👇 A thread on TRACT:
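
The regression-aware ingredient can be sketched in a few lines: read out the judge's distribution over score tokens, score with its expectation rather than the argmax token, and train against a squared error. This simplified sketch omits the CoT half of TRACT, and the logits are made up:

```python
import torch

scores = torch.tensor([1., 2., 3., 4., 5.])  # the judge's score vocabulary

def expected_score(logits_over_score_tokens: torch.Tensor) -> torch.Tensor:
    # Expectation under the softmax distribution over score tokens.
    probs = torch.softmax(logits_over_score_tokens, dim=-1)
    return (probs * scores).sum()

logits = torch.tensor([0.1, 0.2, 2.0, 1.5, 0.3])  # judge leaning toward "3"/"4"
pred = expected_score(logits)
label = torch.tensor(3.5)
loss = (pred - label) ** 2  # regression-aware (squared-error) training signal
print(pred.item(), loss.item())
```
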
Fan-Yun Sun (@sunfanyun)'s Twitter Profile Photo

Spatial reasoning is a major challenge for foundation models today, even in simple tasks like arranging objects in 3D space. #CVPR2025 Introducing LayoutVLM, a differentiable optimization framework that uses a VLM to spatially reason about diverse scene layouts from unlabeled…
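
The tweet is truncated, but the "differentiable optimization" part can be illustrated generically: represent object positions as tensors and run gradient descent on soft spatial constraints. In LayoutVLM the constraints come from a VLM's reasoning; the two hand-written losses below are stand-ins:

```python
import torch

pos = torch.randn(3, 2, requires_grad=True)  # (x, y) for three objects
opt = torch.optim.Adam([pos], lr=0.1)

for step in range(200):
    opt.zero_grad()
    # Constraint 1: objects should stay at least 1.0 apart (no overlap);
    # the diagonal is masked out with a large offset.
    d = torch.cdist(pos, pos) + torch.eye(3) * 10.0
    overlap_loss = torch.relu(1.0 - d).sum()
    # Constraint 2: object 0 should sit near the origin ("against the wall").
    anchor_loss = (pos[0] ** 2).sum()
    (overlap_loss + anchor_loss).backward()
    opt.step()

print(pos.detach())
```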

𝚐𝔪𝟾𝚡𝚡𝟾 (@gm8xx8)'s Twitter Profile Photo

Scaling Laws of Synthetic Data for Language Models

SynthLLM is a framework that generates high-quality synthetic pretraining data by extracting and recombining concepts across documents using a graph algorithm. It follows predictable scaling laws, with performance gains…
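
This tweet is also cut off, but the concept-recombination step can be sketched: pull concepts out of documents, link concepts that co-occur, and sample connected pairs as seeds for synthetic examples. The capitalized-word "extractor" and the two toy documents below are deliberately crude stand-ins for SynthLLM's actual graph algorithm:

```python
import itertools, random

docs = [
    "Gradient descent minimizes a Loss function over Parameters.",
    "Backpropagation computes gradients of the Loss efficiently.",
]

# Build a co-occurrence graph: nodes are "concepts", edges link concepts
# that appear in the same document.
edges = set()
for doc in docs:
    concepts = [w.strip(".,") for w in doc.split() if w[0].isupper()]
    edges |= set(itertools.combinations(sorted(set(concepts)), 2))

# Sampling an edge recombines concepts across documents into a new seed.
a, b = random.choice(sorted(edges))
print(f"Synthetic prompt seed: explain how {a} relates to {b}.")
```
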
Shao-Hua Sun (@shaohua0116)'s Twitter Profile Photo

We invite in-person tutorial proposals to the Asian Conference on Machine Learning (ACML) 2025 in Taipei, Taiwan, on Dec 12, 2025! Share your research with us & visit vibrant Taiwan! #ACML2025
Deadline: Aug 1; notification: Sep 5
CFT: acml-conf.org/2025/tutorial.…
Please retweet!
Hung-yi Lee (李宏毅) (@hungyilee2)'s Twitter Profile Photo

🎧 With the rapid growth of audio LLM benchmarking studies, a comprehensive survey is timely! Check out the survey paper on benchmarks in audio LLMs by Chih-Kai Yang and Neo S. Ho. 🔥 Paper link: arxiv.org/abs/2505.15957