Weiting (Steven) Tan (@weiting_nlp) 's Twitter Profile
Weiting (Steven) Tan

@weiting_nlp

Ph.D. student at @jhuclsp, Research Scientist Intern @AIatMeta | Prev @AIatMeta @Amazon Alexa AI

ID: 1414244140544573442

Link: https://steventan0110.github.io/ · Joined: 11-07-2021 15:24:25

46 Tweets

133 Followers

220 Following

JHU CLSP (@jhuclsp) 's Twitter Profile Photo

📷 Calling all Speech, Language & ML enthusiasts! 📷 Join us at the Mid-Atlantic Student Colloquium on Speech, Language, Learning, hosted by Johns Hopkins University on May 3, 2024!

JHU CLSP (@jhuclsp) 's Twitter Profile Photo

📢 MASC-SLL Call for Papers is out! 📢 🎓 Are you a student passionate about Speech, Language, and Learning? 🗣️✨ 🌟 Present your research at the Mid-Atlantic Student Colloquium on Speech, Language, and Learning (MASC-SLL) at Johns Hopkins Engineering on May 3rd! 🌟

Jim Fan (@drjimfan) 's Twitter Profile Photo

I know your timeline is flooded now with word salads of "insane, HER, 10 features you missed, we're so back". Sit down. Chill. <gasp> Take a deep breath like Mark does in the demo </gasp>. Let's think step by step: - Technique-wise, OpenAI has figured out a way to map audio to

Lingfeng Shen (@lingfeng_nlp) 's Twitter Profile Photo

📢 Happy to share that our paper on #LLM safety in multilingual contexts has been accepted at #ACL 2024! ✨ We show the difficulty of alleviating multilingual safety issues in LLMs through standard alignment methods. arxiv.org/abs/2401.13136 🧵1/7

Lingfeng Shen (@lingfeng_nlp) 's Twitter Profile Photo

#RLHF is taking the spotlight. We usually focus on boosting reward and policy models to enhance RLHF outcomes. Our paper dives into the interactions between the policy model (PM) and reward model (RM) from a data-centric perspective, revealing that their seamlessness is crucial to RLHF outcomes. arxiv.org/abs/2406.07971

Weiting (Steven) Tan (@weiting_nlp) 's Twitter Profile Photo

Will be presenting my work on LLM-MT at #naacl today, at In-Person Poster Session 7 (11am). Please come if you are interested in machine translation and multilinguality! Would also love to discuss multimodal LLMs (speech + text) and non-autoregressive decoding (NAT, diffusion)!

Haoran Xu (@fe1ixxu) 's Twitter Profile Photo

Here’s some better news: Combining CPO and SimPO can likely improve the model! Check out more details in our GitHub code: github.com/fe1ixxu/CPO_SI…

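The combination mentioned above can be sketched as a single preference objective: SimPO's length-normalized log-probability margin plus CPO's behavior-cloning NLL term on the preferred response. This is a generic sketch under those assumptions; the function name, default weights `beta`, `gamma`, and `nll_weight` are illustrative and not taken from the linked repo.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cpo_simpo_loss(logp_chosen, logp_rejected, len_chosen, len_rejected,
                   beta=2.0, gamma=0.5, nll_weight=1.0):
    """Sketch of a combined CPO + SimPO preference objective (per example).

    SimPO part: length-normalized log-prob margin with target margin gamma.
    CPO part: NLL regularizer on the chosen response (keeps the policy
    close to good outputs instead of only widening the preference gap).
    All hyperparameter values here are illustrative assumptions.
    """
    reward_chosen = beta * logp_chosen / len_chosen
    reward_rejected = beta * logp_rejected / len_rejected
    # Preference term: push the chosen reward above the rejected one by gamma.
    pref = -np.log(sigmoid(reward_chosen - reward_rejected - gamma) + 1e-12)
    # CPO-style behavior-cloning term on the preferred response.
    nll = -logp_chosen / len_chosen
    return pref + nll_weight * nll
```

A lower loss should result when the chosen response is clearly more probable than the rejected one under the policy.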
Diyi Yang (@diyi_yang) 's Twitter Profile Photo

We're very excited to release 🌟DiVA — Distilled Voice Assistant 🔊 Will Held

✅End-to-end differentiable speech LM; early fusion with Whisper and Llama 3 8B
✅Improves generalization by using distillation rather than supervised loss
✅Trained only using open-access

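The distillation point above can be sketched as follows: instead of cross-entropy against one-hot transcripts, the speech-input student matches the full output distribution of a text-input teacher via a KL term. This is a generic knowledge-distillation sketch, not DiVA's exact training loss.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits):
    """KL(teacher || student) over the vocabulary, averaged over positions.

    Unlike supervised cross-entropy on hard labels, the student receives
    the teacher's full distribution, which carries more signal per token
    and tends to generalize better.
    """
    p_teacher = softmax(teacher_logits)
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    log_p_teacher = np.log(p_teacher + 1e-12)
    kl = (p_teacher * (log_p_teacher - log_p_student)).sum(axis=-1)
    return kl.mean()
```

When the student's logits match the teacher's exactly, the loss is zero; any divergence makes it positive.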
Chunting Zhou (@violet_zct) 's Twitter Profile Photo

Introducing *Transfusion* - a unified approach for training models that can generate both text and images. arxiv.org/pdf/2408.11039

Transfusion combines language modeling (next token prediction) with diffusion to train a single transformer over mixed-modality sequences. This

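The combined objective described above can be sketched as a weighted sum of the two losses on one mixed-modality sequence: next-token cross-entropy on text positions plus a simplified DDPM noise-prediction MSE on image positions. Function names and the balancing weight `lam` are illustrative; this is not the paper's exact implementation.

```python
import numpy as np

def cross_entropy(logits, targets):
    """Next-token prediction loss on the text positions of the sequence."""
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def diffusion_mse(pred_noise, true_noise):
    """Simplified DDPM objective on image positions: predict the added noise."""
    return ((pred_noise - true_noise) ** 2).mean()

def transfusion_loss(text_logits, text_targets, pred_noise, true_noise, lam=1.0):
    """Total loss = L_LM + lam * L_diffusion, computed from one transformer's
    outputs over a mixed text/image sequence (lam balances the two terms)."""
    return cross_entropy(text_logits, text_targets) + lam * diffusion_mse(pred_noise, true_noise)
```

A single backward pass through this sum trains one set of transformer weights on both modalities at once.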