JSALT 2022 - pre-training team (@jsalt_pretrain) 's Twitter Profile
JSALT 2022 - pre-training team

@jsalt_pretrain

Please follow the Twitter account of the pre-training team at JSALT 2022. The team will tweet our new findings here.

ID: 1528250731203284992

Link: https://jsalt-2022-ssl.github.io/ · Joined: 22-05-2022 05:45:59

48 Tweets

122 Followers

0 Following

Roger Tseng (@rogertseng) 's Twitter Profile Photo

2. We can also directly parse speech segment representations, where segments can be determined with any unsupervised word segmentation method, and representations extracted with a pretrained speech representation model such as XLS-R.

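The pipeline described above — unsupervised segment boundaries plus frame-level features from a pretrained model — can be sketched with simple mean pooling. This is a minimal illustration with random stand-in features and made-up boundaries, not the paper's actual implementation; in practice the frames would come from a model such as XLS-R:

```python
import numpy as np

# Frame-level features, e.g. from a pretrained speech model such as XLS-R
# (random stand-ins here: 10 frames, 4-dim features).
rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 4))

# Segment boundaries from any unsupervised word segmentation method,
# given as half-open frame-index intervals (illustrative values).
boundaries = [(0, 3), (3, 7), (7, 10)]

# One fixed-size representation per segment via mean pooling over its frames.
segment_reps = np.stack([frames[start:end].mean(axis=0)
                         for start, end in boundaries])
print(segment_reps.shape)  # (3, 4): one vector per segment
```

Mean pooling is just one simple choice; any pooling that maps a variable-length span of frames to a fixed-size vector would fit the same slot.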
Roger Tseng (@rogertseng) 's Twitter Profile Photo

Code, Video & Poster: github.com/roger-tseng/sp… Full paper: arxiv.org/abs/2303.08809 If you find this topic interesting, come chat! I'll present our paper tomorrow from 3:40 PM - 5:10 PM at the Audio and Text Segmentation, Tagging and Parsing (SLT-P9) poster session at #ICASSP2023

Hung-yi Lee (李宏毅) (@hungyilee2) 's Twitter Profile Photo

Attending #ICASSP2023 in Rhodes, Greece? Don't miss the workshop on "Self-supervision in Audio, Speech & Beyond". Dive deep into the advancements in self-supervised learning. Catch me delivering the workshop keynote @ Jupiter Ballroom, 8:40 a.m. GMT+3. sites.google.com/view/icassp-sa…

Cheng Han Chiang (姜成翰) (@dcml0714) 's Twitter Profile Photo

I will be at #ACL2023NLP next week ✈️ to share our three papers on diverse topics. Looking forward to meeting old friends and making some new friends. ✨ Stop by our poster if you want to chat! 😁

Hung-yi Lee (李宏毅) (@hungyilee2) 's Twitter Profile Photo

If you're participating in ICML 2023, do not miss the workshop "What's Left to TEACH (Trustworthy, Enhanced, Adaptable, Capable, and Human-centric) Chatbots?" It's happening today in Room 303. sites.google.com/view/teach-icm… #ICML2023

WAVLab | @CarnegieMellon (@wavlab) 's Twitter Profile Photo

📢 Registration for the SPARKS workshop at #ASRU2023 is now OPEN! Dive deep into speech foundation models and benchmarking. Get ready for discussions on next-gen speech tech! 🎙️ 📄 Paper Submission: 10/19 🗓️ Workshop: 12/16 Details 👉 sites.google.com/g.ntu.edu.tw/s…

Hsuan Su (@jacksukk) 's Twitter Profile Photo

🚀 Introducing the Prompt Benchmark Challenge (PBC) 🚀 Curious about which prompts maximize LLM performance? Join us on the quest to uncover the ultimate prompts for Large Language Models! Explore more at 👉 llm.ee.ntu.edu.tw/prompt-benchma… #PBC #LLM #Prompt

Hung-yi Lee (李宏毅) (@hungyilee2) 's Twitter Profile Photo

Join us for ASRU's satellite event - the Workshop on Speech Foundation Models & Performance Benchmarks (SPARKS), on Dec 16th, 2023, in Taiwan. 📌 Paper Submission: Oct 19th 🔗 Webpage: sites.google.com/g.ntu.edu.tw/s… Tip: When registering for ASRU, tick the SPARKS option. #ASRU

Cheng Han Chiang (姜成翰) (@dcml0714) 's Twitter Profile Photo

🎉🌱In the early stages of my research journey, I'm humbly honored to receive the Google PhD Fellowship🏆 So much more to learn, discover, and explore on this exciting path🚀 🙏 Infinite thanks to my advisor, Hung-yi Lee (李宏毅), for his guiding light. This couldn't have happened without him

Hung-yi Lee (李宏毅) (@hungyilee2) 's Twitter Profile Photo

Join us for an enlightening afternoon with distinguished speech researchers, Dr. Andreas Stolcke and Prof. Torbjørn Svendsen. Their talks will take place at Barry Lam Hall (博理館) (reurl.cc/krNxl9), R101 (Auditorium), NTU, on December 21st, starting at 2:20PM. #ASRU2023

Hung-yi Lee (李宏毅) (@hungyilee2) 's Twitter Profile Photo

Excited to speak at #ASRU2023 tomorrow (December 20) at 11:30 AM (GMT+8) on "The Journey of Advancements in Speech Foundation Models"! We'll explore the evolution of speech foundation models. Below, please find the slides: drive.google.com/file/d/1ZWfnOE…

Cheng Han Chiang (姜成翰) (@dcml0714) 's Twitter Profile Photo

📢New Paper Alert🎉
Excited to share our EACL'24 paper
🤔 Do LLMs generate redundant reasoning?
📚 We create questions that can be answered w/o calculations
➡️ LLMs tend to answer with unnecessary reasoning and calculations
arxiv.org/abs/2401.11467
#eacl2024 #NLProc #LLM

Hung-yi Lee (李宏毅) (@hungyilee2) 's Twitter Profile Photo

Thrilled to see the team continuously enhancing the materials based on my online lectures! 🚀 Despite never having met them in person, their dedication truly impresses me. Check out the amazing work at github.com/datawhalechina…

Cheng Han Chiang (姜成翰) (@dcml0714) 's Twitter Profile Photo

🚀Thrilled to share our new paper: Merging Facts, Crafting Fallacies ✅+✅+✅→❌
arxiv.org/abs/2402.05629
🤔 Does combining factual claims form a factual paragraph?
🙅🏻 LLMs can generate nonfactual paragraphs composed of factual claims!
💯 Existing metrics can't handle this!

Hung-yi Lee (李宏毅) (@hungyilee2) 's Twitter Profile Photo

Recent years have witnessed significant developments in audio codec models (an overview figure from arxiv.org/abs/2402.13236). We introduce Codec-SUPERB (arxiv.org/abs/2402.13071) to enable fair and comprehensive comparison. Leaderboard: codecsuperb.com

Hung-yi Lee (李宏毅) (@hungyilee2) 's Twitter Profile Photo

Fine-tuning the LLaMA-2-Chat model may degrade its original capabilities (arxiv.org/abs/2401.03129). But here's a lifeline: Chat Vector (arxiv.org/abs/2310.04799) preserves a chat model's original capability (it also works on Mistral). Recommended for everyone fine-tuning their LLMs.

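The idea behind Chat Vector is plain weight-space arithmetic: subtract the base model's weights from the chat model's to get a "chat vector," then add that vector to a model fine-tuned on new data. A minimal sketch with toy numpy "weights" (the two-parameter models and values are purely illustrative, not real checkpoints):

```python
import numpy as np

def chat_vector_merge(base, chat, finetuned):
    """Add the (chat - base) weight delta to a fine-tuned model,
    aiming to restore chat ability lost during fine-tuning."""
    return {name: finetuned[name] + (chat[name] - base[name])
            for name in base}

# Toy stand-ins for model state dicts.
base      = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
chat      = {"w": np.array([1.5, 2.5]), "b": np.array([0.7])}  # base + chat tuning
finetuned = {"w": np.array([2.0, 1.0]), "b": np.array([0.4])}  # base + new-domain tuning

merged = chat_vector_merge(base, chat, finetuned)
```

With real checkpoints the same element-wise operation would run over every tensor in the state dict; the three models must share one architecture for the arithmetic to be well-defined.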
Hung-yi Lee (李宏毅) (@hungyilee2) 's Twitter Profile Photo

Join the Webinar Series for Advancements in Audio, Speech and Language Technology. Next up: "End-to-End Automatic Speech Recognition" by Dr. Jinyu Li from Microsoft on May 10 @ 1:00 pm EDT (May 11 @ 1:00 am Taiwan time) Register now: ieee.webex.com/weblink/regist…

Hung-yi Lee (李宏毅) (@hungyilee2) 's Twitter Profile Photo

Join us for the Dynamic-SUPERB call-for-tasks event. Submit your innovative task to challenge speech foundation models that can understand task instructions. Let's push the boundaries of what speech foundation models can do! github.com/dynamic-superb…

Hung-yi Lee (李宏毅) (@hungyilee2) 's Twitter Profile Photo

Webinar Series for Advancements in Audio, Speech, and Language Technology
Next Webinar: Neural Target Speech and Sound Extraction: An Overview
Speaker: Dr. Marc Delcroix
Time: June 6, 2024, 7:30 PM (NY Time)
Register: lnkd.in/g-XSmu9v