Ju-Chieh Chou (@ju_chieh)'s Twitter Profile
Ju-Chieh Chou

@ju_chieh

Ph.D. student @TTIC_Connect

ID: 1010315335038189568

Website: https://home.ttic.edu/~jcchou/ · Joined: 23-06-2018 00:15:21

82 Tweets

160 Followers

482 Following

Mirco Ravanelli (@mirco_ravanelli)'s Twitter Profile Photo

The #SpeechBrain team is working hard to release soon an open-source all-in-one toolkit based on #PyTorch, specifically designed for multiple tasks (e.g., #ASR, enhancement, separation, speaker recognition/diarization, multi-mic processing).

speechbrain.github.io @MILAMontreal
Mirco Ravanelli (@mirco_ravanelli)'s Twitter Profile Photo

#SpeechBrain is growing! Today, let me share a tutorial on "Speech Classification from Scratch".

Helpful for speaker-id, language-id, emotion recognition, sound classification, etc...

Tutorial: colab.research.google.com/drive/1UwisnAj…
Web: speechbrain.github.io
Code: github.com/speechbrain
Mirco Ravanelli (@mirco_ravanelli)'s Twitter Profile Photo

#SpeechBrain is growing very fast! We are happy to announce the new call for #sponsors. 

We have very ambitious plans for the future, and sponsors can play a crucial role in the development of our #OpenSource #toolkit.

More info here:
speechbrain.github.io/img/Call_for_S…
Loren Lugosch (@lorenlugosch)'s Twitter Profile Photo

Researchers Had To Shut Down AI After It Taught Itself 19 Languages?!* 🤔😱🤖😤 Like👍 Subscribe🔔

* = we used pseudo-labeling to train a single massively multilingual speech recognizer for all 60 languages of Common Voice.

Paper: arxiv.org/abs/2111.00161
🧵
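The pseudo-labeling recipe in the tweet above can be illustrated with a deliberately tiny self-training loop. This is only a sketch of the general idea — the toy "model" below just memorizes per-input majority labels, and the confidence threshold is made up; it is not the paper's actual multilingual ASR setup.

```python
from collections import Counter, defaultdict

def train(labeled):
    # Toy "model": memorize the majority label per input, with an
    # empirical confidence (fraction of votes for that label).
    votes = defaultdict(Counter)
    for x, y in labeled:
        votes[x][y] += 1
    model = {}
    for x, c in votes.items():
        label, n = c.most_common(1)[0]
        model[x] = (label, n / sum(c.values()))
    return model

def self_train(labeled, unlabeled, rounds=2, threshold=0.8):
    labeled = list(labeled)
    for _ in range(rounds):
        model = train(labeled)
        still_unlabeled = []
        for x in unlabeled:
            if x in model and model[x][1] >= threshold:
                # A confident prediction becomes a pseudo-label and
                # joins the training set for the next round.
                labeled.append((x, model[x][0]))
            else:
                still_unlabeled.append(x)
        unlabeled = still_unlabeled
    return train(labeled)

# Hypothetical data: utterances mapped to language ids.
model = self_train(
    labeled=[("bonjour", "fr"), ("hello", "en")],
    unlabeled=["bonjour", "hello", "hola"],
)
```

Note that the memorizing toy model can never label the unseen "hola"; the point of doing this with a real neural recognizer is precisely that confident generalization to unseen audio does happen.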
Chao-Chun (Joe) Hsu (@chaochunh)'s Twitter Profile Photo

Can we summarize text to support a decision? How does that differ from text-only summarization? We propose a novel task, decision-focused summarization, which aims to generate summaries for a target decision. #EMNLP

paper: arxiv.org/pdf/2109.06896…
video: youtube.com/watch?v=by0JJH…
Hung-yi Lee (李宏毅) (@hungyilee2)'s Twitter Profile Photo

Received the YouTube Creator Silver Award for 100,000 subscribers. When I started uploading videos about DL to YouTube in the fall of 2016, I never imagined this achievement. Thanks to all subscribers. We learn together. youtube.com/c/HungyiLeeNTU/

Thomas Wolf (@thom_wolf)'s Twitter Profile Photo

I find it quite surprising how much smaller diffusion models tend to be than transformer generative models of similar quality. Where could this come from? Better parameter sharing, different scaling-law behavior, better use of the source of randomness, something else…?

Yung-Sung Chuang (@yungsungchuang)'s Twitter Profile Photo

(1/5)🚨Can LLMs be more factual without retrieval or finetuning?🤔 -yes✅

🦙We find factual knowledge often lies in higher layers of LLaMA
💪Contrasting high/low layers can amplify factuality & boost TruthfulQA by 12-17%

📝arxiv.org/abs/2309.03883
🧑‍💻github.com/voidism/DoLa

#NLProc
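The layer-contrasting idea in this thread (DoLa) can be sketched with plain softmax arithmetic: score each candidate token by how much its probability grows between an early layer and the final layer. This is a simplified illustration with made-up logits — the actual method operates on a transformer's intermediate vocabulary distributions and selects the contrast layer dynamically, neither of which is modeled here.

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def dola_contrast(final_logits, early_logits):
    # Score each token by the log-probability difference between the
    # final layer and an earlier layer.  Tokens whose probability grows
    # with depth -- where factual knowledge emerges, per the thread --
    # get amplified; tokens the early layer already favored do not.
    p_final = softmax(final_logits)
    p_early = softmax(early_logits)
    return [math.log(pf) - math.log(pe) for pf, pe in zip(p_final, p_early)]

# Hypothetical 3-token vocabulary: token 0 gains probability mass
# between the early layer (uniform) and the final layer.
scores = dola_contrast([3.0, 1.0, 0.5], [1.0, 1.0, 1.0])
best = max(range(len(scores)), key=lambda i: scores[i])
```

Token 0 is the only one whose probability rises across depth, so the contrastive score prefers it even more strongly than the final-layer distribution alone would.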
Weijia Shi (@weijiashi2)'s Twitter Profile Photo

Introducing In-Context Pretraining🖇️: train LMs on contexts of related documents. Improving a 7B LM simply by reordering pretraining docs:
📈In-context learning +8%
📈Faithfulness +16%
📈Reading comprehension +15%
📈Retrieval augmentation +9%
📈Long-context reasoning +5%
arxiv.org/abs/2301.12652
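The core trick in the tweet — reordering the pretraining corpus so related documents share a context window — can be sketched as a greedy nearest-neighbor chain. The word-overlap similarity and the greedy ordering below are illustrative stand-ins for the embedding-based retrieval and large-scale ordering machinery a real implementation would need.

```python
def jaccard(a, b):
    # Crude relatedness proxy: word-set overlap between two documents.
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def order_by_relatedness(docs):
    # Greedy nearest-neighbor chain: start from the first document and
    # repeatedly append the most similar remaining one, so that related
    # documents end up adjacent in the pretraining stream.
    remaining = list(docs)
    chain = [remaining.pop(0)]
    while remaining:
        nxt = max(remaining, key=lambda d: jaccard(chain[-1], d))
        remaining.remove(nxt)
        chain.append(nxt)
    return chain

docs = [
    "cats purr and sleep",
    "stock markets fell today",
    "kittens and cats love to sleep",
    "markets rally as stocks rise today",
]
ordered = order_by_relatedness(docs)
```

On this toy corpus the two cat documents become neighbors, which is the whole point: consecutive pretraining examples now share context instead of being unrelated.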
Mirco Ravanelli (@mirco_ravanelli)'s Twitter Profile Photo

Exciting news! 🎉 #SpeechBrain 1.0 is out with tons of thrilling advancements.

Our #OpenSource toolkit now features 200+ recipes and 100+ pretrained models on #HuggingFace for diverse #ConversationalAI tasks.

🌐 Website: speechbrain.github.io

💻 Repo: github.com/speechbrain/sp…
Yann LeCun (@ylecun)'s Twitter Profile Photo

Nathan MrBusiness Scientific knowledge is rarely secret. Technical knowledge doesn't remain secret for very long. Know-how and practical expertise take longer to disseminate. Product technology and market experience take even more time and money to reproduce.