Junwen Bai (@junwenbai)'s Twitter Profile
Junwen Bai

@junwenbai

Research Scientist @Google
PhD from @CornellCIS

ID: 822799504449486848

Link: https://junwenbai.github.io/ · Joined: 21-01-2017 13:34:23

8 Tweets

21 Followers

68 Following

Ariel Ortiz-Bobea (@arielortizbobea)'s Twitter Profile Photo

Congrats to Joshua Fan, Junwen Bai, Carla Gomes (@CornellCIS) & Zhiyun Li (Cornell Dyson) for winning the best paper award (ML innovation category) at the Climate Change AI #NeurIPS2021 workshop! Very proud of our students! 🍾🎉

Climate Change AI (@climatechangeai)'s Twitter Profile Photo

Congrats to our #NeurIPS2021 workshop best paper winners in the category "ML innovation"🏆 Joshua Fan, Junwen Bai, Zhiyun Li, Ariel Ortiz-Bobea & Carla Gomes (all Cornell University) for work on spatio-temporal GNNs for crop yield prediction. Well deserved! 👏🌱🤖 climatechange.ai/papers/neurips…

AssemblyAI (@assemblyai)'s Twitter Profile Photo

In this week's Deep Learning Paper Review, our researchers examine Joint Unsupervised and Supervised Training For Multilingual #ASR. Read on for our key takeaways 👇 assemblyai.com/blog/review-ju… #DeepLearning #SpeechRecognition

Tri Dao (@tri_dao)'s Twitter Profile Photo

Announcing FlashAttention, a fast and memory-efficient attention algorithm with no approximation! 📣 w/ Dan Fu By reducing GPU memory reads/writes, FlashAttention runs 2-4x faster & requires 5-20x less memory than PyTorch standard attention, & scales to seq. length 64K. 1/

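For a sense of what this interface looks like in practice: fused attention in this style is now exposed in stock PyTorch (≥ 2.0, released after this tweet) via scaled_dot_product_attention, which can dispatch to a FlashAttention backend on supported GPUs and dtypes. A minimal sketch, with arbitrary example shapes:

```python
# Minimal sketch of a fused-attention call via PyTorch's
# scaled_dot_product_attention (PyTorch >= 2.0). On supported CUDA GPUs with
# fp16/bf16 inputs, PyTorch may dispatch this to a FlashAttention kernel;
# exact dispatch rules depend on the PyTorch version and hardware.
import torch
import torch.nn.functional as F

batch, heads, seq_len, head_dim = 2, 8, 4096, 64
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

q = torch.randn(batch, heads, seq_len, head_dim, device=device, dtype=dtype)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Fused attention avoids materializing the full (seq_len x seq_len) score
# matrix in GPU memory, which is where the speed and memory savings over
# standard attention come from.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 4096, 64])
```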
Junwen Bai (@junwenbai)'s Twitter Profile Photo

CoDA achieves parameter efficiency and inference efficiency simultaneously. The idea originates in NLP and extends to Speech & Vision with impressive results!
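CoDA here refers to Conditional Adapters. A hypothetical sketch of the conditional-computation idea behind it: a learned router sends only the top-k tokens through the heavy pretrained block, while every token takes a cheap bottleneck-adapter path. All module names and sizes below are illustrative, not the authors' implementation:

```python
# Illustrative sketch of a conditional adapter layer: a router scores tokens,
# only the top-k go through the expensive (typically frozen) transformer
# block, and a small bottleneck adapter handles every token cheaply.
import torch
import torch.nn as nn

class ConditionalAdapterLayer(nn.Module):
    def __init__(self, d_model=512, bottleneck=64, k=128):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, 1)  # scores each token
        # Stand-in for a frozen pretrained transformer block.
        self.heavy = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.adapter = nn.Sequential(         # cheap parallel branch
            nn.Linear(d_model, bottleneck),
            nn.ReLU(),
            nn.Linear(bottleneck, d_model),
        )

    def forward(self, x):
        # x: (batch, seq, d_model)
        scores = self.router(x).squeeze(-1)                 # (batch, seq)
        idx = scores.topk(self.k, dim=-1).indices           # top-k tokens per example
        idx = idx.unsqueeze(-1).expand(-1, -1, x.size(-1))  # (batch, k, d_model)
        selected = x.gather(1, idx)
        heavy_out = self.heavy(selected)                    # expensive path: k tokens only
        out = x + self.adapter(x)                           # cheap path: every token
        # Add the heavy branch's residual back at the selected positions.
        return out.scatter_add(1, idx, heavy_out - selected)

layer = ConditionalAdapterLayer()
x = torch.randn(2, 1024, 512)
print(layer(x).shape)  # torch.Size([2, 1024, 512])
```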