Anurag Kumar (@acouintel)'s Twitter Profile
Anurag Kumar

@acouintel

Research Scientist, @GoogleDeepMind | Prev: @AIatMeta | CMU @SCSatCMU | @IITKanpur | Audio/Speech, Multimodal AI

ID: 744256054465302532

Website: https://anuragkr90.github.io/ · Joined: 18-06-2016 19:50:45

207 Tweets

2.2K Followers

288 Following

Anurag Kumar (@acouintel)'s Twitter Profile Photo

Yao Xie on Generative Models for Statistical Inference: Advancing Probabilistic Representations. #audioimagination2024 #NeurIPS2024

Anurag Kumar (@acouintel)'s Twitter Profile Photo

It was exciting to see the amazing turnout at our Audio Imagination Workshop at NeurIPS #NeurIPS2024. Grateful to everyone: invited speakers, panelists, authors, and participants, for the interesting presentations, discussions, and engagement. audio-imagination.com

arXiv Sound (@arxivsound)'s Twitter Profile Photo

"SyncFlow: Toward Temporally Aligned Joint Audio-Video Generation from Text," Haohe Liu, Gael Le Lan, Xinhao Mei, Zhaoheng Ni, Anurag Kumar, Varun Nagaraja, Wenwu Wang, Mark D. Plumbley, Yangyang Shi, Vikas Chandra, ift.tt/sxluwgt

Shrestha Mohanty (@shremoha)'s Twitter Profile Photo

Excited to share our work at COLING 2025! While I couldn’t attend in person, Jad Kabbara will be presenting today at the 1:30 PM poster session. Come by to learn how we’re using LLMs to improve understanding in social conversations! #COLING2025 #NLProc

arXiv Sound (@arxivsound)'s Twitter Profile Photo

"Efficient Audiovisual Speech Processing via MUTUD: Multimodal Training and Unimodal Deployment," Joanna Hong, Sanjeel Parekh, Honglie Chen, Jacob Donley, Ke Tan, Buye Xu, Anurag Kumar, ift.tt/5JkZ0Gp

Anurag Kumar (@acouintel)'s Twitter Profile Photo

Career Update: Excited to join Google DeepMind to continue working on audio/speech/multimodal AI. I left Meta after more than 6 years, and I will definitely miss working with some amazing friends and colleagues. Super thankful for all the fun collaborations.

Nando de Freitas (@nandodf)'s Twitter Profile Photo

RL is not all you need, nor attention nor Bayesianism nor free energy minimisation, nor an age of first person experience. Such statements are propaganda. You need thousands of people working hard on data pipelines, scaling infrastructure, HPC, apps with feedback to drive