Gopala Anumanchipalli (@gopalaspeech) 's Twitter Profile
Gopala Anumanchipalli

@gopalaspeech

Robert E. And Beverly A. Brooks Assistant Professor @UCBerkeley @UCSF

Formerly @CarnegieMellon @ISTecnico @IIIT_Hyderabad

ID: 996829570274807808

Link: http://people.eecs.berkeley.edu/~gopala/ · Joined: 16-05-2018 19:07:44

150 Tweets

948 Followers

586 Following

Tingle Li (@tingle_li) 's Twitter Profile Photo

Catch us at poster #242 on Thursday at 4:30 PM! More details here: 🔗 Website: tinglok.netlify.app/files/avsounds… 📄 arXiv: arxiv.org/abs/2409.14340 w/ Ren Wang, Bernie Huang, Andrew Owens, and Gopala Anumanchipalli.

Kaylo Littlejohn (@kaylolittlejohn) 's Twitter Profile Photo

If you are at #SfN2024, check out our poster in section F9 from 8 AM to 12 PM on Sunday! We present a streamable framework to restore a naturalistic voice to a person with paralysis. We will also show demos of voice synthesis, incremental TTS, and generalization to unseen words.

arXiv Sound (@arxivsound) 's Twitter Profile Photo

``Sylber: Syllabic Embedding Representation of Speech from Raw Audio,'' Cheol Jun Cho, Nicholas Lee, Akshat Gupta, Dhruv Agarwal, Ethan Chen, Alan W Black, Gopala K. Anumanchipalli, ift.tt/5PWQXna

JIACHEN LIAN (@lianjiachen) 's Twitter Profile Photo

I will be attending #NeurIPS2024 presenting our work SSDM: Scalable Speech Dysfluency Modeling (lnkd.in/gP9VvKdk), East Exhibit Hall A-C #3207, on Thursday, December 12th, 4:30-7:30 PM PST. Hope to meet with old and new friends!

Gopala Anumanchipalli (@gopalaspeech) 's Twitter Profile Photo

Self-Supervised Syllabic Representation Learning from speech, with unsupervised syllable discovery and linear-time tokenization of speech at the syllabic rate (~4 Hz)!! Work from my group by Cheol Jun Cho, Nick Lee, and Akshat Gupta at Berkeley AI Research, to be presented at #ICLR2025.

Akshat Gupta (@akshatgupta57) 's Twitter Profile Photo

Thrilled to share that our paper on "Norm Growth and Stability Challenges in Sequential Knowledge Editing" has been accepted for an Oral Presentation at the KnowFM workshop @ #AAAI2025, w/ Tom Hartvigsen, Ahmed Alaa, and Gopala Anumanchipalli. More details below (1/n)

Akshat Gupta (@akshatgupta57) 's Twitter Profile Photo

Our work on knowledge editing got an "Outstanding Paper Award" 🏆🏆 at the AAAI KnowFM Workshop!! #AAAI2025 🥳🥳🥳 Congratulations to my amazing co-authors Tom Hartvigsen, Ahmed Alaa, and Gopala Anumanchipalli.

Guan-Ting (Daniel) Lin (@gtl094144) 's Twitter Profile Photo

(1/5) Introducing Full-Duplex-Bench: A Benchmark for Full-Duplex Spoken Dialogue Models. We're excited to present Full-Duplex-Bench, the first benchmark designed to evaluate turn-taking capabilities in full-duplex spoken dialogue models (SDMs)! arxiv.org/abs/2503.04721 Details👇

Kaylo Littlejohn (@kaylolittlejohn) 's Twitter Profile Photo

(1/n) Our latest work is out today in Nature Neuroscience! We developed a streaming "brain-to-voice" neuroprosthesis that restores naturalistic, fluent, intelligible speech to a person with paralysis. nature.com/articles/s4159…

UCSF Neurosurgery (@neurosurgucsf) 's Twitter Profile Photo

Today in Nature Neuroscience, @ChangLabUCSF and Berkeley Engineering's Gopala Anumanchipalli show that their new AI-based method synthesizes audible speech from neural data in real time: l8r.it/G2KC

Nature Portfolio (@natureportfolio) 's Twitter Profile Photo

A paper in Nature Neuroscience presents a new device capable of translating speech activity in the brain into spoken words in real-time. This technology could help people with speech loss to regain their ability to communicate more fluently in real time. go.nature.com/444kW0k

The Associated Press (@ap) 's Twitter Profile Photo

Scientists have developed a device that can translate thoughts about speech into spoken words in real time. Although itโ€™s still experimental, they hope the brain-computer interface could someday help give voice to those unable to speak.

Berkeley AI Research (@berkeley_ai) 's Twitter Profile Photo

Work led by BAIR students Kaylo Littlejohn and Cheol Jun Cho, advised by BAIR faculty Gopala Anumanchipalli, "...made it possible to synthesize brain signals into speech in close to real-time." dailycal.org/news/campus/re… via The Daily Californian

Akshat Gupta (@akshatgupta57) 's Twitter Profile Photo

#ICLR25 Our work on characterizing alignment between MLP matrices in LLMs and Linear Associative Memories has been accepted for an Oral Presentation at the NFAM workshop.

Location: Hall 4 #5
Time: 11 AM (April 27)

Gopala Anumanchipalli, Berkeley AI Research

Nick Lee (@nicholaszlee) 's Twitter Profile Photo

🚀 Excited to share that our paper on Plan-and-Act has been accepted to ICML 2025. Below is a TLDR:

🔎 Problem:
• LLM agents struggle on complex, multi-step web tasks (or API calls, for that matter).
• Why not add planning for complex tasks and decouple planning from execution?

Akshat Gupta (@akshatgupta57) 's Twitter Profile Photo

Excited to have two papers accepted at #ACL2025! 🎉🎉 1 Main track and 1 Findings. Papers out on arXiv soon. Big thank you to all my collaborators!

Akshat Gupta (@akshatgupta57) 's Twitter Profile Photo

Just did a major revision to our paper on Lifelong Knowledge Editing! Key takeaway (+ our new title): "Lifelong Knowledge Editing requires Better Regularization". Fixing this leads to consistent downstream performance! Tom Hartvigsen, Ahmed Alaa, Gopala Anumanchipalli, Berkeley AI Research
