chuyi shang (@chuyishang)'s Twitter Profile
chuyi shang

@chuyishang

CS + Econ @UCBerkeley | research @BerkeleyML @Berkeley_AI | big food, music, and wikipedia enthusiast

ID: 3818624775

Joined: 29-09-2015 22:36:25

75 Tweets

104 Followers

579 Following

Jason Wei (@_jasonwei)'s Twitter Profile Photo

Enjoyed visiting UC Berkeley’s Machine Learning Club yesterday, where I gave a talk on doing AI research. Slides: docs.google.com/presentation/d… In the past few years I’ve worked with and observed some extremely talented researchers, and these are the trends I’ve noticed: 1. When

Machine Learning at Berkeley (@berkeleyml)'s Twitter Profile Photo

ML@B is super excited to have partnered with Udacity on their Generative AI Nanodegree Program which launched today! It covers various topics, enabling developers to integrate generative AI into software applications. Check it out here: tinyurl.com/yc63zb88!

Roei Herzig (@roeiherzig)'s Twitter Profile Photo

Happy to share that our 𝐌𝐮𝐥𝐭𝐢𝐦𝐨𝐝𝐚𝐥 𝐀𝐠𝐞𝐧𝐭𝐢𝐜 work🤖 has been accepted to #EMNLP2024 Main conference! Hope to see everyone in Miami!🥳🌞🌅 Kudos to all authors and collaborators: chuyi shang, Amos You, Sanjay Subramanian, and trevordarrell.

Henry Ko (@henryhm_ko)'s Twitter Profile Photo

I wrote a new blog on TPUs -- it's been fun seeing how different they are from GPUs and also drawing things on excalidraw again✏️ henryhmko.github.io/posts/tpu/tpu.…

DailyPapers (@huggingpapers)'s Twitter Profile Photo

Latent Implicit Visual Reasoning

Current LMMs are text-centric and struggle with visual reasoning tasks. LIVR trains models to discover visual reasoning tokens implicitly, with no supervision needed, enabling task-adaptive visual abstraction that outperforms explicit methods.
