Binxu Wang 🐱 (@wangbinxu)'s Twitter Profile
Binxu Wang 🐱

@wangbinxu

@KempnerInst Fellow; Neuro PhD in Ponce Lab @Harvard; interested in vision, generative models, and optimization. Prev: WUSTL Neuro; PKU Physics, Yuanpei College

ID: 1059998888055136256

Link: https://scholar.harvard.edu/binxuw · Joined: 07-11-2018 02:40:03

556 Tweets

893 Followers

833 Following

Binxu Wang 🐱 (@wangbinxu):

I'll present a piece of work with Carlos R. Ponce in NANO54, 8:30 am Oct. 9th, at #sfn2024!
As we know, DNN-based regression models can predict neural activity well, but do they care about the same features as neurons do? Using feature attribution methods, we found common dimension…
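As an illustration of the kind of analysis involved, here is a minimal gradient-×-input feature-attribution sketch in PyTorch. The tweet does not name the exact attribution method used, and `grad_x_input` is a hypothetical helper, not the talk's actual pipeline:

```python
import torch
import torchvision.models as models

# Gradient-x-input: one common feature-attribution method. This is a
# generic sketch of the analysis, not the method used in the talk.
model = models.resnet50(weights=None).eval()  # use pretrained weights in practice

def grad_x_input(model, image, unit):
    """Attribute one output unit's response back to the input pixels."""
    image = image.clone().requires_grad_(True)
    model(image)[0, unit].backward()
    return (image.grad * image).detach()  # (1, 3, H, W) attribution map

x = torch.randn(1, 3, 224, 224)           # placeholder image batch
heatmap = grad_x_input(model, x, unit=0)  # maps like this can be compared
print(heatmap.shape)                      # between model units and neurons
```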
Binxu Wang 🐱 (@wangbinxu):

There was a saying in Chinese, something like 去头上辫子易 去心中辫子难 ("cutting the queue off one's head is easy; cutting it out of one's mind is hard"): it's easy to superficially act politically correct, but it's hard to change the bias internal to one's mind, even with a little effort to make it nuanced. Really not sure if it's easier to debias…

Heng Yang (@hankyang94):

How is Rosalind going to face the hundreds of talented, hardworking, and genuine Chinese students at Massachusetts Institute of Technology (MIT) when she gets back? They (and many of us) went through so much trouble to get to the US out of a pure drive to do good science, and yet you put this in your slide. This is…

The vOICe vision BCI 🧠🇪🇺 (@seeingwithsound):

Vision CNNs trained to estimate spatial latents learned similar ventral-stream-aligned representations arxiv.org/abs/2412.09115 (via Patrick Mineault). Are the inferior temporal cortex and the ventral stream the source of voluntary mental imagery, including position and orientation?

Jing Yu Koh (@kohjingyu):

When I was leaving Google to start my PhD, several people told me that having access to fewer resources in academia would actually encourage more creative work. I didn’t really understand it at the time: can’t you arbitrarily impose the same constraints on yourself to be efficient…

Kohitij Kar (@kohitijkar):

🌟New preprint with Lynn Sörensen and James DiCarlo

When animals learn new object discrimination tasks, how much does their IT cortex change? In their untrained state, animals can still see objects but can’t attach labels, so we don’t expect the ventral stream to fully…
Yaroslav Bulatov (@yaroslavvb):

(Replying to Andrew Carr (e/🤸)) I spent a while figuring out why some logistic regression problems have a closed-form solution while others don't. Ultimately it comes down to a special condition on the hypergraph characterizing the problem: it needs to be chordal. Seems somewhat arbitrary.

Thomas Fel (@napoolar):

Train your vision SAE on Monday, then again on Tuesday, and you'll find only about 30% of the learned concepts match.

⚓ We propose Archetypal SAE, which anchors concepts in the real data’s convex hull, delivering stable and consistent dictionaries.

arxiv.org/pdf/2502.12892…
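A minimal sketch of the anchoring idea as described above: dictionary atoms are parameterized as convex combinations of real data points, so every learned concept stays inside the data's convex hull. Class and variable names here are ours, not the paper's reference implementation:

```python
import torch
import torch.nn.functional as F

class ArchetypalDictionary(torch.nn.Module):
    """Dictionary whose atoms are convex combinations of real data points,
    keeping every concept inside the data's convex hull. A sketch of the
    idea only, not the paper's reference implementation."""
    def __init__(self, data_points, n_concepts):
        super().__init__()
        self.register_buffer("anchors", data_points)        # (n_points, d)
        self.logits = torch.nn.Parameter(
            torch.randn(n_concepts, data_points.shape[0]))  # mixing weights

    def forward(self):
        weights = F.softmax(self.logits, dim=-1)  # each row on the simplex
        return weights @ self.anchors             # atoms in the convex hull

data = torch.randn(1024, 64)                  # stand-in for image embeddings
dictionary = ArchetypalDictionary(data, n_concepts=128)
atoms = dictionary()                          # (128, 64) convex combinations
```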
Kempner Institute at Harvard University (@kempnerinst):

NEW in the #KempnerInstitute blog: Want consistent & stable #SAE concepts across training runs? Archetypal SAE anchors concepts in the real data’s convex hull and delivers consistent & stable dictionaries! Read the blog: bit.ly/4kEAJZN

Kempner Institute at Harvard University (@kempnerinst):

New in the Deeper Learning blog: Kempner researchers show how VLMs speak the same semantic language across images and text. bit.ly/KempnerVLM by Isabel Papadimitriou, Chloe H. Su, Thomas Fel, Stephanie Gil, and Sham Kakade #AI #ML #VLMs #SAEs

Chengxu Zhuang (@chengxuzhuang):

Glad to see that my first publication with Daniel Yamins was continued by Aran Nayebi! AI has so much potential to contribute to other fields of science, particularly neuroscience, considering how much is unknown and the fascinating parallels (and differences) between AI and the brain.

Chloe H. Su (@huangyu58589918):

What precision should we use to train large AI models effectively? Our latest research probes the subtle nature of training instabilities under low precision formats like MXFP8 and ways to mitigate them. Thread 🧵👇

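For context on the format itself: MX formats such as MXFP8 store each small block of values as FP8 elements with one shared power-of-two scale. Below is a rough NumPy simulation of that round-trip; the block size of 32 and the E4M3 range follow the OCP MX spec as we understand it, and this is an approximation, not a bit-exact codec:

```python
import numpy as np

# Rough simulation of MXFP8 (E4M3 elements, shared power-of-two scale per
# block of 32 values) to illustrate how coarse the rounding is.
FP8_E4M3_MAX = 448.0
BLOCK = 32

def quantize_e4m3(x):
    """Snap to ~3 explicit mantissa bits (an E4M3-like grid, approximate)."""
    mant, exp = np.frexp(x)                  # x = mant * 2**exp, mant in [0.5, 1)
    mant = np.round(mant * 16) / 16          # keep 3 bits after the leading one
    return np.clip(np.ldexp(mant, exp), -FP8_E4M3_MAX, FP8_E4M3_MAX)

def mxfp8_roundtrip(x):
    x = x.reshape(-1, BLOCK)
    amax = np.abs(x).max(axis=1, keepdims=True) + 1e-30
    scale = 2.0 ** np.floor(np.log2(FP8_E4M3_MAX / amax))  # shared per block
    return quantize_e4m3(x * scale) / scale

w = np.random.randn(1024).astype(np.float32)
err = np.abs(mxfp8_roundtrip(w).ravel() - w).mean()
print(f"mean abs round-trip error: {err:.2e}")
```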
Koyena Pal (@kpal_koyena):

🚨 Registration is live! 🚨

The New England Mechanistic Interpretability (NEMI) Workshop is happening August 22nd 2025 at Northeastern University!

A chance for the mech interp community to nerd out on how models really work 🧠🤖

🌐 Info: nemiconf.github.io/summer25/
📝 Register:
Cengiz Pehlevan (@cpehlevan):

Great to see this one finally out in PNAS! Asymptotic theory of in-context learning by linear attention pnas.org/doi/10.1073/pn… Many thanks to my amazing co-authors Yue Lu, Mary Letey, Jacob Zavatone-Veth and Anindita Maiti
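For reference, the linear-attention readout such theories typically analyze drops the softmax, so the in-context prediction is linear in the context's summary statistics. A sketch in our notation (the paper's exact parameterization may differ):

```latex
% Linear attention on in-context pairs (x_i, y_i) with query x_q;
% notation is ours -- the paper's setup may differ in its details.
\[
  \hat{y}(x_q)
    = \sum_{i=1}^{n} (W_V y_i)\,(W_K x_i)^{\top} (W_Q x_q)
    = W_V \Big( \sum_{i=1}^{n} y_i x_i^{\top} \Big) W_K^{\top} W_Q\, x_q ,
\]
% so the prediction depends on the context only through the empirical
% statistic \sum_i y_i x_i^T, which is what makes asymptotic analysis tractable.
```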