Binxu Wang 🐱
@wangbinxu
@KempnerInst Fellow; Neuro PhD in Ponce Lab @Harvard; interested in vision, generative models, and optimization. Prev: WUSTL Neuro; PKU Physics, Yuanpei College
ID: 1059998888055136256
https://scholar.harvard.edu/binxuw · 07-11-2018 02:40:03
556 Tweets
893 Followers
833 Following
65k ECoG electrodes in a flexible array! With some familiar faces: Andreas Tolias Lab @ Stanford University, R. James Cotton, Bijan Pesaran
I'll present a piece of work with Carlos R. Ponce in NANO54, 8:30 am, Oct. 9th at #sfn2024! As we know, DNN-based regression models can predict neural activity well, but do they care about the same features as neurons do? Using feature attribution methods, we found common dimension
How is Rosalind going to face the hundreds of talented, hardworking, and genuine Chinese students at Massachusetts Institute of Technology (MIT) when she goes back? They (and many of us) went through so much trouble to get to the US out of the pure drive to do good science, and yet you put this in your slide. This is
Vision CNNs trained to estimate spatial latents learned similar ventral-stream-aligned representations arxiv.org/abs/2412.09115 via Patrick Mineault; are the inferior temporal cortex and ventral stream the source of voluntary mental imagery, including position and orientation?
🌟New preprint with Lynn Sörensen and James DiCarlo: When animals learn new object discrimination tasks, how much does their IT cortex change? In their untrained state, animals can still see objects but can't attach labels, so we don't expect the ventral stream to fully
Andrew Carr (e/🤸): I spent a while figuring out why some logistic regression problems have a closed-form solution while others don't. Ultimately it comes down to a special condition on the hypergraph characterizing the problem -- it needs to be chordal. Seems somewhat arbitrary.
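For concreteness, here is a minimal numpy sketch of the simplest case I read this claim as covering: a saturated logistic regression on a single binary covariate (a trivially chordal/decomposable model), whose MLE is just the empirical log-odds. The 2x2 counts and the Newton-Raphson comparison below are my own assumptions for illustration, not from the original thread.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(p):
    return np.log(p / (1 - p))

# Hypothetical 2x2 contingency table: binary covariate x, binary outcome y.
# counts[x, y] = number of observations with that (x, y) combination.
counts = np.array([[30, 10],    # x = 0: 30 with y=0, 10 with y=1
                   [12, 25]])   # x = 1: 12 with y=0, 25 with y=1

# Closed form for the saturated model y ~ 1 + x:
#   beta0 = logit P(y=1 | x=0),   beta1 = log odds ratio.
p0 = counts[0, 1] / counts[0].sum()
p1 = counts[1, 1] / counts[1].sum()
beta_closed = np.array([logit(p0), logit(p1) - logit(p0)])

# Expand the table into individual observations and fit by Newton-Raphson,
# which is how a non-decomposable model would have to be fit anyway.
reps = counts.ravel()                 # (x=0,y=0), (x=0,y=1), (x=1,y=0), (x=1,y=1)
x = np.repeat([0, 0, 1, 1], reps)
y = np.repeat([0, 1, 0, 1], reps)
X = np.column_stack([np.ones_like(x, dtype=float), x.astype(float)])

beta = np.zeros(2)
for _ in range(25):
    p = sigmoid(X @ beta)
    W = p * (1 - p)                   # IRLS / Newton weights
    beta = beta + np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))

print("closed form:", beta_closed)
print("Newton fit :", beta)           # agree to numerical precision
```

In this saturated case the iterative fit just recovers the empirical cell probabilities; the interesting part of the claim is that for richer categorical models this coincidence holds exactly when the interaction hypergraph is chordal.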
I'm really curious about the "digital twin of Yunyi Shen/申云逸 🐺" part lol
New in the Deeper Learning blog: Kempner researchers show how VLMs speak the same semantic language across images and text. bit.ly/KempnerVLM by Isabel Papadimitriou, Chloe H. Su, Thomas Fel, Stephanie Gil, and Sham Kakade #AI #ML #VLMs #SAEs
Glad to see that my first publication with Daniel Yamins was continued by Aran Nayebi! AI has so much potential to contribute to other fields of science, particularly neuroscience, considering how much is unknown and the fascinating parallels (and differences) between AI and the brain.
Great to see this one finally out in PNAS! Asymptotic theory of in-context learning by linear attention pnas.org/doi/10.1073/pn… Many thanks to my amazing co-authors Yue Lu, Mary Letey, Jacob Zavatone-Veth and Anindita Maiti
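As a toy illustration of the setting only (not the paper's model or results), here is a numpy sketch in which a linear-attention readout over in-context (x, y) pairs reproduces one gradient-descent step of in-context linear regression; the dimensions, the learning rate, and the gradient-descent reading are assumptions I'm adding for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 256                       # input dimension, number of in-context examples

w_task = rng.normal(size=d)         # task vector drawn fresh for this "prompt"
X = rng.normal(size=(n, d))         # in-context inputs x_1..x_n
y = X @ w_task                      # in-context labels (noiseless for clarity)
x_q = rng.normal(size=d)            # query input whose label must be predicted

# Linear attention (no softmax) with identity query/key maps: the query token
# attends to each context token with score x_q . x_i, and the value pathway
# carries the label y_i, so the readout is
#     y_hat = eta * sum_i (x_q . x_i) * y_i
eta = 1.0 / n
y_hat = eta * np.sum((X @ x_q) * y)

# The same quantity is one gradient-descent step from w = 0 on the in-context
# squared loss: w_1 = eta * X^T y, prediction x_q . w_1.
w_one_step = eta * (X.T @ y)
assert np.allclose(y_hat, x_q @ w_one_step)

print("linear-attention prediction:", y_hat)
print("ground-truth query label   :", x_q @ w_task)
```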
At ICML for the next 2 days to present multiple works. If you're into interpretability, complexity, or just wanna know how cool the Kempner Institute at Harvard University is, hit me up 👋