Pinyuan Feng (Tony) (@pinyuan3)'s Twitter Profile
Pinyuan Feng (Tony)

@pinyuan3

PhD student @ColumbiaPSYC 🦁 @KriegeskorteLab | Prev @BrownCSDept 🐻 @serre_lab | Brain 🧠 Mind 💭 Machine 🤖

ID: 1183016486367039488

Joined: 12-10-2019 13:48:20

13 Tweets

46 Followers

366 Following

Drew Linsley (@drewlinsley)'s Twitter Profile Photo

The sensitivity of DNNs to adversarial attacks has long been thought to be an "Achilles heel" that will ultimately make them unsafe in real-world applications.

Has this sensitivity changed as DNNs have scaled up and rivaled or beaten human-level accuracy?

serre-lab.github.io/Adversarial-Al…
Carney Institute for Brain Science (@carneyinstitute)'s Twitter Profile Photo

Just how does AI distinguish one object from another? A team at Carney has developed a new approach to understanding computer vision, which can be used to help create better, safer and more robust artificial intelligence systems.

Read more @ bit.ly/CarneyCRAFT
Patrick Mineault (@patrickmineault)'s Twitter Profile Photo

How does your blue compare to others'? I implemented a new feature on ismy.blue to compare against a database of hundreds of people who took the test.
Paul Linton (@lintonvision)'s Twitter Profile Photo

Amazing talk by Li Zhaoping on:

"Looking and Seeing through a Bottleneck: a VBC Framework for Vision, from the Perspective of Primary Visual Cortex"

See her 2024 paper "Peripheral vision is mainly for looking rather than seeing": sciencedirect.com/science/articl…
CBMM (@mit_cbmm)'s Twitter Profile Photo

[video] Aligning deep networks with human vision will require novel neural architectures, data diets and training algorithms

Thomas Serre, Brown University

cbmm.mit.edu/video/aligning…
Hokin Deng (@denghokin)'s Twitter Profile Photo

#ICML #cognition #GrowAI We spent 2 years carefully curating every single experiment (e.g. object permanence, the A-not-B task, the visual cliff task) in this dataset (total: 1503 classic experiments spanning 12 core cognitive concepts).

We spent another year evaluating 230 MLLMs.
Eric Xing (@ericxing)'s Twitter Profile Photo

I have long argued that a world model is NOT about generating videos, but IS about simulating all possibilities of the world to serve as a sandbox for general-purpose reasoning via thought experiments. This paper proposes an architecture toward that: arxiv.org/abs/2507.05169

Shashank (@shawshank_v)'s Twitter Profile Photo

Can open-data models beat DINOv2? Today we release Franca, a fully open-sourced vision foundation model. Franca with a ViT-G backbone matches (and often beats) proprietary models like SigLIPv2, CLIP, and DINOv2 on various benchmarks, setting a new standard for open-source research 🧵
UniReps (@unireps)'s Twitter Profile Photo

Ready to present your latest work? The Call for Papers for #UniReps2025 at the NeurIPS Conference is open!

👉Check the CFP: unireps.org/2025/call-for-…

🔗 Submit your Full Paper or Extended Abstract here: openreview.net/group?id=NeurI…

Speakers and panelists: Danica Sutherland, David Alvarez Melis, Nikolaus Kriegeskorte
Shawn Shen (@shawn_shen_oix)'s Twitter Profile Photo

I’m Shawn, founder of Memories.ai, a former researcher at Meta, and a CS PhD at the University of Cambridge. Today we’re launching Memories.ai: we built the world’s first Large Visual Memory Model to give AI human-like visual memories. Why visual memory? AI to

Columbia University's Zuckerman Institute (@zuckermanbrain)'s Twitter Profile Photo

Collaborations are moving neuroscience forward. Scientists from 20 institutions (including ours!) created the first brain-wide map of individual neurons’ activity in mice for the International Brain Laboratory. Video from SWC: sainsburywellcome.org/web/research-n…

机器之心 JIQIZHIXIN (@synced_global)'s Twitter Profile Photo

DeepSeek’s paper “DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning” was featured on the cover of Nature, with the company’s CEO Wenfeng Liang as the corresponding author.
Ethan Hwang (@ethanhwang_)'s Twitter Profile Photo

Encoding models predict visual responses to novel images in cortical areas. But can these models offer new insights about categorical representations? If so, we should be able to generate new hypotheses from them to be tested in future experiments. NeurIPS Conference #NeurIPS2025

1/15