Rahul Venkatesh (@rahul_venkatesh)'s Twitter Profile
Rahul Venkatesh

@rahul_venkatesh

CS Ph.D. student at Stanford @NeuroAILab @StanfordAILab

ID: 88371693

Website: https://rahulvenkk.github.io/ · Joined: 08-11-2009 07:16:27

27 Tweets

114 Followers

253 Following

Imran Thobani (@cogphilosopher)'s Twitter Profile Photo

Excited to give a talk on our work (w/ Javier Sagastuy mohammad hossein Rosa Cao Daniel Yamins) on inter-animal transforms at the CogCompNeuro Battle of the Metrics (5:15-7 pm EST)! We develop a principled approach to measuring similarity between DNNs and the brain. #CCN2024

Khaled Jedoui (@kjedoui)'s Twitter Profile Photo

Excited to present at #CCN2024! Join me and Daniel Yamins today at 11:15-1:15 (C54) for our poster: "Towards Task-Appropriate Readout Mechanisms For Physical Scene Understanding". We propose a novel strategy for designing task-appropriate readout models using idealized representations

Stefan Stojanov (@sstj389)'s Twitter Profile Photo

Extracting structure that’s implicitly learned by video foundation models _without_ relying on labeled data is a fundamental challenge. What’s a better place to start than extracting motion? Temporal correspondence is a key building block of perception. Check out our paper!

Rahul Venkatesh (@rahul_venkatesh)'s Twitter Profile Photo

Excited to share our recent work on self-supervised discovery of motion concepts with counterfactual world modeling. It has been a privilege to work on this project with amazing collaborators Stefan Stojanov, Seungwoo (Simon) Kim, David Wendt, Kevin Feigelis, Jiajun Wu, and Daniel Yamins.

Stefan Stojanov (@sstj389)'s Twitter Profile Photo

Video prediction foundation models implicitly learn how objects move in videos. Can we learn how to extract these representations to accurately track objects in videos _without_ any supervision? Yes! 🧵 Work done with: Rahul Venkatesh, Seungwoo (Simon) Kim, Jiajun Wu, and Daniel Yamins.

Daniel Yamins (@dyamins)'s Twitter Profile Photo

New paper on 3D scene understanding for static images with a novel large-scale video prediction model. neuroailab.github.io/projects/lras_… Strong results in self-supervised depth extraction, novel view synthesis (aka camera control), and complex object manipulations.

Klemen Kotar (@klemenkotar)'s Twitter Profile Photo

🚀 Excited to share our new paper! We introduce the first autoregressive model that natively handles: 🎥 Novel view synthesis 🎨 Interactive 3D object editing 📏 Depth extraction ➕ and more! No fine-tuning needed—just prompting. Outperforming even diffusion-based methods!