Khaled Jedoui (@kjedoui)'s Twitter Profile

Khaled Jedoui

@kjedoui

ID: 1468373262589513730

Joined: 08-12-2021 00:14:15

11 Tweets

14 Followers

47 Following

MrBeast (@mrbeast):

I’m gonna give 10 random people that repost this and follow me $25,000 for fun (the $250,000 my X video made) I’ll pick the winners in 72 hours

Rahul Venkatesh (@rahul_venkatesh):

Excited to present at #CCN2024! Join me, Honglin Chen, and Daniel Yamins today at 1:30-3:30 (B109) for our poster: "Climbing the Ladder of Causation with Counterfactual World Modeling". We build a visual world model with capabilities analogous to Pearl's Ladder of Causation (cc: Judea Pearl).

Seungwoo (Simon) Kim (@sekim1112):

We prompt a generative video model to extract state-of-the-art optical flow, using zero labels and no fine-tuning. Our method, KL-tracing, achieves SOTA results on TAP-Vid and generalizes to challenging YouTube clips. With Khai Loong Aw, Klemen Kotar, Cristóbal Eyzaguirre Ercilla, and Wanhee Lee.

Klemen Kotar (@klemenkotar):

New preprint: SOTA optical flow extraction from pre-trained generative video models! While it seems intuitive that video models grasp optical flow, extracting that understanding has proven surprisingly elusive.

Daniel Yamins (@dyamins):

Over the past 18 months my lab has been developing a new approach to visual world modeling. A magnum opus that ties it all together will be out in the next couple of weeks. But for now, some individual application papers have poked out.

Yingtian Tang (@yingtian80536):

🧠 NEW PREPRINT: Many-Two-One: Diverse Representations Across Visual Pathways Emerge from A Single Objective. biorxiv.org/content/10.110…

Daniel Yamins (@dyamins):

It was a steep climb in the "early days" (~2012) up the gradient of better ImageNet categorization towards better visual system models. That tapped out around 2015 after ResNet, as (frankly) progress in computer vision kind of stalled.... But now with video models starting to

Martin Schrimpf @ICLR2025 (@martin_schrimpf):

What makes visual processing in the brain so powerful and flexible? Very excited to share our new work where we started from SOTA models that accurately predict dynamic brain activity during hours of video watching, and investigated core computations underlying visual perception

Martin Schrimpf @ICLR2025 (@martin_schrimpf):

Great work by Yingtian Tang with Abdulkadir Gokce, Khaled Jedoui, and Daniel Yamins (and me). Check out the full thread for more details: x.com/yingtian80536/… and, of course, the paper: biorxiv.org/content/10.110… #NeuroAI #Vision #Neuroscience #AI

Daniel Yamins (@dyamins):

Here is our best thinking about how to make world models. I would apologize for it being a massive 40-page behemoth, but it's worth reading: arxiv.org/pdf/2509.09737