Carl Vondrick (@cvondrick) 's Twitter Profile
Carl Vondrick

@cvondrick

Associate Professor at @Columbia. PC for @iclr_conf

ID: 870606756

Link: http://www.cs.columbia.edu/~vondrick/ · Joined: 09-10-2012 21:14:35

761 Tweets

6.6K Followers

574 Following

Daniel Geng (@dangengdg) 's Twitter Profile Photo

What do you see in these images? These are called hybrid images, originally proposed by Aude Oliva et al. They change appearance depending on size or viewing distance, and are just one kind of perceptual illusion that our method, Factorized Diffusion, can make.
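The tweet doesn't detail Factorized Diffusion itself, but the classic hybrid-image construction it builds on (Oliva et al.) is easy to sketch: combine the low spatial frequencies of one image with the high frequencies of another. A minimal NumPy sketch, with the function names and FFT cutoff being illustrative assumptions rather than anything from the paper:

```python
import numpy as np

def lowpass(img, cutoff):
    """Keep only spatial frequencies within `cutoff` of DC (via an FFT mask)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    mask = (xx ** 2 + yy ** 2) <= cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def hybrid_image(img_far, img_near, cutoff=8):
    """Low frequencies of img_far plus high frequencies of img_near.
    Up close, the detailed img_near dominates; at a distance (or when
    the image is shrunk), only img_far's coarse structure survives."""
    low = lowpass(img_far, cutoff)
    high = img_near - lowpass(img_near, cutoff)
    return low + high

# Toy grayscale "images" just to exercise the pipeline
rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))
print(hybrid_image(a, b).shape)  # (64, 64)
```

With a cutoff large enough to pass every frequency, the low-pass is an identity and the hybrid collapses back to `img_far`, which is a handy sanity check.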

Basile Van Hoorick (@basilevanh) 's Twitter Profile Photo

Excited to share our new paper on large-angle monocular dynamic novel view synthesis! Given a single RGB video, we propose a method that can imagine what that scene would look like from any other viewpoint. Website: gcd.cs.columbia.edu Paper: arxiv.org/abs/2405.14868 🧵(1/5)

ICLR 2025 (@iclr_conf) 's Twitter Profile Photo

📢 ICLR 2025 submissions must have an abstract registered on OpenReview by Sept 27 at 11:59pm AoE. The author list cannot be changed after this time. All papers are also required to have an author who is registered to review. Register as a reviewer here: docs.google.com/forms/d/e/1FAI…

ICLR 2025 (@iclr_conf) 's Twitter Profile Photo

ICLR by the numbers:
- 13,665 abstracts
- 15,249 reviewers
- 824 area chairs
- 71 senior area chairs

Full papers must be submitted by Oct 1 at 11:59pm AoE. Good luck!!

Sumit Sarin (@_sumit_sarin_) 's Twitter Profile Photo

Have you wondered "How Video Meetings Change Your Expression?"

We will be presenting our work at #ECCV2024 (European Conference on Computer Vision) tomorrow, Oct 1 @ 10:30AM (poster #195). Come say hi!

facet.cs.columbia.edu

Huge thanks to my amazing collaborators Utkarsh Mall, Purva Tendulkar, and Carl Vondrick.

Jeremy Klotz (@jklotz_) 's Twitter Profile Photo

At #ECCV2024, we presented Minimalist Vision with Freeform Pixels, a new vision paradigm that uses a small number of freeform pixels to solve lightweight vision tasks. We are honored to have received the Best Paper Award! Check out the project here: cave.cs.columbia.edu/projects/categ…

ICLR 2025 (@iclr_conf) 's Twitter Profile Photo

For #ICLR2025, we're piloting a feedback agent that provides optional feedback to reviewers. The aim is to help make reviews more constructive and actionable for authors. blog.iclr.cc/2024/10/09/icl…

Animesh Garg (@animesh_garg) 's Twitter Profile Photo

"This paper lacks comparisons to baselines and is lacking excitement and novelty" -- The bane of authors trying to make a case of their papers. -- vagueness without evidence This year the ICLR PC is young and fearless, and trying many new things. Won't we all love reviews that

"This paper lacks comparisons to baselines and is lacking excitement and novelty" -- 
The bane of authors trying to make a case of their papers. -- vagueness without evidence

This year the ICLR PC is young and fearless, and trying many new things.
Won't we all love reviews that
Ruoshi Liu (@ruoshi_liu) 's Twitter Profile Photo

Introducing Dr. Robot, a robot self-model which is differentiable from its visual appearance to its control parameters. With it, we can control and plan robot actions through image gradients. Accepted to CoRL 2024 with an oral! drrobot.cs.columbia.edu
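The loop the tweet describes, backpropagating a pixel-space loss through a differentiable self-model down to control parameters, can be illustrated with a toy linear "renderer." Everything below (the matrix R, dimensions, learning rate) is an illustrative stand-in, not the paper's model:

```python
import numpy as np

# Toy differentiable "self-model": control parameters theta are rendered
# to a 100-pixel image by a fixed linear map R (a stand-in for a learned,
# differentiable appearance model).
rng = np.random.default_rng(0)
R = rng.standard_normal((100, 3))      # pixels x control dims
theta_goal = np.array([0.5, -1.0, 2.0])
target_img = R @ theta_goal            # desired visual outcome

# Plan controls by gradient descent on the image-space loss
# L(theta) = 0.5 * ||R @ theta - target_img||^2.
theta = np.zeros(3)
for _ in range(300):
    residual = R @ theta - target_img
    grad = R.T @ residual              # dL/dtheta via the chain rule
    theta -= 0.01 * grad               # image gradients drive the controls

print(np.round(theta, 3))              # converges toward theta_goal
```

The point of the sketch is only the direction of information flow: the loss lives in image space, yet the update lands on control parameters because the renderer is differentiable.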

Ruoshi Liu (@ruoshi_liu) 's Twitter Profile Photo

We present🌊AquaBot🤖: a fully autonomous underwater manipulation system powered by visuomotor policies that can continue to improve through self-learning to perform tasks including object grasping, garbage sorting, and rescue retrieval. aquabot.cs.columbia.edu more details👇

Adam Elmachtoub (@adam235711) 's Twitter Profile Photo

Don't forget to apply; the deadline is a couple of weeks away. IE/OR is a broad field: if you do things with data applied to energy, healthcare, manufacturing, non-profits, transportation, etc., apply!

Yunzhu Li (@yunzhuliyz) 's Twitter Profile Photo

📢 I’ll be admitting PhD students to Columbia CS in the heart of NYC 🗽—the most vibrant city in the world! 🌆 If you're passionate about advancing robot learning and envision a future where robots 🤖 are part of our daily lives, apply to join my group: yunzhuli.github.io

John Hewitt (@johnhewtt) 's Twitter Profile Photo

I’m hiring PhD students in computer science at Columbia! Our lab will tackle core challenges in understanding and controlling neural models that interact with language. For example:
- methods for LLM control
- discoveries of LLM properties
- pretraining for understanding

Anand Bhattad (@anand_bhattad) 's Twitter Profile Photo

🧵 1/3 Many at #CVPR2024 & #ECCV2024 asked what would be next in our workshop series. We're excited to announce "How to Stand Out in the Crowd?" at #CVPR2025 Nashville - our 4th community-building workshop featuring this incredible speaker lineup! 🔗 sites.google.com/view/standoutcv

Junbang Liang (@liangjunbang) 's Twitter Profile Photo

Can a visuomotor policy learn from video generation alone? We find that learning to generate videos is an effective proxy, with action-free video data providing critical benefits for generalizing to novel tasks! 🚀 Website: videopolicy.cs.columbia.edu Paper: arxiv.org/abs/2508.00795