Jeongsoo Park (@jespark0)'s Twitter Profile
Jeongsoo Park

@jespark0

PhD student @UMichCSE

ID: 1522624138678046720

Link: http://jespark.net · Joined: 06-05-2022 17:08:04

16 Tweets

129 Followers

117 Following

Daniel Geng (@dangengdg)'s Twitter Profile Photo

Can you make a jigsaw puzzle with two different solutions? Or an image that changes appearance when flipped? We can do that, and a lot more, by using diffusion models to generate optical illusions! Continue reading for more illusions and method details 🧵

Daniel Geng (@dangengdg)'s Twitter Profile Photo

What do you see in these images? These are called hybrid images, originally proposed by Aude Oliva et al. They change appearance depending on size or viewing distance, and are just one kind of perceptual illusion that our method, Factorized Diffusion, can make.
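For context, the classic construction behind hybrid images (from Oliva et al.'s original work) combines the low spatial frequencies of one image with the high spatial frequencies of another: up close the fine detail dominates, while from a distance only the coarse structure survives. A minimal NumPy sketch of that idea, using a box blur as a crude low-pass filter (the original work uses Gaussians, and Factorized Diffusion replaces this hand-crafted pipeline entirely with diffusion-model noise estimates):

```python
import numpy as np

def blur(img, k=9):
    # Separable box blur: a crude stand-in for the Gaussian
    # low-pass filter used in classic hybrid images.
    kernel = np.ones(k) / k
    smooth = lambda row: np.convolve(row, kernel, mode="same")
    out = np.apply_along_axis(smooth, 0, img)
    return np.apply_along_axis(smooth, 1, out)

def hybrid_image(img_far, img_near, k=9):
    # Low frequencies of img_far (what you see from a distance)
    # plus high frequencies of img_near (what you see up close).
    low = blur(img_far, k)
    high = img_near - blur(img_near, k)
    return low + high
```

The key design point is that the two frequency bands barely overlap, so each viewing distance "selects" a different source image.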

Yiming Dou (@_yimingdou)'s Twitter Profile Photo

NeRF captures visual scenes in 3D👀. Can we capture their touch signals🖐️, too? In our #CVPR2024 paper Tactile-Augmented Radiance Fields (TaRF), we estimate both visual and tactile signals for a given 3D position within a scene. Website: dou-yiming.github.io/TaRF/ arXiv:

Ziyang Chen (@czyangchen)'s Twitter Profile Photo

These spectrograms look like images, but can also be played as a sound! We call these images that sound. How do we make them? Look and listen below to find out, and to see more examples!

Sarah Jabbour (@sarahjabbour_)'s Twitter Profile Photo

This year I'm organizing the ML4H Outreach program, and I want to highlight our Author Mentorship program. Whether you're a mentee looking for guidance or a more experienced researcher with time to mentor, we'd love to have you as part of this program! The deadline to apply is July 5!

Sarah Jabbour (@sarahjabbour_)'s Twitter Profile Photo

📢Presenting 𝐃𝐄𝐏𝐈𝐂𝐓: Diffusion-Enabled Permutation Importance for Image Classification Tasks #ECCV2024 We use permutation importance to compute dataset-level explanations for image classifiers using diffusion models (without access to model parameters or training data!)

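For background, classic permutation importance (Breiman-style) scores a feature by how much accuracy drops when that feature is shuffled across the dataset. A minimal sketch of that baseline idea follows; note this is only the underlying principle, not DEPICT itself, which instead permutes human-interpretable concepts across images by re-generating them with a diffusion model:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    # Classic permutation importance: the drop in accuracy when one
    # feature column is shuffled across the dataset. Model-agnostic;
    # needs only a predict function, not model parameters.
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)          # accuracy on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j
            drops.append(base - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances
```

A feature the classifier relies on produces a large accuracy drop when shuffled; an ignored feature produces a drop of exactly zero.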
Ayush Shrivastava (@ayshrv)'s Twitter Profile Photo

We present Global Matching Random Walks, a simple self-supervised approach to the Tracking Any Point (TAP) problem, accepted to #ECCV2024. We train a global matching transformer to find cycle-consistent tracks through video via contrastive random walks (CRW).
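The cycle-consistency objective behind contrastive random walks can be sketched compactly: build soft transition matrices between consecutive frames from feature similarities, walk forward through the video and back, and ask every patch to return to itself. A hedged NumPy sketch of that loss (simplified from Jabri et al.'s CRW formulation; the tweet's method trains a global matching transformer with this kind of supervision):

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cycle_consistency_loss(feats, tau=0.07):
    # feats: list of (N, D) L2-normalized patch features, one per frame.
    # Walk forward along the frames and back (a palindrome of frames);
    # the round-trip transition matrix should be near the identity,
    # supervised with cross-entropy on its diagonal.
    frames = feats + feats[-2::-1]           # t0 .. T .. t0
    walk = np.eye(frames[0].shape[0])
    for a, b in zip(frames[:-1], frames[1:]):
        walk = walk @ softmax(a @ b.T / tau)  # soft transitions
    # Each node should return to itself after the round trip.
    return -np.mean(np.log(np.diag(walk) + 1e-9))
```

Distinct, consistent features across frames give near-identity round trips and a near-zero loss; uncorrelated features scatter the walk and inflate it.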

Chris Rockwell (@_crockwell)'s Twitter Profile Photo

Ever wish YouTube had 3D labels? 🚀Introducing🎥DynPose-100K🎥, an Internet-scale collection of diverse videos annotated with camera pose! Applications include camera-controlled video generation🤩and learned dynamic pose estimation😯 Download: huggingface.co/datasets/nvidi…

Daniel Geng (@dangengdg)'s Twitter Profile Photo

Hello! If you like pretty images and videos and want a rec for a CVPR oral session, you should def go to Image/Video Gen, Friday at 9am: I'll be presenting "Motion Prompting", Ryan Burgert will be presenting "Go with the Flow", and Pascal CHANG will be presenting "LookingGlass".

Yiming Dou (@_yimingdou)'s Twitter Profile Photo

Ever wondered how a scene sounds👂 when you interact👋 with it? Introducing our #CVPR2025 work "Hearing Hands: Generating Sounds from Physical Interactions in 3D Scenes" -- we make 3D scene reconstructions audibly interactive! yimingdou.com/hearing_hands/

Ayush Shrivastava (@ayshrv)'s Twitter Profile Photo

Excited to share our CVPR 2025 paper on cross-modal space-time correspondence! We present a method to match pixels across different modalities (RGB-Depth, RGB-Thermal, Photo-Sketch, and cross-style images), trained entirely using unpaired data and self-supervision.

Linyi Jin (@jin_linyi)'s Twitter Profile Photo

Hello! If you are interested in dynamic 3D or 4D, don't miss oral session 3A at 9 am on Saturday: Zhengqi Li will be presenting "MegaSaM", I'll be presenting "Stereo4D", and Qianqian Wang will be presenting "CUT3R".

Jeongsoo Park (@jespark0)'s Twitter Profile Photo

Had a ton of fun presenting today at #CVPR2025! Thanks to everyone who came to my poster, and thank you for asking excellent questions!
