Richard Zhang (@rzhang88) 's Twitter Profile
Richard Zhang

@rzhang88

Sr Research Scientist @AdobeResearch
PhD @berkeley_ai, BS/MEng @cornellece

🤖 Computer vision, deep learning, graphics

ID: 118862315

Link: http://richzhang.github.io · Joined: 01-03-2010 23:35:57

350 Tweets

7.7K Followers

296 Following

MIT CSAIL (@mit_csail) 's Twitter Profile Photo

Diffusion models generate high-quality images but require hundreds of forward passes. MIT CSAIL and Adobe Research introduce Distribution Matching Distillation (DMD), a distillation approach that converts costly multi-step diffusion models into fast one-step generators.
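The one-step idea behind distillation approaches like DMD can be illustrated on a toy 1-D problem. This sketch is purely illustrative (the Gaussian target, analytic scores, learning rate, and parameterization are all made up here, not taken from the paper): a one-step generator is nudged along the difference between the score of its own output distribution and the score of the target distribution.

```python
import numpy as np

# Toy 1-D illustration of distribution matching: a one-step generator
# g(z) = w*z + b is trained so that the score of its output distribution
# ("fake") matches the score of a target distribution ("real").
# For Gaussians both scores are analytic: s(x) = -(x - mu) / sigma^2.

rng = np.random.default_rng(0)
mu_real, sigma_real = 3.0, 1.0        # target distribution N(3, 1)
w, b = 1.0, 0.0                       # generator initially outputs N(0, 1)
lr = 0.05

for step in range(2000):
    z = rng.standard_normal(256)
    x = w * z + b                     # one forward pass, no iterative sampling
    s_fake = -(x - b) / max(w * w, 1e-6)          # score of N(b, w^2)
    s_real = -(x - mu_real) / sigma_real ** 2     # score of N(3, 1)
    grad_x = s_fake - s_real          # matching gradient applied to the samples
    w -= lr * np.mean(grad_x * z)     # chain rule: dx/dw = z
    b -= lr * np.mean(grad_x)         # chain rule: dx/db = 1

print(b, abs(w))                      # b approaches mu_real, |w| approaches 1
```

At the optimum the two scores coincide, so the sample gradient vanishes; in the real method the analytic Gaussian scores are replaced by learned diffusion score networks.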

Jiteng Mu (@jitengmu) 's Twitter Profile Photo

We introduce🌟Editable Image Elements🥳, a new disentangled and controllable latent space for diffusion models that enables a range of image editing operations (e.g., move, resize, de-occlusion, object removal, variations, composition) jitengmu.github.io/Editable_Image… More details🧵👇

Yotam Nitzan (@yotamnitzan) 's Twitter Profile Photo

LazyDiffusion is accepted to #ECCV2024! Traditional image editing methods regenerate unchanged pixels, wasting time and computation. LazyDiffusion generates only novel pixels while respecting the full image context, and does so up to x10 faster! lazydiffusion.github.io

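The "generate only the novel pixels" idea can be sketched with stand-in components (everything below is a hypothetical toy: `encode_context` and `generate_crop` are placeholders, not the paper's networks; the point is that generation cost scales with the edited crop, not the full canvas):

```python
import numpy as np

# Toy sketch: encode the full image once into a compact context code,
# then run the (mock) generator only on the bounding crop of the mask.

H, W = 256, 256
rng = np.random.default_rng(0)
image = rng.random((H, W, 3))
mask = np.zeros((H, W), dtype=bool)
mask[100:140, 60:120] = True           # user-edited region: 40 x 60 pixels

def encode_context(img):
    # stand-in for a context encoder: one global code per channel
    return img.mean(axis=(0, 1))

def generate_crop(code, shape):
    # stand-in for the generator, run only on the crop
    return np.broadcast_to(code, shape).copy()

ys, xs = np.where(mask)
y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

code = encode_context(image)                        # full-image context, once
crop = generate_crop(code, (y1 - y0, x1 - x0, 3))   # work ~ crop size only

out = image.copy()
out[y0:y1, x0:x1][mask[y0:y1, x0:x1]] = crop[mask[y0:y1, x0:x1]]

# pixels outside the mask are untouched by construction
assert np.array_equal(out[~mask], image[~mask])
print(crop.shape, "generated instead of", image.shape)
```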
AK (@_akhaliq) 's Twitter Profile Photo

TurboEdit: instant text-based image editing. Discuss: huggingface.co/papers/2408.08… We address the challenges of precise image inversion and disentangled image editing in the context of few-step diffusion models. We introduce an encoder-based iterative inversion technique.
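Encoder-based iterative inversion can be sketched as a fixed-point loop on a toy linear "decoder" (entirely illustrative: the real method inverts a few-step diffusion model with a learned encoder, not a matrix; `decode` and `encoder_correction` are hypothetical stand-ins):

```python
import numpy as np

# Toy sketch: invert a known decoder D by repeatedly correcting the current
# noise estimate with the reconstruction error, z <- z + E(x - D(z)).

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) * 0.1 + np.eye(8)   # stand-in decoder: x = A @ z

def decode(z):
    return A @ z

def encoder_correction(residual):
    # stand-in encoder: the identity here (works because A is near I)
    return residual

x_target = rng.standard_normal(8)        # "image" we want to invert
z = np.zeros(8)                          # initial noise estimate
for _ in range(50):
    z = z + encoder_correction(x_target - decode(z))   # iterative refinement

recon_err = np.linalg.norm(decode(z) - x_target)
print(recon_err)                         # shrinks geometrically toward zero
```

The loop converges because the update map is a contraction when the encoder approximately inverts the decoder's local behavior.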

Richard Zhang (@rzhang88) 's Twitter Profile Photo

I'll be speaking about Data Attribution, including our recently accepted NeurIPS 2024 paper: peterwang512.github.io/AttributeByUnl… Work w/ Sheng-Yu Wang, Aaron Hertzmann, Jun-Yan Zhu, A. A. Efros. AI4VA workshop, European Conference on Computer Vision #ECCV2024, 11:45am at Amber Room 2. See you there!

Minguk_Kang (@minguk_kang) 's Twitter Profile Photo

We're excited to introduce our new 1-step image generator, Diffusion2GAN at #ECCV2024, which enables ODE-preserving 1k image generation in just 0.16 seconds! Check out our #ECCV2024 paper mingukkang.github.io/Diffusion2GAN/ and stop by poster #181 (Wed Oct 2, 10:30-12:30 CEST) if you're around!

Jiteng Mu (@jitengmu) 's Twitter Profile Photo

Precise spatial image editing with diffusion models? We will be presenting #ECCV2024 Editable Image Elements (Thu Oct 3, 16:30-18:30 CEST, poster #262). Please come check out our poster and say hi😃! w/ Michaël Gharbi, Richard Zhang, Eli Shechtman, Nuno Vasconcelos, Xiaolong Wang, and Taesung Park.

Nupur Kumari (@nupurkmr9) 's Twitter Profile Photo

Check out our #SIGGRAPHASIA2024 technical paper, CustomDiffusion360, which adds object viewpoint control during customization. We are presenting today (Dec 3, 1 pm JST). Project page: customdiffusion360.github.io w/ Grace, Richard Zhang, Taesung Park, Eli Shechtman, and Jun-Yan Zhu

Tianwei Yin (@tianweiy) 's Twitter Profile Photo

Video diffusion models generate high-quality videos but are too slow for interactive applications. We (MIT CSAIL and Adobe Research) introduce CausVid, a fast autoregressive video diffusion model that starts playing the moment you hit "Generate"! A thread 🧵

Sheng-Yu Wang (@shengyuwang6) 's Twitter Profile Photo

Generative models create an image inspired by the training data. But which training data are used to synthesize an image? Our #NeurIPS2024 work attributes a generated image to influential training data -- by unlearning *synthesized* images. Page: peterwang512.github.io/AttributeByUnl… 1/7
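The unlearning recipe can be sketched on a tiny stand-in model (purely illustrative: the paper works with generative image models, while here least-squares regression plays the role of the "model", and the one-step weight perturbation is a crude stand-in for the paper's unlearning procedure):

```python
import numpy as np

# Recipe: (1) take a model output, (2) perturb the model so it "forgets"
# that output, (3) attribute the output to the training points whose loss
# increases the most under the forgetting perturbation.

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
X[7] *= 4.0                                # make one training point distinctive
w_true = rng.standard_normal(5)
y = X @ w_true + 0.01 * rng.standard_normal(100)

w = np.linalg.lstsq(X, y, rcond=None)[0]   # the "trained model"

x_syn = X[7]                               # input behind a "synthesized" output
# crude unlearning step: perturb the weights along the synthesized input's
# direction, as one gradient step on a forgetting objective would do here
w_forget = w + 0.5 * x_syn

loss_before = (X @ w - y) ** 2
loss_after = (X @ w_forget - y) ** 2
scores = loss_after - loss_before          # per-training-point influence proxy

print(int(np.argmax(scores)))              # the distinctive point is attributed
```

Training points most aligned with the synthesized input suffer the largest loss increase, so they are flagged as most influential.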

Sheng-Yu Wang (@shengyuwang6) 's Twitter Profile Photo

I will be presenting this at #NeurIPS2024 today! Come if interested :) Wed 11 Dec, 4:30-7:30 pm, East Exhibit Hall A-C, Poster ID 2603.

Rohit Gandikota (@rohitgandikota) 's Twitter Profile Photo

Can you ask a Diffusion Model to break down a concept? 👀 SliderSpace 🚀 reveals maps of the visual knowledge naturally encoded within diffusion models. It works by decomposing the model's capabilities into intuitive, composable sliders. Here's how 🧵👇