
Richard Zhang
@rzhang88
Sr Research Scientist @AdobeResearch
PhD @berkeley_ai, BS/MEng @cornellece
🤖 Computer vision, deep learning, graphics
ID: 118862315
http://richzhang.github.io
Joined 01-03-2010 23:35:57
350 Tweets
7.7K Followers
296 Following


We introduce🌟Editable Image Elements🥳, a new disentangled and controllable latent space for diffusion models that allows for various image editing operations (e.g., move, resize, de-occlusion, object removal, variations, composition) jitengmu.github.io/Editable_Image… More details🧵👇




I'll be speaking about Data Attribution, including our recently accepted NeurIPS 2024 paper: peterwang512.github.io/AttributeByUnl… Work w/ Sheng-Yu Wang, Aaron Hertzmann, Jun-Yan Zhu, and A. A. Efros. AI4VA workshop, European Conference on Computer Vision #ECCV2024, 11:45am in Amber Room 2. See you there!


We're excited to introduce our new 1-step image generator, Diffusion2GAN at #ECCV2024, which enables ODE-preserving 1k image generation in just 0.16 seconds! Check out our #ECCV2024 paper mingukkang.github.io/Diffusion2GAN/ and stop by poster #181 (Wed Oct 2, 10:30-12:30 CEST) if you're attending!

Precise spatial image editing with diffusion models? We will be presenting #ECCV2024 Editable Image Elements (Thu Oct 3, 16:30-18:30 CEST, poster #262). Please come check out our poster and say hi😃! w/ Michaël Gharbi, Richard Zhang, Eli Shechtman, Nuno Vasconcelos, Xiaolong Wang, and Taesung Park.

Check out our #SIGGRAPHASIA2024 technical paper, CustomDiffusion360, which adds object viewpoint control during customization. We are presenting today (Dec 3, 1 pm JST). Project page: customdiffusion360.github.io w/ Grace, Richard Zhang, Taesung Park, Eli Shechtman, and Jun-Yan Zhu


Generative models create an image inspired by the training data. But which training data are used to synthesize an image? Our #NeurIPS2024 work attributes a generated image to influential training data -- by unlearning *synthesized* images. Page: peterwang512.github.io/AttributeByUnl… 1/7
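The unlearning idea above can be caricatured in a few lines. This is a toy linear-regression sketch, not the paper's diffusion-model method: "unlearn" a synthesized output via gradient ascent on its loss, then score each training example by how much its own loss rises. All data, step sizes, and names here are illustrative assumptions.

```python
# Toy caricature of "attribution by unlearning" on linear regression.
# The actual AttributeByUnlearning method operates on diffusion models;
# everything below (data, losses, hyperparameters) is illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Training set for a small least-squares "model".
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=20)

def per_example_loss(w):
    return (X @ w - y) ** 2

# Fit w by gradient descent.
w = np.zeros(3)
for _ in range(500):
    w -= 0.01 * 2 * X.T @ (X @ w - y) / len(y)

# A "synthesized" output: a query close to training example 3.
x_q = X[3] + 0.01 * rng.normal(size=3)
y_q = y[3]

# Unlearn the synthesized output: gradient *ascent* on its loss.
w_u = w.copy()
for _ in range(40):
    w_u += 0.05 * 2 * (x_q @ w_u - y_q) * x_q

# Attribution score: how much each training example's loss rises
# after unlearning; influential examples rise the most.
delta = per_example_loss(w_u) - per_example_loss(w)
top = int(np.argmax(delta))  # in this toy, tends to sit near example 3
```

The key design choice mirrored here is that unlearning targets the *synthesized* sample, so influence is measured as collateral damage to training examples rather than by retraining with each one removed.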

