He Zhang (@zhanghesprinter)'s Twitter Profile
He Zhang

@zhanghesprinter

Senior Research Scientist @ Adobe Research.
“old” student athlete for 100&200M

ID: 2742559937

Website: https://sites.google.com/site/hezhangsprinter · Joined: 14-08-2014 11:15:13

75 Tweets

408 Followers

255 Following

Adobe Research (@adoberesearch)'s Twitter Profile Photo

Adobe Research and the InDesign team joined forces for the Summit Sneak #ProjectVisionCast, an experimental prototype that allows users to combine data insights and explore brand imagery, fostering creative brainstorming backed by numbers. research.adobe.com/news/the-resea…

He Zhang (@zhanghesprinter)'s Twitter Profile Photo

Today @ ExHall D, Poster #176: we will present UniReal, our multimodal image generation and editing model. Come drop by if you're interested. #CVPR2025 Adobe Research

He Zhang (@zhanghesprinter)'s Twitter Profile Photo

DiffusionGS was accepted to ICCV 2025. We bake Gaussian splatting into the diffusion model, so multi-view generation and reconstruction no longer have to be done as two separate steps. Adobe Research

He Zhang (@zhanghesprinter)'s Twitter Profile Photo

We presented OmniVcus, a unified video customization framework that supports (1) single/double/multi-subject video customization, (2) instructive-edit subject-driven video customization, (3) camera-controlled subject-driven video customization, and (4) depth/mask-controlled subject-driven video customization.

Bilawal Sidhu (@bilawalsidhu)'s Twitter Profile Photo

Photoshop’s new Harmonize feature looks genuinely useful — effectively making complex compositing tasks just one click. Seems Adobe has productized Project Perfect Blend from their Sneaks presentation.

Adobe Research (@adoberesearch)'s Twitter Profile Photo

First unveiled as early research at #AdobeMAX last year, Adobe Photoshop’s new Harmonize feature can turn hours of editing into minutes! Kudos to the Adobe Research and Photoshop Engineering teams! adobe.ly/4lxiXXs

He Zhang (@zhanghesprinter)'s Twitter Profile Photo

Nice work! Glad to see more research treating image editing as a video generation task, as we did with Xi Chen on UniReal (CVPR'25). arxiv.org/abs/2412.07774

He Zhang (@zhanghesprinter)'s Twitter Profile Photo

I will be at #ICCV2025 presenting two papers in the main conference! 🚀 Details are in the thread below. In the meantime, we’re hiring interns and full-time research scientists passionate about generative modeling — feel free to reach out if you’d like to join us or chat at ICCV!

He Zhang (@zhanghesprinter)'s Twitter Profile Photo

The first one is DiffusionGS, a single-stage diffusion model that bridges 2D generation and 3D Gaussian splatting for fast, high-quality 3D scene reconstruction from just one image. caiyuanhao1998.github.io/project/Diffus…

He Zhang (@zhanghesprinter)'s Twitter Profile Photo

And the second: DIVE: Taming DINO for Subject-Driven Video Editing, which leverages powerful DINO features to achieve consistent, identity-preserving video edits guided by text or reference images.