Zongze Wu (@zongze_wu) 's Twitter Profile
Zongze Wu

@zongze_wu

computer vision researcher

ID: 1361740007095095296

Joined: 16-02-2021 18:11:38

60 Tweets

177 Followers

118 Following

Yotam Nitzan (@yotamnitzan) 's Twitter Profile Photo

Don't you just hate it when your photos come out blurry? And what about not smiling in the group photo? Our new work, MyStyle, has got you covered! mystyle-personalized-prior.github.io Using ~100 of your photos, MyStyle learns what 𝘆𝗼𝘂 look like and will fix your lousy photos. 🧵

Zongze Wu (@zongze_wu) 's Twitter Profile Photo

We'll present our work StyleAlign on the analysis and applications of Aligned Generative Models in the following oral and poster sessions in ICLR. You are more than welcome to join us.

Omri Avrahami (@omriavr) 's Twitter Profile Photo

[1/n] I am happy to share that our GAN Cocktail paper was accepted to #ECCV2022 (European Conference on Computer Vision)! Many thanks to my great supervisors Dani Lischinski and Ohad Fried! The project page is available at: omriavrahami.com/GAN-cocktail-p…

Or Patashnik (@opatashnik) 's Twitter Profile Photo

Happy to share that "Third Time's the Charm?" was accepted to the AIM workshop at #ECCV2022! In this work, we analyze SG3 and explore what it has to offer compared to SG2. We go beyond image editing and show video editing results not possible before with SG2.

Zongze Wu (@zongze_wu) 's Twitter Profile Photo

We updated the code for [StyleCLIP](github.com/orpatashnik/St…). It now supports StyleSpace single-channel editing and StyleCLIP global direction editing based on stylegan2-ada-pytorch.
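The single-channel StyleSpace edit mentioned above can be sketched roughly as follows. This is a toy illustration, not the repo's actual API: the layer names, shapes, and the stand-in `style_space` dict are all assumptions; in the real code the style codes come from StyleGAN2's affine layers.

```python
import numpy as np

# Hypothetical stand-in for StyleSpace codes: one (batch, channels) array
# per synthesis layer. Real shapes vary per layer in StyleGAN2.
style_space = {f"layer_{i}": np.zeros((1, 512)) for i in range(14)}

def edit_single_channel(s, layer, channel, strength):
    """Shift one StyleSpace channel by `strength`, leaving all other
    channels untouched -- the property that makes StyleSpace edits
    highly disentangled."""
    edited = {k: v.copy() for k, v in s.items()}
    edited[layer][:, channel] += strength
    return edited

edited = edit_single_channel(style_space, "layer_6", channel=42, strength=10.0)
```

Feeding `edited` back through the synthesis network (not shown) would then change only the attribute controlled by that channel.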

Zongze Wu (@zongze_wu) 's Twitter Profile Photo

We find the StyleCLIP global direction method works reasonably well on human dress editing. Feel free to play with it [Here](github.com/orpatashnik/St…)

Omri Avrahami (@omriavr) 's Twitter Profile Photo

[1/5] Always wondered what people see when looking at a Rorschach test? SpaText - our recent #CVPR2023 paper from @MetaAI may give you a sneak peek! TL;DR: We extend text-to-image models with region-specific textual controllability. Project Page: omriavrahami.com/spatext/

Yotam Nitzan (@yotamnitzan) 's Twitter Profile Photo

LazyDiffusion is accepted to #ECCV2024! Traditional image editing methods regenerate unchanged pixels, wasting time and computation. LazyDiffusion generates only novel pixels while respecting the full image context, and does so up to x10 faster! lazydiffusion.github.io
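The core idea described in the tweet — generate only the novel pixels, then composite them into the untouched original — can be sketched as a toy masked edit. The `generate_region` callable is a hypothetical stand-in for the diffusion generator; the real model conditions on the full image context when producing the masked pixels.

```python
import numpy as np

def lazy_edit(image, mask, generate_region):
    """Composite newly generated pixels into the original image.
    Only pixels where mask is True are replaced; everything else is
    reused as-is, which is where the speedup comes from."""
    novel = generate_region(image, mask)   # sees full context, fills mask
    out = image.copy()
    out[mask] = novel[mask]
    return out

img = np.ones((4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # edit only the center patch

edited = lazy_edit(img, mask, lambda im, m: np.zeros_like(im))
```

In the real system the generator's compute scales with the mask size rather than the full canvas, which is what yields the reported up-to-10x speedup.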

Adobe Research (@adoberesearch) 's Twitter Profile Photo

Adobe Research Principal Scientist Aaron Hertzmann won the Computer Graphics Achievement Award from @SIGGRAPH, one of the highest honors in the field! Learn about his new theory of perception, and his work at the intersection of art and computer graphics. adobe.ly/46iiX7X

AK (@_akhaliq) 's Twitter Profile Photo

TurboEdit: Instant text-based image editing. Discuss: huggingface.co/papers/2408.08… We address the challenges of precise image inversion and disentangled image editing in the context of few-step diffusion models. We introduce an encoder-based iterative inversion technique. The

小互 (@imxiaohu) 's Twitter Profile Photo

TurboEdit: instant text-based image editing. A tool developed by the Adobe Research team that lets users quickly edit images through simple text descriptions. With a single sentence you can control specific changes in designated regions of an image, such as changing hair color or length, or adjusting a person's age. It modifies only the specified parts while keeping the rest of the image unchanged.

Richard Zhang (@rzhang88) 's Twitter Profile Photo

(4/3) Links with additional information Webpage: betterze.github.io/TurboEdit/ Paper: arxiv.org/abs/2408.08332 Project video: youtube.com/watch?v=1LG2xC…

Zongze Wu (@zongze_wu) 's Twitter Profile Photo

TurboEdit can invert an image in 1s, and each following edit only takes 0.5s. Work w/ Richard Zhang, Nick Kolkin, Jon Brandt, Eli Shechtman Webpage: betterze.github.io/TurboEdit/ Paper: arxiv.org/abs/2408.08332… Video: youtube.com/watch?v=1LG2xC…
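The "encoder-based iterative inversion" mentioned in the TurboEdit tweets can be illustrated with a toy fixed-point loop: at each step, encode the reconstruction error and use it to correct the latent. The `encode`/`decode` callables and the linear toy model below are illustrative assumptions, not TurboEdit's actual networks.

```python
import numpy as np

def iterative_invert(target, encode, decode, steps=20):
    """Toy sketch of encoder-based iterative inversion: start from an
    encoder guess, then repeatedly encode the residual (target minus
    reconstruction) and add the correction to the latent."""
    z = encode(target)
    for _ in range(steps):
        recon = decode(z)
        z = z + encode(target - recon)
    return z

# Toy linear encoder/decoder pair; the loop converges because the
# composition encode(decode(.)) is a contraction here.
target = np.linspace(-1.0, 1.0, 8)
z = iterative_invert(target, lambda x: 0.5 * x, lambda z: 1.5 * z)
```

Each refinement step is cheap, which is consistent with the reported timings (about 1 s to invert, about 0.5 s per subsequent edit), though the real numbers come from the full diffusion pipeline, not this toy.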

Xun Huang (@xunhuang1995) 's Twitter Profile Photo

Our team is recruiting an intern to work on fundamental architectural redesign of visual generative models (Summer 2025). DM/email me if you are interested!

Rohit Gandikota (@rohitgandikota) 's Twitter Profile Photo

Can you ask a Diffusion Model to break down a concept? 👀 SliderSpace 🚀 reveals maps of the visual knowledge naturally encoded within diffusion models. It works by decomposing the model's capabilities into intuitive, composable sliders. Here's how 🧵👇
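The "composable sliders" idea in the SliderSpace tweet can be sketched as directions in a latent space that compose additively. The slider names and the random directions below are purely illustrative assumptions; in SliderSpace the directions are discovered from the model itself rather than sampled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical slider directions in a 512-dim latent space.
sliders = {
    "smile": rng.normal(size=512),
    "age": rng.normal(size=512),
}

def apply_sliders(z, weights):
    """Compose edits by adding scaled slider directions to the latent.
    Composability here just means the edits sum linearly."""
    out = z.copy()
    for name, alpha in weights.items():
        out += alpha * sliders[name]
    return out

z = np.zeros(512)
out = apply_sliders(z, {"smile": 1.0, "age": -0.5})
```

Because the edits are linear, applying a slider with weight `-alpha` undoes an edit made with weight `alpha`, which is what makes the sliders intuitive to mix.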