Yotam Nitzan (@yotamnitzan)'s Twitter Profile
Yotam Nitzan

@yotamnitzan

Research scientist at Adobe, previously PhD at Tel-Aviv University.

ID: 1291746503015510016

Joined: 07-08-2020 14:42:31

77 Tweets

371 Followers

165 Following

Zongze Wu (@zongze_wu)'s Twitter Profile Photo

Our StyleAlign paper was accepted to ICLR 2022 as an oral presentation. Paper: openreview.net/pdf?id=Qg2vi4Z… GitHub: github.com/betterze/Style…

Kfir Aberman (@abermankfir)'s Twitter Profile Photo

Excited to share GANimator - our #SIGGRAPH2022 work! A single sequence is all you need to unlock new capabilities in the character #animation domain 🤯🦀.

Yael Vinker🎗 (@yvinker)'s Twitter Profile Photo

Excited to share our #SIGGRAPH2022 paper "CLIPasso: Semantically-Aware Object Sketching". CLIPasso converts images into sketches, with varying levels of abstraction. 🖼️🎨👩‍🎨 ✨ For more details visit our project page clipasso.github.io/clipasso/

Rinon Gal (@rinongal)'s Twitter Profile Photo

It's been a long time coming (almost a year!!!), but I'm happy to announce that StyleGAN-NADA has been accepted to #SIGGRAPH2022 ! 🥳🥳🥳 To celebrate, we created a new Hugging Face space for you to play with: huggingface.co/spaces/rinong/… 🤗 [1/5]

Kfir Aberman (@abermankfir)'s Twitter Profile Photo

This effect is driven by our eye gaze 🤯 Google AI Come see our #CVPR22 work - "Deep Saliency Prior for Reducing Visual Distraction", which shows that saliency models can be used to apply various image editing effects in a zero-shot setting. deep-saliency-prior.github.io

Yotam Nitzan (@yotamnitzan)'s Twitter Profile Photo

Good things come in threes! Happy to announce that MyStyle has been accepted to SIGGRAPH Asia 2022, a new and improved version is on arXiv and our source code is finally out: github.com/google/mystyle 🎉

Or Patashnik (@opatashnik)'s Twitter Profile Photo

Happy to share our latest work, “Cross-Image Attention”! 🔀🖼️🔍 We show how we can perform zero-shot appearance transfer by building on the self-attention layers of image diffusion models 😲 Great collaboration led by Yuval Alaluf and Daniel Garibi garibida.github.io/cross-image-at…

Rinon Gal (@rinongal)'s Twitter Profile Photo

TL;DR: We use SDS from text-to-video models to animate vector-graphics sketches! Please check our project page for more details, and more penguins: livesketch.github.io

Yotam Nitzan (@yotamnitzan)'s Twitter Profile Photo

Excited to be at #ECCV2024! 🇮🇹 Presenting LazyDiffusion (lazydiffusion.github.io) — an efficient approach for local image editing with diffusion models (Tuesday, 16:30). Stop by :) DM to chat about research, Italian food recommendations, and intern opportunities at Adobe.

Gaurav Parmar (@gauravtparmar)'s Twitter Profile Photo

[1/4] Ever wondered what it would be like to use images—rather than text—to generate object and background compositions? We introduce VisualComposer, a method for compositional image generation with object-level visual prompts.

Ryan Po (@po_lhr)'s Twitter Profile Photo

Most video models struggle to feel like real worlds. They forget what's just out of view, slow down as videos get longer, or break causality. We think State Space Models are a natural fit for models with: 🧠 long-term memory across hundreds of frames ⚡ constant-speed