Daniel Winter (@_daniel_winter_)'s Twitter Profile
Daniel Winter

@_daniel_winter_

Intern at @GoogleAI

ID: 1714515636498395136

Joined: 18-10-2023 05:36:16

29 Tweets

102 Followers

32 Following

Daniel Winter (@_daniel_winter_)'s Twitter Profile Photo

ObjectDrop is accepted to #ECCV2024! 🥳 In this work from Google AI we tackle photorealistic object removal and insertion. Congrats to the team: Matan Cohen, Shlomi Fruchter, Yael Pritch, Alex Rav-Acha, Yedid Hoshen. Check out our project page: objectdrop.github.io

Nataniel Ruiz (@natanielruizg)'s Twitter Profile Photo

With friends at Google we announce 💜 Magic Insert 💜 - a generative AI method that allows you to drag-and-drop a subject into an image with a vastly different style, achieving a style-harmonized and realistic insertion of the subject (Thread 🧵)
web: magicinsert.github.io
Nataniel Ruiz (@natanielruizg)'s Twitter Profile Photo

I'm sharing something unique we've been making at Google (w/ UNC). We are releasing our work on a new class of interactive experiences that we call generative infinite games, essentially video games where the game mechanics and graphics are fully subsumed by generative models 🧵
Asaf Shul (@shulasaf)'s Twitter Profile Photo

🚨 Excited to share ObjectMate, our latest from Google AI for zero-shot subject-driven generation & insertion 🚨
🔗 Project page: object-mate.com
📄 arXiv: arxiv.org/abs/2412.08645

Eliahu Horwitz | @ ICLR2025 (@eliahuhorwitz)'s Twitter Profile Photo

🚨 New paper alert! 🚨

Millions of neural networks now populate public repositories like Hugging Face 🤗, but most lack documentation. So, we decided to build an Atlas 🗺️

Project: horwitz.ai/model-atlas
Demo: huggingface.co/spaces/Eliahu/…

🧵👇🏻 Here's what we found:
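
As a rough illustration of the raw metadata such an atlas can be built from, here is a minimal sketch that walks public Hugging Face repositories with `huggingface_hub` and links derived models to the base models they declare via `base_model:` tags. It only shows what the Hub itself exposes; it is not the Model Atlas pipeline from the paper.

```python
# Minimal sketch: crawl public model metadata and recover parent -> child edges
# from "base_model:" tags. Illustration of the raw Hub metadata only, not the
# Model Atlas method itself.
from collections import defaultdict
from huggingface_hub import HfApi

api = HfApi()
edges = defaultdict(list)  # base model repo id -> list of derived model repo ids

# full=True asks the Hub to include tags in the listing response.
for model in api.list_models(limit=500, full=True):
    for tag in (model.tags or []):
        # Many (not all) repos declare lineage with tags like
        # "base_model:meta-llama/Llama-2-7b" or "base_model:finetune:<repo id>".
        if tag.startswith("base_model:"):
            parent = tag.split("base_model:", 1)[1].split(":")[-1]
            edges[parent].append(model.id)  # model.id is the repo id, e.g. "org/name"

# Print the most "forked" base models in this small sample.
for parent, children in sorted(edges.items(), key=lambda kv: -len(kv[1]))[:10]:
    print(f"{parent}: {len(children)} derived models")
```
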
Niv Cohen (@cohniv)'s Twitter Profile Photo

In our #ICLR2025 paper, we introduce WIND 🌬️
A method that embeds a distortion-free watermark directly in the diffusion noise! Our method ensures that the watermark in one image does not reveal information about the watermark in other images 🤫
📝 arxiv.org/abs/2412.04653

(1/5)
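
To make the general idea concrete, the sketch below shows key-derived initial noise for diffusion watermarking: each image's starting latent is derived from a secret key plus a per-image nonce, so one watermarked image reveals nothing about another's watermark. This is not the WIND algorithm, only the broad principle, and it assumes some inversion procedure that recovers an estimate of the initial noise at verification time.

```python
# Sketch of key-derived initial noise for diffusion watermarking (NOT WIND itself).
import hashlib
import torch
import torch.nn.functional as F

def keyed_initial_noise(secret_key: bytes, nonce: bytes, shape=(4, 64, 64)) -> torch.Tensor:
    """Derive the diffusion starting noise deterministically from (key, nonce)."""
    seed = int.from_bytes(hashlib.sha256(secret_key + nonce).digest()[:8], "big")
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=gen)

def detect(recovered_noise: torch.Tensor, secret_key: bytes, candidate_nonces) -> bytes | None:
    """Match noise recovered by (assumed) diffusion inversion against candidate nonces."""
    flat = recovered_noise.flatten()
    best_nonce, best_score = None, -1.0
    for nonce in candidate_nonces:
        ref = keyed_initial_noise(secret_key, nonce, recovered_noise.shape).flatten()
        score = F.cosine_similarity(flat, ref, dim=0).item()
        if score > best_score:
            best_nonce, best_score = nonce, score
    # A real system would calibrate this threshold; 0.5 is an arbitrary placeholder.
    return best_nonce if best_score > 0.5 else None

# Sampling would start from z0 = keyed_initial_noise(key, nonce); verification would
# invert the image back to an estimate of z0 and call detect() on it.
```
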
Kevin Lu (@kevinlu4588)'s Twitter Profile Photo

When we "erase" a concept from a diffusion model, is that knowledge truly gone? 🤔

We investigated, and the answer is often 'no'!

Using simple probing techniques, the knowledge traces of the erased concept can be easily resurfaced 🔍

Here is what we learned 🧵👇
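
For readers unfamiliar with probing, one simple form (illustrative only; the paper's probes and data may differ) is to train a linear classifier on the model's internal activations and check whether it can still tell "erased concept" inputs apart from unrelated ones on held-out examples.

```python
# Generic linear-probe illustration (not the paper's exact procedure): if a probe
# trained on frozen internal features separates erased-concept inputs from others
# on held-out data, the concept information is still linearly decodable.
import torch
import torch.nn as nn

def linear_probe_heldout_accuracy(feats: torch.Tensor, labels: torch.Tensor,
                                  train_frac: float = 0.8, epochs: int = 300) -> float:
    """Fit a linear probe on frozen features and report held-out accuracy."""
    n_train = int(train_frac * len(feats))
    probe = nn.Linear(feats.shape[1], 2)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(probe(feats[:n_train]), labels[:n_train]).backward()
        opt.step()
    with torch.no_grad():
        preds = probe(feats[n_train:]).argmax(dim=1)
    return (preds == labels[n_train:]).float().mean().item()

# Placeholder data: in a real experiment `feats` would be activations collected from
# the "erased" diffusion model (e.g. U-Net mid-block features) on prompts that do or
# do not mention the erased concept, and `labels` would mark which is which.
feats = torch.randn(400, 256)
labels = torch.randint(0, 2, (400,))
print(f"held-out probe accuracy: {linear_probe_heldout_accuracy(feats, labels):.2f}")
```
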
Eliahu Horwitz | @ ICLR2025 (@eliahuhorwitz)'s Twitter Profile Photo

<a href="/karpathy/">Andrej Karpathy</a> Thanks for the inspiring talk (as always!). I'm the author of the Model Atlas. I'm delighted you liked our work, seeing the figure in your slides felt like an "achievement unlocked"๐Ÿ™ŒWould really appreciate a link to our work in your slides/tweet arxiv.org/abs/2503.10633
Nataniel Ruiz (@natanielruizg)'s Twitter Profile Photo

We are releasing a paper I'm very excited about. We know test-time scaling is a path to greatly improved results and, in the case of LLMs, enables reasoning. We present a new and promising way to amortize it into training using HyperNetworks for image generation models.
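
The tweet does not specify the architecture, so the sketch below is only the generic hypernetwork pattern it alludes to: a small network predicts low-rank weight updates for a frozen layer of a generator, conditioned on an embedding, so that extra capacity is learned once at training time rather than spent at every test-time sample. The class and parameter names are illustrative, not the paper's.

```python
# Generic hypernetwork pattern (illustrative, not the paper's architecture).
import torch
import torch.nn as nn

class LowRankHyperNetwork(nn.Module):
    def __init__(self, cond_dim: int, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.rank, self.in_features, self.out_features = rank, in_features, out_features
        # Predict the two factors of a rank-r weight update: delta_W = B @ A.
        self.to_a = nn.Linear(cond_dim, rank * in_features)
        self.to_b = nn.Linear(cond_dim, out_features * rank)

    def forward(self, cond: torch.Tensor) -> torch.Tensor:
        a = self.to_a(cond).view(-1, self.rank, self.in_features)   # (B, r, in)
        b = self.to_b(cond).view(-1, self.out_features, self.rank)  # (B, out, r)
        return b @ a                                                # (B, out, in)

class ModulatedLinear(nn.Module):
    """A frozen linear layer whose weight is shifted per-sample by the hypernetwork."""
    def __init__(self, base: nn.Linear, hyper: LowRankHyperNetwork):
        super().__init__()
        self.base, self.hyper = base, hyper
        for p in self.base.parameters():
            p.requires_grad_(False)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        delta_w = self.hyper(cond)                                  # (B, out, in)
        return self.base(x) + torch.einsum("boi,bi->bo", delta_w, x)

# Dummy usage: condition on a 64-d embedding, modulate a 128 -> 128 layer.
layer = ModulatedLinear(nn.Linear(128, 128),
                        LowRankHyperNetwork(cond_dim=64, in_features=128, out_features=128))
x, cond = torch.randn(2, 128), torch.randn(2, 64)
print(layer(x, cond).shape)  # torch.Size([2, 128])
```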