Daniel Winter (@_daniel_winter_)'s Twitter Profile
Daniel Winter

@_daniel_winter_

Intern at @GoogleAI

ID: 1714515636498395136

Joined: 18-10-2023 05:36:16

29 Tweets

102 Followers

32 Following

Daniel Winter (@_daniel_winter_):

ObjectDrop is accepted to #ECCV2024! 🥳 In this work from Google AI we tackle photorealistic object removal and insertion. Congrats to the team: Matan Cohen, Shlomi Fruchter, Yael Pritch, Alex Rav-Acha, Yedid Hoshen. Check out our project page: objectdrop.github.io

Nataniel Ruiz (@natanielruizg):

With friends at Google we announce 💜 Magic Insert 💜 - a generative AI method that allows you to drag-and-drop a subject into an image with a vastly different style, achieving a style-harmonized and realistic insertion of the subject (Thread 🧵)
web: magicinsert.github.io

Nataniel Ruiz (@natanielruizg):

I'm sharing something unique we've been making at Google (w/ UNC). We are releasing our work on a new class of interactive experiences that we call generative infinite games, essentially video games where the game mechanics and graphics are fully subsumed by generative models 🧵

Asaf Shul (@shulasaf):

🚨 Excited to share ObjectMate, our latest work from Google AI for zero-shot subject-driven generation & insertion 🚨
🔗 Project page: object-mate.com
📄 arXiv: arxiv.org/abs/2412.08645

Eliahu Horwitz | @ ICLR2025 (@eliahuhorwitz):

🚨 New paper alert! 🚨

Millions of neural networks now populate public repositories like Hugging Face 🤗, but most lack documentation. So, we decided to build an Atlas 🗺️

Project: horwitz.ai/model-atlas
Demo: huggingface.co/spaces/Eliahu/…

πŸ§΅πŸ‘‡πŸ» Here's what we found:
Niv Cohen (@cohniv):

In our #ICLR2025 paper, we introduce WIND 🌬️
A method that embeds a distortion-free watermark directly in the diffusion noise! Our method ensures that the watermark in one image does not reveal information about the watermark in other images 🤫
📄 arxiv.org/abs/2412.04653

(1/5)
Kevin Lu (@kevinlu4588):

When we "erase" a concept from a diffusion model, is that knowledge truly gone? 🤔

We investigated, and the answer is often 'no'!

Using simple probing techniques, the knowledge traces of the erased concept can be easily resurfaced 🔍

Here is what we learned 🧵👇
Eliahu Horwitz | @ ICLR2025 (@eliahuhorwitz):

Andrej Karpathy Thanks for the inspiring talk (as always!). I'm the author of the Model Atlas. I'm delighted you liked our work; seeing the figure in your slides felt like an "achievement unlocked" 🙌 Would really appreciate a link to our work in your slides/tweet: arxiv.org/abs/2503.10633

Nataniel Ruiz (@natanielruizg):

We are releasing a paper I'm very excited about. We know test-time scaling is a path to greatly improved results and, in the case of LLMs, enables reasoning. We present a new and promising way to amortize it into training using HyperNetworks for image generation models.
