Elad Richardson (@eladrichardson)'s Twitter Profile
Elad Richardson

@eladrichardson

Teaching Pixels New Tricks | Research Scientist @pika_labs

ID: 805882742273691648

Link: https://eladrich.github.io/ · Joined: 05-12-2016 21:13:13

771 Tweets

1.1K Followers

1.1K Following

Gradio (@gradio)'s Twitter Profile Photo

Not all models are built the same!

Check out the images generated by GPT-4o 🫠 and Piece-it-Together 👑 using the same set of input images!

Piece it Together (PiT) app is live on 🤗 Spaces!
Kfir Goldberg (@kfir99)'s Twitter Profile Photo

Thanks for sharing our demo for Piece-it-Together 🧩 The training and inference code is now available at github.com/eladrich/PiT

Sigal Raab (@sigal_raab)'s Twitter Profile Photo

🔔 Excited to announce that #AnyTop has been accepted to #SIGGRAPH2025! 🥳
✅ A diffusion model that generates motion for arbitrary skeletons
✅ Using only a skeletal structure as input
✅ Learns semantic correspondences across diverse skeletons
🌐 Project: anytop2025.github.io/Anytop-page

Itay Hazan (@itayhzn)'s Twitter Profile Photo

🧵 1/ Text-to-video models generate stunning visuals, but… motion? Not so much. You get extra limbs, objects popping in and out... In our new paper, we present FlowMo, an inference-time method that reduces temporal artifacts without retraining or architectural changes. 👇

Amil Dravid (@_amildravid)'s Twitter Profile Photo

Artifacts in your attention maps? Forgot to train with registers? Use test-time registers! We find that a sparse set of activations sets artifact positions. We can shift them anywhere ("Shifted"), even outside the image into an untrained token. Clean maps, no retraining.

Elad Richardson (@eladrichardson)'s Twitter Profile Photo

Really impressive results for human-object interaction. They use a two-phase process where they optimize the diffusion noise, instead of the motion itself, to get to sub-centimeter precision while staying on manifold 🧠

HOIDiNi - hoidini.github.io
Itai Gat (@itai_gat)'s Twitter Profile Photo

Excited to share our recent work on corrector sampling in language models! A new sampling method that mitigates error accumulation by iteratively revisiting tokens in a window of previously generated text.
With: Neta Shaul, Uriel Singer, Yaron Lipman
Link: arxiv.org/abs/2506.06215
Ron Mokady (@mokadyron)'s Twitter Profile Photo

Tel Aviv friends: we're hosting an amazing rooftop meetup with a killer speaker lineup (not including me 😅)

lu.ma/q8bigfqn
Guy Tevet (@guytvt)'s Twitter Profile Photo

1/ Can we teach a motion model to "dance like a chicken"? Or better: can LoRA help motion diffusion models learn expressive, editable styles without forgetting how to move? Led by Haim Sawdayee and Chuan Guo, we explore this in our latest work. 🎥 haimsaw.github.io/LoRA-MDM/ 🧵👇

Runway (@runwayml)'s Twitter Profile Photo

Introducing Act-Two, our next-generation motion capture model with major improvements in generation quality and support for head, face, body and hand tracking. Act-Two only requires a driving performance video and reference character. Available now to all our Enterprise

Yael Vinker 🎗 (@yvinker)'s Twitter Profile Photo

I'm very excited to announce our #SIGGRAPH2025 workshop:
Drawing & Sketching: Art, Psychology, and Computer Graphics 🎨🧠🫖

🔗 lines-and-minds.github.io
📅 Sunday, August 10th

Join us to explore how people draw, how machines draw, and how the two might draw together! 🤖✏️
Anastasis Germanidis (@agermanidis)'s Twitter Profile Photo

Models just want to generalize. For the past years, we've been pushing the frontier of controllability in video, releasing new models and techniques for inpainting, outpainting, segmentation, stylization, keyframing, motion and camera control. Aleph is a single in-context model

Vic 🌮 (@vicvijayakumar)'s Twitter Profile Photo

Now I see the issue! You're absolutely right, that is a highly unsafe turn. Let me remove the bridge since it is not currently used.