Willi Menapace (@willimenapace) 's Twitter Profile
Willi Menapace

@willimenapace

PhD Student - University of Trento, Italy

ID: 1406353610376728578

Website: https://www.willimenapace.com · Joined: 19-06-2021 20:50:15

23 Tweets

164 Followers

83 Following

Roni Sengupta (@senguptroni) 's Twitter Profile Photo

Few fav papers from sessions 8, 9, 10 (phew, hard to catch up!) 1/ "Playable Video Generation" by Sergey Tulyakov & team. Now you can control Djokovic playing tennis! Cool idea of estimating action states and then generating the target image for that action state from a tennis video.
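For readers unfamiliar with the setup, a toy sketch of the recipe described above (infer a discrete action state between frames, then generate the target frame conditioned on it) might look like the following. This is not the Playable Video Generation architecture; the modules, sizes, and action count are all hypothetical.

```python
# Toy sketch of "infer a discrete action, then generate the next frame
# conditioned on it". Module names and sizes are hypothetical; this is not
# the Playable Video Generation architecture.
import torch
import torch.nn as nn

class ActionStateEstimator(nn.Module):
    """Predicts a distribution over K discrete actions from two consecutive frames."""
    def __init__(self, num_actions: int = 7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_actions),
        )

    def forward(self, frame_t, frame_t1):
        return self.backbone(torch.cat([frame_t, frame_t1], dim=1))

class ActionConditionedGenerator(nn.Module):
    """Generates the next frame from the current frame and an action embedding."""
    def __init__(self, num_actions: int = 7, emb_dim: int = 64):
        super().__init__()
        self.action_emb = nn.Embedding(num_actions, emb_dim)
        self.net = nn.Sequential(
            nn.Conv2d(3 + emb_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, frame_t, action_id):
        emb = self.action_emb(action_id)                       # (B, emb_dim)
        emb = emb[:, :, None, None].expand(-1, -1, *frame_t.shape[-2:])
        return self.net(torch.cat([frame_t, emb], dim=1))

# Usage: infer the action between two frames, then replay it from frame_t.
frames = torch.rand(2, 2, 3, 64, 64)                           # (B, T, C, H, W)
estimator, generator = ActionStateEstimator(), ActionConditionedGenerator()
action = estimator(frames[:, 0], frames[:, 1]).argmax(dim=-1)  # discrete action id
next_frame = generator(frames[:, 0], action)                   # (B, 3, 64, 64)
```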

Elisa Ricci (@eliricci_) 's Twitter Profile Photo

Congrats to Willi Menapace for receiving the Alfredo Petrosino Award for the Best Master's Thesis from the Italian Association for Computer Vision, Pattern Recognition and Machine Learning. It was a pleasure to work with you and Stéphane LATHUILIÈRE! UniTrento_DISI #cvpl

Lightning AI ⚡️ (@lightningai) 's Twitter Profile Photo

⚡️Lightning Spotlight⚡️ solo-learn: An easy-to-use library and a reproducible benchmark for assessing #selfsupervised representation learning methods. 👏🏽 Kudos Enrico Fini & Victor Turrisi!

Elisa Ricci (@eliricci_) 's Twitter Profile Photo

I am looking for a new PhD student to join my research group and work on Tiny Machine Learning. Send me a DM or email if you are interested! #research #phd #job_opportunities #jobposting

ICCV2021 (@iccv_2021) 's Twitter Profile Photo

Honourable Mention, ICCV2021: "Viewing Graph Solvability via Cycle Consistency" by Federica Arrigoni (University of Trento), Andrea Fusiello, Elisa, Tomas Pajdla [Session 5 A/B]

Vlad Golyanik (@vgolyanik) 's Twitter Profile Photo

"Quantum Multi-Model Fitting ", #CVPR2023 (Highlight). Our formulation can be efficiently sampled by a quantum annealer without the relaxation of the objective. We propose iterative and decomposed versions of QuMF. Draft: arxiv.org/pdf/2303.15444… Code: github.com/FarinaMatteo/q…

Sergey Tulyakov (@sergeytulyakov) 's Twitter Profile Photo

2. Want to generate the whole city in 3D? Check out InfiniCity - a method that does exactly that! Project: hubert0527.github.io/infinicity/ with Chieh Hubert Lin (Job Hunting For 2025), Hsin-Ying Lee, Willi Menapace, Menglei Chai, Aliaksandr Siarohin, Ming-Hsuan Yang and yours truly.

Andrea Tagliasacchi 🇨🇦 (@taiyasaki) 's Twitter Profile Photo

📢📢📢 𝐀𝐂𝟑𝐃: Analyzing and Improving 3D Camera Control in Video Diffusion Transformers snap-research.github.io/ac3d TL;DR: for 3D camera control in generative video, it really helps to know *which* part of your model you should mess with. Internship by Sherwin Bahmani at Snap.

Ziyi Wu (@dazitu_616) 's Twitter Profile Photo

📢MinT: Temporally-Controlled Multi-Event Video Generation📢 mint-video.github.io TL;DR: We identify a fundamental failure mode of existing video generators: they cannot produce videos with sequential events. MinT unlocks this capability with temporal grounding of events. 🧵
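A toy sketch of the general "temporal grounding" idea (bind each event prompt to a time span so that only frames inside that span attend to its tokens) is shown below. It is not MinT's actual mechanism; the shapes, fps, and mask convention (True = attend) are assumptions.

```python
# Toy sketch of binding each event prompt to a time span so that only the
# frames inside that span may attend to that prompt's text tokens. This is
# the generic idea of temporal grounding, not MinT's mechanism; all shapes
# and the mask convention are assumptions.
import torch

def build_event_attention_mask(num_frames: int,
                               event_spans: list[tuple[float, float]],
                               tokens_per_event: int,
                               fps: float = 8.0) -> torch.Tensor:
    """Returns a (num_frames, num_events * tokens_per_event) boolean mask:
    frame f may attend to event e's tokens only if its timestamp lies in
    [start_e, end_e), given in seconds."""
    timestamps = torch.arange(num_frames) / fps                  # (F,)
    mask = torch.zeros(num_frames, len(event_spans) * tokens_per_event,
                       dtype=torch.bool)
    for e, (start, end) in enumerate(event_spans):
        inside = (timestamps >= start) & (timestamps < end)      # (F,)
        cols = slice(e * tokens_per_event, (e + 1) * tokens_per_event)
        mask[:, cols] = inside[:, None].expand(-1, tokens_per_event)
    return mask

# Example: a 4-second clip at 8 fps with two sequential events.
events = [("the dog picks up a ball", (0.0, 2.0)),
          ("the dog drops the ball", (2.0, 4.0))]
mask = build_event_attention_mask(num_frames=32,
                                  event_spans=[span for _, span in events],
                                  tokens_per_event=16)
print(mask.shape)                              # torch.Size([32, 32])
print(mask[0, :16].all(), mask[0, 16:].any())  # frame 0 sees only event 1
```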

Ziyi Wu (@dazitu_616) 's Twitter Profile Photo

MinT beats Sora in multi-event generation! One week after the release of MinT, Sora also released a *storyboard* tool that targets the same task (sequential events + time control). Below are a few comparisons, where MinT shows better event transition and timing: (1/N)

Willi Menapace (@willimenapace) 's Twitter Profile Photo

Video-to-Audio and Audio-to-Video models struggle with temporal alignment. AV-Link solves the problem by conditioning on diffusion model features. Great collaboration with Moayed Haji Ali, Aliaksandr Siarohin, Ivan Skorokhodov, Alper Canberk, Kwot Sin Lee, Vicente Ordonez and Sergey Tulyakov.
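A minimal sketch of the general idea (conditioning an audio denoiser on intermediate features of a video diffusion model through cross-attention, so the audio stream can lock onto frame-level timing) is shown below. It is not AV-Link's architecture; all dimensions and module names are hypothetical.

```python
# Minimal sketch: condition an audio denoiser on intermediate features from a
# video diffusion model via cross-attention. Not AV-Link's actual architecture;
# dimensions and module names are hypothetical.
import torch
import torch.nn as nn

class CrossModalConditioner(nn.Module):
    def __init__(self, audio_dim: int = 256, video_dim: int = 512, heads: int = 4):
        super().__init__()
        self.to_kv = nn.Linear(video_dim, audio_dim)       # project video features
        self.attn = nn.MultiheadAttention(audio_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(audio_dim)

    def forward(self, audio_tokens, video_features):
        """audio_tokens: (B, T_audio, audio_dim) noisy audio latents.
        video_features: (B, T_video, video_dim) activations taken from an
        intermediate block of the (frozen) video diffusion model."""
        ctx = self.to_kv(video_features)
        attended, _ = self.attn(query=audio_tokens, key=ctx, value=ctx)
        return self.norm(audio_tokens + attended)           # residual update

# Usage inside one denoising step of a hypothetical audio diffusion model.
audio_tokens = torch.randn(2, 100, 256)   # e.g. 100 audio latent frames
video_feats = torch.randn(2, 16, 512)     # e.g. features for 16 video frames
conditioned = CrossModalConditioner()(audio_tokens, video_feats)
print(conditioned.shape)                  # torch.Size([2, 100, 256])
```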

Willi Menapace (@willimenapace) 's Twitter Profile Photo

Check out Video Alchemist! Our latest work enables multi-subject open-set personalization with no need for inference-time tuning 👇👇👇

Rameen Abdal (@abdalrameen) 's Twitter Profile Photo

What if you could compose videos, merging multiple clips, even capturing complex athletic moves where video models struggle, all while preserving motion and context? And yes, you can still edit them with text afterwards! Stay tuned for more results. #AI #VideoGeneration #SnapResearch

Kfir Aberman (@abermankfir) 's Twitter Profile Photo

We discovered that imposing a spatio-temporal weight space via LoRAs on DiT-based video models unlocks powerful customization! It captures dynamic concepts with precision and even enables composing multiple videos together! 🎥✨
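For context, plain LoRA attaches a trainable low-rank update to a frozen linear projection inside a transformer (e.g. DiT) block; the sketch below shows only that generic mechanism, not the spatio-temporal weight space proposed in the post.

```python
# Generic LoRA sketch: the frozen weight of a linear projection is augmented
# with a low-rank update (up @ down), which is the only part that is trained
# for customization. This is plain LoRA, not the method from the post above.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # keep the pretrained weights frozen
            p.requires_grad_(False)
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)        # start as an identity-preserving delta
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

# Wrap the query projection of a (toy) attention layer with a LoRA adapter.
hidden = 512
q_proj_lora = LoRALinear(nn.Linear(hidden, hidden), rank=8)
tokens = torch.randn(2, 1024, hidden)         # flattened spatio-temporal tokens
print(q_proj_lora(tokens).shape)              # torch.Size([2, 1024, 512])
trainable = [n for n, p in q_proj_lora.named_parameters() if p.requires_grad]
print(trainable)                              # only the LoRA down/up weights
```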

Ivan Skorokhodov (@isskoro) 's Twitter Profile Photo

In the past 1.5 weeks, two papers from two different research groups have appeared that develop exactly the same (and embarrassingly simple) trick to improve convergence of image/video diffusion models by 20-100+% (sic!) arxiv.org/abs/2502.14831 arxiv.org/abs/2502.09509
