Tao (@ttao_tw)'s Twitter Profile
Tao

@ttao_tw

PhD student at @Cornell CS.

ID: 1121032379915591681

Link: https://ttaoretw.github.io/ · Joined: 24-04-2019 12:45:21

20 Tweets

58 Followers

311 Following

Michael Black (@michael_j_black)'s Twitter Profile Photo

I get a lot of reviews that say my work is not novel and I bet I'm not alone. It's always frustrating because I see novelty where the reviewer doesn't. Rather than rebut every critique, I've written a blog post to help reviewers think about novelty. perceiving-systems.blog/en/news/novelt…

Calvin Luo (@calvinyluo)'s Twitter Profile Photo

Excited to share with everyone an accessible, intuitive tutorial on diffusion models! If you're curious about the math behind diffusion models and how their different interpretations can be unified, please check it out! Stay tuned for a blog post soon! arxiv.org/abs/2208.11970
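
To make the tutorial's starting point concrete, here is a minimal sketch of the closed-form DDPM forward (noising) process it derives; the schedule values and shapes below are illustrative assumptions, not code from the tutorial.

```python
# Minimal sketch of the closed-form DDPM forward process q(x_t | x_0).
# Illustrative schedule and shapes; not code from the tutorial.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative \bar{alpha}_t

def q_sample(x0, t, noise):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) x_0, (1 - a_bar_t) I)."""
    a = alphas_bar[t].sqrt().view(-1, 1)
    s = (1.0 - alphas_bar[t]).sqrt().view(-1, 1)
    return a * x0 + s * noise

x0 = torch.randn(8, 32)                       # toy batch of "clean" data
t = torch.randint(0, T, (8,))                 # a random timestep per sample
xt = q_sample(x0, t, torch.randn_like(x0))    # noisy input for eps-prediction
```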

Matt Deitke (@mattdeitke)'s Twitter Profile Photo

Introducing Objaverse-XL, an open dataset of over 10 million 3D objects! With it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities: 🧵👇 📝 Paper: objaverse.allenai.org/objaverse-xl-p…

Tao (@ttao_tw)'s Twitter Profile Photo

📢 Excited to unveil ImGeoNet at ICCV'23: an image-based 3D object detection framework. 🌟 Unlike past methods, ImGeoNet learns geometry from multiple views, enhancing accuracy and efficiency. Say goodbye to confusion from free space voxels! Page: ttaoretw.github.io/imgeonet/ #ICCV23
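
ImGeoNet's exact formulation is in the paper; as a rough sketch of the general idea of suppressing free-space voxels, one can attenuate voxel features with a learned surface-likelihood score. All names and shapes here are hypothetical, not ImGeoNet's actual code.

```python
# Rough, hypothetical sketch: attenuate voxel features by a learned
# surface-likelihood score so free-space voxels contribute little.
import torch
import torch.nn as nn

class GeometryWeighting(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # 1x1x1 conv predicting one surface score per voxel
        self.score = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, voxel_feats):              # (B, C, D, H, W)
        w = torch.sigmoid(self.score(voxel_feats))
        return voxel_feats * w                   # free space gets w ~ 0

feats = torch.randn(1, 64, 40, 40, 16)           # multi-view aggregated features
weighted = GeometryWeighting(64)(feats)
```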

Aleksander Holynski (@holynski_)'s Twitter Profile Photo

Check out our new paper that turns a (single image) => (interactive dynamic scene)! I’ve had so much fun playing around with this demo. Try it out yourself on the website: generative-dynamics.github.io

Zian Wang (@zianwang97)'s Twitter Profile Photo

🚀 Introducing our #SIGGRAPHAsia work “Adaptive Shells”, a novel #NeRF formulation that yields high visual fidelity and greatly accelerates rendering. TLDR: Auto-derived bounding shells result in up to 10x faster inference than InstantNGP! [1/n]
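
The speedup claim is easy to see in miniature: if a shell gives each ray a tight [t_in, t_out] interval, samples land only inside it rather than along the full ray range. A toy sketch under that assumption (a hypothetical interface, not the paper's implementation):

```python
# Toy sketch: stratified samples restricted to a per-ray shell interval
# [t_in, t_out] instead of the full ray range. Hypothetical interface.
import torch

def sample_in_shell(t_in, t_out, n_samples):
    u = torch.rand(t_in.shape[0], n_samples)
    u = (torch.arange(n_samples) + u) / n_samples       # stratify in [0, 1)
    return t_in[:, None] + (t_out - t_in)[:, None] * u  # map into the shell

t_in = torch.tensor([2.0, 1.5])    # per-ray shell entry (from the shell geometry)
t_out = torch.tensor([2.3, 1.9])   # per-ray shell exit
ts = sample_in_shell(t_in, t_out, 8)   # few samples in a thin interval
```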

Agrim Gupta (@agrimgupta92)'s Twitter Profile Photo

We introduce W.A.L.T, a diffusion model for photorealistic video generation. Our model is a transformer trained on image and video generation in a shared latent space. 🧵👇
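
The "shared latent space" idea can be illustrated by treating an image as a one-frame video, so image and video latents become token sequences for the same transformer denoiser. A hypothetical sketch with made-up shapes, not the W.A.L.T architecture itself:

```python
# Hypothetical sketch: an image is a one-frame video, so image and video
# latents flatten into token sequences for one transformer. Made-up shapes.
import torch

def to_latent_tokens(latents):                # (B, T, C, H, W) latents
    B, T, C, H, W = latents.shape
    return latents.permute(0, 1, 3, 4, 2).reshape(B, T * H * W, C)

video = torch.randn(2, 8, 16, 8, 8)           # encoded video clip
image = torch.randn(2, 1, 16, 8, 8)           # encoded image = 1-frame "video"
tokens_v = to_latent_tokens(video)            # (2, 512, 16)
tokens_i = to_latent_tokens(image)            # (2, 64, 16) -> same denoiser
```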

Michael Black (@michael_j_black)'s Twitter Profile Photo

WHAM defines the new state of the art in 3D human pose estimation from video. By a large margin. It’s fast, accurate, and it computes human pose in world coordinates. It’s also the first video-based method to be more accurate than single-image methods. 1/8

Karsten Kreis (@karsten_kreis)'s Twitter Profile Photo

📢📢 Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models research.nvidia.com/labs/toronto-a… We generate dynamic 4D assets and scenes with score distillation! w/ the amazing Huan Ling*, Seung Wook Kim*, Antonio Torralba, Sanja Fidler (1/n)
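
Score distillation, which the tweet relies on, steers a renderer's output using a frozen diffusion model's noise prediction. A minimal sketch of the SDS gradient follows; `denoiser` is a stand-in for a pretrained eps-prediction network, and the timestep weighting w(t) is omitted.

```python
# Minimal SDS sketch: diffuse the current rendering, ask a frozen denoiser
# for its noise estimate, and use (eps_hat - noise) as the gradient signal.
import torch

def sds_grad(render, denoiser, alphas_bar, t):
    noise = torch.randn_like(render)
    a = alphas_bar[t].sqrt()
    s = (1.0 - alphas_bar[t]).sqrt()
    noisy = a * render + s * noise        # diffuse the rendering to step t
    with torch.no_grad():
        eps_hat = denoiser(noisy, t)      # frozen diffusion model's estimate
    return eps_hat - noise                # push through the renderer's params
```

In a text-to-4D setting like this one, that gradient would be backpropagated through a differentiable renderer into the scene parameters (here, dynamic 3D Gaussians).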

Chieh Hubert Lin (Job Hunting For 2025) (@chiehhubertlin)'s Twitter Profile Photo

(1/5) We just released our new paper Virtual Pets! Recent generative models are getting us a bit closer to a fully synthetic virtual environment. BUT!! Isn't it too boring without a cat😾? Here we go! Project: yccyenchicheng.github.io/VirtualPets/ ArXiv: arxiv.org/abs/2312.14154

Zhiyang Dou (@frankzydou)'s Twitter Profile Photo

🔥 Yes! You can achieve REAL-TIME text-to-motion generation using a simulated humanoid to perform various skills! This feat is realized through the integration of PHC and EMDM. 💬 This combination addresses two pivotal challenges in human motion synthesis: ensuring physical…

AK (@_akhaliq)'s Twitter Profile Photo

Learning the 3D Fauna of the Web paper page: huggingface.co/papers/2401.02… Learning 3D models of all animals on the Earth requires massively scaling up existing solutions. With this ultimate goal in mind, we develop 3D-Fauna, an approach that learns a pan-category deformable 3D…

Matthias Niessner (@mattniessner)'s Twitter Profile Photo

Check out 𝐌𝐨𝐭𝐢𝐨𝐧𝟐𝐕𝐞𝐜𝐒𝐞𝐭𝐬, a 4D diffusion model for dynamic surface reconstruction from imperfect observations of sparse, noisy, or partial point clouds. Main idea: we represent time-varying shapes via a 4D neural representation with latent vector sets, and then…
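
The phrase "latent vector sets" suggests each time step carrying a small set of latents that query points attend over to decode geometry. A toy sketch of that reading, with illustrative names and shapes rather than the paper's architecture:

```python
# Toy sketch of a "latent vector set" representation: a query point attends
# over a frame's set of latent vectors to decode, e.g., occupancy.
# Hypothetical names and shapes; not the paper's architecture.
import torch
import torch.nn as nn

class VecSetDecoder(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.query_proj = nn.Linear(3, d)     # lift (x, y, z) query points
        self.out = nn.Linear(d, 1)            # e.g., an occupancy value

    def forward(self, pts, vec_set):          # pts: (B, N, 3); vec_set: (B, M, d)
        q = self.query_proj(pts)
        h, _ = self.attn(q, vec_set, vec_set)
        return self.out(h)                    # (B, N, 1)

vec_sets = torch.randn(1, 10, 16, 64)         # T=10 frames, M=16 latents each
pts = torch.rand(1, 2048, 3)
occ_t0 = VecSetDecoder()(pts, vec_sets[:, 0]) # decode frame 0
```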

AK (@_akhaliq)'s Twitter Profile Photo

Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation paper page: huggingface.co/papers/2401.08… Recent advances in generative modeling have led to promising progress on synthesizing 3D human motion from text, with methods that can generate character animations from…

OpenAI (@openai)'s Twitter Profile Photo

Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. openai.com/sora Prompt: “Beautiful, snowy…

Guy Tevet (@guytvt)'s Twitter Profile Photo

[1/4] Can we sample 3D from 2D diffusion models instead of optimizing with SDS? In "MAS: Multi-view Ancestral Sampling for 3D motion generation using 2D diffusion" [CVPR2024🥳] we show that for 3D animation, the answer is: Yes!
