Ting-Chun Wang
@tcwang0509
Deep learning research scientist at NVIDIA
ID: 936343015349555200
https://tcwang0509.github.io · 30-11-2017 21:15:46
26 Tweets
1.1K Followers
53 Following
Check out our #CVPR19 oral paper on a new conditional normalization layer for semantic image synthesis #SPADE and its demo app #GauGAN. paper: bit.ly/nvspade website: bit.ly/wwwspade video1: bit.ly/2HEvOrD Ting-Chun Wang Jun-Yan Zhu
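A minimal PyTorch sketch of the SPADE idea for readers unfamiliar with it: a parameter-free normalization whose scale and shift are predicted per-pixel from the segmentation map. Layer sizes and names here are illustrative, not the official NVlabs implementation.

```python
# Minimal sketch of a SPADE-style conditional normalization layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    def __init__(self, num_features, label_channels, hidden=128):
        super().__init__()
        # Parameter-free normalization: only statistics, no learned affine.
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        # A small conv net maps the segmentation map to spatially varying
        # modulation parameters gamma and beta.
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        # Resize the label map to the activation's spatial size.
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(segmap)
        # Denormalize with a pixel-wise scale and shift learned from the layout.
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)
```

Because gamma and beta vary spatially, the semantic layout survives normalization instead of being washed out, which is the paper's core motivation.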
We are running a tutorial on deep learning for content creation at CVPR on Sunday nvlabs.github.io/dl-for-content…. We have a set of amazing speakers, including Phillip Isola, James Tompkin, Tero Karras, Sanja Fidler, Sylvain Paris, Ting-Chun Wang, Jun-Yan Zhu, and Eli Shechtman. Please come join us.
So happy to share the news that @NVIDIADesign #GauGAN won both the best real-time live demo award and the people’s choice best demo award at #SIGGRAPH2019 Real-Time Live. Jun-Yan Zhu @gavriilklimov Ting-Chun Wang GPU 3D Chris Special thanks to Ian Goodfellow for bringing #GAN to this world.
Glad to share our #NeurIPS2019 paper on few-shot vid2vid, where we address the scalability issue of our #vid2vid. Now, with one model and as few as one example image provided at test time, we can render the motion of a target subject. nvlabs.github.io/few-shot-vid2v… code coming soon.
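The scalability fix rests on generating part of the synthesis network's weights from the example image at test time, so one trained model can animate unseen subjects. A hedged hypernetwork-style sketch of that idea follows; the architecture and names are illustrative, not the paper's exact design, and batch size 1 is assumed for clarity.

```python
# Illustrative hypernetwork sketch: a conv layer whose weights are
# predicted from one example image of the target subject.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExampleConditionedConv(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, k=3, embed=128):
        super().__init__()
        self.encoder = nn.Sequential(            # embed the example image
            nn.Conv2d(3, embed, kernel_size=4, stride=4),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.to_weight = nn.Linear(embed, out_ch * in_ch * k * k)
        self.weight_shape = (out_ch, in_ch, k, k)

    def forward(self, feat, example_img):
        # Predict this layer's conv kernel from the example, then apply it.
        w = self.to_weight(self.encoder(example_img)).view(self.weight_shape)
        return F.conv2d(feat, w, padding=self.weight_shape[-1] // 2)

layer = ExampleConditionedConv()
out = layer(torch.randn(1, 64, 32, 32), torch.randn(1, 3, 128, 128))
```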
1/4 Excited to share our #ECCV2020 paper on world-consistent #vid2vid for generating consistent renderings of a 3D world. NVIDIA AI #GAN with Arun Mallya, Ting-Chun Wang, Karan Sapra. paper: tinyurl.com/y45xxej6 project: nvlabs.github.io/wc-vid2vid/ video: youtu.be/rlCh6-2NfSg
Excited to share our #ECCV2020 paper on world-consistent #vid2vid. Compared to vid2vid, our new framework can render consistent views of the virtual 3D world. NVIDIA AI #GAN with Arun Mallya, Karan Sapra, Ming-Yu Liu. project: nvlabs.github.io/wc-vid2vid/ video: youtu.be/rlCh6-2NfSg
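The world consistency comes from conditioning the generator on guidance images: colors already generated for earlier frames, reprojected through the known 3D structure into the current camera. Below is a minimal sketch of one such reprojection step, assuming a known depth map and relative pose; the function and formulation are illustrative, not the paper's exact pipeline, which aggregates all past frames.

```python
# Sketch: reproject the previously generated frame into the current view
# using known depth and camera pose (backward warping via grid_sample).
import torch
import torch.nn.functional as F

def warp_prev_to_current(prev_rgb, cur_depth, K, K_inv, T_cur_to_prev):
    # prev_rgb: (1,3,H,W) previously generated frame
    # cur_depth: (1,1,H,W) depth of the current view (the known 3D structure)
    # K, K_inv: (3,3) camera intrinsics and their inverse
    # T_cur_to_prev: (4,4) transform from current to previous camera
    _, _, H, W = cur_depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().view(3, -1)
    pts = (K_inv @ pix) * cur_depth.view(1, -1)      # lift pixels to 3D
    pts = torch.cat([pts, torch.ones(1, H * W)], 0)  # homogeneous coords
    proj = K @ (T_cur_to_prev @ pts)[:3]             # into the old camera
    uv = proj[:2] / proj[2].clamp(min=1e-6)
    # Normalize to [-1, 1] and fetch the already-generated colors.
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1,
                        uv[1] / (H - 1) * 2 - 1], -1).view(1, H, W, 2)
    return F.grid_sample(prev_rgb, grid, align_corners=True)
```

The generator can then take the warped frame as extra input channels, so newly synthesized pixels agree with surfaces it has already rendered.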
Introducing #Imaginaire, a #PyTorch library with optimized implementations of several #GAN image and video synthesis methods developed at #NVIDIA. code: github.com/NVlabs/imagina… video: youtu.be/jgTX5OnAsYQ By Ming-Yu Liu, Ting-Chun Wang, Arun Mallya, @xunhuang1995
If this works, it’s just nuts. Nvidia has come up with a video compression algorithm that lets you do Zoom calls with a tiny fraction of the bandwidth usually required. Could be a game-changer for people with poor internet connections. youtube.com/watch?v=NqmMnj… (Via Chris Messina!)
An update to #Imaginaire: the pretrained world-consistent vid2vid (nvlabs.github.io/wc-vid2vid/) model for the MannequinChallenge dataset has been released! Code and a lot of other models at: github.com/NVlabs/imagina… By Ming-Yu Liu, Ting-Chun Wang, @xunhuang1995
Check out our new work on face-vid2vid, a neural talking-head model for video conferencing that is 10x more bandwidth-efficient than H.264. arXiv: arxiv.org/abs/2011.15126 project: nvlabs.github.io/face-vid2vid/ video: youtu.be/nLYg9Waw72U Ting-Chun Wang Arun Mallya #GAN
Check out our new work on face-vid2vid, a neural talking-head model for video conferencing that is 10x more bandwidth-efficient than H.264, while also supporting face redirection. project: nvlabs.github.io/face-vid2vid/ video: youtu.be/nLYg9Waw72U Arun Mallya Ming-Yu Liu #GAN
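The 10x figure is plausible from first principles: instead of an encoded video stream, the sender transmits one source frame once and then only a compact set of keypoints per frame, which the receiver uses to animate the source frame. A back-of-envelope comparison with assumed numbers (not figures from the paper):

```python
# Back-of-envelope bandwidth comparison for a keypoint-based talking head.
# All numbers are assumptions for illustration, not reported results.
FPS = 30
NUM_KEYPOINTS = 15      # assumed keypoints per frame
COORDS = 3              # 3D keypoint coordinates
BYTES_PER_COORD = 2     # assumed 16-bit quantization
POSE_BYTES = 24         # assumed head rotation + translation per frame

frame_bytes = NUM_KEYPOINTS * COORDS * BYTES_PER_COORD + POSE_BYTES
keypoint_kbps = FPS * frame_bytes * 8 / 1e3   # ~27 kbps
h264_kbps = 270                               # assumed comparable-quality H.264
print(f"keypoints: {keypoint_kbps:.0f} kbps, H.264: {h264_kbps} kbps, "
      f"ratio ~{h264_kbps / keypoint_kbps:.0f}x")
```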