Ting-Chun Wang (@tcwang0509) 's Twitter Profile
Ting-Chun Wang

@tcwang0509

Deep learning research scientist at NVIDIA

ID: 936343015349555200

Link: https://tcwang0509.github.io | Joined: 30-11-2017 21:15:46

26 Tweets

1.1K Followers

53 Following

Ming-Yu Liu (@liu_mingyu) 's Twitter Profile Photo

Check out our #CVPR19 oral paper on a new conditional normalization layer for semantic image synthesis #SPADE and its demo app #GauGAN paper bit.ly/nvspade website bit.ly/wwwspade video1 bit.ly/2HEvOrD Ting-Chun Wang Jun-Yan Zhu
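The SPADE layer announced above is a spatially-adaptive conditional normalization: activations are normalized without learned affine parameters, then modulated by a per-pixel scale and shift predicted from the segmentation map. A minimal NumPy sketch of that idea follows; the 1x1-conv weights `w_gamma`/`w_beta` are hypothetical stand-ins for the learned modulation convolutions, not the paper's actual architecture.

```python
import numpy as np

def spade(x, segmap, w_gamma, w_beta, eps=1e-5):
    """Spatially-adaptive (de)normalization, NumPy sketch.

    x:       activations, shape (N, C, H, W)
    segmap:  one-hot segmentation map, shape (N, S, H, W)
    w_gamma, w_beta: 1x1-conv weights, shape (C, S); hypothetical
                     stand-ins for the learned modulation convs
    """
    # Parameter-free batch normalization: per-channel over (N, H, W).
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)

    # Per-pixel scale and shift predicted from the segmentation map
    # (a 1x1 convolution is just a channel-wise linear map per pixel).
    gamma = np.einsum('cs,nshw->nchw', w_gamma, segmap)
    beta = np.einsum('cs,nshw->nchw', w_beta, segmap)

    return x_hat * (1.0 + gamma) + beta
```

Because gamma and beta vary per pixel with the input layout, the normalization preserves semantic information that a plain (spatially uniform) batch norm would wash out.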

Ming-Yu Liu (@liu_mingyu) 's Twitter Profile Photo

The #GauGAN beta version is now available to everyone as a web service via nvda.ws/2WsY2cM #NVIDIA AI Playground A short illustration video is available at bit.ly/2XIgTSr May everybody have fun with the app. #GAN, #SPADE

Ming-Yu Liu (@liu_mingyu) 's Twitter Profile Photo

We are running a tutorial on deep learning for content creation at CVPR on Sunday nvlabs.github.io/dl-for-content…. We have a set of amazing speakers including Phillip Isola James Tompkin Tero Karras Sanja Fidler Sylvain Paris Ting-Chun Wang Jun-Yan Zhu Eli Shechtman Please come join us.

Ming-Yu Liu (@liu_mingyu) 's Twitter Profile Photo

At CVPR 2019, the #StyleGAN paper won a best paper honorable mention and the #SPADE/#GauGAN paper was a best paper finalist. Congratulations to all the GAN authors! NVIDIA

Ming-Yu Liu (@liu_mingyu) 's Twitter Profile Photo

So happy to share the news that @NVIDIADesign #GauGAN won both the best real-time live demo award and the people's choice best demo award at #SIGGRAPH2019 Real-Time Live. Jun-Yan Zhu @gavriilklimov Ting-Chun Wang GPU 3D Chris Special thanks to Ian Goodfellow for bringing #GAN to this world.

Ming-Yu Liu (@liu_mingyu) 's Twitter Profile Photo

Glad to share our #NeurIPS2019 paper on few-shot vid2vid, where we address the scalability issue of our #vid2vid. Now, with a single model and as few as one example image provided at test time, we can render the motion of a target subject. nvlabs.github.io/few-shot-vid2v… code coming soon.

Orazio Gallo (@0razio) 's Twitter Profile Photo

Join us tomorrow (Sunday June 14th) for our #CVPR2020 tutorial on novel view synthesis. We'll be streaming all of it live! youtu.be/OEUHalxanuc Stream starts at 9:15am PDT Info nvlabs.github.io/nvs-tutorial-c…

Ming-Yu Liu (@liu_mingyu) 's Twitter Profile Photo

1/4 Excited to share our #ECCV2020 paper on world-consistent #vid2vid for generating consistent renderings of the 3D world. NVIDIA AI #GAN with Arun Mallya Ting-Chun Wang Karan Sapra paper tinyurl.com/y45xxej6 project nvlabs.github.io/wc-vid2vid/ video youtu.be/rlCh6-2NfSg

Ting-Chun Wang (@tcwang0509) 's Twitter Profile Photo

Excited to share our #ECCV2020 paper on world-consistent #vid2vid. Compared to vid2vid, our new framework can render consistent views of the virtual 3D world. NVIDIA AI #GAN with Arun Mallya Karan Sapra Ming-Yu Liu project: nvlabs.github.io/wc-vid2vid/ video: youtu.be/rlCh6-2NfSg

Ming-Yu Liu (@liu_mingyu) 's Twitter Profile Photo

Introducing #Imaginaire a #PyTorch library with optimized implementations of several #GAN image and video synthesis methods developed at #NVIDIA code github.com/NVlabs/imagina… video youtu.be/jgTX5OnAsYQ By Ming-Yu Liu Ting-Chun Wang Arun Mallya @xunhuang1995

Ting-Chun Wang (@tcwang0509) 's Twitter Profile Photo

Introducing #Imaginaire a #PyTorch library with optimized implementations of several #GAN image and video synthesis methods developed at #NVIDIA code github.com/NVlabs/imagina… video youtu.be/jgTX5OnAsYQ

David Pogue (@pogue) 's Twitter Profile Photo

If this works, it’s just nuts. Nvidia has come up with a video compression algorithm that lets you do Zoom calls with a tiny fraction of the bandwidth usually required. Could be a game-changer for people with poor internet connections. youtube.com/watch?v=NqmMnj… (Via Chris Messina!)

Arun Mallya (@arunmallya) 's Twitter Profile Photo

An update to #Imaginaire The pretrained world-consistent vid2vid (nvlabs.github.io/wc-vid2vid/) model for the MannequinChallenge dataset has been released! Code and a lot of other models at: github.com/NVlabs/imagina… By Ming-Yu Liu Ting-Chun Wang @xunhuang1995

Ming-Yu Liu (@liu_mingyu) 's Twitter Profile Photo

Check out our new work on face-vid2vid, a neural talking-head model for video conferencing that is 10x more bandwidth efficient than H.264 arxiv arxiv.org/abs/2011.15126 project nvlabs.github.io/face-vid2vid/ video youtu.be/nLYg9Waw72U Ting-Chun Wang Arun Mallya #GAN
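The bandwidth gain comes from sending a handful of facial keypoints per frame instead of pixels, and reconstructing the face on the receiver. A back-of-envelope sketch of why that saves so much is below; every number here (keypoint count, floats per keypoint, the H.264 rate) is an illustrative assumption, not a figure from the paper.

```python
# Back-of-envelope comparison: a compressed video stream vs. a
# per-frame keypoint stream for neural reconstruction.
# All constants are illustrative assumptions.

FPS = 30
H264_KBPS = 500        # assumed H.264 rate for a talking-head stream

N_KEYPOINTS = 20       # assumed keypoints per frame
FLOATS_PER_KP = 3      # e.g. x, y, plus one extra value (assumption)
BYTES_PER_FLOAT = 4    # float32

kp_bytes_per_frame = N_KEYPOINTS * FLOATS_PER_KP * BYTES_PER_FLOAT
kp_kbps = kp_bytes_per_frame * FPS * 8 / 1000  # kilobits per second

print(f"keypoint stream: {kp_kbps:.1f} kbps")        # 57.6 kbps
print(f"vs assumed H.264: {H264_KBPS / kp_kbps:.1f}x less")
```

Even with these rough numbers the keypoint payload is roughly an order of magnitude smaller than the video stream, which is consistent in spirit with the 10x claim above.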

Ting-Chun Wang (@tcwang0509) 's Twitter Profile Photo

Check out our new work on face-vid2vid, a neural talking-head model for video conferencing that is 10x more bandwidth efficient than H.264, while also supporting face redirection project nvlabs.github.io/face-vid2vid/ video youtu.be/nLYg9Waw72U Arun Mallya Ming-Yu Liu #GAN