Tae-Hyun Oh (@tae_hyun_oh) 's Twitter Profile
Tae-Hyun Oh

@tae_hyun_oh

Associate professor @ KAIST
Former postdoc researcher @ Facebook AI Research
Former postdoc associate @ MIT CSAIL

ID: 1294636328487796741

Joined: 15-08-2020 14:05:38

39 Tweets

74 Followers

117 Following

Tae-Hyun Oh (@tae_hyun_oh) 's Twitter Profile Photo

Our expressive 3D talking head generation work will be presented at WACV 2024 (Jan. 6). Our lab is making progress in understanding social expressions. The first step is laughter: among the many human expressions, laughing is one of the most important social signals.

Kim Youwang (@kim_youwang) 's Twitter Profile Photo

🎉 Our paper, "Feed-Forward Photorealistic Style Transfer of Large-Scale 3D Neural Radiance Fields", has been accepted to AAAI 2024. TL;DR: "Photorealistically Stylizable" city-level NeRF in a "Feed-Forward" manner! Paper: arxiv.org/abs/2401.05516 Page: kim-geonu.github.io/FPRF/

Kim Youwang (@kim_youwang) 's Twitter Profile Photo

This work is a nice collaboration with GeonU Kim and Tae-Hyun Oh. GeonU, the 1st author, was just a 1st semester M.S. student when he started this project! His great efforts and productivity led to this amazing result 👏 Page: kim-geonu.github.io/FPRF/

Dreaming Tulpa 🥓👑 (@dreamingtulpa) 's Twitter Profile Photo

AI can restyle large 3D scenes based on reference images! FPRF is able to stylize NeRF scenes with multiple reference images without additional optimization while preserving multi-view appearance consistency. kim-geonu.github.io/FPRF/?ref=aiar…

Katherine Moretti (@kmoretticompsci) 's Twitter Profile Photo

A new special issue on Audio-Visual Generation is open for submissions in the International Journal of Computer Vision. For more information on the special issue & how to submit, access the CfP here: link.springer.com/journal/11263/…

Tae-Hyun Oh (@tae_hyun_oh) 's Twitter Profile Photo

🚨 Call for Papers: Special Issue on Audio-Visual Generation 🎥🎵 We, the guest editors, have prepared an exciting new Special Issue on Audio-Visual Generation! 🌟 The International Journal of Computer Vision (IJCV) is now accepting submissions. Please find the call for papers below!

Zhenjun Zhao (@zhenjun_zhao) 's Twitter Profile Photo

Dr. Splat: Directly Referring 3D Gaussian Splatting via Direct Language Embedding Registration Kim Jun-Seong, GeonU Kim, Yu-Ji Kim, Yu-Chiang Frank Wang, jaesung choe, Tae-Hyun Oh tl;dr: distill language knowledge into 3DGS arxiv.org/abs/2502.16652

Tae-Hyun Oh (@tae_hyun_oh) 's Twitter Profile Photo

Please come and see this IJCV Special Issue on Audio-Visual Generation. (Deadline: April 15, 2025) Fast review process, conference-paper extensions welcome, and a potential opportunity to present at an ICCV workshop!

Kim Youwang (@kim_youwang) 's Twitter Profile Photo

At #ICLR2025, we will present "NeuFace: A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization." 📌 Hall 3 + Hall 2B #69 📅 Thu, Apr 24, 3:00–5:30 pm Singapore Time I'd really like to meet & discuss with fellow researchers. Let's connect! (1/3)

Puyuan Peng (@puyuanpeng) 's Twitter Profile Photo

The work is led by the amazing Sungbin Kim sites.google.com/view/kimsungbin, in collaboration with Jeongsoo Choi, Joon Son Chung, Tae-Hyun Oh, and David Harwath. Check out voicecraft-dub.github.io for more samples, and the forthcoming code and model!

Kim Youwang (@kim_youwang) 's Twitter Profile Photo

๐—˜๐—Ÿ๐—œ๐—ง๐—˜, ๐—ผ๐˜‚๐—ฟ ๐—ฟ๐—ฒ๐—ฐ๐—ฒ๐—ป๐˜ ๐—ฝ๐—ฟ๐—ผ๐—ท๐—ฒ๐—ฐ๐˜ ๐—ผ๐—ป ๐—ต๐—ถ๐—ด๐—ต-๐—ณ๐—ถ๐—ฑ๐—ฒ๐—น๐—ถ๐˜๐˜† ๐Ÿฏ๐—— ๐—š๐—ฎ๐˜‚๐˜€๐˜€๐—ถ๐—ฎ๐—ป ๐—ฎ๐˜ƒ๐—ฎ๐˜๐—ฎ๐—ฟ ๐˜€๐˜†๐—ป๐˜๐—ต๐—ฒ๐˜€๐—ถ๐˜€ ๐—ต๐—ฎ๐˜€ ๐—ฏ๐—ฒ๐—ฒ๐—ป ๐—ฎ๐—ฐ๐—ฐ๐—ฒ๐—ฝ๐˜๐—ฒ๐—ฑ ๐˜๐—ผ ๐—–๐—ฉ๐—ฃ๐—ฅ ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿฒ! ๐Ÿ’ก๐—ง๐—Ÿ;๐——๐—ฅ We study a mutually reinforcing synergy of 2D & 3D face priors (generative &

Yuhta Takida (@takiko_san) 's Twitter Profile Photo

🎉 PAVAS, a framework for generating physically plausible audio from video by integrating physics estimation, accepted at #CVPR2026! Led by our intern Hyun-Bin Oh (x.gd/pE0IB), in collaboration with 過密都市, Tae-Hyun Oh, and Yuki Mitsufuji. 🎧&📝: x.gd/ObKwe
