Chenliang Xu (@chenliangxu)'s Twitter Profile
Chenliang Xu

@chenliangxu

Associate Professor of Computer Science at the University of Rochester

ID: 192459149

Website: https://www.cs.rochester.edu/~cxu22/ · Joined: 19-09-2010 06:02:00

22 Tweets

192 Followers

150 Following

Chenliang Xu (@chenliangxu):

We are presenting three papers in a row in tomorrow's Sight & Sound Workshop at CVPR'19. From event localization to explainable captioning to cross-modal synthesis, it's going to be a feast for audio-visual modeling!!

DeepAI (@deepai):

Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution deepai.org/publication/zo… by Xiaoyu Xiang et al. including Yapeng Tian, Yulun Zhang, @chenliangxu #Interpolation #ComputerVision

Chenliang Xu (@chenliangxu):

Our Temporally-Deformable Alignment Network (TDAN) at #CVPR2020 restores a photo-realistic high-resolution video from its low-resolution version. Check out our poster at: rb.gy/unozi9.

Chenliang Xu (@chenliangxu):

Our one-stage Zooming Slow-Mo method at #CVPR2020 simultaneously increases temporal resolution and spatial resolution for an input video. Check out our poster here: rb.gy/thrav7

Chenliang Xu (@chenliangxu):

Our Deep Grouping Model (DGM) at #CVPR2020 combines a top-down segmentation process and a bottom-up grouping process for unified perceptual scene parsing. The result is higher accuracy, better interpretability, and lower computation! Check it out here: rb.gy/xt7wsk

Chenliang Xu (@chenliangxu):

Our method learns a weakly-supervised video actor-action segmentation model with a wise selection of pseudo-annotations in iterative training. Its #CVPR2020 oral presentation is here: rb.gy/0mlg1a

Chenliang Xu (@chenliangxu):

4 papers accepted by #CVPR2021. Congrats to my students and collaborators! Topics include audio-visual grounding-separation, audio-visual robustness, 3D video avatars, and language-driven image editing. Plus, we will co-organize a tutorial on Audio-Visual Scene Understanding.

Yapeng Tian (@yapengtian):

Join us this Saturday for our #cvpr2021 audio-visual scene understanding tutorial to learn about recent advances in audio-visual learning. Website: …-visual-scene-understanding.github.io

Andrew Owens (@andrewhowens):

Want to make computers that can see *and* hear? Come to the CVPR Sight and Sound workshop today! Schedule: sightsound.org Invited talks by: Justin Salamon, Chenliang Xu, Kristen Grauman, Dima Damen, Chuang Gan, John Hershey, Efthymios Tzinis, and James Traer

Jason Corso (@_jasoncorso_):

I have a postdoc opening (2 years) in continual and transfer learning from high-d imagery. Please contact me if you're interested. Start soon. #jobopening #ComputerVision #ai

Zhiheng Li (@zhi_heng_li):

I am a final-year PhD student actively looking for researcher and postdoc positions. My research focuses on trustworthy AI in computer vision, e.g., fairness and robustness. Let me know if you are interested. More information is on my homepage: zhiheng.li

Chenliang Xu (@chenliangxu):

I highly recommend my graduating PhD student, Zhiheng Li, who researches trustworthiness issues in computer vision and has published numerous papers at top CV venues. If you have an opening for a research scientist or postdoc, please contact him!

Chenliang Xu (@chenliangxu):

Feeling proud to hood three PhDs at once. From left to right: Lele Chen (Oppo US Research), Yapeng Tian (UTD TT faculty), Jing Shi (Adobe Research). Will meet again at Vancouver CVPR~

Jian Kang (@jiank_uiuc):

We will host a virtual open house on Nov 10 & 11. Learn all about our department, graduate programs, admissions, and more! Prospective PhD students are welcome to register as soon as possible. Register here: t.ly/UyTp3. Likes and RTs appreciated!

Anurag Kumar (@acouintel):

At #Neurips2023 next week to talk about our paper AV-NeRF and all things AI, audio/speech & multimodal 😀 AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis. Demo: t.ly/UuXkG. Paper: t.ly/0oIhV (w. Susan, Chao, Yapeng Tian, Chenliang Xu)

Chenliang Xu (@chenliangxu):

Can machines that record an audio-visual scene produce realistic, matching audio-visual experiences at novel positions and novel view directions? We present AV-NeRF, which synthesizes new videos with spatial audio along arbitrary novel camera trajectories in that scene. #NeurIPS2023

Jason Corso (@_jasoncorso_):

Is Open Source AI Bull? Open source AI is widely talked about, yet it doesn't have a clear definition, and the status quo mostly falls short of any rigorous notion of openness. I wrote up an article with a principled definition of open source AI: medium.com/@jasoncorso/is…