Min-Hung (Steve) Chen (@cmhungsteven)'s Twitter Profile
Min-Hung (Steve) Chen

@cmhungsteven

Senior Research Scientist, NVR TW @NVIDIAAI @NVIDIA (Project Lead: DoRA, EoRA) | Ph.D. @GeorgiaTech | Multimodal AI | github.com/cmhungsteve

ID: 329683616

Link: https://minhungchen.netlify.app/ | Joined: 05-07-2011 13:26:04

696 Tweets

2.2K Followers

1.1K Following

Yi-Ting Chen (@chen_yiting_tw)'s Twitter Profile Photo

📣 Call for Submissions - X-Sense Workshop #ICCV2025!

We have extended the submission deadline! Feel free to submit your accepted papers. Papers are “non-archival”!

Deadline: Sep. 8 09:59 AM GMT
Miran Heo (@miran_heo)'s Twitter Profile Photo

We connect the autoregressive pipeline of LLMs with streaming video perception. Introducing AUSM: Autoregressive Universal Video Segmentation Model. A step toward unified, scalable video perception — inspired by how LLMs unified NLP. 📝 arxiv.org/abs/2508.19242
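
A toy sketch of the idea as stated in the post: per-frame predictions conditioned autoregressively on a running state, in the spirit of next-token prediction. All module choices, names, and shapes below are illustrative assumptions, not the AUSM architecture (see the arXiv paper for the real model).

```python
import torch
import torch.nn as nn

class StreamingSegmenter(nn.Module):
    """Toy autoregressive video segmenter: each frame's masks are predicted
    conditioned on a running state carried over from past frames.
    Illustrative only; not the AUSM architecture."""
    def __init__(self, dim=256, num_classes=21):
        super().__init__()
        self.encode = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # patchify a frame
        self.update = nn.GRUCell(dim, dim)                          # carry history forward
        self.head = nn.Conv2d(dim, num_classes, kernel_size=1)      # per-pixel class logits

    def forward(self, frames):  # frames: (T, 3, H, W), processed strictly causally
        state, masks = None, []
        for frame in frames:                       # one frame at a time (streaming)
            feat = self.encode(frame[None])        # (1, dim, h, w)
            state = self.update(feat.mean(dim=(2, 3)), state)  # autoregressive update
            cond = feat + state[:, :, None, None]  # condition features on history
            masks.append(self.head(cond).argmax(1))
        return torch.stack(masks)                  # (T, 1, h, w) predicted labels
```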

Min-Hung (Steve) Chen (@cmhungsteven)'s Twitter Profile Photo

📣 Still Open for Submissions - X-Sense Workshop #ICCV2025! 📅 Deadline: September 8, 2025, 09:59 AM GMT 📝 Submission Portal: openreview.net/group?id=thecv… 🌐 More info: x-sense-ego-exo.github.io #ICCV2025 #ICCV #ICCV25 #CFP #NYCU #Cornell #NVIDIA #USYD #MIT #UCSD #TUDelft #UCLA

Min-Hung (Steve) Chen (@cmhungsteven)'s Twitter Profile Photo

[#EMNLP2025] Super excited to share MovieCORE, accepted to EMNLP 2025 (Oral): a new #VideoUnderstanding benchmark for System-2 reasoning! 👉 Check the original post from Josmy Faure for more details! 📷 Project: joslefaure.github.io/assets/html/mo… #VLM #LLM #Video #multimodal #AI #NVIDIA #NTU #NTHU

Min-Hung (Steve) Chen (@cmhungsteven)'s Twitter Profile Photo

[#hiring] I'm seeking PhD #Interns for 2026 at #NVIDIAResearch Taiwan! If interested, please send your CV and cover letter to minhungc [at] nvidia [dot] com 🔎Research topics: Efficient Video/4D Understanding & Reasoning. 📍Location: Taiwan / Remote (mainly APAC) #internships

𝚐𝔪𝟾𝚡𝚡𝟾 (@gm8xx8)'s Twitter Profile Photo

Tina proved that LoRA can match or surpass full-parameter RL. Tora builds directly on that result, turning it into a full framework.

Built on torchtune, it extends RL post-training to LoRA, QLoRA, DoRA, and QDoRA under one interface with GRPO, FSDP, and compile support. QLoRA
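
For readers new to the adapters listed: DoRA decomposes each pretrained weight into a magnitude and a direction, applies the low-rank (LoRA) update to the direction only, and learns the magnitude separately. Below is a minimal sketch of that decomposition for a linear layer, simplified from common open-source implementations; it is not Tora's torchtune code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    """Minimal DoRA-style layer: frozen base weight, low-rank directional
    update, and a learned per-output magnitude. Simplified for illustration."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: int = 16):
        super().__init__()
        out_f, in_f = base.weight.shape
        self.base = base
        self.base.weight.requires_grad_(False)           # pretrained weight stays frozen
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))  # zero-init: no update at start
        self.scaling = alpha / rank
        # Magnitude initialized to the norm of each pretrained weight vector.
        self.m = nn.Parameter(self.base.weight.norm(dim=1))

    def forward(self, x):
        v = self.base.weight + self.scaling * (self.B @ self.A)  # updated direction
        v = v / v.norm(dim=1, keepdim=True)                      # unit-norm direction
        w = self.m.unsqueeze(1) * v                              # rescale by magnitude
        return F.linear(x, w, self.base.bias)
```

Usage would look like layer = DoRALinear(nn.Linear(768, 768)), training only A, B, and m; QDoRA additionally quantizes the frozen base weight, analogous to QLoRA.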
琥珀青葉@LyCORIS (@kblueleaf)'s Twitter Profile Photo

(1/6) I built KohakuHub — a fully self-hosted HF alternative, with HF compatibility and a familiar experience

🧠 Host your own data, keep your workflow

Find more information in the repository and our community!
🔗github.com/KohakuBlueleaf…
💬discord.gg/xWYrkyvJ2s
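
Since the hub advertises HF compatibility, the standard huggingface_hub client should be able to talk to it by overriding its endpoint. A sketch under that assumption; the URL is a placeholder for wherever your instance runs, and auth details depend on your deployment:

```python
import os

# Point the standard Hugging Face client at a self-hosted, HF-compatible hub.
# Must be set before importing huggingface_hub; the URL is a placeholder.
os.environ["HF_ENDPOINT"] = "http://localhost:8080"

from huggingface_hub import HfApi, hf_hub_download

api = HfApi()  # picks up HF_ENDPOINT from the environment
api.create_repo("me/my-model", exist_ok=True)
api.upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id="me/my-model",
)
path = hf_hub_download(repo_id="me/my-model", filename="model.safetensors")
```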
Min-Hung (Steve) Chen (@cmhungsteven)'s Twitter Profile Photo

#ICCV2025 is around the corner! Don't hesitate to visit Josmy Faure's HERMES poster to learn about our latest efficient video understanding work! 🌐 Website: joslefaure.github.io/assets/html/he…

Hsu-kuang Chiu (@hsukuangchiu)'s Twitter Profile Photo

Excited to have a poster presentation for our latest research V2V-GoT at #ICCV2025 X-Sense Workshop!

🗓 Date & Time: Oct 20th, Monday, 11:40am ~ 12:30pm
📍 Location: Exhibition Hall II (No 188 ~ 210)
🌐 Paper, code, and dataset: eddyhkchiu.github.io/v2vgot.github.…
#NVIDIA #CMU
Min-Hung (Steve) Chen (@cmhungsteven)'s Twitter Profile Photo

#ICCV2025 is around the corner! Don't hesitate to visit Hsu-kuang Chiu's V2V-GoT poster @ X-Sense Workshop to learn about our latest LLM-based cooperative driving work! Workshop: x-sense-ego-exo.github.io V2V-GoT: eddyhkchiu.github.io/v2vgot.github.… #ICCV2025 #V2V #LLM #iccv25 #NVIDIA #CMU

Katie Luo (@katielulula)'s Twitter Profile Photo

If you're at #ICCV2025, Hawaii, make sure to drop by the X-Sense workshop at Hawai'i Convention Center @ Room 323C on Monday, Oct. 20. Join us for a discussion on the future of x-modal sensing! 📸📍 Link: x-sense-ego-exo.github.io

Yi-Ting Chen (@chen_yiting_tw)'s Twitter Profile Photo

📣 Join us for the ICCV’25 X-Sense Workshop at Hawai'i Convention Center @ Room 323C on Monday, Oct. 20!! Link: x-sense-ego-exo.github.io

Bolei Zhou (@zhoubolei)'s Twitter Profile Photo

Welcome to the workshop at ICCV. In the afternoon session, I will give a talk on our effort toward learning physical AI for sidewalk autonomy.

X. Dong (@simonxindong)'s Twitter Profile Photo

We at NVIDIA present Length Penalty Done Right: cut CoT length by 3/4 without sacrificing accuracy, using only RL. This makes DeepSeek-R1-7B run ~8x faster on AIME-24 while maintaining the same accuracy.
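
The post does not spell out the objective, but the usual recipe behind such results is to fold a length term into the RL reward so the policy is paid for being both correct and brief. A toy illustration; the penalty shape and coefficient are assumptions, not the formulation from the post:

```python
def length_penalized_reward(correct: bool, length: int, max_length: int,
                            alpha: float = 0.5) -> float:
    """Toy reward for RL post-training that trades CoT length against accuracy.

    Penalizing only correct traces (so the model is never pushed toward short
    but wrong answers) is a common design choice; the linear penalty and
    alpha=0.5 here are illustrative assumptions.
    """
    if not correct:
        return 0.0
    used = min(length / max_length, 1.0)  # fraction of the token budget spent
    return 1.0 - alpha * used             # shorter correct answers score higher
```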