Yue Zhang (@zhan1624)'s Twitter Profile
Yue Zhang

@zhan1624

Current postdoc @uncnlp with @mohitban47 in NLP and multimodal research, PhD @msu with @kordjamshidi. She/her

ID: 1248006602118086663

Link: https://zhangyuejoslin.github.io/
Joined: 08-04-2020 21:55:46

32 Tweets

158 Followers

259 Following

Elias Stengel-Eskin (on the faculty job market) (@eliaseskin)

🌵 I'm going to be presenting PBT at #NAACL2025 today at 2PM! Come by poster session 2 if you want to hear about:
-- balancing positive and negative persuasion
-- improving LLM teamwork/debate
-- training models on simulated dialogues
Had a fun time working on this with…

David Wan (@meetdavidwan)

🎉 Our paper "MAMM-Refine" on Multi-Agent × Multi-Model Refinement for Faithful Generation will be presented at #NAACL2025!

We introduce a recipe for improving model faithfulness through multi-agent and multi-model (e.g. GPT-4o, Claude-3.5, Gemini-1.5) refinement.

Key…

Elias Stengel-Eskin (on the faculty job market) (@eliaseskin)

Extremely excited to announce that I will be joining UT Austin Computer Science in August 2025 as an Assistant Professor! 🎉

I’m looking forward to continuing to develop AI agents that interact/communicate with people, each other, and the multimodal world. I’ll be recruiting PhD…

Jaemin Cho (on faculty job market) (@jmin__cho)

Sharing some personal updates 🥳:
- I've completed my PhD at UNC Computer Science! 🎓
- Starting Fall 2026, I'll be joining the Computer Science dept. at Johns Hopkins University (JHU Computer Science) as an Assistant Professor 💙
- Currently exploring options + finalizing the plan for my gap year (Aug…

Jaehong Yoon (on the faculty job market) (@jaeh0ng_yoon)

Thrilled to share that I’ll be joining the College of Computing and Data Science at Nanyang Technological University (NTU Singapore) as an Assistant Professor, starting in August 2025 🇸🇬🥳

I’ll continue my research on building trustworthy and continually adaptable multimodal AI, …

Zun Wang (@zunwang919)

🚨Thrilled to introduce EPiC🎥: Efficient Video Camera Control Learning with Precise Anchor-Video Guidance

A generative model enables precise 3D camera trajectory control over user-provided videos or images. It achieves highly efficient training, completing within just 16 GPU-hours…

Jialu Li (@jialuli96)

🚨Check out our new video generation work EPiC!

🌟EPiC enables precise 3D camera trajectory control for both image-to-video generation and video-to-video generation!

💡Key highlights:
▶️ Efficient training within 16 GPU-hours
▶️ No need for paired video-camera trajectory data

Han Lin (@hanlin_hl)

Check out our new paper (EPiC) for video generation with camera control 🔥

Here are the two highlights for easy and efficient training:
➡️ The model can be trained directly on videos in the wild, without requiring extra camera trajectory annotations.
➡️ With a novel…

Yue Zhang (@zhan1624)

🚀Check out our new paper EPiC for video generation with efficient and precise 3D camera control! Just 16 GPU-hours (vs. 200+), with higher-quality results!

We innovate at both the data & model level:
✅ Data: Visibility-based masking—no video-camera trajectory paired data needed…

Jaehong Yoon (on the faculty job market) (@jaeh0ng_yoon)

🚨 New paper alert! EPiC: Video Generation with Precise 3D Camera Control 🎬

Tackles both I2V & V2V tasks with:
▶️ Visibility-based masking—no need for video-camera trajectories
▶️ Lightweight ControlNet, guided by anchor video as a structural prior

Details in the thread!👇

Jaemin Cho (on faculty job market) (@jmin__cho)

Introducing EPiC - precise & efficient camera control for video generation! 📽️⚙️

Previous methods had drawbacks:
❌ Noisy anchor videos from point cloud estimates
❌ Expensive camera pose annotations
❌ 200+ GPU hours to train

EPiC addresses this with:
✅ Visibility-based…

Minghao Wu (@wuminghao_nlp)

Excited to share that I’ll be joining UNC Computer Science and UNC NLP as a Postdoctoral Research Associate, working with the incredible Mohit Bansal! Can’t wait to collaborate with the amazing students and faculty there! 🎉

A huge thank you to my supervisor Reza Haffari, my colleagues at…

Daeun Lee (@danadaeun)

Excited to share Video-Skill-CoT 🎬🛠️ – a new framework for domain-adaptive video reasoning with skill-aware Chain-of-Thought (CoT) supervision!

⚡️Key Highlights:
➡️ Automatically extracts domain-specific reasoning skills from questions and organizes them into a unified taxonomy, …

David Wan (@meetdavidwan)

Excited to share our new work, CLaMR! 🚀

We tackle multimodal content retrieval by jointly considering video, speech, OCR, and metadata. CLaMR learns to dynamically pick the right modality for your query, boosting retrieval by 25 nDCG@10 points over single-modality retrieval!

🧐

Jaewoo Lee (@jaew00_lee)

🎉Excited to share that I’ll be starting my CS PhD journey at UNC-Chapel Hill (UNC Computer Science) this fall! 🎓
I’ll be working with the renowned Mohit Bansal at UNC NLP — a dream come true! ✨
Huge thanks to everyone who's helped me get here. Can't wait to begin this new life and research journey! 🧳🚀

David Wan (@meetdavidwan)

Excited to share GenerationPrograms! 🚀

How do we get LLMs to cite their sources? GenerationPrograms is attributable by design, producing a program that executes text w/ a trace of how the text was generated! Gains of up to +39 Attribution F1, and it eliminates uncited sentences, …

Elias Stengel-Eskin (on the faculty job market) (@eliaseskin)

🚨 Announcing CINGS, a new method for improving grounding in LLMs and VLMs!

CINGS works at the instruction-tuning stage, teaching models to incorporate contextual info instead of over-relying on parametric knowledge. Gains in both text and multimodal settings, and nice…