Shoubin Yu✈️ICLR 2025🇸🇬 (@shoubin621)'s Twitter Profile
Shoubin Yu✈️ICLR 2025🇸🇬

@shoubin621

Ph.D. Student at @unccs @uncnlp, advised by @mohitban47. Previously @sjtu1896. Interested in multimodal video understanding & generation.

ID: 1462792592760786946

Link: http://yui010206.github.io · Joined: 22-11-2021 14:38:35

304 Tweets

607 Followers

755 Following

Microsoft Research (@msftresearch)'s Twitter Profile Photo

Microsoft researchers introduce MatterGen, a model that can discover new materials tailored to specific needs—like efficient solar cells or CO2 recycling—advancing progress beyond trial-and-error experiments. msft.it/6012U8zX8

Huaxiu Yao✈️ICLR 2025🇸🇬 (@huaxiuyaoml)'s Twitter Profile Photo

❗️Self-evolution is quietly pushing LLM agents off the rails. ⚠️ Even agents that are perfectly aligned at deployment can gradually forget human alignment and shift toward self-serving strategies. Over time, LLM agents stop following values, imitate bad strategies, and even spread misaligned

Hanqi Xiao (@hanqi_xiao)'s Twitter Profile Photo

Landed in Montreal 🇨🇦 for #COLM2025 to present my first-author work on task-conditioned mixed-precision quantization: “Task-Circuit Quantization” (Thursday 11am, Poster Session 5). I'm applying to PhD programs this cycle and am excited to chat about this or other interests (LLM
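For readers unfamiliar with the idea behind the tweet above: task-conditioned mixed-precision quantization keeps the weights that matter most for a task at higher precision and quantizes the rest aggressively. Below is a minimal sketch of the generic recipe only; the `quantize` helper, the importance scores, and the 8-bit/2-bit split are all illustrative assumptions, not the Task-Circuit Quantization method itself.

```python
import numpy as np

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 levels for 8 bits
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

def mixed_precision_quantize(w: np.ndarray, importance: np.ndarray,
                             keep_frac: float = 0.1) -> np.ndarray:
    """Keep the most task-important weights at 8 bits; quantize the rest to 2.

    `importance` is a per-weight score (hypothetical here); a real method
    would derive it from task data, e.g. gradients on a calibration set.
    """
    cutoff = np.quantile(importance, 1.0 - keep_frac)
    critical = importance >= cutoff
    out = quantize(w, bits=2)                       # aggressive default
    out[critical] = quantize(w[critical], bits=8)   # spare the critical weights
    return out

# Toy usage with stand-in importance scores.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
importance = np.abs(w * rng.normal(size=1000))
w_q = mixed_precision_quantize(w, importance)
```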

Han Wang (@hanwang98)'s Twitter Profile Photo

Excited to be at #COLM2025 🇨🇦 this week! I’ll be presenting our work on RAG with Conflicting Evidence at Poster Session 5 — Oct 9, 11:00 AM. Say hi if you’re around! Always up for chats about Knowledge Conflict, RAG, or all things LLM. 😃 Check this thread for details:

Zun Wang (@zunwang919)'s Twitter Profile Photo

🚨 Thrilled to introduce Self-Improving Demonstrations (SID) for Goal-Oriented Vision-and-Language Navigation — a scalable paradigm where navigation agents learn to explore by teaching themselves.

➡️ Agents iteratively generate and learn from their own successful trajectories
➡️
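The loop described in the tweet above — explore, keep the successful trajectories, imitate them, repeat — can be written down compactly. This is a minimal sketch of the general self-improvement recipe, assuming hypothetical `rollout` and `finetune` callables and a toy `Trajectory` record; it is not the actual SID training code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trajectory:
    observations: list
    actions: list
    success: bool  # did the episode reach its goal?

def self_improving_demonstrations(
    agent,
    rollout: Callable[[object], Trajectory],               # run one episode
    finetune: Callable[[object, List[Trajectory]], object],  # imitation step
    num_rounds: int = 3,
    episodes_per_round: int = 100,
):
    """Iteratively grow a demonstration set from the agent's own successes."""
    demos: List[Trajectory] = []
    for _ in range(num_rounds):
        # 1. Explore: collect fresh episodes with the current agent.
        trajectories = [rollout(agent) for _ in range(episodes_per_round)]
        # 2. Filter: keep only episodes that reached the goal.
        demos.extend(t for t in trajectories if t.success)
        # 3. Imitate: fine-tune the agent on the accumulated successes.
        agent = finetune(agent, demos)
    return agent
```

The filter in step 2 is what keeps the loop from drifting: only goal-reaching trajectories are distilled back into the agent.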
Shoubin Yu✈️ICLR 2025🇸🇬 (@shoubin621)'s Twitter Profile Photo

🚨 New Paper Alert! Introducing SciVideoBench — a comprehensive benchmark for scientific video reasoning!

🔬SciVideoBench:

1. Spans Physics, Chemistry, Biology & Medicine with authentic experimental videos.

2. Features 1,000 challenging MCQs across three reasoning types:

Andong Deng (@dengandong1227)'s Twitter Profile Photo

Proud to introduce SciVideoBench (scivideobench.github.io) together with Shoubin Yu and an amazing group of collaborators: Taojiannan Yang, Mohit Bansal, Serena Yeung-Levy, Xiaohan Wang! A new benchmark for challenging scientific video reasoning tasks — looking forward to seeing how

Xiaohan Wang (@xiaohanwang96)'s Twitter Profile Photo

🚀 Excited to release SciVideoBench — a new benchmark that pushes Video-LMMs to think like scientists! Designed to probe video reasoning and the synergy between accurate perception, expert knowledge, and logical inference. 1,000 research-level Qs across Physics, Chemistry,
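For context on the benchmark format described in these tweets: an MCQ video benchmark like this one is typically scored with a simple accuracy loop. Below is a minimal sketch assuming a hypothetical entry schema (`MCQEntry`) and a model callable that returns an option index; it is not the released SciVideoBench format or evaluation code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MCQEntry:
    video_path: str     # path to the experimental video
    question: str       # research-level question about the video
    options: List[str]  # candidate answers
    answer_idx: int     # index of the correct option

def evaluate(model: Callable[[str, str, List[str]], int],
             entries: List[MCQEntry]) -> float:
    """Accuracy of a model mapping (video, question, options) -> option index."""
    correct = sum(
        model(e.video_path, e.question, e.options) == e.answer_idx
        for e in entries
    )
    return correct / len(entries)
```

With n options per question, random guessing sits at 1/n accuracy, the natural floor such benchmarks report against.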

Zaid Khan (@codezakh)'s Twitter Profile Photo

How can an agent reverse engineer the underlying laws of an unknown, hostile & stochastic environment in “one life”, without millions of steps + human-provided goals / rewards? In our work, we: 1️⃣ infer an executable symbolic world model (a probabilistic program capturing

Archiki Prasad (@archikiprasad)'s Twitter Profile Photo

🚨 Excited to share our new work ✨ OneLife ✨, which investigates how an agent can infer executable symbolic world models 🌐 from a single unguided trajectory in a stochastic environment. I’m especially excited about our planning + evaluation contributions: 1️⃣ We support

Jaemin Cho (on faculty job market) (@jmin__cho)'s Twitter Profile Photo

🚨Introducing OneLife, a new framework to learn world dynamics as an executable probabilistic program, from a single, unguided episode in a stochastic, complex environment. ✨Highlights: ➡️ Inference only routes through relevant laws, solving scaling challenges in complex state

Mohit Bansal (@mohitban47)'s Twitter Profile Photo

🚨 Excited to announce "One Life to Learn: Inferring Symbolic World Models for Stochastic Environments from Unguided Exploration" --> (1) Our agent can infer/reverse engineer the laws of an unknown, stochastic environment from a single, unguided episode -- without requiring
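To make the "executable symbolic world model" idea in these tweets concrete: one can picture such a model as a set of stochastic laws, each with a precondition on the state and a sampled effect, where simulation only routes through laws whose preconditions fire. The sketch below is illustrative only; the `Law` structure and the toy fire-spreading rule are assumptions, not the OneLife implementation.

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, int]

@dataclass
class Law:
    name: str
    precondition: Callable[[State], bool]  # does this law apply here?
    effect: Callable[[State], State]       # state update when it fires
    prob: float                            # chance the effect fires

def step(state: State, laws: List[Law]) -> State:
    """Advance the world one tick, routing only through applicable laws."""
    for law in laws:
        if law.precondition(state) and random.random() < law.prob:
            state = law.effect(state)
    return state

# Toy example: a single stochastic "fire spreads" law.
laws = [Law(
    name="fire_spreads",
    precondition=lambda s: s["fire"] > 0,
    effect=lambda s: {**s, "fire": s["fire"] + 1},
    prob=0.3,
)]
state = {"fire": 1}
for _ in range(10):
    state = step(state, laws)
```

Because each tick only evaluates laws whose preconditions hold, simulation cost scales with the applicable laws rather than the full rule set — one way to read the "inference only routes through relevant laws" point above.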

Yichen Li (@antheayli)'s Twitter Profile Photo

Will be sharing our work on Multimodal Action-conditioned Video Prediction @ ICCV 2025 📰 paper: arxiv.org/abs/2510.02287 💻 code: github.com/AntheaLi/MMVid… See you in Hawaii. Come and say hi! (Wed, Oct 22)

Kevin Lin (@kevinqhlin)'s Twitter Profile Photo

Egocentric Vision & Science, a lovely match. I think it would be fun to have PhdOS, with communication with supervisor and labmates. 😂