Yuki Wang's (@yukiwang_hw) Twitter Profile
Yuki Wang

@yukiwang_hw

Third-year CS Ph.D. student at Cornell University @CornellCIS. A member of the PoRTaL group @PortalCornell!

ID: 1661118113340071937

Link: https://lunay0yuki.github.io/ · Joined: 23-05-2023 21:13:29

58 Tweets

60 Followers

99 Following

Gokul Swamy's (@g_k_swamy) Twitter Profile Photo

Say ahoy to 𝚂𝙰𝙸𝙻𝙾𝚁⛵: a new paradigm of *learning to search* from demonstrations, enabling test-time reasoning about how to recover from mistakes without any additional human feedback! 𝚂𝙰𝙸𝙻𝙾𝚁⛵ outperforms Diffusion Policies trained via behavioral cloning on 5-10x the data!

Rajat Kumar Jenamani's (@rkjenamani) Twitter Profile Photo

Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in the wild, with minimal researcher intervention, across diverse in-home scenarios. 🏆 Outstanding Paper & Systems Paper Finalist at Robotics: Science and Systems. 🧵1/8

Rajat Kumar Jenamani's (@rkjenamani) Twitter Profile Photo


Really excited to share that FEAST won the Best Paper Award at #RSS2025!

Huge thanks to everyone who’s shaped this work, from roboticists to care recipients, caregivers, and occupational therapists. ❤️

Kushal's (@kushalk_) Twitter Profile Photo

Teleoperation is slow, expensive, and difficult to scale. So how can we train our robots instead? Introducing X-Sim: a real-to-sim-to-real framework that trains image-based policies 1) entirely in simulation 2) using rewards from human videos. portal-cornell.github.io/X-Sim

Kushal's (@kushalk_) Twitter Profile Photo


Huge thanks to my co-lead Prithwish Dan, collaborators Angela Chao, Edward Duan & Maximus Pace, and co-advisors Wei-Chiu Ma and Sanjiban Choudhury!

Thrilled that our X-Sim paper received Best Paper (Runner-Up) at the EgoAct Workshop at Robotics: Science and Systems, winning a cool pair of Meta Ray-Bans! 😎

Anne Wu's (@anne_youw) Twitter Profile Photo

How to learn from temporally misaligned demo videos? Come and chat on Wed 16, 11am-1:30pm @ W. Exhibition Hall B2-B3, W-709! #ICML2025

Anne Wu's (@anne_youw) Twitter Profile Photo


🗣️We can listen and speak simultaneously when we talk, and so should the spoken dialogue models (SDMs)!

💬Unlike typical "walkie-talkie" voice AIs, full-duplex SDMs let both sides talk at once - more like real, natural conversation.

But this makes alignment harder:
- No

Anne Wu's (@anne_youw) Twitter Profile Photo


Laurent Mazare Neil Zeghidour Alexandre Défossez kyutai I will be presenting this work at ICML 2025:

⏰ Thu 17 Jul 11am-1:30pm PDT 
📍West Exhibition Hall B2-B3 W-316

DM if you want to chat about multimodal models / interactions / anything!

Will Huey's (@willhuey9) Twitter Profile Photo

Come see the pitfalls of aligning video frames, and my first ever paper, at #ICML2025. Huge thanks to Yuki for helping me with all the nuances of academic research; she was an outstanding mentor and co-lead on this.

Arnav Jain's (@arnavkj95) Twitter Profile Photo


Curious about a simple and scalable approach to multi-turn code generation?

Come check out μCode — our framework built on one-step recoverability and multi-turn BoN search.

Stop by and say hi during Poster Session 4 at #ICML2025 today at East Hall A-B, #E-2600.
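The multi-turn best-of-N (BoN) search mentioned in the μCode announcement can be sketched generically: at each turn, sample several candidate programs, score them with a verifier, and feed the best one back as the next turn's context. This is a hedged, minimal illustration of the generic search pattern, not μCode's actual implementation; `generate_candidates` and `score` are hypothetical stand-ins for an LLM generator and a learned or test-based verifier.

```python
import random


def generate_candidates(context, n, rng):
    # Hypothetical stand-in for an LLM proposing n candidate programs
    # conditioned on the current best attempt.
    return [f"{context}-cand{rng.randint(0, 999)}" for _ in range(n)]


def score(candidate):
    # Hypothetical stand-in for a verifier, e.g. the fraction of
    # unit tests a candidate program passes (value in [0, 1]).
    return (sum(ord(c) for c in candidate) % 100) / 100


def multi_turn_bon(prompt, turns=3, n=4, seed=0):
    """Multi-turn best-of-N search: each turn samples n candidates,
    keeps the highest-scoring one seen so far, and uses it as the
    context for the next turn (a simple recovery/refinement loop)."""
    rng = random.Random(seed)
    best, best_score = prompt, score(prompt)
    for _ in range(turns):
        candidates = generate_candidates(best, n, rng)
        top = max(candidates, key=score)
        if score(top) > best_score:
            best, best_score = top, score(top)
    return best, best_score
```

With a real generator and verifier, the same loop lets the model recover from a bad first attempt by searching over revisions rather than committing to a single rollout.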

Gokul Swamy's (@g_k_swamy) Twitter Profile Photo

Congrats to Arnav Jain, Vibhakar Mohta, and all the authors on their #NeurIPS2025 Spotlight! We have one more surprise up our sleeves that I'm excited to share soon 😉