Minyoung Hwang (@robominyoung)'s Twitter Profile
Minyoung Hwang

@robominyoung

Grad student @MIT_CSAIL, Previously @carnegiemellon, @allen_ai, @SNU | Robotics | Preference-based RL | Human-Robot Interaction

ID: 1513726720654057472

Link: https://minyoung1005.github.io/ | Joined: 12-04-2022 03:52:44

72 Tweets

378 Followers

273 Following

Minyoung Hwang (@robominyoung):

This paper inspires me to not only selectively choose partial goals (demonstration, language, environment states like objects, etc.) for robot learning, but also to use a combination of them (e.g., demonstration + language)! Using a CVAE and a learned prior seems to be the key 🤔
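
A minimal sketch of the idea the tweet points at: a conditional VAE whose prior p(z|c) is learned from the goal context c instead of being fixed at N(0, I), so whatever partial goal signals are available (demo, language, object state, or a mix) can shape the latent. All module names, shapes, and the KL weight below are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GoalCVAE(nn.Module):
    """Minimal CVAE with a learned conditional prior p(z | c).

    c embeds whichever partial goal signals are present (demo, language,
    object state, or a combination); x is the target (e.g., a goal
    embedding or action sequence). Shapes are illustrative.
    """
    def __init__(self, x_dim=64, c_dim=64, z_dim=16, h=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, h), nn.ReLU(),
                                 nn.Linear(h, 2 * z_dim))    # q(z | x, c)
        self.prior = nn.Sequential(nn.Linear(c_dim, h), nn.ReLU(),
                                   nn.Linear(h, 2 * z_dim))  # p(z | c)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, h), nn.ReLU(),
                                 nn.Linear(h, x_dim))        # p(x | z, c)

    def forward(self, x, c):
        mu_q, logvar_q = self.enc(torch.cat([x, c], -1)).chunk(2, -1)
        mu_p, logvar_p = self.prior(c).chunk(2, -1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        recon = F.mse_loss(self.dec(torch.cat([z, c], -1)), x)
        # KL(q(z|x,c) || p(z|c)) between two diagonal Gaussians
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1).sum(-1).mean()
        return recon + 0.1 * kl  # the KL weight is a hyperparameter

    @torch.no_grad()
    def sample(self, c):
        mu_p, logvar_p = self.prior(c).chunk(2, -1)
        z = mu_p + torch.randn_like(mu_p) * (0.5 * logvar_p).exp()
        return self.dec(torch.cat([z, c], -1))
```

Sampling from the learned prior at test time, given only the goal signals you happen to have, is what makes mixing modalities natural.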

Joel Jang (@jang_yoel):

Excited to introduce LAPA: the first unsupervised pretraining method for Vision-Language-Action models. Outperforms SOTA models trained with ground-truth actions. 30x more efficient than conventional VLA pretraining. 📝: arxiv.org/abs/2410.11758 🧵 1/9
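
As a rough illustration of what "unsupervised VLA pretraining" can mean here (my reading of the abstract, not the paper's actual architecture): first learn discrete latent actions between consecutive video frames with a VQ-style objective, then pretrain the VLA to predict those latents from observation and language; only a small labeled set is needed afterwards to map latents to real robot actions. The toy module below sketches the first stage; all names and shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentActionQuantizer(nn.Module):
    """Stage-1 sketch: learn discrete 'latent actions' from pairs of video
    frame features with a VQ-VAE-style objective; no action labels needed."""
    def __init__(self, feat_dim=256, n_codes=8, code_dim=32):
        super().__init__()
        self.enc = nn.Linear(2 * feat_dim, code_dim)         # encode (o_t, o_t+1)
        self.codebook = nn.Embedding(n_codes, code_dim)      # discrete actions
        self.dec = nn.Linear(feat_dim + code_dim, feat_dim)  # predict o_t+1

    def forward(self, o_t, o_next):
        z = self.enc(torch.cat([o_t, o_next], -1))
        idx = torch.cdist(z, self.codebook.weight).argmin(-1)  # nearest code
        q = self.codebook(idx)
        q_st = z + (q - z).detach()                    # straight-through grads
        recon = F.mse_loss(self.dec(torch.cat([o_t, q_st], -1)), o_next)
        vq = F.mse_loss(q, z.detach()) + 0.25 * F.mse_loss(z, q.detach())
        return recon + vq, idx  # idx doubles as the pretraining target

# Stage-2 sketch: pretrain the VLA with cross-entropy to predict `idx`
# from observation + language; a small labeled set later grounds these
# latent actions in real robot actions.
```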

Dhruv Shah (@shahdhruv_):

Gemini 2.0 can reason about the physical world! Try it out today at aistudio.google.com/starter-apps/s… Your robots will thank you for it :)

Minyoung Hwang (@robominyoung):

Happening now! Drop by poster #168 at CVPR to see our work! Also giving a spotlight talk at the CVPR EAI workshop, 3:50-4pm. Happy to chat w/ anyone interested during the conference 😊

Jyo Pari (@jyo_pari):

What if an LLM could update its own weights? Meet SEAL🦭: a framework where LLMs generate their own training data (self-edits) to update their weights in response to new inputs. Self-editing is learned via RL, using the updated model’s downstream performance as reward.
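
A hedged pseudocode reading of that loop. Every helper below (generate_self_edit, finetune_copy, eval_downstream) is a placeholder stub rather than SEAL's API, and reward-filtered cloning is just one common way to realize "RL with downstream performance as reward".

```python
import random

def generate_self_edit(model, task):
    """Placeholder: the LLM writes synthetic finetuning data for itself."""
    return f"synthetic notes about {task}"

def finetune_copy(model, edit):
    """Placeholder: cheap inner-loop weight update (e.g., a LoRA step)."""
    return {**model, "edit": edit}

def eval_downstream(model, task):
    """Placeholder: score the (updated) model on the task's held-out query."""
    return random.random()

def seal_outer_loop(model, tasks, n_iters=3, n_samples=4):
    kept = []
    for _ in range(n_iters):
        for task in tasks:
            baseline = eval_downstream(model, task)
            for _ in range(n_samples):
                edit = generate_self_edit(model, task)
                reward = eval_downstream(finetune_copy(model, edit), task) - baseline
                if reward > 0:  # keep only self-edits that actually helped
                    kept.append((task, edit))
        # reinforce: behavior-clone the edit generator on `kept`
        # (a filtered-BC / rejection-sampling form of RL)
    return kept

print(seal_outer_loop({"name": "llm"}, ["task_a", "task_b"]))
```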

Rajat Kumar Jenamani (@rkjenamani):

Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in-the-wild, with minimal researcher intervention across diverse in-home scenarios. 🏆 Outstanding Paper & Systems Paper Finalist at Robotics: Science and Systems 🧵 1/8

Minyoung Hwang (@robominyoung):

I’ll be giving a spotlight talk at the RSS SemRob workshop (OHE #122, 9:50-10am) about this work today! The talk is followed by the poster session, so feel free to stop by if you’re interested :) Happy to catch up or chat about research and potential collaboration during the conference!

Roozbeh Mottaghi (@roozbehmottaghi):

I’ll talk about the PARTNR framework and how LLMs perform in planning in dynamic environments. I’ll also talk about a unified memory architecture for robotics, an alternative to recent scene representations that rely on ad-hoc combinations of multiple large models.

Sammy Joe Christen (@sammy_j_c):

If you are at #RSS2025, check out our workshop on Generative AI for Human-Robot Interaction. We have a stacked lineup of speakers and panelists!

Jianglong Ye (@jianglong_ye):

How to generate billion-scale manipulation demonstrations easily? Let us leverage generative models! 🤖✨ We introduce Dex1B, a framework that generates 1 BILLION diverse dexterous hand demonstrations for both grasping 🖐️ and articulation 💻 tasks using a simple C-VAE model.
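
On the generation side, once such a conditional decoder is trained, mass-producing demonstrations is mostly a batched sampling loop. The decoder call, its signature, and all shapes below are assumptions for illustration; in practice samples would still be filtered for physical validity (e.g., in simulation).

```python
import torch

@torch.no_grad()
def sample_demos(decoder, object_embeddings, per_object=1000, z_dim=16,
                 batch=4096, device="cpu"):
    """Sketch of mass generation from a trained C-VAE decoder.
    `decoder(z, c)` is assumed to map latent + object condition to a hand
    pose or trajectory; names and shapes are illustrative."""
    demos = []
    for c in object_embeddings:                  # one condition per object
        c_rep = c.to(device).expand(per_object, -1)
        for i in range(0, per_object, batch):
            c_b = c_rep[i:i + batch]
            z = torch.randn(len(c_b), z_dim, device=device)  # prior samples
            demos.append(decoder(z, c_b).cpu())  # decode to grasps/motions
    return torch.cat(demos)
```

At that point, reaching a billion samples is mainly a question of decoder throughput and filtering, not of anything exotic in the model.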

Yu Xiang (@yuxiang_irvl):

“As a PhD student, your job is not publishing a paper every quarter. Focus on deeply understanding a problem and solve it over years under the protection of your adviser” - from Russ Tedrake #RSS2025

Joey Hejna (@joeyhejna):

It's almost time for #CoRL 2025! A reminder that we're hosting the Data in Robotics workshop this Saturday Sept 27th. We have a packed schedule and are also attempting to livestream the event for those who can't attend in person.

Tapomayukh "Tapo" Bhattacharjee (@tapobhat):

Physical caregiving is one of robotics' hardest frontiers: it is contact-rich, physically intensive, long-horizon, safety-critical, and full of deformable objects. Physical caregiving tasks such as bathing, dressing, transferring, toileting, and grooming require professional…

Andreea Bobu (@andreea7b):

PhD application season is here! The CLEAR Lab @ MIT CSAIL is recruiting students excited about human-centered robot learning and algorithmic HRI. If you're interested in working on: 🤝 IRL & preference learning 🎛️ steering & finetuning large behavior models (diffusion…