Minho Park (@mpark1999)'s Twitter Profile
Minho Park

@mpark1999

Ph.D. student @kaist_ai (advised by Jaegul Choo, DAVIAN Lab)

ID: 1488848492164358145

Link: http://pmh9960.github.io · Joined: 02-02-2022 12:15:32

119 Tweets

112 Followers

148 Following

Chris Paxton (@chris_j_paxton)'s Twitter Profile Photo

There are videos of dancing robots everywhere, but they are very rarely using perception to decide what to do like this. Just need to integrate manipulation.

Yilun Du (@du_yilun)'s Twitter Profile Photo

How do we build long-horizon memory of dynamic environments in existing world models? Excited to share our recent work on flow-equivariant world models, which helps us model dynamic objects in our memory by using group symmetries!

AgiBot (@agibot_zhiyuan)'s Twitter Profile Photo

🤖 Meet the OmniHand Pro 2025 from AGIBOT. High DOF: 19 total DOF (12 active + 7 passive) in a highly compact, lightweight 750 g design. Precision sensing: 150+ taxels with 0.01 N force resolution, critical sensitivity for demanding tasks. Industrial ready: provides up…

Sergey Levine (@svlevine)'s Twitter Profile Photo

Q-learning with adjoint matching is our latest method for offline RL & offline -> online RL with diffusion/flow models, and (so far?) the best-performing method we've developed for training flow policies with RL. Check out Colin's thread!

Xiao Fu (@lemonaddie0909)'s Twitter Profile Photo

RoboMaster is accepted to ICLR 2026🎉 We verify a paradigm for robot learning: Interactive 2D Trajectory Input → I2V Robotic Manipulation Demonstrations → Actions via Inverse Dynamics Model. 1. Project Page: fuxiao0719.github.io/projects/robom… 2. Code: github.com/KlingTeam/Robo…

Sangwon Jang (@jangsangwon7)'s Twitter Profile Photo

What if your video generator could refine itself at inference time? ❌ No new models. ❌ No retraining. ❌ No external verifier. 💡 Introducing Self-Refining Video Sampling. By reinterpreting a pretrained generator (Wan2.2, Cosmos) as a denoising autoencoder, we enable iterative…

Ai2 (@allen_ai)'s Twitter Profile Photo

Introducing MolmoSpaces, a large-scale, fully open platform + benchmark for embodied AI research. 🤖 230k+ indoor scenes, 130k+ object models, & 42M annotated robotic grasps—all in one ecosystem.

Siddhant Haldar (@haldar_siddhant)'s Twitter Profile Photo

Robot foundation models are limited by costly real data, while simulation data is plentiful but visually mismatched to reality. We present Point Bridge, a method that enables zero-shot sim-to-real transfer for robot learning with minimal visual alignment. pointbridge3d.github.io

Sergey Levine (@svlevine)'s Twitter Profile Photo

If we train VLAs to respond to diverse multimodal prompts, then we can steer them better: [grasp the carrot]/[move to x,y,z]/[put the carrot on the plate]. With many levels of detail, powerful VLMs can step in and steer the model to success much more often! More below 👇

Chen Tessler (@chentessler)'s Twitter Profile Photo

The interplay between various components in RL can sometimes be very frustrating 😅 An experiment that wouldn't work is now fine after turning off observation and value normalization. Every other experiment works better with both turned on.
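Observation normalization of the kind discussed above is typically implemented as a running mean/variance tracker applied to each incoming observation. A minimal NumPy sketch of that idea, as a generic normalizer rather than the exact setup from the tweet (`RunningNorm` and its parameters are illustrative names):

```python
import numpy as np

class RunningNorm:
    """Running mean/std normalizer, as commonly used for RL observation
    normalization. A generic sketch, not any specific library's API."""

    def __init__(self, shape, eps=1e-8):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = eps  # tiny initial count avoids division by zero
        self.eps = eps

    def update(self, batch):
        # Parallel (Chan et al.) mean/variance update from a batch of obs.
        batch = np.asarray(batch, dtype=np.float64)
        b_mean = batch.mean(axis=0)
        b_var = batch.var(axis=0)
        b_count = batch.shape[0]

        delta = b_mean - self.mean
        tot = self.count + b_count
        new_mean = self.mean + delta * b_count / tot
        m2 = (self.var * self.count + b_var * b_count
              + delta**2 * self.count * b_count / tot)
        self.mean, self.var, self.count = new_mean, m2 / tot, tot

    def __call__(self, obs):
        # Standardize observations with the running statistics.
        return (obs - self.mean) / np.sqrt(self.var + self.eps)
```

Value normalization works the same way, applied to return targets instead of observations. Turning either on changes gradient scales throughout the network, which is one reason the same experiment can flip between working and failing depending on these switches.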

Kinam Kim (@kinam_0252)'s Twitter Profile Photo

🚀Excited to share that our paper EgoX👀 has been accepted to #CVPR2026! Huge thanks to my co-first authors (taewoongkang, dohyeon), co-authors (Minho Park, junhahyung), and Prof. Jaegul Choo. See you in Denver!🏔️ #VideoGeneration #WorldModeling #Robotics

Yixuan Wang (@yxwangbot)'s Twitter Profile Photo

1/ World models are getting popular in robotics 🤖✨ But there’s a big problem: most are slow and break physical consistency over long horizons. 2/ Today we’re releasing Interactive World Simulator: An action-conditioned world model that supports stable long-horizon interaction.

Vector Wang (@vectorwang2)'s Twitter Profile Photo

Fast Foundation Stereo + SAM2 basically = zero-shot Foundation Pose 😂 No CAD model, no object image, just click the target. Runs directly on my 3070 at 13 fps, with a $30 stereo camera (calibrated in 10 min). Thanks Bowen Wen for his contribution to the community!
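The perception step sketched above (segment the clicked object, read its stereo depth, recover a 3D position) boils down to back-projecting masked depth pixels through a pinhole camera model. A minimal NumPy sketch under that assumption; `masked_centroid_3d` is an illustrative helper, not the actual FoundationStereo or SAM2 API:

```python
import numpy as np

def masked_centroid_3d(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels to camera-frame 3D points and
    return their centroid. Generic pinhole-camera sketch: `depth` is an
    HxW metric depth map (e.g. from stereo), `mask` an HxW boolean
    segmentation mask, (fx, fy, cx, cy) the camera intrinsics."""
    v, u = np.nonzero(mask)          # pixel rows/cols inside the mask
    z = depth[v, u]
    valid = z > 0                    # drop invalid (zero-depth) pixels
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx            # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts.mean(axis=0)
```

A full 6-DOF pose estimate would additionally need orientation (e.g. from the masked point cloud's principal axes), but the centroid alone already localizes the clicked target well enough for many grasping setups.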