Simon LC (@simonlc_)'s Twitter Profile
Simon LC

@simonlc_

research scientist @ the AI institute
robotics & optimization phd @ stanford

ID: 1172407457529786371

Joined: 13-09-2019 07:11:53

80 Tweets

557 Followers

220 Following

Thomas Lew (@thomas__lew)

📢Excited to share our #ICRA2023 work on robotic table wiping via RL + optimal control! 📖 arxiv.org/abs/2210.10865 🎥 youtu.be/inORKP4F3EI 💡RL (for high-level planning) + trajectory optimization (for precise control) can solve complex tasks without on-robot data collection ⬇️
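
A minimal sketch of the decomposition this tweet describes, assuming hypothetical interfaces (`policy`, `optimize_trajectory`, and the robot API are illustrative stand-ins, not the paper's code): the RL policy decides where to wipe, and trajectory optimization decides how to move there precisely.

```python
# Hybrid loop: RL picks *where* to wipe, trajectory optimization picks *how*.
# All interfaces below are hypothetical stand-ins for illustration.
def wiping_control_loop(policy, optimize_trajectory, robot, n_steps=10):
    for _ in range(n_steps):
        dirt_map = robot.observe_table()      # high-level observation (e.g. a 2D dirt grid)
        waypoints = policy(dirt_map)          # RL: short sequence of wipe targets on the table
        trajectory = optimize_trajectory(     # optimal control: precise, constraint-aware motion
            start=robot.joint_state(),
            targets=waypoints,
            constraints={"normal_force_N": (5.0, 15.0)},  # assumed: stay pressed on the surface
        )
        robot.execute(trajectory)
```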

Danny Driess (@dannydriess)

What happens when we train the largest vision-language model and add in robot experiences? The result is PaLM-E 🌴🤖, a 562-billion parameter, general-purpose, embodied visual-language generalist across robotics, vision, and language. Website: palm-e.github.io

Kevin Zakka (@kevin_zakka)

Introducing 𝗥𝗼𝗯𝗼𝗣𝗶𝗮𝗻𝗶𝘀𝘁 🎹🤖, a new benchmark for high-dimensional robot control! Solving it requires mastering the piano with two anthropomorphic hands. This has been one year in the making, and I couldn’t be happier to release it today! Some highlights below:

John Zhang (@johnzhangx)

want your next robot 🤖 to move like your favorite pet 🐶? youtube videos might be all you need. check out our recent work -- SLoMo paper: rexlab.ri.cmu.edu/papers/slomo.p… website: slomo-www.github.io/website/ video: youtu.be/bvoM-nBd7lM?si…

Naoki Yokoyama (@naokiyokoyama0)

Excited to share our latest work, Vision-Language Frontier Maps – a SOTA approach for semantic navigation in robotics. VLFM enables robots to navigate and find objects in novel environments using vision-language foundation models, zero-shot! Accepted to #ICRA2024! 🧵
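
The core mechanic can be sketched in a few lines, under assumptions: a vision-language model scores how well the robot's current view matches the language goal, those scores are painted into a 2D value map, and the robot heads to the best-scoring frontier. `vlm_score` is a stand-in scalar here, not VLFM's actual model call.

```python
import numpy as np

def update_value_map(value_map, visible_cells, vlm_score):
    """Paint currently visible map cells with the vision-language model's
    image-text match score for the goal (e.g. "a photo of a chair"),
    keeping a running max so strong evidence persists across views."""
    for r, c in visible_cells:
        value_map[r, c] = max(value_map[r, c], vlm_score)
    return value_map

def choose_frontier(frontiers, value_map):
    """Navigate to the frontier cell (boundary between explored and
    unexplored space) that the value map rates most goal-relevant."""
    scores = [value_map[r, c] for r, c in frontiers]
    return frontiers[int(np.argmax(scores))]
```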

Albert Li (@albert_h_li)

Excited to share our new📰, DROP: Dexterous Reorientation via Online Planning! Overview: 🔹We tackle cube rotation🧊♻️on hardware 🔹DROP is the first 🧊♻️sampling-based MPC demo. No reinforcement learning! 🔹Median 30.5 rotations w/o dropping, max of 81👑🦾 See 🧵below👇
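
A hedged sketch of what a sampling-based MPC loop like this typically looks like (predictive-sampling style; the cost function, noise scale, and warm start below are assumptions, not DROP's implementation):

```python
import numpy as np

def sampling_mpc_step(rollout_cost, nominal, n_samples=64, sigma=0.1):
    """One control step: perturb the nominal action sequence, simulate each
    candidate from the current state, keep the cheapest, execute its first
    action, and shift the plan forward as a warm start.

    nominal: (H, act_dim) current best action sequence.
    rollout_cost: callable mapping an (H, act_dim) sequence to a scalar cost."""
    noise = sigma * np.random.randn(n_samples, *nominal.shape)
    candidates = np.concatenate([nominal[None], nominal[None] + noise], axis=0)
    costs = np.array([rollout_cost(seq) for seq in candidates])
    best = candidates[np.argmin(costs)]
    action_to_execute = best[0]
    warm_start = np.concatenate([best[1:], best[-1:]], axis=0)  # shift by one step
    return action_to_execute, warm_start
```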

Simon LC (@simonlc_)

We're presenting Jacta, a versatile planner for learning dexterous and whole-body manipulation, this week at CoRL! Website: jacta-manipulation.github.io Paper: arxiv.org/abs/2408.01258

Preston Culbertson (@pdculbert)

ICYMI: For #CoRL2024 we released a dataset of 3.5M (!) dexterous grasps, with multi-trial labels and perceptual data for 4.3k objects. Our takeaways: scale matters, and refining grasps > better sampling. Hoping our data can enable more vision-based grasps in hardware!
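
One way to operationalize the "refining grasps > better sampling" takeaway, as a minimal sketch: instead of drawing more samples, locally improve each sampled grasp by gradient ascent on a differentiable quality predictor. The 7-DoF pose layout and the `quality` model are assumptions for illustration, not the dataset's actual format.

```python
import torch

def refine_grasps(grasps, quality, steps=20, lr=1e-2):
    """grasps: (N, 7) wrist poses (xyz + unit quaternion), assumed layout.
    quality: differentiable model mapping poses to predicted success."""
    g = grasps.clone().requires_grad_(True)
    opt = torch.optim.Adam([g], lr=lr)
    for _ in range(steps):
        loss = -quality(g).sum()       # ascend the predicted success score
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():          # re-normalize so quaternions stay valid
            g[:, 3:] /= g[:, 3:].norm(dim=-1, keepdim=True)
    return g.detach()
```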

Xuanlin Li (Simon) (@xuanlinli2)

Learning bimanual, contact-rich robot manipulation policies that generalize over diverse objects has long been a challenge. Excited to share our work: Planning-Guided Diffusion Policy Learning for Generalizable Contact-Rich Bimanual Manipulation! glide-manip.github.io 🧵1/n
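
A minimal sketch of the general recipe the title suggests (planner-generated demonstrations distilled into a diffusion policy over actions); the `ActionDenoiser` MLP and DDPM noise schedule are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

T = 50                                        # diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class ActionDenoiser(nn.Module):
    """Predicts the noise added to an action, given the observation and step t."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, noisy_act, t):
        t_feat = t.float().unsqueeze(-1) / T  # normalized timestep feature
        return self.net(torch.cat([obs, noisy_act, t_feat], dim=-1))

def train_step(model, opt, obs, act):
    """One DDPM-style regression step on (obs, action) pairs that came
    from a model-based planner rather than human teleoperation."""
    t = torch.randint(0, T, (act.shape[0],))
    noise = torch.randn_like(act)
    a_bar = alphas_bar[t].unsqueeze(-1)
    noisy_act = a_bar.sqrt() * act + (1.0 - a_bar).sqrt() * noise
    loss = ((model(obs, noisy_act, t) - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```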

Kuan Fang (@kuanfang)

Our new paper shows how task representations learned via temporal alignment enable compositional generalization for conditional policies. This allows robots to solve compound tasks by implicitly decomposing them into subtasks.
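
A hedged sketch of one common way to realize temporal alignment (time-contrastive InfoNCE between two demonstrations of the same task); the encoder and the nearest-progress resampling are assumptions for illustration, not necessarily the paper's method.

```python
import torch
import torch.nn.functional as F

class Encoder(torch.nn.Module):
    """Maps observations to unit-norm task-progress embeddings."""
    def __init__(self, obs_dim, z_dim=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(obs_dim, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, z_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def alignment_loss(enc, demo_a, demo_b, temperature=0.1):
    """InfoNCE over time: step i of demo_a should match the step of demo_b
    at the same normalized progress, and repel demo_b's other steps."""
    za, zb = enc(demo_a), enc(demo_b)                       # (Ta, z), (Tb, z)
    idx = torch.linspace(0, zb.shape[0] - 1, za.shape[0]).round().long()
    logits = (za @ zb[idx].T) / temperature                 # (Ta, Ta)
    labels = torch.arange(za.shape[0])
    return F.cross_entropy(logits, labels)
```

A conditional policy can then consume these embeddings as its task specification, so a compound task decomposes implicitly into familiar subtask embeddings.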