John Zhang (@johnzhangx)'s Twitter Profile
John Zhang

@johnzhangx

PhD student @CarnegieMellon, @cmurexlab | prev @GeorgiaTech

ID: 839866279892815872

Link: http://johnzhang3.github.io | Joined: 09-03-2017 15:51:39

37 Tweets

161 Followers

148 Following

Bardienus Duisterhof (@bduisterhof)'s Twitter Profile Photo

Deformable objects are common in household, industrial and healthcare settings. Tracking them would unlock many applications in robotics, gen-AI, and AR. How? Check out MD-Splatting: a method for dense 3D tracking and dynamic novel view synthesis on deformable cloths. 1/6🧵
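
For intuition, here is a minimal sketch of the neural-deformation-field idea that underlies this kind of dense tracking; it is an illustration only, not the MD-Splatting implementation. A small network maps a canonical 3D point plus a time value to a displaced position, so evaluating it across timesteps traces a 3D trajectory for every point.

```python
# Illustrative sketch only -- NOT the MD-Splatting code. Shows the general
# neural-deformation-field idea: warp canonical points through a time-
# conditioned network to get dense 3D tracks on a deforming surface.
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),   # input: (x, y, z, t)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),              # output: displacement (dx, dy, dz)
        )

    def forward(self, xyz: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Deformed position = canonical position + predicted displacement.
        return xyz + self.net(torch.cat([xyz, t], dim=-1))

field = DeformationField()
points = torch.rand(100, 3)            # canonical points on the cloth
times = torch.full((100, 1), 0.5)      # query all points at time t = 0.5
tracked = field(points, times)         # one deformed position per point
```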

Rohan Choudhury (@rchoudhury997)'s Twitter Profile Photo

Can we effectively use LLMs for video question answering? Excited to announce our latest paper, Zero-Shot Video Question Answering with Procedural Programs, which uses LLMs to generate programs that answer questions about videos! [1/6]
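
The general pattern is sketched loosely below: ask the LLM to write a short program against a small vision API, then execute that program on the video. Both `detect_objects` and `llm_generate` are hypothetical stand-ins, not the paper's actual interface.

```python
# Hedged sketch of the "LLM writes a program, we run it" pattern for
# video QA -- not the paper's prompt or API. detect_objects and
# llm_generate are hypothetical placeholders.
def detect_objects(frame):
    """Hypothetical vision primitive; a real system would run a detector."""
    return []  # placeholder: list of object labels seen in the frame

PROMPT = (
    "Write a Python function answer(frames) that answers the question "
    "'{question}' using detect_objects(frame) -> list[str]. "
    "Return only code."
)

def answer_question(frames, question, llm_generate):
    # 1. Ask the LLM for a short program tailored to this question.
    code = llm_generate(PROMPT.format(question=question))
    # 2. Execute it in a namespace that exposes the vision primitive.
    scope = {"detect_objects": detect_objects}
    exec(code, scope)
    # 3. Run the synthesized function on the video frames.
    return scope["answer"](frames)
```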

Haoyu Xiong (@haoyu_xiong_)'s Twitter Profile Photo

Introducing Open-World Mobile Manipulation 🦾🌍 – A full-stack approach for operating articulated objects in open-ended unstructured environments: Unlocking doors with lever handles/ round knobs/ spring-loaded hinges 🔓🚪 Opening cabinets, drawers, and refrigerators 🗄️ 👇

Peter Schaldenbrand (@peteyrobots)'s Twitter Profile Photo

To support richer human-robot interaction, we made FRIDA more collaborative. CoFRIDA can take turns with a person to create drawings and paintings. 🧵 ICRA'24, with Gaurav Parmar, Jun-Yan Zhu, and @JeanOhCmuBIG

GRASP Laboratory (@grasplab)'s Twitter Profile Photo

Join us TOMORROW in welcoming Dr. Zac Manchester as he presents “Composable Optimization for Robotic Motion Planning and Control” from 10:30AM - 11:45AM. More info: grasp.upenn.edu/events/spring-… #GRASP #GRASPLab #GRASPonRobotics @GRASPSeminar

Carnegie Mellon University (@carnegiemellon)'s Twitter Profile Photo

Robots developed by CMU Robotics Institute are helping to paint the future. Literally. 🤖🎨 Collaborative FRIDA (CoFRIDA) interactively co-paints with people, inviting users of any artistic ability to collaborate on art in the real world. cmu.is/CoFRIDA

Gengshan Yang (@gengshany)'s Twitter Profile Photo

Sharing my recent project, agent-to-sim: From monocular videos taken over a long time horizon (e.g., 1 month), we learn an interactive behavior model of an agent (e.g., a 🐱) grounded in 3D. gengshan-y.github.io/agent2sim-www/

Uksang Yoo (@uksangyoo)'s Twitter Profile Photo

Can robots make pottery🍵? Throwing a pot is a complex manipulation task of continuously deforming clay. We will present RoPotter, a robot system that uses structural priors to learn from demonstrations and make pottery, at the IEEE-RAS Int. Conf. on Humanoid Robots (HUMANOIDS) with the CMU Robotics Institute. 👇 robot-pottery.github.io 1/8🧵

Simon LC (@simonlc_)'s Twitter Profile Photo

We're presenting Jacta, a versatile planner for learning dexterous and whole-body manipulation, this week at CoRL! Website: jacta-manipulation.github.io Paper: arxiv.org/abs/2408.01258

Rohan Choudhury (@rchoudhury997)'s Twitter Profile Photo

Excited to finally release our NeurIPS 2024 (spotlight) paper! We introduce Run-Length Tokenization (RLT), a simple way to significantly speed up your vision transformer on video with no loss in performance!
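
A rough sketch of the run-length intuition (not the paper's code): patch tokens that barely change between consecutive frames are dropped before the transformer ever sees them, so compute scales with how much the video actually changes. The threshold `tau` below is an illustrative parameter.

```python
# Minimal sketch of the run-length-style token pruning idea -- NOT the
# RLT implementation. Static patches (unchanged between frames) are
# dropped; only tokens where the video changes are kept.
import torch

def drop_static_tokens(tokens: torch.Tensor, tau: float = 0.1):
    """tokens: (T, N, D) patch embeddings for T frames of N patches."""
    diffs = (tokens[1:] - tokens[:-1]).abs().mean(dim=-1)   # (T-1, N)
    keep = torch.cat([
        torch.ones_like(diffs[:1]).bool(),   # always keep all of frame 0
        diffs > tau,                         # keep only changed patches after
    ])
    kept = tokens[keep]                      # (num_kept, D) ragged token set
    return kept, keep

tokens = torch.randn(8, 196, 768)            # e.g., 8 frames of 14x14 ViT patches
kept, mask = drop_static_tokens(tokens)
print(f"kept {kept.shape[0]} of {mask.numel()} tokens")
```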

Eliot Xing (@etaoxing)'s Twitter Profile Photo

RL is notoriously sample-inefficient. How can we scale RL to tasks that are much slower to simulate than rigid-body physics, such as soft bodies? In our #ICLR2025 spotlight, we introduce both a new first-order RL algorithm, SAPO, and a differentiable simulation platform, Rewarped. 1/n
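
For intuition, here is a toy sketch of first-order policy optimization through a differentiable simulator: because every simulation step is built from differentiable tensor ops, the rollout loss can be backpropagated directly into the policy instead of estimated from samples. This is a minimal illustration with a made-up point-mass task, not SAPO or Rewarped.

```python
# Toy sketch of first-order (analytic) policy gradients through a
# differentiable simulator -- NOT SAPO or Rewarped. The sim is a 1-D
# point mass; the policy learns to drive it to the origin.
import torch

def sim_step(pos, vel, force, dt=0.05):
    # Differentiable dynamics: gradients flow through these tensor ops.
    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel

policy = torch.nn.Linear(2, 1)             # maps (pos, vel) -> force
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for _ in range(200):
    pos = torch.tensor([1.0])
    vel = torch.tensor([0.0])
    loss = 0.0
    for _ in range(30):                    # differentiable rollout
        force = policy(torch.cat([pos, vel])).squeeze()
        pos, vel = sim_step(pos, vel, force)
        loss = loss + pos.pow(2).sum()     # penalize distance from origin
    opt.zero_grad()
    loss.backward()                        # gradient flows through the sim
    opt.step()
```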

Uksang Yoo (@uksangyoo)'s Twitter Profile Photo

🎉Excited to share that our paper was a finalist for best paper at #HRI2025! We introduce MOE-Hair, a soft robot system for hair care 💇🏻💆🏼 that uses mechanical compliance and visual force sensing for safe, comfortable interaction. Check it out: moehair.github.io 🧵1/7

Hongyu Li (@hongyu_lii)'s Twitter Profile Photo

We interact with dogs through touch -- a simple pat can communicate trust or instruction. Shouldn't interacting with robot dogs be as intuitive? Most commercial robots lack tactile skins. We present UniTac: a method to sense touch using only existing joint sensors! [1/5]
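
As a loose illustration of proprioceptive touch sensing (not UniTac's learned approach), a classical baseline flags contact when measured joint torques deviate from what a free-motion dynamics model predicts. Everything below is a hypothetical placeholder, including `expected_torque`.

```python
# Hedged sketch of torque-residual contact detection -- NOT UniTac's
# method. Idea: if measured joint torque differs from the torque a
# dynamics model predicts for contact-free motion, something touched us.
import numpy as np

def expected_torque(q, dq, ddq):
    """Hypothetical inverse-dynamics stand-in (toy gravity + inertia terms)."""
    return 0.5 * np.sin(q) + 0.1 * ddq

def detect_contact(q, dq, ddq, tau_measured, threshold=0.2):
    residual = tau_measured - expected_torque(q, dq, ddq)
    return np.abs(residual) > threshold    # per-joint contact flags

q = np.zeros(12); dq = np.zeros(12); ddq = np.zeros(12)   # 12-joint quadruped
tau = np.zeros(12); tau[3] = 0.5           # a pat shows up as extra torque here
print(detect_contact(q, dq, ddq, tau))     # flags joint 3 as in contact
```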