Siddhant Haldar (@haldar_siddhant) 's Twitter Profile
Siddhant Haldar

@haldar_siddhant

Excited about generalizing AI | PhD student @CILVRatNYU | Undergrad @IITKgp

ID: 974015514908418048

Website: https://siddhanthaldar.github.io/ · Joined: 14-03-2018 20:12:49

155 Tweets

730 Followers

1.1K Following

Irmak Guzey (@irmakkguzey) 's Twitter Profile Photo

Check out this awesome work from my friend Siddhant! By using keypoints instead of images, it works efficiently, significantly increases generalizability, and enables morphology transfer. Excited to see frameworks that bring similar capabilities to robot hands!
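The keypoint idea in the tweet above is roughly: replace the raw image observation with a handful of tracked 3D points, which is both far lower-dimensional and largely appearance-agnostic. A minimal sketch of that representation follows (not the authors' code; the function name and point count are illustrative assumptions):

```python
# Minimal sketch (not the authors' code): representing an observation as a
# small set of tracked 3D keypoints instead of a raw image. How the points are
# tracked (e.g. an off-the-shelf point tracker plus depth lookup) is assumed.
import numpy as np

def keypoints_to_obs(points_3d: np.ndarray) -> np.ndarray:
    """Flatten K tracked 3D keypoints (K, 3) into a compact policy input.

    A 224x224 RGB frame has ~150k values; a handful of task-relevant
    keypoints has a few dozen, which is one reason a keypoint representation
    can be cheaper to learn from and more robust to appearance changes.
    """
    assert points_3d.ndim == 2 and points_3d.shape[1] == 3
    return points_3d.astype(np.float32).reshape(-1)

# Example: 8 keypoints -> a 24-dim observation vector.
obs = keypoints_to_obs(np.random.rand(8, 3))
print(obs.shape)  # (24,)
```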

Lerrel Pinto (@lerrelpinto) 's Twitter Profile Photo

Turns out you do not need 100Ms of funding to get robust manipulation policies. The robot behaviors shown below are trained without any teleop, sim2real, genai, or motion planning. More details on Point Policy created by Siddhant Haldar in the thread below 👇

Irmak Guzey (@irmakkguzey) 's Twitter Profile Photo

Despite great advances in learning dexterity, hardware remains a major bottleneck. Most dexterous hands are either bulky, weak or expensive. I’m thrilled to present the RUKA Hand — a powerful, accessible research tool for dexterous manipulation that overcomes these limitations!

Anya Zorin (@anyazorin) 's Twitter Profile Photo

Super excited to present our open-source robot hand RUKA! I had a lot of fun working on this with Irmak Guzey and all our amazing collaborators: Billy Yan, Aadhithya, Lisa Kondrich, Nikhil Bhattasali, and Lerrel Pinto. Check out our website at ruka-hand.github.io

NYU Center for Data Science (@nyudatascience) 's Twitter Profile Photo

CDS-affiliated Lerrel Pinto and NYU Courant PhD student Siddhant Haldar have created "Point Policy," a system that teaches robots manipulation tasks by watching humans. The method uses key points to bridge human demonstrations with robot actions. arxiv.org/abs/2502.20391
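For intuition, here is a hedged sketch of the kind of keypoint-to-action mapping such a system could learn: a small network takes a short history of tracked 3D keypoints and predicts a robot end-effector action. This is an illustrative stand-in, not the Point Policy architecture from arxiv.org/abs/2502.20391; the network shape, history length, and action dimension are assumptions.

```python
# Illustrative sketch only: map a window of tracked 3D keypoints to an
# end-effector action. Dimensions and architecture are assumptions.
import torch
import torch.nn as nn

class KeypointPolicy(nn.Module):
    def __init__(self, num_points=8, history=5, action_dim=7):
        super().__init__()
        in_dim = num_points * 3 * history  # (x, y, z) per keypoint per frame
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim),  # e.g. 6-DoF end-effector delta + gripper
        )

    def forward(self, keypoint_history: torch.Tensor) -> torch.Tensor:
        # keypoint_history: (batch, history, num_points, 3)
        return self.net(keypoint_history.flatten(start_dim=1))

policy = KeypointPolicy()
action = policy(torch.randn(1, 5, 8, 3))
print(action.shape)  # torch.Size([1, 7])
```

Because the same keypoints can be defined on a human hand and on a robot gripper, a representation like this is also what makes the morphology transfer mentioned above plausible.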

Abhishek Gupta (@abhishekunique7) 's Twitter Profile Photo

Constructing interactive simulated worlds has been a challenging problem, requiring considerable manual effort for asset creation and articulation, and composing assets to form full scenes. In our new work - DRAWER, we made the process of creating scenes in simulation as simple

Raunaq Bhirangi (@raunaqmb) 's Twitter Profile Photo

If you're at #ICRA2025, stop by GWCC Building A, Room 412 at 3:30pm today (May 19) to chat about AnySkin and try it for yourself!

Mahi Shafiullah 🏠🤖 (@notmahi) 's Twitter Profile Photo

Morning, #ICRA2025! Bring something small 🍋🍑 and have our Robot Utility Model pick it up at our EXPO demo today from 1-5 PM, between halls A2/A3! Talk and poster are right before, 11:15-12:15 in room 411. Also, DM if you want to chat about 🤖s for the messy, real world!

Irmak Guzey (@irmakkguzey) 's Twitter Profile Photo

RUKA is warming up for our EXPO demo today at ICRA with the help of our first-time teleoperators, Venkatesh and Peiqi Liu 🫰 Come try teleoperating RUKA yourself from 1–5 PM at the exhibit hall! 🧤 For more info before coming -> ruka-hand.github.io :) #ICRA2025 Anya Zorin

Raunaq Bhirangi (@raunaqmb) 's Twitter Profile Photo

Tactile feedback is key for force-aware manipulation—but collecting teleoperated data for this is difficult. Feel the Force uses Point Policy and AnySkin to skip teleop entirely. Just wear a glove to record, and the robot runs zero-shot, force-aware control.
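The closed loop described here can be pictured as: the policy outputs a target contact force, a skin sensor reports the measured force, and the gripper command is adjusted until the two agree. Below is a minimal, assumed proportional-control sketch of that loop; the function name, gain, and units are illustrative, not the Feel the Force implementation.

```python
# Rough sketch of force-aware closed-loop gripping: compare a predicted target
# contact force against a tactile measurement and nudge the gripper command.
# All names, gains, and units here are illustrative assumptions.

def force_feedback_step(gripper_cmd: float,
                        target_force: float,
                        measured_force: float,
                        gain: float = 0.01) -> float:
    """Proportional correction of the gripper closure command.

    gripper_cmd is in [0, 1] (0 = open, 1 = fully closed); forces in newtons.
    """
    error = target_force - measured_force
    gripper_cmd += gain * error          # squeeze harder if force is too low
    return min(max(gripper_cmd, 0.0), 1.0)

# Example: the policy wants 2.0 N, the sensor reads 1.2 N -> close slightly more.
cmd = force_feedback_step(gripper_cmd=0.5, target_force=2.0, measured_force=1.2)
print(round(cmd, 3))  # 0.508
```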

Ademi Adeniji (@ademiadeniji) 's Twitter Profile Photo

Everyday human data is robotics’ answer to internet-scale tokens. But how can robots learn to feel—just from videos?📹 Introducing FeelTheForce (FTF): force-sensitive manipulation policies learned from natural human interactions🖐️🤖 👉 feel-the-force-ftf.github.io 1/n

Zhuoran Chen (@joliachen) 's Twitter Profile Photo

🎉 Excited to share our latest work: Feel the Force (FTF)! Can robots learn to feel—not just see? FTF enables precise, force-sensitive manipulation by learning from human tactile behavior—no robot data or teleoperation needed. 👉 feel-the-force-ftf.github.io