Vaibhav Mathur (@vaibhavheretoo)'s Twitter Profile
Vaibhav Mathur

@vaibhavheretoo

ID: 983196527240790016

Joined: 09-04-2018 04:14:53

21 Tweets

62 Followers

296 Following

NASA's Perseverance Mars Rover (@nasapersevere):

I’ve come nearly 300 million miles, and I’m just getting started. Hear from the team about my picture-perfect landing and what comes next. LIVE at 2:30 p.m. PST (5:30 p.m. EST/20:30 UTC) go.nasa.gov/3ojDWkj

Lerrel Pinto (@lerrelpinto):

We just released ROT, a new imitation learning algorithm that can learn vision-based robotic policies with just 1 demonstration, 1 hour of interactive learning and without any pre-training! Project: rot-robot.github.io w/ Siddhant Haldar, Vaibhav Mathur, Denis Yarats (1/N)

Lerrel Pinto (@lerrelpinto):

Almost ♾ unlabeled data is the “secret sauce” for today's ML, but how do we use uncurated datasets in robot learning? Conditional Behavior Transformer makes sense of "play" style robot demos w/ no labels and no RL to extract conditional policies! Play-to-policy.github.io 🧵

Perplexity (@perplexity_ai):

Announcing Perplexity Ask, a new search interface that uses OpenAI GPT 3.5 and Microsoft Bing to directly answer any question you ask. perplexity.ai discord.com/invite/kWJZsxP…

Lerrel Pinto (@lerrelpinto):

Wonderful to see ROT being named a finalist for the Best Paper Award at #CoRL2022! And congratulations to the eventual winners @kunhuang1998, Edward Hu, and Dinesh Jayaraman from UPenn.

Lerrel Pinto (@lerrelpinto):

While we are going gaga over large models and big data, there is still incredible value left to extract in small models and data, especially in robotics. All the skills shown below were each trained with <1 min of human data and <20 min of online RL fast-imitation.github.io 🧵👇

Lerrel Pinto (@lerrelpinto):

Tactile feedback is one of the most important modalities in manipulation, but has been underutilized in dexterous hands. T-Dex is a framework for learning dexterous policies from tactile play data, beating vision and torque-based methods by 1.7x. tactile-dexterity.github.io 🧵👇

Irmak Guzey (@irmakkguzey):

TAVI - A framework that learns an online visuo-tactile residual policy using image only guidance. Learned policy can adapt to new environments under only 1 hour of training! It was great working with @yinlongdai, Ben Evans, Soumith Chintala and Lerrel Pinto! :)

Lerrel Pinto (@lerrelpinto):

Amazing work from Xiaolong Wang's group at UCSD. Truly mind-boggling to see this level of control and expressivity generated by an academic team of 6 people.

Irmak Guzey (@irmakkguzey):

We just released Open-Teach, a teleoperation framework that enables control across various robot and simulation platforms using a single VR headset. Open-Teach is fully open-source, and adding a new robot only requires a few files (for which we provide templates). Try it out! :)

Lerrel Pinto (@lerrelpinto):

It is really hard to get robot policies that are both precise (small margins for error) and general (robust to env variations). We just released ViSk, where skin sensing is used to train fine-grained policies with ~1 hour of data. Below is a single-take video.

Venkatesh (@venkyp2000):

Excited to present Visuo-Skin (ViSk), a simple, effective framework for precise robot manipulation! Key: low-dimensional skin sensing (AnySkin), BAKU and rich tactile data. This was co-led with Raunaq Bhirangi, with help from collaborators Yifeng Cao, Siddhant Haldar and Lerrel Pinto.

Irmak Guzey (@irmakkguzey):

Learning dexterous policies from human videos is challenging due to differences between human and robot hands. We present HuDOR, a method that learns dexterous policies within the robot's physical constraints using just one human video and an hour of online interactions! [1/n]

Lerrel Pinto (@lerrelpinto):

Imagine robots learning new skills—without any robot data. Today, we're excited to release EgoZero: our first steps in training robot policies that operate in unseen environments, solely from data collected through humans wearing Aria smart glasses. 🧵👇