Vishnu Mano's (@vishnuman0) Twitter Profile
Vishnu Mano

@vishnuman0

Building Exo | CS + Math @GeorgiaTech

ID: 701524299371433984

Joined: 21-02-2016 21:50:01

5 Tweets

38 Followers

86 Following

Sergey Levine (@svlevine):

We just released results for our newest VLA from Physical Intelligence: π*0.6. This one is trained with RL, which makes it quite a bit better: it often doubles throughput and enables real-world tasks like folding real laundry and making espresso drinks at the office.

Karl Pertsch (@karlpertsch):

Our first real-world RL results! Hours of reliable, autonomous operation on some of the most challenging manipulation tasks we have done so far! Side benefit: plenty of nice coffee in the office! :) Kudos to all my colleagues at pi who worked very hard to get this to work!!

Chelsea Finn (@chelseabfinn):

For robots to be actually useful, they need to be reliable. We’re sharing an RL recipe for VLA models that takes a step in this direction, allowing robots to operate autonomously for hours at a time. Blog & paper: pi.website/blog/pistar06

The Humanoid Hub (@thehumanoidhub):

Physical Intelligence unveiled π*0.6 (Pi-Star 0.6): a vision-language-action (VLA) model upgraded via their new Recap method (RL with Experience & Corrections via Advantage-conditioned Policies). Recap combines three human-like learning stages: initial demonstrations, real-time corrections, and autonomous on-robot experience.
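Reading the acronym literally, the core trick is a policy conditioned on an advantage estimate. Here is a minimal sketch of that general idea in PyTorch; it is my own illustration under assumed names and tensor shapes, not Physical Intelligence's actual Recap implementation:

```python
import torch
import torch.nn as nn

class AdvantageConditionedPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        # The network consumes the observation plus a scalar advantage token.
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, advantage):
        # Condition the action prediction on how good the data was.
        return self.net(torch.cat([obs, advantage], dim=-1))

def train_step(policy, value_fn, optimizer, obs, action, ret):
    # Advantage = achieved return minus a learned value baseline; the policy
    # is then fit with plain supervised learning, conditioned on that advantage.
    with torch.no_grad():
        adv = ret - value_fn(obs)  # shapes assumed (batch, 1)
    loss = ((policy(obs, adv) - action) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At deployment, feed a high advantage so the policy reproduces only its
# better-than-average behavior.
```

The appeal of this style of training is that good and bad experience both become supervision: the advantage token tells the model which is which, and at test time you ask for the good kind.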

atharva (@k7agar):

i adore how every physical intelligence release comes with a cute little blog and a high-signal research paper. no point in gatekeeping when you know you have no competition

Homanga Bharadhwaj (@mangahomanga):

No teleoperation. No simulation. No RL. Multi-fingered robot manipulation policies learned directly by watching videos of humans wearing Aria glasses. It was super fun working on this under Irmak Guzey's lead!! 1/n

Xiongyi Cai (@xiongyicai):

How do you teach a robot to do something it has never seen before? 🤖 With human data. Our new Human0 model is co-trained on human and humanoid data. It allows the robot to understand a novel language command and execute it perfectly in the wild without prior practice.

Luca Carlone (@lucacarlone1):

DAAAM!! "Describe Anything Anywhere at Any Moment". State of the art approach to provide spatio-temporal memory to robots and agents. Powered by VLMs and scene graphs. Directly suitable for LLM queries. great work by Nicolas Gorlo and Lukas Schmid! nicolasgorlo.com/DAAAM_25/
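For intuition about what a scene-graph memory buys an LLM planner, here is a toy spatio-temporal scene graph in Python. The data structures and query API are my own illustration, not DAAAM's actual implementation (see the link above for that):

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    time: float
    position: tuple        # (x, y, z) where the object was seen
    description: str       # VLM-generated caption for this sighting

@dataclass
class SceneGraph:
    objects: dict = field(default_factory=dict)  # object name -> [Observation]

    def add(self, name, obs):
        self.objects.setdefault(name, []).append(obs)

    def query(self, name, before=float("inf")):
        # "Where was X at/before time t?" -- the kind of question an LLM
        # planner can ask against the robot's memory.
        sightings = [o for o in self.objects.get(name, []) if o.time <= before]
        return max(sightings, key=lambda o: o.time, default=None)

g = SceneGraph()
g.add("mug", Observation(3.0, (1.2, 0.4, 0.9), "red mug on the kitchen counter"))
g.add("mug", Observation(9.5, (2.0, 1.1, 0.7), "red mug moved to the desk"))
print(g.query("mug", before=10.0).description)  # -> red mug moved to the desk
```

The point is that each object accumulates timestamped, VLM-captioned sightings, so "where was X at time t?" becomes a cheap lookup instead of a re-perception problem.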

Physical Intelligence (@physical_int):

We discovered an emergent property of VLAs like π0/π0.5/π0.6: as we scale up pre-training, the model learns to align human videos and robot data! This gives us a simple way to leverage human videos. Once π0.5 knows how to control robots, it can naturally learn from human video.
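As a hedged sketch of what co-training on human video and robot data can look like mechanically, here is a minimal mixed-batch training step. The `action_loss`/`language_loss` methods and the 30% mixing ratio are hypothetical placeholders, not π0.5's published recipe:

```python
import random

def cotrain_step(model, robot_batch, human_batch, optimizer, human_frac=0.3):
    # One gradient step on either robot data or human video, chosen at random.
    # A single shared backbone sees both sources over training.
    if random.random() < human_frac:
        # Human video carries no robot actions, so supervise a proxy target
        # (here: predicting the clip's language annotation). Hypothetical API.
        loss = model.language_loss(human_batch["frames"], human_batch["caption"])
    else:
        # Robot data supervises the action head directly. Hypothetical API.
        loss = model.action_loss(robot_batch["obs"], robot_batch["actions"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

Because one shared backbone takes gradients from both data sources, its representations get pulled toward a common space, which is plausibly the kind of alignment the tweet describes emerging at scale.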

Praneet (@praneetkedari):

Adam Patni and I taught a robot to play Ms. Pac-Man. Our learnings and takeaways below (with full report/repo at the end) 👇 p.s. sound on!

Adam Patni (@adam_patni):

Praneet and I taught a robot to play Ms. Pac-Man. Our learnings and takeaways below (with full report/code at the end) 👇 p.s. sound on!

Sourish Jasti (@sourishjasti):

1/ General-purpose robotics is the rare technological frontier where the US and China started at roughly the same time and there's no clear winner yet. To better understand the landscape, @zoeytang_1007, Intel Chen, Vishnu, and I spent the last ~8 weeks creating a deep dive

Aaron Slodov (@aphysicist):

amazing. they spent 8 weeks in china to give you a blueprint of the full robotics supply chain we need to build here. if you think robotics is the most important market of all time, building it there would be a colossal blunder.

Chris Paxton (@chris_j_paxton):

The problem with robotics data is there are so many little "islands" of exploration in the vast latent space of robotics use cases, but the whole ocean in between is basically unmapped. That means that you're often still better off just collecting your own data...