Andrew Lambert 🤖 (@andrewlambert88) 's Twitter Profile
Andrew Lambert 🤖

@andrewlambert88

RL for Robotics

ID: 1296899358957490177

Joined: 21-08-2020 19:58:05

68 Tweets

246 Followers

3.3K Following

Chris Paxton (@chris_j_paxton) 's Twitter Profile Photo

Vision-language models are poised to enable a tremendous range of real-world robotics applications, letting us do things like move robots into homes. Come join the discussion!

Ted Xiao (@xiao_ted) 's Twitter Profile Photo

Robotics progress is unbelievably fast these days🚀 Excited to share a few items on my agenda this week at a jam-packed #ICRA2024, covering numerous works exploring the intersection of foundation models and robotics. 🧵👇

Nathan Peterman (@nathantylerp) 's Twitter Profile Photo

Just talked to the Unitree rep. This new model will be shown at #ICRA2024 tomorrow morning (JST). Shipping in ~4 months. Hands and low level access will cost more than the base $16k. Still uses planetary gearboxes like H1

Google DeepMind (@googledeepmind) 's Twitter Profile Photo

Introducing Veo: our most capable generative video model. 🎥 It can create high-quality, 1080p clips that can go beyond 60 seconds. From photorealism to surrealism and animation, it can tackle a range of cinematic styles. 🧵 #GoogleIO

Yunhao (Andy) Ge (@geyunhao) 's Twitter Profile Photo

Want a fully controllable vision dataset generator? BEHAVIOR Vision Suite (#CVPR2024 Highlight!) provides tools and assets for generating synthetic data for systematically evaluating computer vision models. 📷 behavior-vision-suite.github.io Everything open-sourced

halilakin (@halilakin) 's Twitter Profile Photo

Armen is a rockstar researcher, and this entire team deserves lots of respect and credit for publishing the frontier best practices! 👏

Nathan Ratliff (@robot_trainer) 's Twitter Profile Photo

Use both: web.mit.edu/dimitrib/www/L… TO (trajectory optimization) is a Newton step on the Bellman equation. Policies and value functions are "memories" of past solutions; TO should be optimizing over them at inference time. Best of both worlds. Some of the strongest RL methods do this.
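The idea in the tweet can be sketched roughly as follows. This is a hypothetical illustration, not code from the linked notes: a toy "learned" policy warm-starts a trajectory optimizer, a toy "learned" value function serves as the terminal cost, and inference-time optimization refines the policy's plan. All names, dynamics, and the finite-difference descent (standing in for a Newton step) are illustrative.

```python
import numpy as np

def policy(x):
    # Illustrative learned policy: simple linear feedback (a "memory" of past solutions).
    return -0.5 * x

def value(x):
    # Illustrative learned value function, used as the terminal cost.
    return float(x @ x)

def rollout(x0, horizon):
    # Warm start: roll the policy forward under trivial integrator dynamics x' = x + u.
    xs, us = [x0], []
    for _ in range(horizon):
        u = policy(xs[-1])
        us.append(u)
        xs.append(xs[-1] + u)
    return np.array(us)

def trajectory_cost(us, x0):
    x, cost = x0.copy(), 0.0
    for u in us:
        cost += float(x @ x + u @ u)   # running state + control cost
        x = x + u
    return cost + value(x)             # learned value closes the horizon

def optimize(x0, horizon=5, iters=200, lr=0.05):
    us = rollout(x0, horizon)          # start from the policy's plan...
    for _ in range(iters):
        # ...then refine it at inference time (finite-difference gradient
        # descent stands in here for the Newton step on the Bellman equation).
        base = trajectory_cost(us, x0)
        g = np.zeros_like(us)
        eps = 1e-5
        for i in np.ndindex(us.shape):
            up = us.copy()
            up[i] += eps
            g[i] = (trajectory_cost(up, x0) - base) / eps
        us = us - lr * g
    return us

x0 = np.array([1.0, -2.0])
warm = trajectory_cost(rollout(x0, 5), x0)
refined = trajectory_cost(optimize(x0), x0)
assert refined < warm  # inference-time TO improves on the policy's warm start
```

The policy's gain (0.5) is deliberately not the optimal one for this quadratic cost, so optimization has real work to do; the point is only that the learned components supply the starting plan and terminal cost rather than the final answer.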

Krishan Rana (@krshnrana) 's Twitter Profile Photo

World models for robot manipulation should maintain the 3D structure of the world. 3D Gaussian Splatting captures the virtual 3D world 🫧
- embody each Gaussian within a particle simulator to ground this 3D world with physical 🌎 and structural priors 🌐
- interact 🔄 update @ 30Hz

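The loop described above can be sketched as follows. This is a hypothetical illustration of the idea, not the tweeted system: each Gaussian's mean is treated as a particle, a physics step applies simple physical priors (gravity, a floor), and the updated splats are read back at 30 Hz. All names and dynamics are assumptions.

```python
import numpy as np

DT = 1.0 / 30.0                # 30 Hz update rate, as in the tweet
GRAVITY = np.array([0.0, 0.0, -9.81])

class GaussianParticles:
    def __init__(self, n):
        self.pos = np.random.rand(n, 3)                  # Gaussian means = particle positions
        self.vel = np.zeros((n, 3))                      # particle velocities
        self.cov = np.tile(np.eye(3) * 0.01, (n, 1, 1))  # splat covariances

    def physics_step(self):
        # Ground the splats with physical priors: gravity plus floor contact.
        self.vel += GRAVITY * DT
        self.pos += self.vel * DT
        below = self.pos[:, 2] < 0.0
        self.pos[below, 2] = 0.0        # structural prior: floor at z = 0
        self.vel[below, 2] *= -0.3      # lossy bounce

    def render_state(self):
        # A real system would rasterize the splats; here we just expose
        # the physically updated means and covariances.
        return self.pos.copy(), self.cov

world = GaussianParticles(n=1000)
for _ in range(30):                     # one simulated second at 30 Hz
    world.physics_step()
means, covs = world.render_state()
assert (means[:, 2] >= 0.0).all()       # the floor prior holds after simulation
```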
Simon Kalouche (@simonkalouche) 's Twitter Profile Photo

Sim2real is proven for locomotion. Locomotion, however, is much less complex in terms of perception, reasoning, and precision than general manipulation. There is still work to do to make sim2real robust for real-world dexterous manipulation tasks.

Tao Chen (@taochenshh) 's Twitter Profile Photo

Seeking robotics wizards to join our quest! 🧙‍♂️🤖 Join our cutting-edge team and shape the future of dexterous robots. We're seeking brilliant minds to push the boundaries of what's possible in robot manipulation. Link: linkedin.com/jobs/view/3974… #Robotics #AI #RobotLearning

The Humanoid Hub (@thehumanoidhub) 's Twitter Profile Photo

Bloomberg: Apple is developing a tabletop robot that “will likely arrive around 2026 or 2027, followed by mobile robots and possibly even humanoid models in the next decade.” Apple is also working on a humanlike digital assistant for the robotic devices, based on generative AI.

Unitree (@unitreerobotics) 's Twitter Profile Photo

Unitree G1 Bionic: Agile Upgrade 🥰 Unitree rolls out frequent updates nearly every month. This time, we present to you the smoothest walking and humanoid running in the world. We hope you like it. #Unitree #AGI #EmbodiedAI #AI #Humanoid #Bipedal #WorldModel

Jim Fan (@drjimfan) 's Twitter Profile Photo

Those who think RL uses less compute don't know RL at all 😅 SFT: human generates data and machine learns. RL: machine generates data and machine learns.
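The contrast can be made concrete with a toy sketch. This is purely illustrative (a scalar "model", made-up dynamics, hypothetical names): the SFT step only spends compute on a gradient update over pre-existing human data, while every RL step first pays for rollouts to generate its own data before any learning happens.

```python
import random

random.seed(0)

def learn(w, grad, lr=0.1):
    # Stand-in gradient-descent update on a scalar "model" weight.
    return w - lr * grad

# --- SFT: humans produced the data offline; compute = learning only ---
human_dataset = [(x / 10, 2 * x / 10) for x in range(10)]  # fixed labeled pairs

def sft_step(w):
    x, y = random.choice(human_dataset)
    return learn(w, (w * x - y) * x)  # one gradient step, zero data generation

# --- RL: the machine must generate its own data (rollouts) before learning ---
def rl_step(w, rollout_len=100):
    # Generation phase: run the current policy to produce experience.
    # This rollout compute has no SFT counterpart.
    states = [random.uniform(-1, 1) for _ in range(rollout_len)]
    actions = [w * s for s in states]
    # Learning phase: ascend the reward -(a - 2s)^2 toward the target behavior.
    grad = sum(-2 * (a - 2 * s) * s for s, a in zip(states, actions)) / rollout_len
    return learn(w, -grad)

w_sft, w_rl = 0.0, 0.0
for _ in range(300):
    w_sft = sft_step(w_sft)
    w_rl = rl_step(w_rl)
# Both recover the target mapping a = 2s, but every rl_step also paid
# for 100 policy rollouts before its single learning update.
assert abs(w_sft - 2.0) < 0.05
assert abs(w_rl - 2.0) < 0.05
```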

Ethan Mollick (@emollick) 's Twitter Profile Photo

OpenAI’s deep research is very good. Unlike Google’s version, which is a summarizer of many sources, OpenAI’s is more like engaging an opinionated (often almost PhD-level!) researcher who follows leads. Look at how it hunts down a concept in the literature (& works around problems)

Andrew Curran (@andrewcurran_) 's Twitter Profile Photo

META is launching a humanoid robot team headed by Marc Whitten, formerly of GM's Cruise self-driving car division. It looks like the plan is not only home robots, but META models customized for robotics as a platform. They have had discussions with both Unitree and Figure.

Jeremy Collins (@jerthesquare_) 's Twitter Profile Photo

Robotics data is expensive and slow to collect. Robotics labs and companies spend months just to collect around 10k hours of demonstration data, all while that much video is uploaded to YouTube every 20 minutes. However, none of this video data contains action labels. How can we