James | 🧠/acc (@_troglobyte)'s Twitter Profile
James | 🧠/acc

@_troglobyte

📐 at @nethermindEth - Commodifying privacy.

ID: 1099807743408525313

Joined: 24-02-2019 23:06:14

1.1K Tweets

474 Followers

2.2K Following

James | 🧠/acc (@_troglobyte):

I really like this; I have strong conviction the industry is moving in this direction. Truly safe and capable robots will have to learn policy from thousands to millions of diverse human demonstrations - great work Shivam Vats @ CoRL2025

vittorio (@iterintellectus):

everything that can be done on a computer will be done by a computer. they are not operating excavators; they are training a model for autonomous excavators

Ken Goldberg (@ken_goldberg):

Looking fwd to presenting this talk at Google next Thurs at noon. It will be live in person in Mountain View, CA (not online) but is free and open to the public: How to Close the 100,000 Year “Data Gap” in Robotics rsvp.withgoogle.com/events/how-to-…

James | 🧠/acc (@_troglobyte):

The state of affairs for deployable robotics today is significantly worse than much of the AI community outside Physical AI believes it to be.

James | 🧠/acc (@_troglobyte):

It seems like it still struggles with spatiotemporal consistency here. I think we are still a couple of generations away from robust world models, but the progress is stunning. I imagine these will be extremely useful for bootstrapping generalist robot action sets.

arian ghashghai (@arian_ghashghai):

imo a problem in early-stage robotics:
> Awesome demos on X
> Yet, in reality, there is a big deployment gap (i.e. not as autonomous + scalable as in demo)
> Very few can talk about the deployment gap (undermines reputation)
> deployment gap stays underserved, harder to scale

Dominique Paul (@dominiquecapaul):

Epoch AI tracked every major robotics model (params, compute, training dataset size, simulation sizes, etc.) and found the largest models using barely ~1% of frontier AI compute, held back mostly by data scarcity. Link to data repo below. 👇🏼 I found it very useful to run

Vedant Nair (@vedantnair__):

A bullish signal for humanoid robotics is that VLAs are ~working. Different labs are showing that the same model can effectively generalize to various tasks. The conclusion is that we have an architecture on our hands that will take us a long way; we just need OOMs more data.

Dhruv Shah (@shahdhruv_):

This is the most impressive transfer result I've seen: raw images to raw actions, across robots with different cameras and action spaces and ... We use a novel mechanism called Motion Transfer to learn across pre-training embodiments: no explicit alignment required!