Max Yin @ CyberOrigin AI (@pengyin18) 's Twitter Profile
Max Yin @ CyberOrigin AI

@pengyin18

Scientist @CarnegieMellon | Founder @cyberorigin_ai | Assistant Professor @HongKong

ID: 1496602583233974272

Link: https://cyberorigin.ai · Joined: 23-02-2022 21:47:22

171 Tweets

463 Followers

1.1K Following

Nippon.com (@nippon_en) 's Twitter Profile Photo

Japanese publisher Hanzō is keeping the ukiyo-e tradition alive with unique reinterpretations of traditional woodblock prints that feature popular characters like Doraemon, Crayon Shinchan, and Godzilla. nippon.com/en/japan-topic…

Figure (@figure_robot) 's Twitter Profile Photo

Watch Helix's neural network do 60 minutes of uninterrupted logistics work. Helix now incorporates touch and short-term memory, and its performance continuously improves over time.

JulianSaks (@juliansaks) 's Twitter Profile Photo

I think about this a lot. You’ve got the transformer that works, and all the compute in the world, so what’s missing? Tons of data and a lot of diversity, so that these robotic foundation models can achieve positive transfer from scale. Only then will you have your own general purpose

vitrupo (@vitrupo) 's Twitter Profile Photo

“The challenge that AI poses is the greatest challenge of humanity ever. Overcoming it will also bring the greatest reward.” ~ Ilya Sutskever

Chubby♨️ (@kimmonismus) 's Twitter Profile Photo

World models are the new goal and the new holy grail. Language alone can’t replicate human intelligence—AI needs to understand and simulate the physical world. Stanford’s Fei‑Fei Li and Meta’s Yann LeCun argue conventional LLMs lack spatial reasoning, memory, planning. Their

Google DeepMind (@googledeepmind) 's Twitter Profile Photo

We’re bringing powerful AI directly onto robots with Gemini Robotics On-Device. 🤖 It’s our first vision-language-action model to help make robots faster, highly efficient, and adaptable to new tasks and environments - without needing a constant internet connection. 🧵

Ted Xiao (@xiao_ted) 's Twitter Profile Photo

Big things come in small and fast packages: announcing the Gemini Robotics On-Device VLA model!🤖 Our new model is optimized for local inference, while showcasing many properties Gemini Robotics excelled at: strong generalization, instruction following, and fast adaptation.

Max Yin @ CyberOrigin AI (@pengyin18) 's Twitter Profile Photo

How much data do you need to train your VLA model? Some say 1,000 hours, some say millions. But we think data entropy will be the key metric for evaluating data quality. Not only accuracy, not just the raw numbers: information entropy matters too. #AGI
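The tweet's point about information entropy as a data-quality signal can be illustrated with a small sketch. This is not CyberOrigin's actual metric; it is a minimal, hypothetical example using Shannon entropy over an illustrative task-label distribution, where a dataset that repeats the same task scores low on diversity regardless of how many hours it contains.

```python
# Hypothetical sketch: Shannon entropy as a diversity signal for a
# robot-demonstration dataset. The task labels below are made up.
from collections import Counter
import math

def shannon_entropy(labels):
    """H = -sum(p * log2(p)) over the empirical label distribution."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Four tasks in equal proportion: maximal diversity for 4 classes.
uniform = shannon_entropy(["pick", "place", "pour", "wipe"])   # 2.0 bits

# 97% of demos are the same task: large dataset, little information.
skewed = shannon_entropy(["pick"] * 97 + ["place"] * 3)        # ~0.19 bits
```

Under this toy measure, two datasets of identical size (and identical per-sample accuracy) can differ sharply in entropy, which is the distinction the tweet draws between data quantity and data quality.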

Sawyer Merritt (@sawyermerritt) 's Twitter Profile Photo

Elon Musk says people with Neuralink brain chips will eventually "be able to have full-body control and sensors from a Tesla Optimus robot, so you could basically inhabit an Optimus robot. Not just the hand, the whole thing. You could mentally remote into an Optimus robot. The

Haoran Geng (@haorangeng2) 's Twitter Profile Photo

🤖 What if a humanoid robot could make a hamburger from raw ingredients—all the way to your plate? 🔥 Excited to announce ViTacFormer: our new pipeline for next-level dexterous manipulation with active vision + high-resolution touch. 🎯 For the first time ever, we demonstrate

orcahand (@orcahand) 's Twitter Profile Photo

Ever seen an ORCA zoo? Well, this is literally it. 🐋🤖 We'll have 10+ ORCA Hands vibing together soon!! PS: You can build this dexterity yourself at orcahand.com #orca #OpenSource #Robotics

The Humanoid Hub (@thehumanoidhub) 's Twitter Profile Photo

The ORCA v1 hand is a 17-DoF, tendon-driven, humanoid hand with integrated tactile sensors and poppable joints. One fully assembled hand is priced at $5,937.00. The design is open-sourced for non-commercial use.

Ilir Aliu - eu/acc (@iliraliu_) 's Twitter Profile Photo

Every robot you see is a data firehose generating terabytes of chaos. This hidden crisis is the #1 reason robots fail, and it's costing the industry billions. You see hardware, but not the data swamp drowning engineers. In 2025, a quiet revolution is fixing it. Here’s how. 🧵

The Humanoid Hub (@thehumanoidhub) 's Twitter Profile Photo

Their approach focuses on long-horizon, language-conditioned manipulation and locomotion by mapping sensor inputs and language prompts into whole-body control at high frequency. The development cycle follows a continuous loop: teleoperated data collection, curation into

Zhecheng Yuan (@fancy_yzc) 's Twitter Profile Photo

👐How can we leverage multi-source human motion data, transform it into robot-feasible behaviors, and deploy it across diverse scenarios? 👤🤖Introduce 𝐇𝐄𝐑𝐌𝐄𝐒: a versatile human-to-robot embodied learning framework tailored for mobile bimanual dexterous manipulation.

Jim Fan (@drjimfan) 's Twitter Profile Photo

There was something deeply satisfying about ImageNet. It had a well curated training set. A clearly defined testing protocol. A competition that rallied the best researchers. And a leaderboard that spawned ResNets and ViTs, and ultimately changed the field for good. Then NLP