Tien Toan Nguyen (@toannguyencs)'s Twitter Profile
Tien Toan Nguyen

@toannguyencs

Incoming CS Ph.D. at @CSatUSC

💯 Love peace and beach
❌ Against robots killing humans
❌ Against humans killing humans

ID: 1719563403076030464

https://toannguyen1904.github.io/ · Joined 01-11-2023 03:54:06

84 Tweets

77 Followers

1.1K Following

Sawyer Merritt (@sawyermerritt)'s Twitter Profile Photo

Waymo in a new blog post: "We conducted a comprehensive study using Waymo’s internal dataset. Spanning 500,000 hours of driving, it is significantly larger than any dataset used in previous scaling studies in the AV domain.

Our study uncovered the following: 
• Similar to LLMs,
Seohong Park (@seohong_park)'s Twitter Profile Photo

Q-learning is not yet scalable

seohong.me/blog/q-learnin…

I wrote a blog post about my thoughts on scalable RL algorithms.

To be clear, I'm still highly optimistic about off-policy RL and Q-learning! I just think we haven't found the right solution yet (the post discusses why).
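The off-policy update the thread is discussing can be sketched in a few lines of classic tabular Q-learning. Everything below — the two-state toy MDP, the hyperparameters, and the `q_learning` function name — is an illustrative assumption, not taken from the linked post:

```python
import numpy as np

def q_learning(n_states=2, n_actions=2, episodes=500, alpha=0.1, gamma=0.9, seed=0):
    """Tabular Q-learning on a toy chain MDP (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        for _ in range(10):
            # epsilon-greedy *behavior* policy; the learning target below is
            # greedy, which is what makes Q-learning off-policy
            a = int(rng.integers(n_actions)) if rng.random() < 0.2 else int(Q[s].argmax())
            # toy dynamics (assumed): action 1 moves right; reward +1 for
            # taking action 1 into the terminal-most state
            s_next = min(s + a, n_states - 1)
            r = 1.0 if (s_next == n_states - 1 and a == 1) else 0.0
            # the Q-learning bootstrap: max over next actions, regardless of
            # which action the behavior policy actually takes next
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q
```

The `max` in the bootstrap target is the crux of the scalability debate: it decouples learning from the data-collecting policy, but also introduces the bias-accumulation issues the post argues block scaling.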
Xiaolong Wang (@xiaolonw)'s Twitter Profile Photo

This work is not about a new technique. GMT (General Motion Tracking) shows, through good engineering practice, that you can actually train a single unified whole-body control policy for all agile motions, and it works in the real world, directly via sim2real without adaptation. This is

Generalist (@generalistai_)'s Twitter Profile Photo

Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early

Danfei Xu (@danfei_xu)'s Twitter Profile Photo

Russ's recent talk at Stanford has to be my favorite in the past couple of years. I have asked everyone in my lab to watch it. youtube.com/watch?v=TN1M6v… IMO our community has accrued a huge amount of "research debt" (analogous to "technical debt") through flashy demos and

Haoran Geng (@haorangeng2)'s Twitter Profile Photo

🤖 What if a humanoid robot could make a hamburger from raw ingredients—all the way to your plate? 🔥 Excited to announce ViTacFormer: our new pipeline for next-level dexterous manipulation with active vision + high-resolution touch. 🎯 For the first time ever, we demonstrate

Russ Tedrake (@russtedrake)'s Twitter Profile Photo

TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the

Lukas Ziegler (@lukas_m_ziegler)'s Twitter Profile Photo

One of the best robots. 👾 It's from one of the best all-time movies, Interstellar! 🧑🏼‍🚀 Do you remember TARS? A developer named Charles Diaz has created a fully functional TARS replica using a Raspberry Pi. This isn't just a static model — it can move forward and turn on

Sukjun (June) Hwang (@sukjun_hwang)'s Twitter Profile Photo

Tokenization has been the final barrier to truly end-to-end language models. We developed the H-Net: a hierarchical network that replaces tokenization with a dynamic chunking process directly inside the model, automatically discovering and operating over meaningful units of data

Agility Robotics (@agilityrobotics)'s Twitter Profile Photo

Our co-founder, Jonathan Hurst, shares his vision for the path that humanoid robots will take to becoming part of our everyday lives. agilityrobotics.com/content/humano…

Abhinav Gupta (@gupta_abhinav_)'s Twitter Profile Photo

It's time! So excited to finally reveal what we have been up to tomorrow. A decade of research, starting from the early Smith Hall Baxter days, culminating in this....

Chris Paxton (@chris_j_paxton)'s Twitter Profile Photo

went "oh shit" when the robots started to move. absolutely did not see it coming. obviously this is a rendering but it's a super cool idea. one to follow closely.

Skild AI (@skildai)'s Twitter Profile Photo

Modern AI is confined to the digital world. At Skild AI, we are building towards AGI for the real world, unconstrained by robot type or task — a single, omni-bodied brain. Today, we are sharing our journey, starting with early milestones, with more to come in the weeks ahead.

Deepak Pathak (@pathak2206)'s Twitter Profile Photo

As promised, we are starting to dive deep, beginning with Skild AI Brain's general-purpose perceptive locomotion capability. Mesmerizing to see a full-size humanoid go over any obstacles effortlessly. All through a single end-to-end model: from pixels to action.

RoboPapers (@robopapers)'s Twitter Profile Photo

Collecting dexterous humanoid robot data is difficult to scale. That's why Mengda Xu and Han Zhang built DexUMI: a tool for demonstrating how to control a dexterous robot hand, which allows you to quickly collect task data. Co-hosted by Michael Cho - Rbt/Acc and Chris Paxton

Yue Wang (@yuewang314)'s Twitter Profile Photo

🚀 Join Us: Research Internships in Embodied Intelligence The USC Geometry, Vision, and Learning Lab (usc-gvl.github.io) is seeking highly motivated interns to push the frontiers of AI, robotics, and 3D computer vision. You’ll work on large-scale VLA models,

Chris Paxton (@chris_j_paxton)'s Twitter Profile Photo

Visual/language generalization seems to have been largely solved by data -- which is why it's so easy to spin up demos where you pick and place objects using an LLM -- but the "robotics specific" stuff like spatial reasoning and action generation has a long way to go