Gashon Hussein (@gashonhussein)'s Twitter Profile
Gashon Hussein

@gashonhussein

stanford | ghussein.org

ID: 1500106807192145921

Link: https://ghussein.org · Joined: 05-03-2022 13:52:00

60 Tweets

519 Followers

361 Following

Gashon Hussein (@gashonhussein)'s Twitter Profile Photo

Built a network scanner to scan 90+ /20 Stanford subnet blocks for devices serving web traffic on common ports. Unfortunately, network admins reached out about the excessive traffic. Favorite discovery was a PhD student's web app that only streamed Taylor Swift songs 24/7 lol

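For reference, the core of such a scanner — attempt a TCP connect on each common port across every host in a /20 — fits in a few lines. This is a minimal sketch with invented helper names (`hosts_in_subnet`, `scan_host`, `COMMON_PORTS` are not from the tweet); a real sweep of 90+ /20 blocks would need concurrency and, ideally, the network admins' blessing:

```python
import ipaddress
import socket

COMMON_PORTS = [80, 443, 8000, 8080]  # ports commonly serving web traffic

def hosts_in_subnet(cidr):
    """All usable host addresses in a CIDR block (a /20 has 4094)."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]

def scan_host(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

Sweeping `scan_host(h, COMMON_PORTS)` over `hosts_in_subnet(...)` for 90+ /20 blocks touches roughly 370,000 hosts — exactly the traffic volume that gets a scanner noticed.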
Gashon Hussein (@gashonhussein)'s Twitter Profile Photo

Feel like the traditional approach was to build out the fundamentals of your business by ignoring big launches, iterating through assumptions, and testing your way to PMF. That approach feels outdated in the current landscape.

Gashon Hussein (@gashonhussein)'s Twitter Profile Photo

Cool to see modern SWE agents taking systems-oriented approaches to reducing large search spaces with different fault-localization strategies. Extremely large state+action spaces seemed to be the greatest choke point on the critical path half a year ago

Xiaolong Wang (@xiaolonw)'s Twitter Profile Photo

Test-Time Training (TTT) is now on Video! And not just a 5-second video. We can generate a full 1-min video! TTT module is an RNN module that provides an explicit and efficient memory mechanism. It models the hidden state of an RNN with a machine learning model, which is updated
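The TTT mechanism described above — a recurrent layer whose hidden state is itself the weights of a small model, updated by a gradient step on a self-supervised loss at each token — can be sketched in a few lines of NumPy. This is a toy illustration under assumed details (the `ttt_linear` name, the noise-corruption loss, and the learning rate are inventions for this sketch, not the paper's architecture):

```python
import numpy as np

def ttt_linear(xs, dim, lr=0.1, rng=None):
    """Toy TTT layer: the 'hidden state' is the weight matrix W of a tiny
    linear model, updated by one gradient step per token on a
    self-supervised reconstruction loss."""
    rng = np.random.default_rng(0) if rng is None else rng
    W = 0.1 * np.eye(dim)                # hidden state = model weights
    outputs = []
    for x in xs:
        x_noisy = x + 0.1 * rng.standard_normal(dim)  # self-supervised view
        err = W @ x_noisy - x                         # reconstruction error
        grad = np.outer(err, x_noisy)                 # d/dW of 0.5*||err||^2
        W = W - lr * grad                             # update the hidden state
        outputs.append(W @ x)                         # read out with updated W
    return np.stack(outputs), W
```

The key point the tweet makes is visible here: the update rule is gradient descent, so the "memory" has the capacity of whatever model parameterizes `W`, rather than a fixed-size RNN vector.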

Chubby♨️ (@kimmonismus)'s Twitter Profile Photo

AI (using TTT) now creates one minute long videos with one prompt! Researchers have developed a method that can be used to create one-minute videos with particularly fluid movements and high temporal consistency. To do this, they use test-time training (TTT) and integrate

Gashon Hussein (@gashonhussein)'s Twitter Profile Photo

One of the neat side effects of initializing from a pre-trained Transformer is that we can generate videos of locations that weren’t in the original Tom and Jerry cartoons. “Around the World” - A 30-second video from earlier in training.

Jerry Zhou (@jzhou891)'s Twitter Profile Photo

I built Orchestrator with James Zhou , a proof of concept for how we envision the future of software engineering. In the future, every engineer will manage swarms of AI engineers that execute their plans in parallel. Orchestrator takes an input prompt and creates a plan that

soham (@sohamgovande)'s Twitter Profile Photo

introducing chipmunk—a training-free algorithm making ai video generation 3.7x & image gen 1.6x faster! ⚡️ our kernels for column-sparse attention are 9.3x faster than FlashAttention-3 and column-sparse GEMM is 2.5x faster vs. cuBLAS a thread on the GPU kernel optimizations 🧵
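Conceptually, column-sparse attention keeps only a subset of key/value "columns" per attention call. A minimal NumPy sketch of that idea (illustrative only — the function name and the mean-score selection heuristic are assumptions for this sketch; the actual chipmunk kernels fuse the sparsity into GPU attention/GEMM kernels):

```python
import numpy as np

def column_sparse_attention(Q, K, V, keep_frac=0.25):
    """Score all keys, keep only the top fraction of key columns,
    then run dense softmax attention over that subset."""
    n_keys = K.shape[0]
    k = max(1, int(n_keys * keep_frac))
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    importance = scores.mean(axis=0)          # mean score each key receives
    cols = np.argsort(importance)[-k:]        # indices of kept key columns
    s = scores[:, cols]
    s = s - s.max(axis=1, keepdims=True)      # numerically stable softmax
    p = np.exp(s)
    p = p / p.sum(axis=1, keepdims=True)
    return p @ V[cols]                        # attend over kept columns only
```

With `keep_frac=1.0` this reduces to ordinary dense attention; shrinking `keep_frac` trades a small accuracy loss for proportionally less attention and GEMM work, which is where the quoted speedups come from.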

Physical Intelligence (@physical_int)'s Twitter Profile Photo

We got a robot to clean up homes that were never seen in its training data! Our new model, π-0.5, aims to tackle open-world generalization. We took our robot into homes that were not in the training data and asked it to clean kitchens and bedrooms. More below⤵️

Sunflower Capital (@seedtosunflower)'s Twitter Profile Photo

We’re excited to announce Sunflower Capital Funds I and II. Sunflower is a $250m fund that partners at the earliest stage with companies building foundations for modern enterprises, critical industries, and the physical world.

Sergey Levine (@svlevine)'s Twitter Profile Photo

Fun project at PI: knowledge insulation for VLAs. We figured out how to train VLAs with continuous actions much more effectively by insulating the VLM and training it with discrete actions, while an action expert learns on top. 5-7x faster, and importantly way better language following

Physical Intelligence (@physical_int) 's Twitter Profile Photo

Our models need to run in real time on real robots, but inference with big VLAs takes a long time. We developed Real-Time Action Chunking (RTC) to enable real-time inference with flow matching for the π0 and π0.5 VLAs! More in the thread👇
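The scheduling idea behind real-time chunking — keep the robot executing the current action chunk while the next chunk is being inferred in the background — can be sketched with a thread. This is a toy sketch with invented names (`infer_chunk`, `execute_step`), not the π0/π0.5 inference stack or the flow-matching model itself:

```python
import threading

def real_time_chunking(infer_chunk, execute_step, n_chunks, chunk_len):
    """Execute action chunks while inferring the next one concurrently,
    so slow model inference never stalls the control loop."""
    executed = []
    chunk = infer_chunk(0)                   # prime the first chunk
    for i in range(n_chunks):
        nxt = {}
        t = threading.Thread(
            target=lambda: nxt.setdefault("c", infer_chunk(i + 1)))
        t.start()                            # overlap inference with execution
        for action in chunk[:chunk_len]:
            executed.append(execute_step(action))
        t.join()                             # ideally finished before this point
        chunk = nxt["c"]
    return executed
```

The real system additionally has to blend the new chunk smoothly with the partially executed old one; this sketch only shows the inference/execution overlap.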