Felipe Rodríguez (@piperod_)'s Twitter Profile
Felipe Rodríguez

@piperod_

ID: 343104898

Joined: 27-07-2011 02:13:27

111 Tweets

53 Followers

166 Following

Patrick Mineault (@patrickmineault)

Excited to release what we’ve been working on at Amaranth Foundation, our latest whitepaper, NeuroAI for AI safety! A detailed, ambitious roadmap for how neuroscience research can help build safer AI systems while accelerating both virtual neuroscience and neurotech. 1/N

Alex Patrascu (@maxescu)

I'm a bit confused... Google's Veo 2 is the best video model in text-to-video. But on the other hand... The newly released image-to-video for Veo 2 (on Freepik and @FAL) feels underwhelming. Input images generated with Runway Frames. Here it is compared to Luma AI

talia konkle (@talia_konkle)

I think this method is quite important for interpretability research, and for understanding learned representations. Hats off to Thomas Fel and team!

Poonam Soni (@codebypoonam)

AI can now generate high-quality music, and it sounds insanely good

NotaGen just dropped, and it's pre-trained on 1.6M pieces of music.

7 WILD examples so far
Remi Cadene (@remicadene)

A banger just got released 💥 Here is a snapshot of L2D, the biggest self-driving dataset by far!
- 90 terabytes of data
- 5,000 hours of driving
- 6 surrounding HD cameras
- OPENLY AVAILABLE
Train your car to drive like Tesla at home 🧵 More details in thread

Remi Cadene (@remicadene)

Meet SO-101, the next-gen robot arm for all, by Hugging Face 🤗 Enables smooth takeover to boost AI capabilities, faster assembly (20 min), same affordable price ($100 per arm) 🤯 Get yours today! Links in thread below 👇

Ilir Aliu - eu/acc (@iliraliu_)

A robot hand that grasps over 500 totally new objects without fail? Zero-shot, single-view & super reliable ⬇️ + Paper. Grasping random objects is hard for robots, especially when shapes, weights, and materials vary. RobustDexGrasp solves this with a smart new way of seeing and

Abdullah Hamdi (@eng_hemdi)

Last week, our Triangle splatting paper was quietly released, and since then it has ignited fierce debate in the tech community! It was trending on Hacker News! Today we released the code! A deep dive into the epic “comeback” of Triangles to the throne of 3D 🧵 1/n

Nick Jiang @ ICLR (@nickhjiang)

Vision transformers have high-norm outliers that hurt performance and distort attention. While prior work removed them by retraining with “register” tokens, we find the mechanism behind outliers and make registers at ✨test-time✨—giving clean features and better performance! 🧵

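The high-norm outliers described above are easy to spot in per-token features. A minimal sketch, assuming token features from some ViT layer; the median-plus-k-std threshold here is a hypothetical heuristic for illustration, not the paper's test-time register method:

```python
import numpy as np

def find_outlier_tokens(feats, k=3.0):
    """Flag tokens whose L2 norm sits far above the median token norm.
    feats: (num_tokens, dim) array of per-token features."""
    norms = np.linalg.norm(feats, axis=-1)
    thresh = np.median(norms) + k * norms.std()
    return np.where(norms > thresh)[0]

rng = np.random.default_rng(0)
feats = rng.normal(size=(197, 64))  # 1 CLS + 196 patch tokens, as in ViT-B/16
feats[42] *= 20.0                   # inject one high-norm outlier token
print(find_outlier_tokens(feats))   # -> [42]
```

In real ViT features the outliers appear in low-information background patches, which is what makes rerouting them into register slots attractive.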
Andy Keller (@t_andy_keller)

Why do video models handle motion so poorly? It might be lack of motion equivariance. Very excited to introduce: Flow Equivariant RNNs (FERNNs), the first sequence models to respect symmetries over time. Paper: arxiv.org/abs/2507.14793 Blog: kempnerinstitute.harvard.edu/research/deepe… 1/🧵
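The "equivariance" being generalized here is the familiar property of convolutions: shifting the input and then filtering gives the same result as filtering and then shifting. A minimal sketch of that static 1-D case (not the FERNN architecture itself, which extends the idea to flows over time):

```python
import numpy as np

def circ_conv(x, w):
    """Circular 1-D convolution: translation-equivariant by construction."""
    n = len(x)
    return np.array([sum(w[j] * x[(i - j) % n] for j in range(len(w)))
                     for i in range(n)])

x = np.arange(8.0)
w = np.array([1.0, -1.0, 0.5])
shift_then_filter = circ_conv(np.roll(x, 2), w)
filter_then_shift = np.roll(circ_conv(x, w), 2)
print(np.allclose(shift_then_filter, filter_then_shift))  # True
```

A generic RNN has no such guarantee for motion: a moving input does not in general produce a correspondingly "moving" hidden state, which is the gap flow equivariance targets.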

Jürgen Schmidhuber (@schmidhuberai)

Who invented convolutional neural networks (CNNs)? 1969: Fukushima had CNN-relevant ReLUs [2]. 1979: Fukushima had the basic CNN architecture with convolution layers and downsampling layers [1]. Compute was 100 x more costly than in 1989, and a billion x more costly than

Sabine Muzellec (@sabinemuzellec)

Very proud of our new preprint introducing reverse predictivity — a two-way test of AI–brain alignment. We find a striking asymmetry: models & brains don’t map to each other equally, while brain-to-brain mappings are symmetric 🧠🤖
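A toy numerical illustration of why predictivity can be asymmetric (a sketch of the general idea, not the preprint's actual metric or data): if one representation captures only part of another, a linear map fits well in one direction and poorly in the reverse.

```python
import numpy as np

def mapping_r2(src, tgt):
    """R^2 of the best least-squares linear map from src features to tgt features."""
    w, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    resid = tgt - src @ w
    ss_res = (resid ** 2).sum()
    ss_tot = ((tgt - tgt.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
brain = rng.normal(size=(200, 20))                      # hypothetical "brain" features
model = brain[:, :5] + 0.1 * rng.normal(size=(200, 5))  # "model" captures 5 of 20 dims
print(mapping_r2(brain, model) > mapping_r2(model, brain))  # True: brain->model fits better
```

Here brain-to-model regression is nearly perfect while model-to-brain misses the 15 dimensions the model never encoded, so the two directions disagree.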