Dani Valevski (@daniva) 's Twitter Profile
Dani Valevski

@daniva

ID: 26599592

Joined: 25-03-2009 21:41:29

93 Tweets

107 Followers

461 Following

Yoav HaCohen (@yoavhacohen) 's Twitter Profile Photo

LTX-Video Paper Release 🚀 1/ We’re thrilled to release our LTX-Video paper! 🎉 What makes LTX-Video so much faster than other video generation models? 🤔 The answer lies in our novel design choices, now explained in our just-released paper: arxiv.org/abs/2501.00103. A 🧵:

moab.arar (@ararmoab) 's Twitter Profile Photo

GameNGen has been accepted to #ICLR2025! 🎉 Huge congrats to my incredible co-authors Dani Valevski, Yaniv Leviathan, and Shlomi Fruchter—it was an amazing effort and such a fun collaboration! Learn more: gamengen.github.io

Hila Chefer (@hila_chefer) 's Twitter Profile Photo

VideoJAM is our new framework for improved motion generation from AI at Meta We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵

Jacob Austin (@jacobaustin132) 's Twitter Profile Photo

Making LLMs run efficiently can feel scary, but scaling isn’t magic, it’s math! We wanted to demystify the “systems view” of LLMs and wrote a little textbook called “How To Scale Your Model” which we’re releasing today. 1/n

Yuandong Tian (@tydsh) 's Twitter Profile Photo

Our new work Spectral Journey arxiv.org/abs/2502.08794 shows a surprising finding: when a 2-layer Transformer is trained to predict the shortest path of a given graph, 1️⃣ it first implicitly computes the spectral embedding for each edge, i.e. eigenvectors of the Normalized Graph Laplacian
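
For reference, here is a minimal sketch of the object the tweet refers to: a spectral embedding built from the eigenvectors of a graph's normalized Laplacian, attached to each edge. This is an illustration only, not code from the paper; the number of eigenvectors `k` and the edge representation (concatenating the two endpoints' node embeddings) are assumptions made for clarity.

```python
# Illustrative sketch (not the paper's code): spectral embeddings from the
# normalized graph Laplacian, plus a simple per-edge representation.
import numpy as np
import networkx as nx

def spectral_node_embedding(G: nx.Graph, k: int = 4) -> np.ndarray:
    """Eigenvectors of the normalized Laplacian for the k smallest eigenvalues."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    _, eigvecs = np.linalg.eigh(L)        # eigh returns eigenvalues in ascending order
    return eigvecs[:, :k]                 # shape: (num_nodes, k)

def edge_embeddings(G: nx.Graph, k: int = 4) -> dict:
    """Assumed edge representation: concatenate the endpoints' node embeddings."""
    node_emb = spectral_node_embedding(G, k)
    index = {node: i for i, node in enumerate(G.nodes())}
    return {(u, v): np.concatenate([node_emb[index[u]], node_emb[index[v]]])
            for u, v in G.edges()}

if __name__ == "__main__":
    G = nx.karate_club_graph()
    emb = edge_embeddings(G, k=4)
    (u, v), vec = next(iter(emb.items()))
    print(f"edge ({u}, {v}) -> embedding of shape {vec.shape}")
```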

Stanislav Fort (@stanislavfort) 's Twitter Profile Photo

The key insight: Previous attempts to make CLIP generate images produced noisy adversarial patterns 🌫️. We found a way to get interpretable generations by decomposing the optimization across multiple scales (1×1 to 224×224). All this on top of a frozen discriminative model 2/11

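For readers curious what "decomposing the optimization across multiple scales" can look like in practice, below is a rough sketch of multi-scale, CLIP-guided image optimization against a frozen model. It is not the authors' implementation: the open_clip model name, the prompt, the scale schedule, and all hyperparameters are placeholder assumptions.

```python
# Rough sketch (illustration only, not the authors' method): the image is a sum
# of learnable tensors at several resolutions, upsampled to 224x224 and scored
# by a frozen CLIP model against a text prompt.
import torch
import torch.nn.functional as F
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, _ = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
model = model.to(device).eval()
tokenizer = open_clip.get_tokenizer("ViT-B-32")

prompt = "a watercolor painting of a lighthouse"   # placeholder prompt
with torch.no_grad():
    text_feat = F.normalize(model.encode_text(tokenizer([prompt]).to(device)), dim=-1)

# One learnable component per scale, from a 1x1 global color up to full resolution.
scales = [1, 7, 28, 112, 224]                      # assumed scale schedule
params = [torch.zeros(1, 3, s, s, device=device, requires_grad=True) for s in scales]
opt = torch.optim.Adam(params, lr=0.05)

# CLIP's standard input normalization.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(300):
    # Render: sum all scales at 224x224, squash pixels into [0, 1].
    img = torch.sigmoid(sum(
        F.interpolate(p, size=224, mode="bilinear", align_corners=False)
        for p in params))
    img_feat = F.normalize(model.encode_image((img - mean) / std), dim=-1)
    loss = -(img_feat * text_feat).sum()           # maximize cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
```
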
Nauseam (@chadnauseam) 's Twitter Profile Photo

"A calculator app? Anyone could make that." Not true. A calculator should show you the result of the mathematical expression you entered. That's much, much harder than it sounds. What I'm about to tell you is the greatest calculator app development story ever told.

"A calculator app? Anyone could make that."

Not true.

A calculator should show you the result of the mathematical expression you entered. That's much, much harder than it sounds.

What I'm about to tell you is the greatest calculator app development story ever told.
Sakana AI (@sakanaailabs) 's Twitter Profile Photo

Introducing The AI CUDA Engineer: An agentic AI system that automates the production of highly optimized CUDA kernels. sakana.ai/ai-cuda-engine… The AI CUDA Engineer can produce highly optimized CUDA kernels, reaching 10-100x speedup over common machine learning operations in

Owain Evans (@owainevans_uk) 's Twitter Profile Photo

Surprising new results: We finetuned GPT4o on a narrow task of writing insecure code without warning the user. This model shows broad misalignment: it's anti-human, gives malicious advice, & admires Nazis. This is *emergent misalignment* & we cannot fully explain it 🧵

Jonathan Jacobi (@j0nathanj) 's Twitter Profile Photo

🚀 We're excited to share our brand-new paper! Introducing “Superscopes”—an effective new method to uncover hidden meanings from an LLM's thinking process! Superscopes amplifies subtle internal features in LLMs, revealing weak yet meaningful features that previous methods

Sam Altman (@sama) 's Twitter Profile Photo

we trained a new model that is good at creative writing (not sure yet how/when it will get released). this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right. PROMPT: Please write a metafictional literary short story

Ceyuan Yang (@ceyuany) 's Twitter Profile Photo

We propose Long Context Tuning (LCT) for scene-level video generation to bridge the gap between current single-shot generation and real-world narrative video productions. Homepage: guoyww.github.io/projects/long-… Report: arxiv.org/abs/2503.10589

fofr (@fofrai) 's Twitter Profile Photo

The full prompt for this is LONG: > An underwater scene stretches across the entire screen. Amidst the colorful reef, a small, rolled-up parchment map lays on the sea floor. Jerry, the brown mouse, swims calmly into the scene from the left, his large eyes wide with curiosity,

moab.arar (@ararmoab) 's Twitter Profile Photo

Cool benchmark! AI game agents are the next big thing. They require temporal and contextual understanding, from pixels. Congrats to the team! 🤩🤩 Let the game begin!

moab.arar (@ararmoab) 's Twitter Profile Photo

Heading to #ICLR2025 to present, you guessed it, GameNGen 🎮 If you're into video games, video models, world models, or just excited about generative models, come say hi! — with Dani Valevski Yaniv Leviathan Shlomi Fruchter Poster #83 - Friday, 25th (10:00-12:00)

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

"Chatting" with LLM feels like using an 80s computer terminal. The GUI hasn't been invented, yet but imo some properties of it can start to be predicted. 1 it will be visual (like GUIs of the past) because vision (pictures, charts, animations, not so much reading) is the 10-lane

"Chatting" with LLM feels like using an 80s computer terminal. The GUI hasn't been invented, yet but imo some properties of it can start to be predicted.

1 it will be visual (like GUIs of the past) because vision (pictures, charts, animations, not so much reading) is the 10-lane
Jonathan Jacobi (@j0nathanj) 's Twitter Profile Photo

Introducing Multiverse: the first AI-generated multiplayer game. Multiplayer was the missing piece in AI-generated worlds — now it’s here. Players can interact and shape a shared AI-simulated world, in real-time. Training and research cost < $1.5K. Run it on your own PC. We