Nora (@schottkey) 's Twitter Profile
Nora

@schottkey

ᶘ ᵒᴥᵒᶅ

ID: 15239656

Joined: 26-06-2008 02:52:56

438 Tweets

77 Followers

403 Following

Petar Veličković (@petarv_93) 's Twitter Profile Photo

📢 New course! Cats4AI🐱🤖

Learn category theory foundations from the lens of ML, grounded in concrete papers. Open to all!

Sign up: cats.for.ai

<a href="/andrewdudzik/">Andrew Dudzik</a> @bgavran3 @_joaogui1 <a href="/pimdehaan/">Pim de Haan</a> 
+ fantastic speakers <a href="/math3ma/">Tai-Danae Bradley</a> <a href="/CollapsingPanda/">Pietro Vertechi</a> <a href="/david_i_spivak/">David Spivak</a> <a href="/TacoCohen/">Taco Cohen</a>
Pradip Nichite (@pradip_nichite) 's Twitter Profile Photo

NLP Roadmap 2022 with free resources. This is what you need to build real-world NLP projects and a good foundation. A thread 🧵👇

Zachary Nado (@zacharynado) 's Twitter Profile Photo

Excited to announce our Deep Learning Tuning Playbook, a writeup of tips & tricks we employ when designing DL experiments. We use these techniques to deploy numerous large-scale model improvements and hope formalizing them helps the community do the same! github.com/google-researc…

Natalia Perez-Campanero (@nperezcampanero) 's Twitter Profile Photo

It was a pleasure working with our fellows <a href="/alexandraabbas/">Alexa Abbas ⛵️</a>, Helyos and <a href="/schottkey/">Nora</a> on <a href="/Apartresearch/">Apart Research</a> work investigating Latent Adversarial Training (LAT) as a safety fine-tuning method.

The study compares LAT to other methods and analyzes its impact on refusal behavior encoding.
Linnea Evanson, PhD (@evansonlinnea) 's Twitter Profile Photo

We’re very pleased to release our latest study ‘Emergence of Language in the Developing Brain’

Paper: tinyurl.com/5h49xpjv
Blog: tinyurl.com/mrtdk8p2

The first systematic investigation of how the neural representations of language evolve as the brain develops.

Jackson Atkins (@jacksonatkinsx) 's Twitter Profile Photo

My brain broke when I read this paper.

A tiny 7 million parameter model just beat DeepSeek-R1, Gemini 2.5 Pro, and o3-mini at reasoning on both ARC-AGI 1 and ARC-AGI 2.

It's called Tiny Recursive Model (TRM) from Samsung.

How can a model 10,000x smaller be smarter?

Here's how: