Federico Chialvo (@federicochialvo)'s Twitter Profile
Federico Chialvo

@federicochialvo

Technical Product Manager @Atypical_AI, Founder @JoyfulMaths; formerly Google for Education, DreamBox Learning & Synapse School. Father of 3 (8, 12, 14)

ID: 15472890

Link: http://joyfulmathematics.com · Joined: 17-07-2008 20:01:12

2.2K Tweets

696 Followers

1.1K Following

Ethan Mollick (@emollick)'s Twitter Profile Photo

This paper is wild - a Stanford team shows the simplest way to make an open LLM into a reasoning model.

They used just 1,000 carefully curated reasoning examples & a trick where if the model tries to stop thinking, they append "Wait" to force it to continue. Near o1 at math.
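The "Wait" trick described above can be sketched in a few lines. This is a hypothetical illustration only: the `generate_until` method, the end-of-thinking delimiter string, and the loop structure are assumptions for the sake of the sketch, not the paper's actual code or API.

```python
# Sketch of "budget forcing": when the model tries to end its reasoning,
# suppress the end-of-thinking delimiter and append "Wait" so it keeps
# thinking. All names here are illustrative assumptions.

END_OF_THINKING = "</think>"

def generate_with_budget_forcing(model, prompt, min_forced_continues=2):
    """Force the model to keep reasoning by appending 'Wait' each time
    it tries to emit the end-of-thinking delimiter."""
    text = prompt
    forced = 0
    while True:
        # Generate until the model wants to stop thinking.
        chunk = model.generate_until(text, stop=END_OF_THINKING)
        text += chunk
        if forced < min_forced_continues:
            text += "Wait"            # suppress the stop, keep thinking
            forced += 1
        else:
            text += END_OF_THINKING   # let the model finish
            return text
```

The point is how little machinery is involved: no retraining loop, just string manipulation at decode time.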
Daniel Litt (@littmath)'s Twitter Profile Photo

Something I wish mathematicians conveyed more convincingly to our students is how it’s possible to get stuck on a problem for literally *years*, and then solve it.

Kaixuan Huang (@kaixuanhuang1)'s Twitter Profile Photo

Do LLMs have true generalizable mathematical reasoning capability or are they merely memorizing problem-solving skills? 🤨

We present MATH-Perturb, modified level-5 problems from the MATH dataset, to benchmark LLMs' generalizability to slightly perturbed problems.
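The core idea, in miniature: take a problem, nudge a constant so the surface form barely changes but the answer does, and check whether a solver adapts. This toy sketch is not the benchmark's actual pipeline; the template and solver are made up for illustration.

```python
# Toy illustration of the perturbation idea behind MATH-Perturb-style
# benchmarks: slightly change a constant in a templated problem and
# verify the correct answer shifts accordingly.

def make_problem(a, b, c):
    """Build 'solve a*x + b = c' and its ground-truth answer x = (c-b)/a."""
    statement = f"Solve {a}x + {b} = {c} for x."
    answer = (c - b) / a
    return statement, answer

original = make_problem(3, 4, 19)   # answer: x = 5
perturbed = make_problem(3, 4, 22)  # nearly identical text, answer: x = 6
```

A model that has memorized the original problem-answer pair will fail the perturbed variant even though the required reasoning is unchanged.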

Po-Shen Loh (@poshenloh)'s Twitter Profile Photo

Oh my goodness. GPT-o1 got a perfect score on my Carnegie Mellon University undergraduate #math exam, taking less than a minute to solve each problem. I freshly design non-standard problems for all of my exams, and they are open-book, open-notes. (Problems included below, with links to

Macmillan Learning (@macmillanlearn)'s Twitter Profile Photo

A test prep tool that combines research-backed strategies with smart AI support & is free for AP Biology students this exam season? Yes, please.

Check out the new collaboration between BFW Publishers & Atypical AI to help students prep for the big day: buff.ly/YoR07yH
Andrej Karpathy (@karpathy)'s Twitter Profile Photo

Noticing myself adopting a certain rhythm in AI-assisted coding (i.e. code I actually and professionally care about, in contrast to vibe code).

1. Stuff everything relevant into context (this can take a while in big projects. If the project is small enough just stuff everything

Quanta Magazine (@quantamagazine)'s Twitter Profile Photo

Researchers have devised a scheme for painting the edges of a graph that’s *almost* as speedy as possible. Steve Nadis reports: quantamagazine.org/the-fastest-wa…
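"Painting the edges of a graph" is edge coloring: assigning colors to edges so that no two edges sharing a vertex get the same color. As a baseline for comparison, here is a minimal greedy sketch — this is emphatically not the near-optimal fast algorithm the article reports, just the naive approach it improves on.

```python
# Greedy edge coloring: give each edge the smallest color not already
# used by another edge at either of its endpoints. Simple and correct,
# but it may use up to 2*max_degree - 1 colors and is not the fast
# scheme from the article.
from collections import defaultdict

def greedy_edge_coloring(edges):
    """Return a dict mapping each edge (u, v) to an integer color."""
    used = defaultdict(set)   # vertex -> set of colors on incident edges
    coloring = {}
    for u, v in edges:
        color = 0
        while color in used[u] or color in used[v]:
            color += 1
        coloring[(u, v)] = color
        used[u].add(color)
        used[v].add(color)
    return coloring
```

For reference, Vizing's theorem guarantees every simple graph can be edge-colored with at most max_degree + 1 colors; the hard part, and the subject of the article, is doing that fast.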

Tivadar Danka (@tivadardanka)'s Twitter Profile Photo

The single most undervalued fact of linear algebra: matrices are graphs, and graphs are matrices.

Encoding matrices as graphs is a cheat code, making complex behavior simple to study.

Let me show you how!
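The correspondence works like this: a square matrix is the weighted adjacency matrix of a directed graph, where a nonzero entry A[i][j] is an edge from node i to node j with weight A[i][j]. A minimal sketch of the encoding:

```python
# Matrices are graphs: read each nonzero entry A[i][j] as a directed
# edge i -> j with weight A[i][j].

def matrix_to_graph(A):
    """Return the edge list (i, j, weight) of the graph encoded by A."""
    return [(i, j, w)
            for i, row in enumerate(A)
            for j, w in enumerate(row)
            if w != 0]

A = [[0, 2, 0],
     [0, 0, 3],
     [1, 0, 0]]
edges = matrix_to_graph(A)   # [(0, 1, 2), (1, 2, 3), (2, 0, 1)]
```

Going the other way — building the matrix from an edge list — is just as direct, which is why graph-structural facts (cycles, connectivity, degrees) translate into statements about matrix powers and sparsity patterns.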
Andrej Karpathy (@karpathy)'s Twitter Profile Photo

The race for the LLM "cognitive core": a few-billion-parameter model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing. Its features are slowly crystallizing:

- Natively multimodal

Object Zero (@object_zero_)'s Twitter Profile Photo

This is a nuclide chart.

Number of Protons on Y axis
Number of Neutrons on X axis

The chart shows every known element and every known isotope.

The proton number determines the element, the neutron number determines the isotope.

The colour illustrates the half life of each
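The chart's coordinate scheme can be shown in miniature: a nuclide is a point (Z, N), where the proton number Z picks the element and the neutron number N picks the isotope. The hydrogen isotopes below use well-known values (protium and deuterium are stable; tritium's half-life is about 12.3 years); the dict-based lookup is just an illustration of the chart's layout.

```python
# A tiny slice of the nuclide chart: (protons, neutrons) -> nuclide.
# Z selects the element, N selects the isotope; None marks a stable
# nuclide, otherwise the half-life is given in years.

NUCLIDES = {
    (1, 0): ("hydrogen-1 (protium)", None),
    (1, 1): ("hydrogen-2 (deuterium)", None),
    (1, 2): ("hydrogen-3 (tritium)", 12.3),
}

def describe(Z, N):
    """Describe the nuclide at chart coordinates (Z, N)."""
    name, half_life = NUCLIDES[(Z, N)]
    stability = "stable" if half_life is None else f"half-life {half_life} years"
    return f"Z={Z}, N={N}: {name}, {stability}"
```

Moving horizontally on the chart (changing N) walks through isotopes of one element; moving vertically (changing Z) changes the element itself.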