Łukasz Dębowski (@lukaszjdebowski) 's Twitter Profile
Łukasz Dębowski

@lukaszjdebowski

Associate professor @ IPI PAN: information theory, stochastic processes, statistical language models, quantitative linguistics

@[email protected]

ID: 777224906937344000

Link: http://home.ipipan.waw.pl/l.debowski/ · Joined: 17-09-2016 19:17:12

1.1K Tweets

510 Followers

829 Following

Ziming Liu (@zimingliu11) 's Twitter Profile Photo

Interested in the science of language models but tired of neural scaling laws? Here's a new perspective: our new paper presents neural thermodynamic laws -- thermodynamic concepts and laws naturally emerge in language model training!

AI is naturAl, not Artificial, after all.
Carlos E. Perez (@intuitmachine) 's Twitter Profile Photo

"The emergence of Large Language Models has unexpectedly revealed a profound truth hiding in plain sight: writing has always been more than representation. It has always been operational. It has always been, in a very real sense, code." medium.com/intuitionmachi…

Therfer (@therfer) 's Twitter Profile Photo

Very interesting work by Jiří Milička and colleagues:

Humans can learn to detect AI-generated texts, or at least learn when they can't  🤖👇

arxiv.org/abs/2505.01877

But is automatic detection of AI-generated texts then possible? 🤔
Jiří Milička (@jirimilicka) 's Twitter Profile Photo

You wake up in a small, dimly lit room, only you, a pencil, six walls and a narrow slit in one of them. Then a paper comes through with some stupid question written on it. What do you write on the paper to convince somebody outside that you are a conscious being?

Lean (@leanprover) 's Twitter Profile Photo

🔥 Google DeepMind just dropped their "formal conjectures" project - formalizing statements of math's biggest unsolved mysteries in #LeanLang and #Mathlib!

This Google-backed project is a HUGE step toward developing "a much richer dataset of formalized conjectures", valuable
Ravid Shwartz Ziv (@ziv_ravid) 's Twitter Profile Photo

You know all those arguments that LLMs think like humans? Turns out it's not true.

🧠 In our paper "From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning" we test it by checking if LLMs form concepts the same way humans do. With Yann LeCun, Chen Shani, Dan Jurafsky
Sakana AI (@sakanaailabs) 's Twitter Profile Photo

Introducing The Darwin Gödel Machine: AI that improves itself by rewriting its own code

sakana.ai/dgm

The Darwin Gödel Machine (DGM) is a self-improving agent that can modify its own code. Inspired by evolution, we maintain an expanding lineage of agent variants,
Ali Behrouz (@behrouz_ali) 's Twitter Profile Photo

What makes attention the critical component for most advances in LLMs and what holds back long-term memory modules (RNNs)? Can we strictly generalize Transformers?

Presenting Atlas (A powerful Titan): a new architecture with long-term in-context memory that learns how to
𝚐𝔪𝟾𝚡𝚡𝟾 (@gm8xx8) 's Twitter Profile Photo

DeepTheorem: Advancing LLM Reasoning for Theorem Proving Through Natural Language and Reinforcement Learning

No Lean/Coq… just LaTeX-style proofs. Trains LLMs on 121K IMO-level problems with:

- Dataset: 121K theorem–proof pairs, decontaminated, with difficulty/topic labels
-
adam morgan (same handle at oo-blay eye-skay) (@adumbmoron) 's Twitter Profile Photo

🧠🗞️🗣️Finally out! Paper with a way-too-long name for social media. How does the brain turn words into sentences? We tracked words in participants' brains while they produced sentences, and found some unexpectedly neat patterns. 🧵1/9
rdcu.be/epA1J in Communications Psychology
Charles Goddard (@chargoddard) 's Twitter Profile Photo

🤯 MIND-BLOWN! A new paper just SHATTERED everything we thought we knew about AI reasoning!

This is paradigm-shifting. A MUST-READ. Full breakdown below 👇
🧵 1/23
Morph (@morph_labs) 's Twitter Profile Photo

We are excited to announce Trinity, an autoformalization system for verified superintelligence that we have developed at Morph. We have used it to automatically formalize in Lean a classical result of de Bruijn that the abc conjecture is true almost always.

Benjamin Todd (@ben_j_todd) 's Twitter Profile Photo

Why can AIs code for 1h but not 10h?

A simple explanation: if there's a 10% chance of error per 10min step (say), the success rate is:

1h: 53%
4h: 8%
10h: 0.2%

Toby Ord has tested this 'constant error rate' theory and shown it's a good fit for the data

chance of
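The arithmetic behind the constant-error-rate model above can be sketched in a few lines. This is a minimal illustration of the tweet's calculation, not code from Toby Ord's analysis; the function name and parameters are hypothetical. A task lasting `hours` is split into 10-minute steps, each failing independently with the same probability, so the success rate is just a geometric decay.

```python
def success_rate(hours, p_error_per_step=0.10, step_minutes=10):
    """Probability of finishing a task of length `hours` when each
    fixed-length step independently fails with p_error_per_step."""
    steps = int(hours * 60 / step_minutes)
    return (1 - p_error_per_step) ** steps

for h in (1, 4, 10):
    print(f"{h}h: {success_rate(h):.1%}")
# 1h: 53.1%  (0.9^6)
# 4h: 8.0%   (0.9^24)
# 10h: 0.2%  (0.9^60)
```

Note the steep falloff: under this model, halving the per-step error rate roughly squares the horizon at which a given success rate is reached, which is why small reliability gains translate into much longer feasible tasks.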
Rohan Paul (@rohanpaul_ai) 's Twitter Profile Photo

It’s a hefty 206-page research paper, and the findings are concerning.

"LLM users consistently underperformed at neural, linguistic, and behavioral levels"

This study finds LLM dependence weakens the writer’s own neural and linguistic fingerprints. 🤔🤔

Relying only on EEG,