Christian Internò (@chrisinterno)'s Twitter Profile
Christian Internò

@chrisinterno

AI Scientist at @unibielefeld (@HammerLabML) and @Honda Research Institute EU, Visiting Researcher at @CSHL in the Department of Computational Neuroscience.

ID: 1838837077745766402

Link: https://github.com/ChristianInterno
Joined: 25-09-2024 07:05:20

9 Tweets

41 Followers

559 Following

Riccardo Cadei (@riccardocadeii):

Foundational models’ predictions (🦙♊️🦖) can propagate biases in causal downstream tasks, posing a significant risk in AI-supported scientific discovery (👩‍🔬🔎).

Our solution: Causal Lifting of Neural Representation 🪜
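The risk is easy to see in a toy simulation. Below is a minimal sketch (my own illustration, not the paper's method) in which a hypothetical pretrained predictor is slightly miscalibrated on the treated group; using its predictions in place of true outcomes shifts the estimated average treatment effect. The 0.3 bias term and all names are assumptions for illustration.

```python
# Toy illustration (not the paper's method): a biased foundation-model
# predictor corrupts a downstream average-treatment-effect (ATE) estimate.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
t = rng.integers(0, 2, n)            # randomized binary treatment
y = 1.0 * t + rng.normal(size=n)     # true outcome; true ATE = 1.0

# Suppose y is expensive to measure, so a pretrained model's prediction is
# used instead. Hypothetically, the model is miscalibrated on treated units:
y_hat = y + 0.3 * t + 0.1 * rng.normal(size=n)

ate_true = y[t == 1].mean() - y[t == 0].mean()
ate_hat = y_hat[t == 1].mean() - y_hat[t == 0].mean()
print(f"ATE from true outcomes:     {ate_true:.3f}")  # ~1.0
print(f"ATE from model predictions: {ate_hat:.3f}")   # ~1.3, biased
```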
David Klindt (@klindt_david):

🧵 New paper! We explore sparse coding, superposition, and the Linear Representation Hypothesis (LRH) through identifiability theory, compressed sensing, and interpretability research. If neural representations intrigue you, read on! 🤓 arxiv.org/abs/2503.01824
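For readers new to these terms, here is a minimal sketch (my own, not from the paper) of superposition and sparse recovery: more features than ambient dimensions are stored through a random overcomplete dictionary, and a k-sparse code is recovered with ISTA, the standard compressed-sensing iteration. The dimensions and hyperparameters are illustrative assumptions.

```python
# Superposition: n sparse features stored in d < n dimensions via a random
# dictionary; compressed sensing (ISTA) recovers which features are active.
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 64, 256, 5                       # ambient dim < number of features
D = rng.normal(size=(d, n)) / np.sqrt(d)   # random overcomplete dictionary

z_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
z_true[support] = rng.choice([-1.0, 1.0], size=k) * (1 + rng.random(k))
x = D @ z_true                             # features stored in superposition

# ISTA: gradient + soft threshold for min_z 0.5||x - Dz||^2 + lam*||z||_1
lam = 0.05
step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant
z = np.zeros(n)
for _ in range(500):
    z = z + step * (D.T @ (x - D @ z))                      # gradient step
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold

print("recovered support:", np.sort(np.nonzero(np.abs(z) > 1e-2)[0]))
print("true support:     ", np.sort(support))
```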

Riccardo Cadei (@riccardocadeii):

A true pleasure attending #MlssSenegal2025 in M’bour — a fun and intellectually vibrant gathering of world-class speakers and exceptionally driven students.

Nina Miolane 🦋 @ninamiolane.bsky.social (@ninamiolane):

You're into neuroscience and AI? 🧠 🤖 You're working on the mathematics that drives biological and artificial neural networks? We want to hear from you! Submit to NeurReps 2025 at the NeurIPS Conference!
📅 Deadline: Aug 22
📄 Two tracks: 9p proceedings & 4p extended abstracts

David Klindt (@klindt_david):

**How can we tell if a video is AI generated?** 👉 new paper: arxiv.org/abs/2507.00583

As scientists, we want to know if video models actually learned the laws of physics 🌍 As users, we want to make sure that we can trust and know when something is real 🏛️

David Klindt (@klindt_david):

Very proud of our summer intern Isabela Camacho (@isacama_phys) giving her final presentation 🧠

–and for everyone in the group who made this a great summer with awesome science 🧬 and even better vibes 🌞

#cshl #neuroai
Riccardo Cadei (@riccardocadeii):

The Narcissus Hypothesis:
Recursive training on semi-synthetic corpora enforcing human alignment induces a Social Desirability Bias: world-models (Narcissus) aim to please rather than represent, polluting data lakes and charming us (Echo) into hanging on their every word.
Riccardo Cadei (@riccardocadeii):

Traditional trial analyses rely on effect hypotheses (Matthew Effect), i.e., what does a treatment affect?

We propose a novel empiricist approach to the problem, generating effect hypotheses without supervision.

How? Disentangling effects within interpretable representations.
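As a rough sketch of the general idea (my stand-in scoring rule, not the authors' algorithm): given a learned, disentangled representation of trial units, rank latent dimensions by how strongly the treatment shifts them, and surface the top dimensions as candidate effect hypotheses. The standardized-mean-difference score and all names below are assumptions.

```python
# Rank latent dimensions of a (disentangled) representation by treatment
# shift; strongly shifted dimensions become unsupervised effect hypotheses.
import numpy as np

def effect_hypotheses(z: np.ndarray, t: np.ndarray, top_k: int = 3):
    """z: (n, latent_dim) unit representations; t: (n,) 0/1 treatment flags."""
    z1, z0 = z[t == 1], z[t == 0]
    pooled_sd = np.sqrt(0.5 * (z1.var(axis=0) + z0.var(axis=0))) + 1e-8
    smd = (z1.mean(axis=0) - z0.mean(axis=0)) / pooled_sd  # standardized diff
    return np.argsort(-np.abs(smd))[:top_k], smd

# Toy check: the treatment shifts only latent dimension 2.
rng = np.random.default_rng(0)
t = rng.integers(0, 2, 5_000)
z = rng.normal(size=(5_000, 8))
z[:, 2] += 0.8 * t
dims, smd = effect_hypotheses(z, t)
print("candidate effect dimensions:", dims)  # dimension 2 ranks first
```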
Judea Pearl (@yudapearl):

Your paper sounds very interesting in its scope, but I must confess that I cannot appreciate its findings, primarily because you focus on the methods rather than the input-output problems. What is given, and what kind of question are you trying to answer? I suggest you write an…

Riccardo Cadei (@riccardocadeii):

Judea Pearl Treatment Effect estimation requires priors defining candidate effects. Without priors there are no ‘computable causal claims’, i.e., what is the treatment affecting, in the sense of maximal causal abstraction with semantic interpretation? Example: what is the effect of a…

Riccardo Cadei (@riccardocadeii):

Boris Sobolev Similarly, AI-powered experiments raise new questions, challenging epistemic robustness. If the premises are wrong, e.g., biased measurements of the world, causal reasoning is insufficient, even misleading. It requires climbing Judea Pearl's ladder from the very bottom, where…

Judea Pearl (@yudapearl):

Our weekly harvest of CI papers has turned monthly, and now comes to us in 5 packages, laden with new results and new thoughts:
ucla.in/475t9CS
ucla.in/4hlbt9K
ucla.in/48qkYlE
ucla.in/4nYMaME
ucla.in/3Kl8UIt
Select, Read, Enjoy, React.

David Klindt (@klindt_david):

That's it. If you train SAEs, try adding whitening. It might work better 👍 Awesome job Ashwin Saraswatula (watch out, he is on the grad school market 🌟) [11/11]
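The thread only says to whiten; the concrete recipe below is a minimal sketch under my own assumptions: ZCA-whiten the activation matrix, then fit a vanilla ReLU sparse autoencoder with an L1 penalty. The ZCA choice, dimensions, and all hyperparameters are illustrative, not from the thread.

```python
# Sketch: ZCA-whiten activations, then train a plain L1 sparse autoencoder.
import torch

def zca_whiten(acts: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """acts: (n, d) activation matrix; returns ZCA-whitened activations."""
    x = acts - acts.mean(0, keepdim=True)
    cov = x.T @ x / (x.shape[0] - 1)
    evals, evecs = torch.linalg.eigh(cov)
    w = evecs @ torch.diag((evals + eps).rsqrt()) @ evecs.T  # ZCA matrix
    return x @ w

class SparseAutoencoder(torch.nn.Module):
    def __init__(self, d: int, n_latents: int):
        super().__init__()
        self.enc = torch.nn.Linear(d, n_latents)
        self.dec = torch.nn.Linear(n_latents, d)

    def forward(self, x):
        z = torch.relu(self.enc(x))
        return self.dec(z), z

d, n_latents, l1 = 512, 4096, 1e-3
acts = torch.randn(10_000, d)      # stand-in for real model activations
x = zca_whiten(acts)               # the suggestion: whiten before training
sae = SparseAutoencoder(d, n_latents)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
for _ in range(100):               # toy full-batch training loop
    x_hat, z = sae(x)
    loss = ((x_hat - x) ** 2).mean() + l1 * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```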