Andres Algaba (@andresalgaba1) 's Twitter Profile
Andres Algaba

@andresalgaba1

@FWOVlaanderen Postdoctoral Researcher in AI, Coordinator in Generative AI, and Guest Professor at @VUBrussel and @DataLabBE | Member @JongeAcademie

ID: 1455086332334706688

Link: https://www.andresalgaba.com/ · Joined: 01-11-2021 08:16:52

157 Tweets

96 Followers

590 Following

Data Analytics Lab (@datalabbe) 's Twitter Profile Photo

Check out the talk of our PhD student Floriano Tori on “The Effectiveness of Curvature-Based Rewiring and the Role of Hyperparameters in GNNs Revisited” at 18.30 CET streamed here: youtube.com/@learningongra… Learning on Graphs Conference 2025

GitHub Projects Community (@githubprojects) 's Twitter Profile Photo

"I want to start Open Source but don't know how" Stop overthinking. Start here: • Fix typos in documentation • Add tests to existing projects • Report detailed bugs • Help with translations The best contributors started exactly where you are.

Data Analytics Lab (@datalabbe) 's Twitter Profile Photo

Join our team at the Data Analytics Lab! We're excited to announce a new job opening: a Postdoctoral Fellow in Computational (Social) Science and AI. Ideal for researchers motivated to innovate and work in a highly interdisciplinary environment. Deadline Jan 19. jobs.vub.be/job/Elsene-Pos…

Vincent Ginis (@vincentginis) 's Twitter Profile Photo

It is 2025, and I vaguely remember the time when it was still possible to come up with questions that could baffle state-of-the-art LLMs. Of course, I’m joking. I don’t really remember. When A.I. Passes This Test, Look Out nytimes.com/2025/01/23/tec…

Vincent Ginis (@vincentginis) 's Twitter Profile Photo

If humanity goes down, let it at least be with poetry—not with 'OpenAI o5 (high),' but with a name the people chose. Something dignified. Something noble. Like Chatty McChatface.

Vincent Ginis (@vincentginis) 's Twitter Profile Photo

When Don’t Look Up came out, I imagined how the metaphor would be overused to the point of tedium in opinion pieces. I looked down on my future self. Is there still anyone concerned about the dangers of AI? standaard.be/cnt/dmf2025021…

Marthe Ballon (@martheballon) 's Twitter Profile Photo

LMs are getting really good at reasoning, but the mechanisms behind it are poorly understood. In our recent paper, we investigated SOTA models and found that 'Thinking harder ≠ thinking longer'! Joint work with Andres Algaba and Vincent Ginis. Insights from our research (a thread):

Vincent Ginis (@vincentginis) 's Twitter Profile Photo

Understanding how LLMs reason might be one of the most important challenges of our time. We analyzed OpenAI models to explore how reasoning length affects performance. Excited to take these small first steps with brilliant colleagues Marthe Ballon and Andres Algaba!

Ole Peters (@ole_b_peters) 's Twitter Profile Photo

1/2 New blog post by Arne Vanhoyweghen about the Brussels Experiment. Put quantitatively trained people under time pressure, and force them to behave more intuitively. Result: they behave less like expected-value maximizers and more like long-term growth optimizers, EE-style.

Demis Hassabis (@demishassabis) 's Twitter Profile Photo

Hypothesis generation and testing is a critical capability for AGI imo. Super excited about our AI co-scientist and other AI for Science work which are important steps towards that. We're on the cusp of an incredible new golden age of AI accelerated scientific discovery.

Ethan Mollick (@emollick) 's Twitter Profile Photo

Famously, GPT-4o makes up citations to papers (though error rates appear far lower for citations generated by Deep Research models). How often does it do that? This clever large-scale study gives us a clear picture. The AI is also biased towards shorter titles & famous papers.

Data Analytics Lab (@datalabbe) 's Twitter Profile Photo

New paper: Lexical Hints of Accuracy in LLM Reasoning Chains. We ask: can words in an LLM's reasoning trace tell us when it's wrong? - CoT length predicts accuracy on easier tasks - Lexical cues (guess, stuck, hard) predict errors regardless of task difficulty arxiv.org/abs/2508.15842

Rohan Paul (@rohanpaul_ai) 's Twitter Profile Photo

The paper shows that simple words in chain of thought text can reliably flag wrong LLM answers. When the model’s reasoning text (the chain of thought) includes words like “guess” or “stuck”, the chance that the final answer is correct goes down a lot, by up to 40%. So put

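The lexical-cue idea above is easy to prototype. Below is a minimal, illustrative sketch (not the paper's actual code): it flags a chain-of-thought trace as likely unreliable if it contains hedging words. The cue list ("guess", "stuck", "hard") comes from the tweet's summary; the whole-word matching and boolean flag are assumptions for illustration only.

```python
import re

# Hedging cues mentioned in the tweet; a real detector would be
# calibrated on labeled traces rather than use a hand-picked list.
HEDGING_CUES = {"guess", "stuck", "hard"}

def flag_uncertain_trace(cot_text: str, cues: set = HEDGING_CUES) -> bool:
    """Return True if any hedging cue appears as a whole word in the trace."""
    words = set(re.findall(r"[a-z']+", cot_text.lower()))
    return bool(words & cues)
```

A flagged trace could then be routed to re-sampling or human review instead of being trusted outright:

```python
trace = "Hmm, I'm stuck here, so I will guess that the answer is 7."
if flag_uncertain_trace(trace):
    print("low-confidence answer: consider re-sampling")
```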