Emile van Krieken (@emilevankrieken) 's Twitter Profile
Emile van Krieken

@emilevankrieken

Postdoc @ University of Edinburgh | Neurosymbolic Machine Learning

ID: 32432449

Website: http://emilevankrieken.com | Joined: 17-04-2009 14:48:54

10.1K Tweets

1.1K Followers

1.1K Following

Wolfgang Stammer (@wolfstammer) 's Twitter Profile Photo

Can AI learn better by explaining itself? 🧠🤖 Our paper 'Learning by Self-Explaining,' published in TMLR, explores how AI models improve generalization and avoid shortcuts by evaluating their explanations. Dive in for more: tinyurl.com/2z7v6kbt #AI #ML #ExplainableAI #TMLR

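The tweet does not spell out the mechanism, so here is a rough, hypothetical sketch of the general "learn from your own explanations" idea: a learner produces both a prediction and an explanation, and an auxiliary loss rewards explanations that are themselves useful for the task. All names, architectures, and losses below are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a "learning by self-explaining" style loop
# (illustrative only, not the paper's implementation): the learner is
# trained on its task loss *plus* a loss that checks whether its own
# explanation is useful, here judged by a critic that must recover the
# label from the explanation alone.

IN_DIM, N_CLASSES = 32, 4

class Learner(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(IN_DIM, 64)
        self.classifier = nn.Linear(64, N_CLASSES)
        self.explainer = nn.Linear(64, IN_DIM)   # feature-attribution style "explanation"

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        return self.classifier(h), torch.sigmoid(self.explainer(h))

learner = Learner()
critic = nn.Linear(IN_DIM, N_CLASSES)            # judges explanations by predicting y from them
opt = torch.optim.Adam(list(learner.parameters()) + list(critic.parameters()), lr=1e-3)

x = torch.randn(128, IN_DIM)                     # toy data
y = torch.randint(0, N_CLASSES, (128,))

for step in range(200):
    logits, explanation = learner(x)
    task_loss = nn.functional.cross_entropy(logits, y)
    # Explanations the critic can classify correctly get low loss, pushing
    # the learner away from shortcut features it cannot justify.
    explanation_loss = nn.functional.cross_entropy(critic(explanation), y)
    loss = task_loss + 0.1 * explanation_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```
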
Emile van Krieken (@emilevankrieken) 's Twitter Profile Photo

Very excited to be in Barcelona for NeSy 2024! We're presenting two papers:
- Our ICML paper x.com/EmilevanKrieke…
- A preview of our new NeSy library ULLER together with Samy Badreddine, Eleonora Giunchiglia and Robin Manhaeve.

We will give a special tutorial on Thursday! Hit me up!

Samy Badreddine (@sbadredd) 's Twitter Profile Photo

Excited to be at NeSy! ULLER is a project for better accessibility in Neurosymbolic research. Come to our talks for more details 🚀

Emile van Krieken (@emilevankrieken) 's Twitter Profile Photo

Check out Samy Badreddine presenting our work on ULLER! We will share more soon :) If you're here in Barcelona, come join us Thursday morning for our tutorial & discussion session! Let's make NeSy accessible 🔥

kareem ahmed (@kareemyousrii) 's Twitter Profile Photo

Check out our work on tokenization led by the amazing Renato Lui Geh! It turns out you should consider *many* ways of tokenizing a sentence, which surprisingly gives rise to a #neurosymbolic problem formulation!
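The core observation, that a sentence usually admits many tokenizations and that the probability a model assigns to the sentence is in principle a sum over all of them, can be illustrated with a toy sketch (made-up vocabulary and token scores, not the paper's method):

```python
import math

# Toy illustration: a string can be segmented into in-vocabulary tokens in
# many ways, and the probability of the string is the *sum* over all of
# these tokenizations, not just the canonical one. The vocabulary and
# log-probabilities below are made up.

vocab_logprob = {
    "un": -1.0, "believ": -2.0, "able": -1.5,
    "unbeliev": -3.0, "believable": -2.5, "a": -2.0, "ble": -2.5,
}

def tokenizations(s):
    """Enumerate every way to split s into in-vocabulary tokens."""
    if not s:
        yield []
        return
    for i in range(1, len(s) + 1):
        tok = s[:i]
        if tok in vocab_logprob:
            for rest in tokenizations(s[i:]):
                yield [tok] + rest

def string_logprob(s):
    """Log of the sum, over tokenizations, of the product of token probabilities."""
    totals = [sum(vocab_logprob[t] for t in toks) for toks in tokenizations(s)]
    return math.log(sum(math.exp(lp) for lp in totals)) if totals else float("-inf")

for toks in tokenizations("unbelievable"):
    print(toks)
print("marginal log-prob:", string_logprob("unbelievable"))
```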

Emile van Krieken (@emilevankrieken) 's Twitter Profile Photo

Happy with all the feedback from a packed room! We gave a tutorial on our upcoming ULLER NeSy library and are excited to share more soon.

Eleonora Giunchiglia (@e_giunchiglia) 's Twitter Profile Photo

Today we presented our new ULLER library for Neuro-symbolic AI! We are very grateful for the engagement shown by the community and the feedback received! 💪
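These tweets do not show ULLER's actual interface, so the following is only a generic, hand-rolled illustration of the kind of thing such NeSy libraries are meant to automate: compiling a logical rule into a differentiable loss, here with product t-norm (fuzzy) semantics for the rule bird(x) AND NOT penguin(x) -> flies(x).

```python
import torch

# Generic NeSy illustration (not ULLER's API): turn a logical rule into a
# differentiable loss using product t-norm semantics, so neural classifiers
# can be trained to respect the rule.

def t_and(a, b):        # product t-norm for conjunction
    return a * b

def t_not(a):           # standard fuzzy negation
    return 1.0 - a

def t_implies(a, b):    # Reichenbach implication: 1 - a + a*b
    return 1.0 - a + a * b

# Pretend these are sigmoid outputs of neural classifiers for a small batch.
p_bird    = torch.tensor([0.9, 0.8, 0.1], requires_grad=True)
p_penguin = torch.tensor([0.1, 0.9, 0.0], requires_grad=True)
p_flies   = torch.tensor([0.2, 0.1, 0.5], requires_grad=True)

# Truth degree of the rule per example, and a loss pushing it towards 1.
rule = t_implies(t_and(p_bird, t_not(p_penguin)), p_flies)
loss = -torch.log(rule + 1e-8).mean()
loss.backward()
print(rule, loss)
```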

xuan (ɕɥɛn / sh-yen) (@xuanalogue) 's Twitter Profile Photo

This is also close to my guess of how o1 is trained (my default frame is to view it as "inferring the best latent CoT that explains the data using a branching process like SMC", but of course the RL / MCTS frame would be more natural to folks at OpenAI).
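That framing can be made concrete with a toy sequential Monte Carlo (SMC) sketch: treat partial chains of thought as particles, extend them step by step, weight them by how well they explain the observed answer, and resample. Everything below (the proposal, the scorer) is a stand-in; this illustrates the speculated framing, not anything OpenAI has published.

```python
import random

# Toy SMC over latent chains of thought: keep a population of partial
# chains ("particles"), extend each one step at a time, weight by how
# well it supports the observed answer, and resample so promising chains
# branch while poor ones die out. Proposal and scorer are stand-ins.

def propose_step(chain):
    """Stand-in for sampling the next reasoning step from an LLM."""
    return chain + [random.choice(["fact", "deduction", "guess"])]

def score(chain, answer):
    """Stand-in for how well a partial chain explains the observed answer."""
    return 1.0 + chain.count("deduction") - 0.5 * chain.count("guess")

def smc_latent_cot(answer, n_particles=8, n_steps=5):
    particles = [[] for _ in range(n_particles)]
    for _ in range(n_steps):
        particles = [propose_step(c) for c in particles]
        weights = [max(score(c, answer), 1e-6) for c in particles]
        total = sum(weights)
        probs = [w / total for w in weights]
        # Multinomial resampling: the branching process the tweet alludes to.
        particles = random.choices(particles, weights=probs, k=n_particles)
    return max(particles, key=lambda c: score(c, answer))

print(smc_latent_cot(answer="42"))
```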

Thomas G. Dietterich (@tdietterich) 's Twitter Profile Photo

@martin_casado Seems to me Gary Marcus has been right about almost everything related to the limits of LLMs. We are now seeing systems with search, which is something all of us old-timers have been waiting for the LLM/AGI crowd to rediscover. Maybe symbolic abstraction is next?

Delip Rao e/σ (@deliprao) 's Twitter Profile Photo

Unless you are an OpenAI employee working on improving their products, I don't understand why such efforts are science. Why are we (question to faculty) spending taxpayer dollars doing QA for a closed product by a well-capitalized company that does not give back to science?

Robert McHardy (@robert_mchardy) 's Twitter Profile Photo

Very nice to see Qwen ditching the mess that MMLU is, and using our reannotated MMLU-Redux for evaluation 🔥 arxiv.org/abs/2406.04127 huggingface.co/datasets/edinb…

Jiaxin Wen (@jiaxinwen22) 's Twitter Profile Photo

RLHF is a popular method. It improves your human eval scores and Elo rating 🚀🚀. But really❓ Your model might be "cheating" you! 😈😈 We show that LLMs can learn to mislead human evaluators via RLHF. 🧵 below

Jakub Tomczak (@jmtomczak) 's Twitter Profile Photo

🎊 It has arrived 🎊, the 2nd edition of my "Deep Generative Modeling" book. It has 100 new pages, 3 new chapters (incl. #LLMs) and new sections. It covers all deep generative models that constitute the core of all #GenerativeAI techs! Check it out: 💻 tinyurl.com/mwj9dw83

Wolfgang Stammer (@wolfstammer) 's Twitter Profile Photo

Happy to share that our papers on concept learning (tinyurl.com/3urm5uhb) and CBMs for RL (tinyurl.com/r8hm7r22) have been accepted at NeurIPS 2024! 🎉 Both explore ways to enhance AI interpretability and reasoning. Excited for the discussions #NeurIPS2024 #Interpretability

Quentin Delfosse (@liimeleemon) 's Twitter Profile Photo

So happy that our paper Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents (arxiv.org/pdf/2401.05821) has been accepted at NeurIPS 2024! 🎉 If you are wondering why RL agents cannot generalize to new scenarios and how to mitigate it, check it out!
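For readers unfamiliar with concept bottlenecks, here is a generic sketch of the idea applied to an RL policy (not the paper's exact architecture): the observation is first mapped to a few named, human-interpretable concepts, and the policy may only act through that bottleneck, so behaviour can be inspected and corrected at the concept level. The concept names below are invented for illustration.

```python
import torch
import torch.nn as nn

# Generic concept-bottleneck policy sketch: observation -> interpretable
# concepts -> action. Because the policy head sees only the concepts,
# each decision can be traced back to human-readable quantities.

CONCEPTS = ["enemy_close", "ladder_visible", "holding_key"]  # illustrative names

class ConceptBottleneckPolicy(nn.Module):
    def __init__(self, obs_dim=64, n_concepts=len(CONCEPTS), n_actions=6):
        super().__init__()
        self.concept_net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                         nn.Linear(128, n_concepts))
        self.policy_head = nn.Linear(n_concepts, n_actions)  # acts only on concepts

    def forward(self, obs):
        concepts = torch.sigmoid(self.concept_net(obs))      # each in [0, 1], inspectable
        return self.policy_head(concepts), concepts

policy = ConceptBottleneckPolicy()
obs = torch.randn(1, 64)
action_logits, concepts = policy(obs)
for name, value in zip(CONCEPTS, concepts[0].tolist()):
    print(f"{name}: {value:.2f}")
```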

Nando Fioretto (@nandofioretto) 's Twitter Profile Photo

Excited to share our #NeurIPS2024 work on combining diffusion models with constrained optimization to generate data adhering to constraints and physical principles (with guarantees!). Led by the amazing Jacob Christopher and with Stephen Baek. > arxiv.org/abs/2402.03559
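The paper's actual method is at the link below; as a hypothetical illustration of the general coupling between a generative sampler and a constraint set, here is a toy loop that interleaves denoising steps with a Euclidean projection onto a box constraint, so every intermediate sample stays feasible. The denoiser is a stand-in, and this is not the paper's algorithm (which is formulated as constrained optimization with guarantees).

```python
import torch

# Toy sketch of constraint-respecting generation: alternate a denoising
# update with a projection onto the feasible set. The "denoiser" is a
# stand-in for a trained reverse-diffusion model.

def denoise_step(x, t):
    """Stand-in for one reverse-diffusion update from a trained model."""
    return x - 0.05 * x + 0.05 * torch.randn_like(x) * t

def project_to_constraints(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto a simple box constraint lo <= x <= hi."""
    return x.clamp(lo, hi)

x = torch.randn(8, 2) * 3.0                # start from noise
for t in torch.linspace(1.0, 0.0, steps=50):
    x = denoise_step(x, t)
    x = project_to_constraints(x)          # every intermediate sample stays feasible

assert (x >= -1).all() and (x <= 1).all()
print(x)
```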
