Alessandro Suglia (@ale_suglia)'s Twitter Profile
Alessandro Suglia

@ale_suglia

Assistant Professor @HeriotWattUni/@NRobotarium; Ex Head of Visual Dialogue at @helloalana; PhD @EDINRobotics; Ex Research Intern @MetaAI and @AmazonScience.

ID: 1213616077

Link: https://alesuglia.github.io · Joined: 23-02-2013 20:22:02

2.2K Tweets

1.1K Followers

1.1K Following

Douwe Kiela (@douwekiela)

Incredibly proud of what this team has accomplished in just over a year since our launch. Built for the enterprise, our platform is backed by world-class research and operates in production at scale.

Alessandro Suglia (@ale_suglia)

Don't forget to speak to Yintao Tai today about PIXAR at #ACL2024NLP during the “Findings In-Person Poster Session 2”.
Time: 17:45 (Bangkok time)
Location: Convention Center A1
Feel free to send me a message if you would like to chat about Embodied AI and multimodal AI!

Alessandro Suglia (@ale_suglia)

This is what I mean when I talk about input perturbations that completely break current LLMs: x.com/danielhanchen/… We are clearly missing something, right? Pixel-based LLMs might be the answer!
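
For context on the brittleness point: subword tokenizers can map a lightly perturbed string to a completely different token sequence, which is one motivation for pixel-based models like PIXAR. A minimal sketch of the effect, assuming the tiktoken library; the encoding and word pair are illustrative choices, not anything from the tweet:

```python
import tiktoken

# GPT-4-family BPE vocabulary; chosen here only for illustration.
enc = tiktoken.get_encoding("cl100k_base")

clean = "importantly"
perturbed = "impotrantly"  # two adjacent characters swapped

for text in (clean, perturbed):
    tokens = enc.encode(text)
    pieces = [enc.decode([t]) for t in tokens]
    print(f"{text!r} -> {len(tokens)} tokens: {pieces}")

# Typically the clean word maps to one or two frequent subwords, while
# the perturbed spelling shatters into several rare fragments, even
# though a human (or a pixel-based model) sees nearly the same glyphs.
```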

Jesse Dodge (@jessedodge)

Congrats to our team for winning two paper awards at #ACL2024! OLMo won the Best Theme Paper award, and Dolma won a Best Resource Paper award! All the credit goes to the whole team for the massive group effort 🎉🎉

Alessandro Suglia (@ale_suglia)

It's so refreshing to see this paper being awarded. I hope that somebody will take on the challenge of making multimodal models as open as OLMo ;)

Nathan Benaich (@nathanbenaich)

New on Air Street Press: Now that LLMs can convincingly automate much of a bored human’s tasks, attention is turning to “agentic AI”. In this piece, we evaluate how far advanced this work actually is, and look at both promising research directions and the challenges ahead. Thread:

Seraphina Goldfarb-Tarrant (@seraphinagt)

A good reminder for those of us in LLM land (like me) that we don't only need to mitigate gender biases *caused* by LM generation, but should also enable researchers to *use* LMs to discover biases in human content. From Isabelle Augenstein's keynote at genderbiasnlp #ACL2024

David Schlangen (@davidschlangen)

Ok, whatever it is that OpenAI has done to o1, it has paid off. At least on Wordle, which used to be one of the hardest parts of our “conversational agency” benchmark.

4o: 23 (previous best)
o1: 75.33
(Human expert players: 72)
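
For readers who have not seen Wordle used this way: the benchmark treats it as a multi-turn game where the model must track per-letter feedback across guesses. A minimal sketch of the standard feedback rule (greens first, then yellows capped by the remaining letter counts); this illustrates the game logic only and is not the benchmark's actual scoring code:

```python
from collections import Counter

def wordle_feedback(guess: str, target: str) -> str:
    """Per-letter feedback: G = green (right letter, right spot),
    Y = yellow (in word, wrong spot), _ = gray (absent)."""
    feedback = ["_"] * len(guess)
    # Count target letters not consumed by exact matches, so duplicate
    # guess letters are only credited as often as they actually appear.
    remaining = Counter(t for g, t in zip(guess, target) if g != t)
    for i, (g, t) in enumerate(zip(guess, target)):
        if g == t:
            feedback[i] = "G"
    for i, (g, t) in enumerate(zip(guess, target)):
        if g != t and remaining[g] > 0:
            feedback[i] = "Y"
            remaining[g] -= 1
    return "".join(feedback)

print(wordle_feedback("crane", "cacao"))  # -> "G_Y__"
```

Playing well requires carrying these constraints across several turns of dialogue, which is why the game probes the “conversational agency” the benchmark is named for.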