Pere-Lluís Huguet Cabot (@perelluishc) 's Twitter Profile
Pere-Lluís Huguet Cabot

@perelluishc

Marie-Curie PhD working at @SapienzaNLP, prev. @Babelscape and @knowgraphs. Working on Information Extraction and Italian LLMs; projects: REBEL, MinervaLLM, ...

ID: 1351904755409428486

Link: https://littlepea13.github.io/ · Joined: 20-01-2021 14:50:05

262 Tweets

291 Followers

283 Following

Alex Hernandez-Garcia (@alexhdezgcia) 's Twitter Profile Photo

I am seriously concerned about the quality and tone of many ICLR 2026 reviews I have read in both my reviewer and author batches. I see:
- Reviews likely written by a language model
- Dismissive remarks
- Judgemental comments
- Impolite tone
- ...
iclr.cc/Conferences/20…

Pere-Lluís Huguet Cabot (@perelluishc) 's Twitter Profile Photo

It's been extremely disappointing compared to the experience at ACL venues. For our paper, not a single reviewer has acknowledged our answers. Did reviewers receive any reminders to engage, ICLR 2026?

Simone Tedeschi (@simonetedeschi_) 's Twitter Profile Photo

I've written my first #Medium story! I delve into the longstanding challenge of #AGI, and provide a high-level overview of the current #NLP landscape, highlighting both the milestones achieved and the persisting challenges! 🧗🏻 #ChatGPT #LLMs #AI medium.com/@simone-tedesc… 📑

Anthropic (@anthropicai) 's Twitter Profile Photo

New Anthropic Paper: Sleeper Agents. We trained LLMs to act secretly malicious. We found that, despite our best efforts at alignment training, deception still slipped through. arxiv.org/abs/2401.05566

Fabrizio Silvestri (@fabreetseo) 's Twitter Profile Photo

🤯 Think adding nonsense to RAG systems is madness? Our new paper says otherwise! We found that including random documents boosts accuracy by 30+%, challenging old paradigms and showing the complexity of integrating retrieval w/ language generation. #RAGSystems #surprisingresults
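A minimal sketch of the kind of setup the tweet describes: padding the retrieved context with unrelated "distractor" documents before prompting the model. The corpus, query, and retriever below are placeholders for illustration, not the paper's actual pipeline.

```python
import random

# Toy stand-ins for a real corpus and retriever (hypothetical data).
corpus = {
    "doc1": "Rome is the capital of Italy.",
    "doc2": "The Colosseum was completed in 80 AD.",
    "doc3": "Photosynthesis converts light into chemical energy.",
    "doc4": "The FIFA World Cup is held every four years.",
}

def retrieve(query, k=1):
    # Placeholder retriever: pretend doc1 is the top hit for this query.
    return ["doc1"][:k]

def build_prompt(query, n_random=2, seed=0):
    relevant = retrieve(query)
    # Append randomly sampled, likely irrelevant documents to the context.
    pool = [d for d in corpus if d not in relevant]
    random.seed(seed)
    distractors = random.sample(pool, n_random)
    context = "\n".join(corpus[d] for d in relevant + distractors)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What is the capital of Italy?"))
```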

Alessandro Scirè (@alescire94) 's Twitter Profile Photo

Exciting strides in text summarization with LLMs 🚀 but verifying their factual accuracy is still an open challenge 🤔 We introduce FENICE, a factuality-oriented metric for summarization with a strong focus on interpretability 🔍 arxiv.org/abs/2403.02270 #NLProc #LLMs #Factuality
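FENICE itself is described in the linked paper; purely as a rough illustration of the underlying idea (checking whether a summary claim is entailed by the source text), here is a generic NLI entailment check with an off-the-shelf model. This is not the FENICE implementation, and the source/claim strings are hypothetical.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Off-the-shelf NLI model; its labels are contradiction / neutral / entailment.
name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

source = "The company reported a 12% rise in quarterly revenue, driven by cloud services."
claim = "Quarterly revenue increased."  # a claim extracted from a summary (hypothetical)

inputs = tok(source, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

labels = ["contradiction", "neutral", "entailment"]
print({label: round(p.item(), 3) for label, p in zip(labels, probs)})
```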

Francesco Molfese (@framolfese) 's Twitter Profile Photo

Join us today at #EACL2024 for our presentation entitled “CroCoAlign: A Cross-Lingual, Context-Aware and Fully-Neural Sentence Alignment System for Long Texts” in the Sentence-level Semantics track! #NLProc (Radisson Blu, Carlson Ballroom, 5th floor, Malta)

Roberto Navigli (@rnavigli) 's Twitter Profile Photo

Just interviewed by Alessio Jacona (Agenzia ANSA) to talk about #LLMs and the SapienzaNLP effort to create the first #Italian pre-trained #LLMs, the #Minerva family. Team: Edoardo Barba, Simone Conia, @perelluisHC, @AndrewWyn1, Riccardo Orlando & me. youtube.com/watch?si=O9daf…

clem 🤗 (@clementdelangue) 's Twitter Profile Photo

GPU-Poor no more: super excited to officially release ZeroGPU in beta today. Congrats Victor M & team for the release! In the past few months, the open-source AI community has been thriving. Not only Meta but also Apple, NVIDIA, Bytedance, Snowflake, Databricks, Microsoft,

Thomas Wolf (@thom_wolf) 's Twitter Profile Photo

The new interpretability paper from Anthropic is totally based. Feels like analyzing an alien life form. If you only read one 90-min-read paper today, it has to be this one transformer-circuits.pub/2024/scaling-m…

Guilherme Penedo (@gui_penedo) 's Twitter Profile Photo

We are (finally) releasing the 🍷 FineWeb technical report! In it, we detail and explain every processing decision we took, and we also introduce our newest dataset: 📚 FineWeb-Edu, a (web only) subset of FW filtered for high educational content. Link: hf.co/spaces/Hugging…

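For reference, a minimal sketch of streaming FineWeb-Edu from the Hugging Face Hub with the `datasets` library. The `sample-10BT` config name is an assumption for illustration; check the dataset card for the subsets actually published.

```python
from datasets import load_dataset

# Stream a small sample subset so nothing is downloaded up front.
# NOTE: the config name "sample-10BT" is assumed here; see the dataset card.
ds = load_dataset("HuggingFaceFW/fineweb-edu", name="sample-10BT",
                  split="train", streaming=True)

for i, row in enumerate(ds):
    print(row["text"][:200].replace("\n", " "))
    if i == 2:
        break
```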
ACLRollingReview (@reviewacl) 's Twitter Profile Photo

ARR needs your help! We received 5800+ submissions for the June cycle, but with our current capacity, we can handle only half of these submissions. We can't start the review process without significant help. (1/2)

Riccardo Orlando (@riccardoricorl) 's Twitter Profile Photo

👀 Exciting News! 👀 Happy to announce our latest research paper, “Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget”, will be presented at #ACL2024! 🚀 Try it out! huggingface.co/spaces/relik-i… Thread below 👇

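A minimal sketch of trying ReLiK locally via the `relik` package, assuming it is installed (`pip install relik`). The exact model identifier below is an assumption; check the linked Space or the project README for the released checkpoints.

```python
from relik import Relik

# Model name assumed for illustration; see the ReLiK repo/Space for released checkpoints.
relik = Relik.from_pretrained("sapienzanlp/relik-relation-extraction-small")

out = relik("Michael Jordan was one of the best players in the NBA.")
print(out)  # entity spans linked to Wikipedia plus extracted relation triples
```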
Verna Dankers (@vernadankers) 's Twitter Profile Photo

Which layers memorise examples with permuted labels?🧐 That’s what we investigate in “Generalisation First, Memorisation Second? Memorisation Localisation for Natural Language Classification Tasks” presented in Findings poster session 2 @ 5.45PM today! #ACL2024 #NLProc (1/7)

Elisa Bassignana (@elibassignana) 's Twitter Profile Photo

I'm in Bangkok for #ACL2024NLP 🇹🇭 Looking forward to presenting our work "Dissecting Biases in Relation Extraction: A Cross-dataset Analysis on People's Gender and Origin" @genderbiasnlp. Joint work w/ Marco Stranisci, Pere-Lluís Huguet Cabot, Roberto Navigli. 📄 aclanthology.org/2024.gebnlp-1.… #NLProc 1/2

Jerry Liu (@jerryjliu0) 's Twitter Profile Photo

Automatic knowledge graph construction can be slow and expensive. Also I find there's a lack of resources on how to build something principled (do you just stuff text into an LLM prompt?) That's why I love this blog by @tb_tomaz which not only outlines the step-by-step process but also uses Relik, a

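As a small illustration of one step in knowledge graph construction (independent of the specific blog or library mentioned above): turning extracted (head, relation, tail) triples into a graph. The triples below are hypothetical and the graph library choice (networkx) is just one option.

```python
import networkx as nx

# Hypothetical triples, as a relation extraction system might return them.
triples = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "awarded", "Nobel Prize in Physics"),
    ("Warsaw", "capital_of", "Poland"),
]

# Directed multigraph: entities are nodes, relations are labelled edges.
graph = nx.MultiDiGraph()
for head, relation, tail in triples:
    graph.add_edge(head, tail, label=relation)

# Inspect the neighbourhood of one entity.
for _, tail, data in graph.out_edges("Marie Curie", data=True):
    print(f"Marie Curie --{data['label']}--> {tail}")
```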