Antonio Mallia (@antonio_mallia)'s Twitter Profile
Antonio Mallia

@antonio_mallia

Senior Research Scientist @ Pinecone
Previously, Applied Scientist @ Amazon & PhD @ New York University

ID: 514667085

Link: http://www.antoniomallia.it · Joined: 04-03-2012 19:22:28

51.51K Tweets

1.1K Followers

2.2K Following

Antonio Mallia (@antonio_mallia)'s Twitter Profile Photo

Excited to see Sean MacAvaney adding Pinecone's reranker and sparse model to Terrier! Such a powerful integration for IR research and applications. 🚀 github.com/seanmacavaney/…

Pinecone (@pinecone)'s Twitter Profile Photo

Congratulations to our very own Antonio Mallia, Cesare Campagnano, and @JackPertschuk – as well as their co-authors – on their accepted #ECIR2025 research papers! 🎉 They continue to push the state-of-the-art forward on information retrieval, and we as an industry are better for it!

RSTLess group (@rstlessgroup)'s Twitter Profile Photo

We are very excited to share that the work of Cesare Campagnano, Antonio Mallia, @JackPertschuk and Fabrizio Silvestri has been accepted to #ECIR2025 as a #shortpaper. See you in #Lucca. ECIR2025 Pinecone #AI #Research #IR #industry

Jack Pertschuk (@jack_pertschuk)'s Twitter Profile Photo

A bunch of people asked me what a "sparse" vector is, so I wrote an explanation (and Pinecone now supports them!) pinecone.io/learn/sparse-r…
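The core idea behind the linked explanation can be sketched in a few lines. This is an illustrative toy, not Pinecone's actual API: a sparse vector stores only its nonzero (index, weight) pairs, typically one weight per vocabulary term that appears in the text, so most of the (huge) vocabulary dimension is simply absent.

```python
# Illustrative sketch of a sparse vector (not Pinecone's API).
# A dense vector over a 6-term vocabulary, mostly zeros:
dense = [0.0, 0.0, 1.2, 0.0, 0.7, 0.0]

# Sparse representation: keep only the nonzero entries.
sparse = {i: w for i, w in enumerate(dense) if w != 0.0}  # {2: 1.2, 4: 0.7}

# Dot-product similarity touches only indices shared by both vectors,
# which is what makes sparse retrieval cheap at a large vocabulary size.
def sparse_dot(a, b):
    return sum(w * b[i] for i, w in a.items() if i in b)

query = {2: 0.5, 5: 1.0}          # hypothetical query term weights
score = sparse_dot(sparse, query)  # 1.2 * 0.5 = 0.6
```

Real vocabularies have tens of thousands of dimensions, so the savings from storing only nonzeros are substantial.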

Antonio Mallia (@antonio_mallia)'s Twitter Profile Photo

Just read Arjun Patel's latest article on Pinecone’s brand-new sparse index and model capabilities—highly recommend it if you’re curious about how keyword search and sparse models fit into the bigger picture of vector search. pinecone.io/learn/learn-pi…

Antonio Mallia (@antonio_mallia)'s Twitter Profile Photo

It was a really pleasant surprise to learn that our paper “Efficient Constant-Space Multi-Vector Retrieval” (aka ConstBERT), co-authored with Sean MacAvaney and Nicola Tonellotto, received the Best Short Paper Honourable Mention at ECIR 2025! #ECIR2025 #IR #Pinecone

Antonio Mallia (@antonio_mallia)'s Twitter Profile Photo

If you are attending ECIR, do not miss our talk at the Industry Day: "Neural Retrieval Meets Cascading Architectures" with Cesare Campagnano and Jack Pertschuk showcasing how to perform SOTA retrieval with a few lines of code using Pinecone. Thursday @ 16:30 in Guinigi Chapel – IMT

Jack Pertschuk (@jack_pertschuk)'s Twitter Profile Photo

Want to build LLM-quality search without an LLM latency hit? We just open-sourced ConstBERT, a novel late-interaction model that boosts NDCG by up to 27% with only a few extra millis of latency on your existing Pinecone index. Technical blog: pinecone.io/blog/cascading…
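The late-interaction scoring behind models in this family can be sketched as follows. This is a hypothetical illustration of MaxSim scoring with a constant-size document encoding, not ConstBERT's actual implementation: each document is represented by a fixed number of vectors (here C = 4) regardless of its length, rather than one vector per token as in ColBERT-style models, which is what makes the space cost constant.

```python
import numpy as np

# Hypothetical MaxSim (late-interaction) scoring sketch.
C, D = 4, 8                           # C doc vectors of dimension D (constant per doc)
rng = np.random.default_rng(0)

doc_vecs = rng.normal(size=(C, D))    # constant-space document encoding
query_vecs = rng.normal(size=(3, D))  # one vector per query token

# MaxSim: each query vector is matched to its best document vector,
# and the relevance score is the sum of those per-query maxima.
sims = query_vecs @ doc_vecs.T        # (3, C) similarity matrix
score = float(sims.max(axis=1).sum())
```

At query time this costs a small matrix product per candidate, which is why it can run as a reranking stage with only a few extra milliseconds over a first-pass index.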

TuringPost (@theturingpost)'s Twitter Profile Photo

Refreshing BERT – a groundbreaking shift in NLP

BERT, or Bidirectional Encoder Representations from Transformers, was the first to pre-train a deep Transformer in a bidirectional way:

▪️It processes both left and right context of a word at the same time, to deeper understand

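The bidirectional point above can be made concrete with a toy attention-mask comparison. This is an illustration of the masking pattern only, not BERT itself: in a bidirectional encoder every position may attend to every other position, whereas a causal (GPT-style) decoder only sees the left context.

```python
import numpy as np

# Toy attention masks for a 4-token sequence: 1 = may attend, 0 = blocked.
n = 4
bidirectional = np.ones((n, n), dtype=int)    # BERT-style: each token sees all tokens
causal = np.tril(np.ones((n, n), dtype=int))  # GPT-style: each token sees only the left

# Token at position 1 attends to all 4 positions bidirectionally,
# but only to positions 0 and 1 under the causal mask.
```

This is why BERT is pre-trained with masked-token prediction rather than next-token prediction: with full bidirectional visibility, predicting the next token would be trivial.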
Antonio Mallia (@antonio_mallia)'s Twitter Profile Photo

Thanks so much! 🙏 It's always a pleasure to share what we're working on at @Pinecone. BERT continues to be a solid foundation for retrieval research, and with ConstBERT, we're excited to push the boundaries of what’s possible in multi-vector search. Appreciate the shoutout!

Jack Pertschuk (@jack_pertschuk)'s Twitter Profile Photo

youtu.be/t_ALyJ174gs?si… Recording of my talk from RustWeek out now - blog post coming soon! It was a blast to meet so many folks who share a love of writing high performance Rust. Huge thanks to the RustNL team for a phenomenal conference - in a movie theatre of all places!