Lapis Labs (@lapisrocks)'s Twitter Profile

Lapis Labs
@lapisrocks

be novel.

ID: 1753646329136283648
Website: https://lapis.rocks/
Joined: 03-02-2024 05:07:26

30 Tweets · 54 Followers · 2 Following

Revanth Reddy (On the Job Market) (@gangi_official):

Moreover, inference efficiency considerably improves since only a single token needs to be generated instead of the entire ranking order!

Work done in collaboration with <a href="/hengjinlp/">Heng Ji</a>  <a href="/IBMResearch/">IBM Research</a> (<a href="/aviaviavi__/">Avi Sil</a>  and Arafat Sultan) and <a href="/lapisrocks/">Lapis Labs</a>
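The efficiency claim above is that the reranker decodes only a single token rather than a full ranking order. A minimal sketch of that idea, assuming the model assigns a logit to each passage identifier at the first decoded position (the hard-coded logits and identifier names below are illustrative stand-ins for a real LLM forward pass, not the paper's actual implementation):

```python
# Single-token reranking sketch: instead of generating a full permutation
# string like "B > A > C", order passages by the logit the model gives each
# passage identifier at the FIRST decoded position.

def rank_by_first_token(identifier_logits: dict[str, float]) -> list[str]:
    """Order passage identifiers by descending first-token logit."""
    return sorted(identifier_logits, key=identifier_logits.get, reverse=True)

# Hypothetical first-position logits for passage identifiers "A", "B", "C".
logits = {"A": 1.2, "B": 3.4, "C": 0.7}
print(rank_by_first_token(logits))  # prints ['B', 'A', 'C']
```

One forward pass yields logits for every identifier at once, so the ranking falls out of a single decoding step rather than a token-by-token generation of the whole ordering.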
Lapis Labs (@lapisrocks):

We are excited to introduce FIRST! Our novel LLM reranking approach boosts efficiency by 50% while maintaining performance.

Paper link: arxiv.org/pdf/2406.15657

Big thanks to Revanth Gangi Reddy and Heng Ji, along with our collaborators IBM Research (Avi Sil and Arafat

Lapis Labs (@lapisrocks):

Happy to announce our collaboration with alphaXiv 🫡 "We are incredibly excited to support this effort by helping develop their reviewer program with our student researchers at Lapis Labs, along with utilizing this service to promote direct discussion with our work and

Andy Zhou (@zhouandy_):

Excited to announce three papers were accepted to #NeurIPS2024! 🤖

Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks (Spotlight)
arxiv.org/abs/2401.17263
- Jailbreaking defense based on adversarial training

Jailbreaking Large Language Models
Lapis Labs (@lapisrocks):

🎉 Oral, Spotlight, NeurIPS, oh my! 🎉

We're thrilled to announce that two of our papers at #NeurIPS2024 have received special recognition: ❗Oral❗ presentation in the Datasets and Benchmarks track AND ❗Spotlight❗

👏 Massive congratulations to our student researchers, Nikhil
Intology (@intologyai):

🤖🔬Today we are debuting Zochi, the world’s first Artificial Scientist with state-of-the-art contributions accepted in ICLR 2025 workshops.

Unlike existing systems, Zochi autonomously tackles some of the most challenging problems in AI, producing novel contributions in
Intology (@intologyai):

📰 Zochi @ ICLR 2025!

After productive discussions with the workshop organizers that accepted Zochi's work, we are pleased to announce that all work produced by Zochi has been approved for presentation at ICLR 2025. Intology reps will present on Zochi's behalf. As requested by

Intology (@intologyai):

The 1st fully AI-generated scientific discovery to pass the highest level of peer review — the main track of an A* conference (ACL 2025).

Zochi, the 1st PhD-level agent. Beta open.

Andy Zhou (@zhouandy_):

Announcing the first fully AI-generated scientific discovery to pass the highest level of peer review – the main track of an A* conference (ACL 2025). Several groups have shown AI-generated work at workshops, but main conference acceptance is a far higher bar. While workshops