Assaf Ben Kish (@abk_tau) 's Twitter Profile
Assaf Ben Kish

@abk_tau

Deep Learning | Large Language Models | Reinforcement Learning

ID: 1688790742712336384

Link: https://assafbk.github.io/website/ | Joined: 08-08-2023 05:55:05

59 Tweets

94 Followers

126 Following

Assaf Ben Kish (@abk_tau) 's Twitter Profile Photo

'Mitigating Open-Vocabulary Caption Hallucinations' is accepted to EMNLP 2024! 🎉 t.ly/cBQM0
TL;DR:
OpenCHAIR: an open-vocabulary hallucination benchmark
MOCHa: an RLAIF framework for hallucination reduction
A great collab with Moran Yanuka, Morris Alper, Raja Giryes 💔, and Hadar Averbuch-Elor

Yael Vinker🎗 (@yvinker) 's Twitter Profile Photo

Excited to introduce SketchAgent!👩‍🎨 We leverage the prior of pretrained multimodal LLMs for language-driven, sequential sketch generation and human-agent collaborative sketching! ✨ Try our fun interface here: github.com/yael-vinker/Sk…

Assaf Ben Kish (@abk_tau) 's Twitter Profile Photo

Dense captions are highly informative! But it turns out that sometimes they can be overly detailed… 🤔📚 A great work led by Moran Yanuka!

Assaf Ben Kish (@abk_tau) 's Twitter Profile Photo

DeciMamba, the first context extension method for Mamba, is accepted to #ICLR2025! 🎉
New revision with more long-context results: arxiv.org/abs/2406.14528 github.com/assafbk/DeciMa…
Special thanks to Itamar Zimerman, Shady Abu-Hussein, Nadav Cohen, Amir Globerson, liorwolf, and Raja Giryes 💔!

Yael Vinker🎗 (@yvinker) 's Twitter Profile Photo

SketchAgent has been accepted to #CVPR2025! This is an early step toward new tools for visual thinking and richer interaction with LLMs 🎨 🔗 sketch-agent.csail.mit.edu

idan shenfeld (@idanshenfeld) 's Twitter Profile Photo

The next frontier for AI shouldn’t just be generally helpful. It should be helpful for you! Our new paper shows how to personalize LLMs — efficiently, scalably, and without retraining. Meet PReF (arxiv.org/abs/2503.06358)

𝚐𝔪𝟾𝚡𝚡𝟾 (@gm8xx8) 's Twitter Profile Photo

Overflow Prevention Enhances Long-Context Recurrent LLMs

OPRM chunk-based inference:
- Split the context into chunks
- Process chunks in parallel (speculative prefill)
- Select the best one (e.g., lowest entropy)
- Decode only from that chunk

Advantages:
- No training required
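The selection step above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `score_fn` stands in for the model's uncertainty signal (e.g., the entropy of its next-token distribution after prefilling a chunk), and the chunking here is sequential rather than a parallel speculative prefill.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def oprm_select(context, chunk_size, score_fn):
    """Sketch of OPRM-style chunk selection.

    Splits the context into fixed-size chunks, scores each one with
    score_fn (lower = more confident), and returns the best chunk.
    Decoding would then condition only on that chunk, preventing the
    recurrent state from overflowing on the full long context.
    """
    chunks = [context[i:i + chunk_size]
              for i in range(0, len(context), chunk_size)]
    # In the paper's setting the chunks are prefilled in parallel
    # ("speculative prefill"); here we simply score them in a loop.
    scores = [score_fn(c) for c in chunks]
    best = min(range(len(chunks)), key=scores.__getitem__)
    return chunks[best], scores[best]

# Toy usage: score each chunk by the entropy of its character
# frequencies; a single-symbol chunk has entropy 0 and wins.
def char_entropy(chunk):
    return entropy([chunk.count(t) / len(chunk) for t in set(chunk)])

chunk, score = oprm_select("aaaabbbbabab", 4, char_entropy)
```

The key design point is that selection is purely an inference-time decision, which is why no training is required.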