Oren Sultan (@oren_sultan) 's Twitter Profile
Oren Sultan

@oren_sultan

AI Researcher @Lightricks, CS PhD Candidate #AI #NLP @HebrewU, advised by @HyadataLab 🇮🇱 | prev. @TU_Muenchen 🇩🇪 @UniMelb 🇦🇺

ID: 1423192726670135300

Website: http://www.orensultan.com | Joined: 05-08-2021 08:02:52

502 Tweets

790 Followers

624 Following

LTX Studio (@ltxstudio) 's Twitter Profile Photo

We've partnered with Ari Folman, the Oscar-nominated and two-time Golden Globe-winning director, on the first-ever project created with the help of LTX Studio. As you can see, we're all about bringing dreams to life and experiencing storytelling transformed.

Yunyao Li (@yunyao_li) 's Twitter Profile Photo

Thanks Dan Roth for an inspiring talk on LLMs and reasoning at DaSH #NAACL2024. Looking forward to hearing more at our panel discussion later today.

Arie Cattan (@ariecattan) 's Twitter Profile Photo

🚨🚨 Check out our new paper on an ICL method that greatly boosts LLMs in long contexts! >> arxiv.org/abs/2406.13632

hele (@helekuul) 's Twitter Profile Photo

One week ago today, I was at #NAACL in Mexico City 🇲🇽 presenting a poster on our work about the Llammas 🐑 language model. Taking part in such a large event for the first time was an exciting experience, and I hope there will be more to come! 😊 #NAACL24 #NAACL2024

Siyu Yuan (@siyu_yuan_) 's Twitter Profile Photo

🚀 Excited to introduce EvoAgent! A generic method to automatically extend expert agents to multi-agent systems via an evolutionary algorithm! Paper 📄: arxiv.org/pdf/2406.14228 Website 🌐: evo-agent.github.io Code 💻: github.com/siyuyuan/evoag… [1/6]

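As a rough illustration of the idea announced above (growing a single expert agent into a multi-agent system with an evolutionary algorithm), here is a minimal, self-contained sketch. The Agent class, the mutation scheme, and the fitness function are hypothetical stand-ins for illustration, not the actual EvoAgent implementation.

```python
# Minimal evolutionary loop over agent configurations (illustrative sketch,
# not the EvoAgent code). An "agent" is reduced to a prompt/temperature pair,
# and fitness() is a placeholder you would replace with a real task evaluation.
import random
from dataclasses import dataclass

@dataclass
class Agent:
    system_prompt: str
    temperature: float

def mutate(parent: Agent) -> Agent:
    """Create a child agent by perturbing the parent's settings."""
    suffix = random.choice([" Be concise.", " Think step by step.", " Cite sources."])
    new_temp = min(1.0, max(0.0, parent.temperature + random.uniform(-0.2, 0.2)))
    return Agent(system_prompt=parent.system_prompt + suffix, temperature=new_temp)

def fitness(agent: Agent) -> float:
    """Placeholder task score; in practice, run the agent on a benchmark."""
    return random.random() - abs(agent.temperature - 0.7)

def evolve(seed_agent: Agent, generations: int = 5, population: int = 8) -> list[Agent]:
    """Expand one expert agent into a population, keeping the fittest half each round."""
    pool = [seed_agent] + [mutate(seed_agent) for _ in range(population - 1)]
    for _ in range(generations):
        children = [mutate(random.choice(pool)) for _ in range(population)]
        pool = sorted(pool + children, key=fitness, reverse=True)[:population]
    return pool  # the surviving agents form the multi-agent system

if __name__ == "__main__":
    survivors = evolve(Agent("You are a helpful planner.", 0.7))
    print(f"{len(survivors)} agents in the evolved system")
```

In practice the fitness signal would come from task performance rather than the toy score used here.
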
Oren Sultan (@oren_sultan) 's Twitter Profile Photo

Greetings from Fort Lauderdale, USA 🇺🇸 Last destination of my post-conference trip! Soon to be back in Tel Aviv, Israel 🇮🇱

Maor Ivgi (@maorivg) 's Twitter Profile Photo

1/7 🚨 What do LLMs do when they are uncertain? We found that the stronger the LLM, the more it hallucinates and the less it loops! This pattern extends to sampling methods and instruction tuning. 🧵👇 Mor Geva Jonathan Berant Ori Yoran

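For readers unfamiliar with the "looping" failure mode mentioned above (the model repeating the same span over and over), the toy check below flags degenerate n-gram repetition in a generated string. It only illustrates the phenomenon; it is not the measurement protocol used in the paper.

```python
# Tiny check for degenerate repetition ("looping") in generated text
# (illustration only; not the paper's measurement protocol).
from collections import Counter

def max_ngram_repeat(text: str, n: int = 4) -> int:
    """Return how many times the most frequent n-gram of tokens occurs."""
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return max(Counter(ngrams).values(), default=0)

if __name__ == "__main__":
    looped = "the answer is the answer is the answer is the answer is 42"
    print(max_ngram_repeat(looped))  # a count > 1 signals a repetition loop
```
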
Gili Lior (@gililior) 's Twitter Profile Photo

📊📈🎯 Happy to share a new benchmark: SEAM 🤝 - A Stochastic Evaluation Approach for Multi-document tasks Paper arxiv.org/pdf/2406.16086 Website seam-benchmark.github.io Code github.com/seam-benchmark… w/ Avi Caciularu Arie Cattan Shahar Levy Ori Shapira Gabriel Stanovsky

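The key idea named in the announcement above is to evaluate stochastically rather than with a single fixed run. As a hedged sketch of that general recipe (not the SEAM benchmark code), the snippet below re-scores a multi-document input under several random document orderings and reports the mean and spread; run_model is a hypothetical stand-in for model inference plus metric computation.

```python
# Stochastic evaluation sketch (illustrative, not the SEAM implementation):
# score a multi-document task under several random document orderings and
# report mean and standard deviation instead of a single-run number.
import random
import statistics

def run_model(documents: list[str], question: str) -> float:
    """Hypothetical stand-in for model inference + metric computation."""
    return random.uniform(0.6, 0.9)

def stochastic_eval(documents: list[str], question: str, n_runs: int = 10, seed: int = 0):
    rng = random.Random(seed)
    scores = []
    for _ in range(n_runs):
        shuffled = documents[:]
        rng.shuffle(shuffled)  # perturb how the input is presented
        scores.append(run_model(shuffled, question))
    return statistics.mean(scores), statistics.pstdev(scores)

if __name__ == "__main__":
    docs = ["doc A ...", "doc B ...", "doc C ..."]
    mean, std = stochastic_eval(docs, "Summarize the common claim.")
    print(f"score = {mean:.3f} +/- {std:.3f}")
```
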
Dan Ofer (@Worldcon, was at @ICML) (@danofer) 's Twitter Profile Photo

1/ 🎉 Our paper "Protein Language Models Expose Viral Mimicry and Immune Escape" is accepted at #ICML2024. We delve into how machine learning can help us understand tricky viruses better! 🦠 openreview.net/forum?id=gGnJB… #ICML #ML4LMS #science #bioinformatics #ML #virus #LLM

Shachar Don-Yehiya (@shachar_don) 's Twitter Profile Photo

Human feedback is critical for language model development 💬, but collecting it is costly 🤑 We find that users naturally include feedback when interacting with chat models, and we can automatically extract it! arxiv.org/abs/2407.10944 w/ Leshem Choshen 🤖🤗 Omri Abend 🧵👇

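To make the claim above concrete, here is a deliberately naive sketch of pulling naturally occurring feedback out of a chat log with keyword patterns. The patterns and the turn format are assumptions for illustration; this heuristic is not the extraction method used in the paper.

```python
# Toy extractor for naturally occurring user feedback in chat logs
# (keyword heuristic for illustration; not the paper's extraction method).
import re

POSITIVE = re.compile(r"\b(thanks|thank you|perfect|exactly|great answer)\b", re.I)
NEGATIVE = re.compile(r"\b(that'?s wrong|not what i asked|doesn'?t work|incorrect)\b", re.I)

def extract_feedback(turns: list[dict]) -> list[dict]:
    """Label user turns that react to the immediately preceding assistant turn."""
    feedback = []
    for prev, turn in zip(turns, turns[1:]):
        if prev["role"] != "assistant" or turn["role"] != "user":
            continue
        if POSITIVE.search(turn["text"]):
            feedback.append({"response": prev["text"], "label": "positive"})
        elif NEGATIVE.search(turn["text"]):
            feedback.append({"response": prev["text"], "label": "negative"})
    return feedback

if __name__ == "__main__":
    chat = [
        {"role": "user", "text": "How do I reverse a list in Python?"},
        {"role": "assistant", "text": "Use my_list[::-1] or my_list.reverse()."},
        {"role": "user", "text": "Perfect, thanks!"},
    ]
    print(extract_feedback(chat))
```
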
Michael Hassid (@michaelhassid) 's Twitter Profile Photo

Which is better, running a 70B model once, or a 7B model 10 times? The answer might be surprising! Presenting our new Conference on Language Modeling paper: "The Larger the Better? Improved LLM Code-Generation via Budget Reallocation" arxiv.org/abs/2404.00725 1/n

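The trade-off posed above can be made concrete with a best-of-k selection sketch: spend the budget on several samples from a smaller code model and keep the candidate that passes the most unit tests. Everything here (the candidate pool, the toy task, the tests) is a hypothetical stand-in, not the paper's experimental setup.

```python
# Budget-reallocation sketch for code generation (illustrative only):
# draw k candidates from a "small model" and keep the one that passes
# the most unit tests, instead of trusting a single generation.
import random

CANDIDATE_POOL = [  # stand-ins for small-model samples
    "def add(a, b):\n    return a + b",
    "def add(a, b):\n    return a - b",
    "def add(a, b):\n    return abs(a) + abs(b)",
]

TESTS = [((1, 2), 3), ((-1, 1), 0), ((0, 0), 0)]

def generate_candidate(rng: random.Random) -> str:
    """Hypothetical stand-in for sampling from a small code model."""
    return rng.choice(CANDIDATE_POOL)

def score(src: str) -> int:
    """Number of unit tests the candidate passes."""
    namespace: dict = {}
    exec(src, namespace)  # fine for trusted toy snippets
    fn = namespace["add"]
    return sum(fn(*args) == expected for args, expected in TESTS)

def best_of_k(k: int = 10, seed: int = 0) -> str:
    rng = random.Random(seed)
    candidates = [generate_candidate(rng) for _ in range(k)]
    return max(candidates, key=score)

if __name__ == "__main__":
    print(best_of_k())
```
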
איל נוה - Eyal Naveh (@eyalnaveh1) 's Twitter Profile Photo

Netanyahu, I am tugging at your coat lapel. Stop the madness, or you will be remembered as the one who brought it about. This is the last chance to make a U-turn and, instead of tearing Israel apart, to look after its future.

Yonatan Bitton (@yonatanbitton) 's Twitter Profile Photo

1/4 🧩 Excited to share our new paper "Visual Riddles"! We explore how small visual details can greatly impact understanding, providing a rigorous test for both visual comprehension and world knowledge factuality. 🧵

Gili Lior (@gililior) 's Twitter Profile Photo

Exciting news! I'll present my poster at #ACL2024 about unsupervised document structure extraction tomorrow (Aug. 12th) at 12:45 PM 🕒 Come say hi and let's chat over the paper! arxiv.org/pdf/2402.13906 More details below ⬇️ w/ Gabriel Stanovsky (((ل()(ل() 'yoav))))👾 Ai2 HUJI NLP

LTX Studio (@ltxstudio) 's Twitter Profile Photo

Remember when we partnered with award-winning filmmaker Ari Folman? Check out how he used LTX Studio throughout his project to visualize his concept in the thread below 🧵⬇️ Make sure to follow our posts later today for a big announcement coming soon 👀 x.com/LTXStudio/stat…

Gil Dickmann (@gildickmann) 's Twitter Profile Photo

Sorry, Carmel. Sorry we didn't stop it while it was still possible. Sorry we let them kill you. I wish you could have seen and heard us. I wish that, even though you witnessed with your own eyes the horrific murder of your mother Kinneret, you had learned that your father Eshel, your brothers Alon and Or, your sister-in-law Yarden and your niece Geffen survived. I wish you could have seen how your friends fought to bring you back.

Omri Avrahami (@omriavr) 's Twitter Profile Photo

[1/7] 📜 I can finally share that our recent @NVIDIA project DiffUHaul: A Training-Free Method for Object Dragging in Images has been accepted to #SIGGRAPHAsia2024 🎉. Project Page: omriavrahami.com/diffuhaul/

Leshem Choshen ๐Ÿค–๐Ÿค— (@lchoshen) 's Twitter Profile Photo

Human feedback is critical for aligning LLMs, so why don't we collect it in the open ecosystem? 🧐 We (15 orgs) gathered the key issues and next steps. Envisioning a community-driven feedback platform, like Wikipedia: alphaxiv.org/abs/2408.16961 🧵
