Oren Sultan
@oren_sultan
AI Researcher @Lightricks, CS PhD Candidate #AI #NLP @HebrewU, advised by @HyadataLab 🇮🇱 | prev. @TU_Muenchen 🇩🇪 @UniMelb 🇦🇺
ID: 1423192726670135300
http://www.orensultan.com 05-08-2021 08:02:52
502 Tweets
790 Followers
624 Following
Done with the conference NAACL HLT 2024 ✅ Back to travel ✈️ Mexico City -> Cancún
Happy to share a new benchmark: SEAM - A Stochastic Evaluation Approach for Multi-document tasks. Paper: arxiv.org/pdf/2406.16086 Website: seam-benchmark.github.io Code: github.com/seam-benchmark… w/ Avi Caciularu, Arie Cattan, Shahar Levy, Ori Shapira, Gabriel Stanovsky
Human feedback is critical for language model development, but collecting it is costly. We find that users naturally include feedback when interacting with chat models, and we can automatically extract it! arxiv.org/abs/2407.10944 w/ Leshem Choshen, Omri Abend 🧵
Which is better, running a 70B model once, or a 7B model 10 times? The answer might be surprising! Presenting our new Conference on Language Modeling paper: "The Larger the Better? Improved LLM Code-Generation via Budget Reallocation" arxiv.org/abs/2404.00725 1/n
Exciting news! I'll present my poster at #ACL2024 about unsupervised document structure extraction tomorrow (Aug. 12th) at 12:45 PM. Come say hi and let's chat about the paper! arxiv.org/pdf/2402.13906 More details below ⬇️ w/ Gabriel Stanovsky, (((ل()(ل() 'yoav))))👾, Ai2, HUJI NLP
ืกืืืื, ืืจืืื. ืกืืืื ืฉืื ืขืฆืจื ื ืืฉืขืื ืืื ืืคืฉืจ. ืกืืืื ืฉื ืชื ื ืืื ืืืจืื ืืืชื. ืืืืืื ืฉืจืืืช ืืฉืืขืช ืืืชื ื. ืืืืืื ืฉืืืจืืช ืฉืจืืืช ืืขืื ืืื ืืช ืืจืฆื ืื ืืจื ืฉื ืืื ืื ืจืช, ืืืืืช ืฉืืื ืืฉื ืืืืืื ืืืื ืืืืจ, ืืืืกื ืฉืื ืืจืื ืืืืืืื ืืช ืฉืื ืืคื, ืฉืจืื. ืืืืืื ืฉืจืืืช ืืื ืืืืจืืช ืฉืื ื ืืืงื ืืื ืฉืชืืืจื