Deniz Yuret (@denizyuret) 's Twitter Profile
Deniz Yuret

@denizyuret

@kuisaicenter founding director

ID: 2412037924

Link: http://www.denizyuret.com · Joined: 15-03-2014 06:43:50

408 Tweets

3.3K Followers

211 Following

@HinczewskiLab (@hinczewskilab) 's Twitter Profile Photo

Check out our new preprint on the connections between machine learning and nonequilibrium physics: arxiv.org/abs/2306.03521. Shishir Adhikari started this as his last PhD work, and it has grown into a fun collaboration with Alkan Kabakcioglu, Alex Strang, and Deniz Yuret (1/n)

Jacob Andreas (@jacobandreas) 's Twitter Profile Photo

Incredibly proud of Ekin Akyürek for receiving the lexical semantics area award at ACL 2023. Come see his talk today at 4:15 in Pier 2/3! virtual2023.aclweb.org/paper_P2367.ht…

Aykut Erdem (@aykuterdemml) 's Twitter Profile Photo

🎉 Exciting news! Our paper “CLIP-Guided StyleGAN Inversion for Text-Driven Real Image Editing" has been accepted to ACM Trans. on Graphics and will be presented at SIGGRAPH Asia ➡️ Hong Kong 2023. Co-authored with Canberk Baykal, Abdul Basit Anees, Duygu Ceylan, Erkut Erdem and Deniz Yuret. 1/4

KUIS AI (@kuisaicenter) 's Twitter Profile Photo

📢✨Next Tue (Nov 14) at 10:00 am, we'll have Kyunghyun Cho *in person* at KUIS AI Center, Koç University: "Beyond Test Accuracies for Studying Deep Neural Networks". For registration and details: ai-info@ku.edu.tr or just DM! #kuisaitalks #ArtificialIntelligence

Ali Safaya (@ali_safaya) 's Twitter Profile Photo

🎉 We are excited to announce the release of Kanarya 2B and Kanarya 0.7B, the latest pre-trained Turkish language models. 🎉

studioberlin (@ipnberlin) 's Twitter Profile Photo

youtu.be/p5xWWrS8XLA?si… The past, present, and future of artificial intelligence. Prof. Dr. Deniz Yüret, Faculty Member at Koç University | Director of the İşbank Artificial Intelligence Application and Research Center. studioberlin Koç Üniversitesi #yapayzeka #DeepLearning #DenizYüret Deniz Yuret KUIS AI @kuisaicenter

Semih Yagcioglu (@semihyagcioglu) 's Twitter Profile Photo

1. 🧵🎉 Excited to share that our paper "Sequential Compositional Generalization in Multimodal Models" is accepted as a long paper at #NAACL2024! 🌟 We'll be presenting our findings in Mexico City this June (NAACL HLT 2024). Dive into the full details here 👇 - Paper:

Emre Can Acikgoz (@emrecanacikgoz) 's Twitter Profile Photo

🎉 Excited to share our new work: “Bridging the Bosphorus: Advancing Turkish Large Language Models through Strategies for Low-Resource Language Adaptation and Benchmarking”! #AI #NLProc #TurkishNLP 🇹🇷 🚀 📄Paper: arxiv.org/abs/2405.04685 🌐Website: emrecanacikgoz.github.io/Bridging-the-B… (1/7)

Gözde Gül Sahin (@gozde_gul_sahin) 's Twitter Profile Photo

We are looking for people with a STEM education to take on paid work in our project! By doing online annotation you can earn 300 TL per hour, up to 3000 TL in total, in Migros gift vouchers. Could you share this with anyone who might be interested?

Deniz Yuret (@denizyuret) 's Twitter Profile Photo

Have you ever seen a learning curve that looks like a step function? It turns out a few hundred negative examples flip a switch inside an LLM and give a discrete jump in accuracy. "How much do LLMs learn from negative examples?" (arxiv.org/abs/2503.14391) with Shadi Hamdan.

Rohan Paul (@rohanpaul_ai) 's Twitter Profile Photo

Training on wrong answers outpaces training on correct ones. 10 times more learning emerges from plausible errors than from truths. Large language models refine their accuracy slowly when they learn only from correct examples. This paper introduces Likra, which trains one

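Likra's actual objective is defined in the paper (arxiv.org/abs/2503.14391); as a purely illustrative sketch of why plausible errors can carry a stronger signal than correct answers, the snippet below contrasts a standard likelihood loss with a generic unlikelihood-style penalty on a wrong answer. The function names and numbers here are hypothetical, not taken from the paper.

```python
import math

def positive_loss(p_correct: float) -> float:
    """Standard fine-tuning signal: negative log-likelihood of the correct answer."""
    return -math.log(p_correct)

def negative_loss(p_wrong: float) -> float:
    """Unlikelihood-style signal on a negative example: penalize the
    probability mass the model assigns to a plausible-but-wrong answer."""
    return -math.log(1.0 - p_wrong)

# A model already at 90% on the right answer gains little from one more
# correct example, while a 40% belief in a plausible error yields a
# comparatively large corrective signal.
print(round(positive_loss(0.9), 3))  # ~0.105
print(round(negative_loss(0.4), 3))  # ~0.511
```

The asymmetry above is one intuition for why a small number of well-chosen negative examples could produce an outsized jump in accuracy.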
Rohan Paul (@rohanpaul_ai) 's Twitter Profile Photo

This plot compares how accuracy on the ARC-Challenge benchmark improves as more examples are used for two training methods. The blue line shows supervised fine-tuning using only correct question-answer pairs. Its accuracy rises slowly from about 60% with very few examples to
