Onur Güngör (@onurgu_ml)'s Twitter Profile
Onur Güngör

@onurgu_ml

Part-time faculty - Bogazici University Comp. Eng.,
Data scientist and ML engineer

ID: 3303819562

Link: https://www.cmpe.boun.edu.tr/~onurgu/ · Joined: 30-05-2015 15:15:11

1.1K Tweets

991 Followers

1.1K Following

merve (@mervenoyann)'s Twitter Profile Photo

TURNA: the biggest Turkish encoder-decoder model to date, based on the UL2 architecture, with 1.1B params 🐦 😍 The researchers also released models fine-tuned on various downstream tasks including text categorization, NER, summarization, and more! 🤯 Great models Onur Güngör
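Since TURNA follows the UL2 recipe, a toy sketch of the span-corruption denoising objective that model family trains on may help. This is an illustrative simplification (word-level tokens, T5-style sentinel names), not TURNA's actual training code:

```python
import random

def span_corrupt(tokens, mask_ratio=0.15, max_span_len=3, seed=0):
    """Simplified T5/UL2-style span corruption: replace random spans in the
    input with sentinel tokens and collect the masked spans as the target."""
    rng = random.Random(seed)
    n = len(tokens)
    n_to_mask = max(1, int(n * mask_ratio))
    masked = set()
    while len(masked) < n_to_mask:
        start = rng.randrange(n)
        for i in range(start, min(n, start + max_span_len)):
            masked.add(i)
    inp, tgt, sentinel = [], [], 0
    i = 0
    while i < n:
        if i in masked:
            # Replace the whole contiguous masked span with one sentinel.
            inp.append(f"<extra_id_{sentinel}>")
            tgt.append(f"<extra_id_{sentinel}>")
            while i < n and i in masked:
                tgt.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inp.append(tokens[i])
            i += 1
    return inp, tgt

tokens = "TURNA is a Turkish encoder decoder language model".split()
inp, tgt = span_corrupt(tokens)
```

The encoder sees `inp`, and the decoder learns to emit `tgt`; UL2 mixes several such denoisers with different masking ratios and span lengths.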

Cem Say (@say_cem)'s Twitter Profile Photo

The project team of the Language Modeling Group at the Text Analytics and BioInformatics Lab (TABILAB), Boğaziçi University Computer Engineering Department, has completed the first stage of its Turkish language model, named TURNA, and released it on Huggingface: (huggingface.co/spaces/boun-ta…)

Onur Keleş (@onr_kls)'s Twitter Profile Photo

Good news! Our LLaMA-2-Econ model with Ömer Turan Bayraklı was accepted at inetd_org by İÜ on Feb 24 🎉 We used supervised fine-tuning to adapt the model to economics papers for academic tasks, surpassing previous models. Special thanks to Onur Güngör for the feedback!

Onur Keleş (@onr_kls)'s Twitter Profile Photo

We will host Onur Güngör from Boğaziçi Üni. and Udemy with his talk “TURNA: A Turkish Encoder-Decoder Language Model for Enhanced Understanding and Generation” on March 4 at 8 PM (GMT+3)!

Onur Güngör (@onurgu_ml)'s Twitter Profile Photo

A statue. We didn't like it because it isn't beautiful. It's an insult to Anatolia's sculpting tradition. Compare with the ones below; fine, not everyone can be Bernini, but why is it this bad!

Onur Güngör (@onurgu_ml)'s Twitter Profile Photo

Preserving, sharing, and finding the tools and data produced during Turkish NLP research over the long term is very hard. TULAP solves this problem for Cmpe! haberler.bogazici.edu.tr/tr/haber/turkc… tulap.cmpe.boun.edu.tr/home

Onur Güngör (@onurgu_ml)'s Twitter Profile Photo

Gemini's "Summarize this email" feature in Gmail is a nice example of how AI will quickly affect our daily lives.

thaddeus e. grugq (@thegrugq)'s Twitter Profile Photo

The xz backdoor was the final part of a campaign that spanned two years of operations. These operations were predominantly HUMINT style agent operations. There was an approach that lasted months before the Jia Tan persona was well positioned to be given a trusted role.

Onur Güngör (@onurgu_ml)'s Twitter Profile Photo

It presents nice data on the continued-pretraining vs. from-scratch comparison. Thanks for sharing this fine work. It left me wanting a qualitative comparison: even though they perform these tasks well, is there a difference in their Turkish knowledge?

yobibyte (@y0b1byte)'s Twitter Profile Photo

New blog! Notebooks are the McDonald's of Code. You can come to McDonald's and order a salad, but you won't. Same with notebooks: you can write NASA-production-grade software in a notebook, but most likely you won't. Notebooks make you lazy and encourage bad practices.

Abdullatif Köksal (@akoksal_)'s Twitter Profile Photo

📝 Many projects use translated benchmarks, but issues like translation errors and cultural irrelevance persist. To solve this, we've created a native Turkish benchmark, TurkishMMLU, and a leaderboard for LLMs in various setups, including no-CoT, CoT, and by difficulty.

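The per-difficulty splits the tweet mentions boil down to bucketed accuracy. A minimal sketch follows; the field names and example records are hypothetical illustrations, not the actual TurkishMMLU schema:

```python
from collections import defaultdict

def accuracy_by_difficulty(records):
    """Group predictions by difficulty label and compute per-bucket accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["difficulty"]] += 1
        if r["prediction"] == r["answer"]:
            correct[r["difficulty"]] += 1
    return {d: correct[d] / total[d] for d in total}

# Hypothetical records; real TurkishMMLU items carry question text and options.
records = [
    {"difficulty": "easy", "answer": "A", "prediction": "A"},
    {"difficulty": "easy", "answer": "B", "prediction": "B"},
    {"difficulty": "hard", "answer": "C", "prediction": "D"},
    {"difficulty": "hard", "answer": "E", "prediction": "E"},
]
scores = accuracy_by_difficulty(records)  # {'easy': 1.0, 'hard': 0.5}
```

Reporting per-bucket rather than overall accuracy is what lets a leaderboard show whether a model only handles the easy questions.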
Yann LeCun (@ylecun)'s Twitter Profile Photo

💥BOOM 💥 Llama 3.1 is out 💥 405B, 70B, 8B versions.

Main takeaways:
1. 405B performance is on par with the best closed models.
2. Open/free weights and code, with a license that enables fine-tuning, distillation into other models, and deployment anywhere.
3. 128k context
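A quick back-of-envelope shows why the 405B release is notable for deployment: assuming 2 bytes per parameter (bf16/fp16), just holding the weights takes:

```python
def weight_memory_gib(n_params, bytes_per_param=2):
    """Approximate memory to hold the weights alone (no activations, no KV cache)."""
    return n_params * bytes_per_param / 2**30

sizes = {"8B": 8e9, "70B": 70e9, "405B": 405e9}
for name, n in sizes.items():
    print(f"{name}: ~{weight_memory_gib(n):.0f} GiB in bf16")
```

So the 405B model needs roughly 750 GiB for weights alone, i.e. multiple accelerators, which is why the license's allowance for distillation into smaller models matters.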

Onur Güngör (@onurgu_ml)'s Twitter Profile Photo

I've seen headlights brighter than the sun in broad daylight, and most of these headlights aren't modified; they come that way from the factory. So the regulation should change, and TÜVTÜRK should inspect for this.

Jeremy Howard (@jeremyphoward)'s Twitter Profile Photo

I'll get straight to the point. We trained 2 new models. Like BERT, but modern. ModernBERT. Not some hypey GenAI thing, but a proper workhorse model, for retrieval, classification, etc. Real practical stuff. It's much faster, more accurate, longer context, and more useful. 🧵

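Retrieval, the first use case the thread names, reduces to ranking documents by embedding similarity. Here is a minimal cosine-similarity sketch with toy vectors standing in for real ModernBERT sentence embeddings (loading the actual model would require the transformers library and a checkpoint download):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank(query_vec, doc_vecs):
    """Return document indices sorted by descending similarity to the query."""
    sims = [cosine(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: sims[i], reverse=True)

# Toy 3-d vectors standing in for real sentence embeddings.
query = [1.0, 0.0, 0.0]
docs = [[0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]]
order = rank(query, docs)  # [0, 2, 1]: doc 0 is most similar to the query
```

With a real encoder, `query` and `docs` would come from running texts through the model and pooling the token embeddings; the ranking step is unchanged.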