Turhan Can Kargın 🐳 (@turhancan97) 's Twitter Profile
Turhan Can Kargın 🐳

@turhancan97

🎓 PhD student @JagiellonskiUni / @GMUMJU | 🧠 AI Enthusiast | 📸 Photography: unsplash.com/@tkargin | Tweets in TR/EN

ID: 2499626534

Link: https://bento.me/tkargin | Joined: 16-05-2014 19:10:34

1.1K Tweets

573 Followers

1.1K Following

elvis (@omarsar0) 's Twitter Profile Photo

NVIDIA is going big on foundation models!

They announced Llama Nemotron.

Summary:

- based on Llama
- include nano, super, and ultra versions
- optimized for building AI agents
- support more enterprise use cases
- #1 on MTEB for retrieval tasks
- #1 on ChatRAG
- #1 commercial
Microsoft Research (@msftresearch) 's Twitter Profile Photo

Microsoft researchers introduce MatterGen, a model that can discover new materials tailored to specific needs—like efficient solar cells or CO2 recycling—advancing progress beyond trial-and-error experiments. msft.it/6012U8zX8

Google AI (@googleai) 's Twitter Profile Photo

Today we introduce an AI co-scientist system, designed to go beyond deep research tools to aid scientists in generating novel hypotheses & research strategies. Learn more, including how to join the Trusted Tester Program, at goo.gle/417wJrA

kyutai (@kyutai_labs) 's Twitter Profile Photo

Meet MoshiVis🎙️🖼️, the first open-source real-time speech model that can talk about images! It sees, understands, and talks about images — naturally, and out loud. Voice interaction with a compact model endowed with visual understanding opens up new applications, from audio

ARC Prize (@arcprize) 's Twitter Profile Photo

Today we are announcing ARC-AGI-2, an unsaturated frontier AGI benchmark that challenges AI reasoning systems (same relative ease for humans).

Grand Prize: 85%, ~$0.42/task efficiency

Current Performance:
* Base LLMs: 0%
* Reasoning Systems: <4%
Sundar Pichai (@sundarpichai) 's Twitter Profile Photo

1/ Gemini 2.5 is here, and it’s our most intelligent AI model ever. Our first 2.5 model, Gemini 2.5 Pro Experimental is a state-of-the-art thinking model, leading in a wide range of benchmarks – with impressive improvements in enhanced reasoning and coding and now #1 on

OpenAI (@openai) 's Twitter Profile Photo

4o image generation has arrived.

It's beginning to roll out today in ChatGPT and Sora to all Plus, Pro, Team, and Free users.
Turhan Can Kargın 🐳 (@turhancan97) 's Twitter Profile Photo

Minecraft isn't just for gaming anymore—it's now an innovative AI benchmark! 🚀 MC-Bench lets AI models compete by coding Minecraft builds from prompts. It's a fun, fresh way to evaluate AI's creativity and coding skills! Check it out: mcbench.ai/about

Ahmad Al-Dahle (@ahmad_al_dahle) 's Twitter Profile Photo

Introducing our first set of Llama 4 models!

We’ve been hard at work doing a complete re-design of the Llama series. I’m so excited to share it with the world today and mark another major milestone for the Llama herd as we release the *first* open source models in the Llama 4
Hongyu Wang (@realhongyu_wang) 's Twitter Profile Photo

Excited to introduce BitNet b1.58 2B4T — the first large-scale, native 1-bit LLM🚀🚀

BitNet achieves performance on par with leading full-precision LLMs — and it’s blazingly fast⚡️⚡️uses much lower memory🎉

Everything is open-sourced — run it on GPU or your MacBook 🖥️⚙️
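The "1.58-bit" name refers to ternary weights in {-1, 0, +1} (log2(3) ≈ 1.58 bits per weight). A minimal NumPy sketch of the absmean quantization scheme described in the BitNet b1.58 paper — an illustration of the idea, not the released implementation:

```python
import numpy as np

def absmean_ternary_quantize(w: np.ndarray):
    """Quantize a weight matrix to ternary values {-1, 0, +1}.

    Follows the absmean scheme described in the BitNet b1.58 paper:
    scale by the mean absolute weight, then round and clip.
    """
    gamma = np.mean(np.abs(w)) + 1e-8          # absmean scale factor
    w_q = np.clip(np.round(w / gamma), -1, 1)  # ternary weights
    return w_q.astype(np.int8), gamma          # dequantize as w_q * gamma

# Example: quantize a small random layer
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
w_q, gamma = absmean_ternary_quantize(w)
print(w_q)  # entries are only -1, 0, or +1
```

Ternary weights turn matrix multiplication into additions and subtractions, which is where the speed and memory savings come from.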
Yujia Qin@ICLR2025 (@tsingyoga) 's Twitter Profile Photo

Introducing UI-TARS-1.5, a vision-language model that beats OpenAI Operator and Claude 3.7 on GUI Agent and Game Agent tasks. We've open-sourced a small-size version model for research purposes, more details can be found in our blog. TARS learns solely from a screen, but

AI at Meta (@aiatmeta) 's Twitter Profile Photo

🚀 Meta FAIR is releasing several new research artifacts on our road to advanced machine intelligence (AMI). These latest advancements are transforming our understanding of perception. 1️⃣ Meta Perception Encoder: A large-scale vision encoder that excels across several image &

Physical Intelligence (@physical_int) 's Twitter Profile Photo

We got a robot to clean up homes that were never seen in its training data! Our new model, π-0.5, aims to tackle open-world generalization. We took our robot into homes that were not in the training data and asked it to clean kitchens and bedrooms. More below⤵️

AI at Meta (@aiatmeta) 's Twitter Profile Photo

Introducing Meta Perception Language Model (PLM): an open & reproducible vision-language model tackling challenging visual tasks. Learn more about how PLM can help the open source community build more capable computer vision systems. Read the research paper, and download the

Turhan Can Kargın 🐳 (@turhancan97) 's Twitter Profile Photo

Just returned from an inspiring weekend at GHOST Day: Applied Machine Learning Conference 2025 in Poznań, Poland 🇵🇱🤖!

It felt great to attend an event at my alma mater, Politechnika Poznańska 🎓, and revisit the city that holds so many memories for me.

The conference was filled with insightful talks and engaging
Deedy (@deedydas) 's Twitter Profile Photo

Bytedance just dropped a super impressive model that can make fast, targeted image edits with just text.

BAGEL is only a ~14B image + text model (7B active) that punches far above its size. It's also fully open weight.

Now that's the power of a small team!
Shreyas Gite (@shreyasgite) 's Twitter Profile Photo

Not all augmentations are equal -> which visual nuisance variables actually cause robotic imitation-learning policies to fail? One of my favorite papers, from Annie Xie, Lisa Lee, Chelsea Finn & Ted Xiao. The authors create a controlled benchmark that lets them toggle seven factors

ML in PL (@mlinpl) 's Twitter Profile Photo

Machine Learning Summer School on Drug and Materials Discovery kicked off yesterday!

The event began with opening remarks from the organizers: Jagiellonian University, GMUM and MLinPL. It was a true pleasure to welcome all participants and see the community come together in
ML in PL (@mlinpl) 's Twitter Profile Photo

Halfway through MLSS^D 2025 - and what a journey it's been so far!

Each day has been filled with intensive lectures and hands-on sessions focused on machine learning for drug and materials discovery. But beyond the schedule, what truly stands out is the energy and engagement of