Aryan kolapkar (e/acc) (@aryankolapkar) 's Twitter Profile
Aryan kolapkar (e/acc)

@aryankolapkar

IITB '23 | AI builder | I use twitter as my note keeping app
text2shorts.com
thescript.ink

ID: 4516353076

Link: https://text2shorts.com · Joined: 17-12-2015 18:03:48

740 Tweets

37 Followers

368 Following

SkalskiP (@skalskip92) 's Twitter Profile Photo

this might be the coolest blog post I've ever written.

I dove deep into:
- player detection with RF-DETR
- player tracking with SAM2
- team clustering with SigLIP and K-means
- number recognition with SmolVLM2 and ResNet

I hope you'll like it.

link: blog.roboflow.com/identify-baske…
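The team-clustering step lends itself to a quick sketch: embed each player crop (SigLIP in the post), then cluster the embeddings with K-means (k = 2 teams). The toy 2-D vectors and minimal K-means below are illustrative only, not Roboflow's actual code:

```python
# Minimal K-means sketch for team clustering. In the real pipeline each
# point would be a SigLIP embedding of a player crop (hundreds of dims);
# here we use toy 2-D vectors and a deterministic init for reproducibility.
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k=2, iters=20):
    centers = [points[0], points[-1]]  # naive init: first and last point
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each embedding to its nearest center
        labels = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in points]
        # recompute each center as the mean of its cluster
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

# Two visually distinct "teams" in embedding space
crops = [[0.1, 0.2], [0.0, 0.3], [0.2, 0.1],   # team A jerseys
         [0.9, 1.0], [1.0, 0.8], [0.8, 0.9]]   # team B jerseys
labels = kmeans(crops)
```

Because jersey colors make the two clusters well separated in embedding space, even this naive initialization converges to the correct team split.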

SkalskiP (@skalskip92) 's Twitter Profile Photo

3+ years of making computer vision tutorials

YOLO11, RT-DETR, SAM 2, PaliGemma 2, Basketball AI, Qwen3-VL, and many others. all with links to papers, repos, blog posts, and YouTube tutorials. all in one place.

link: github.com/roboflow/noteb…
Ai2 (@allen_ai) 's Twitter Profile Photo

Announcing Olmo 3, a leading fully open LM suite built for reasoning, chat, & tool use, and an open model flow—not just the final weights, but the entire training journey.
Best fully open 32B reasoning model & best 32B base model. 🧵
Clémentine Fourrier 🍊 (@clefourrier) 's Twitter Profile Photo

Hey twitter! 

I'm releasing the LLM Evaluation Guidebook v2! 
Updated, nicer to read, interactive graphics, etc!
huggingface.co/spaces/OpenEva…

After this, I'm off: I'm taking a sabbatical to go hike with my dogs :D 
(back at Hugging Face in Dec *2026*)

See you all next year!
swyx (@swyx) 's Twitter Profile Photo

One thing I'm finding from NeurIPS chatter is that SOTA-competitive open weights + RL fine-tuning is presenting an incredibly strong, compelling business opportunity for lots and lots of folks.

The subagent/domain-tuned stuff that the cog team did with swe-grep and swe-1.5 models (and

AC&E (@appliedcompute) 's Twitter Profile Photo

RL is a powerful mechanism for training company-specific models on their unique work and data. This is what we do at Applied Compute.

A key challenge is how to make RL efficient, because we need runs to be fast (delivered in days), cheap (scalable unit economics), and predictable

Muyu He (@hemuyu0327) 's Twitter Profile Photo

On-policy distillation would revolutionize multi-turn tool-use training beyond RL, but neither Tinker nor TRL, which implements on-policy distillation, supports anything other than single-turn distillation.

We therefore took it upon ourselves and implemented this feature in native
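For context, on-policy distillation typically means sampling from the student and matching the teacher's distribution on those samples, e.g. via reverse KL. A toy, pure-Python sketch of that loss (not Tinker's or TRL's API; all names here are illustrative):

```python
import math

# On-policy distillation sketch: the student generates tokens, then is
# trained to match the teacher's next-token distribution at each sampled
# step via reverse KL, KL(student || teacher). Multi-turn training would
# repeat this across tool-call turns; this shows only the per-step loss.
def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reverse_kl(student_logits, teacher_logits):
    p = softmax(student_logits)  # student distribution
    q = softmax(teacher_logits)  # teacher distribution
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

# Identical distributions give zero divergence; diverging logits, positive.
same = reverse_kl([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
diff = reverse_kl([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
```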
NVIDIA AI Developer (@nvidiaaidev) 's Twitter Profile Photo

Top 5 AI Model Optimization Techniques for Faster, Smarter Inference
 
1️⃣ Post-Training Quantization (PTQ) – The fastest path to value. Compress models without retraining for instant latency and throughput wins.
2️⃣ Quantization-Aware Training (QAT) – Fine-tune for low precision
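A minimal sketch of the PTQ idea in 1️⃣ — symmetric int8 quantization of a weight vector with a single scale, no retraining. Toy values; real toolkits add calibration data and per-channel scales:

```python
# Post-training quantization (PTQ) sketch: map float weights to int8
# with one symmetric scale, then dequantize to see the rounding error.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.5, -1.27, 0.03, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

The compression win is that each weight now needs 1 byte instead of 4, and int8 matmuls run on faster hardware paths, at the cost of the small rounding error visible in `w_hat`.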
Boris Cherny (@bcherny) 's Twitter Profile Photo

I'm Boris and I created Claude Code. I wanted to quickly share a few tips for using Claude Code, sourced directly from the Claude Code team.

The way the team uses Claude is different than how I use it. Remember: there is no one right way to use Claude Code -- everyone's setup is

Joshua Gavin (@joshdgavin) 's Twitter Profile Photo

First movers bouta print.

Who wants to launch their low-ticket call funnel for this offer?

I'll place one of my Ascension Officers who can cook it for you.
Baifeng (@baifeng_shi) 's Twitter Profile Photo

Humans can see in high-res, high-FPS in real-time. Why can't VLMs?

Introducing AutoGaze: ViTs/VLMs "gaze" only at key video regions! Up to 4-100x token savings, 19x speedup, and enables scaling to 4K-res 1K-frame videos.

📄 arxiv.org/abs/2603.12254
🌐 autogaze.github.io
🤗
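The general principle behind token savings like this can be illustrated with a generic top-k token-pruning sketch — keep only the most salient patch tokens before the ViT. This is not AutoGaze's actual gating mechanism, just the rough idea of why dropping tokens cuts compute:

```python
# Generic token pruning: score patch tokens for saliency, keep the top-k,
# and preserve their spatial order. ViT attention cost scales roughly
# quadratically with token count, so keeping k of n tokens saves compute.
def top_k_tokens(tokens, scores, k):
    order = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    keep = sorted(order[:k])  # restore original (spatial) order
    return [tokens[i] for i in keep]

tokens = ["t0", "t1", "t2", "t3", "t4", "t5"]   # patch tokens
scores = [0.1, 0.9, 0.2, 0.8, 0.05, 0.7]        # hypothetical saliency
kept = top_k_tokens(tokens, scores, 3)
```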

Deedy (@deedydas) 's Twitter Profile Photo

Meta Harnesses is Autoresearch on steroids.

Something I've been exploring recently is to get long-running agents to hill-climb on a verifiable task to continuously improve without my intervention. Karpathy's Autoresearch did this pretty well on specific tasks, but this weekend I