Ankesh Bharti (@_feynon) 's Twitter Profile
Ankesh Bharti

@_feynon

systems researcher and technologist. building a personal software notebook / tiles.run / tilekit.dev. stewarding userandagents.com. he/him/25/bengaluru

ID: 992534564961599488

Link: https://ankeshbharti.com/
Joined: 04-05-2018 22:40:55

4.4K Tweets

325 Followers

1.1K Following

Sheel Mohnot (@pitdesi) 's Twitter Profile Photo

I’m often reminded of this letter from Lead Edge Capital.

Companies choose which data to present -- ranging from "cash profits" to "being voted the best place to work in our city."
SwiftWasm (@swiftwasm) 's Twitter Profile Photo

📣 SwiftWasm 6.1 is now available

Notable improvements:
• No custom patches needed - fully upstreamed
• Swift SDK-only distribution
• Code coverage support
• swift-testing support
• VSCode support

Check out the release blog post 👇 blog.swiftwasm.org/posts/6-1-rele…

Privy (@privy_io) 's Twitter Profile Photo

1/ Today, we're proud to announce that Stripe is acquiring Privy.

We couldn’t be more excited.

Privy will continue as an independent product – but now we’ll move faster, ship more, and serve you even better, so you can stay focused on your users.
.txt (@dottxtai) 's Twitter Profile Photo

Edge inference demands blazing-fast structured generation. Our technology delivers exactly that, and with grammars we can eliminate JSON overhead for even faster function calling.

Excited to announce our collaboration with Liquid AI, bringing this to edge devices!

Yixin Dong (@yi_xin_dong) 's Twitter Profile Photo

We’re excited to announce that XGrammar has partnered with Outlines! 🎉
XGrammar is now the grammar backend powering Outlines, enabling structured LLM generation with higher speed.

Check out Outlines — an amazing library for LLM structured text generation! 🚀
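
For context on what "structured generation" means here: decoding is constrained token by token so the output must match a schema, regex, or grammar, and XGrammar is the engine that now enforces those constraints inside Outlines. Below is a minimal sketch assuming the pre-1.0 Outlines Python API (outlines.models.transformers / outlines.generate.json); the model id and schema are illustrative placeholders, not details from the announcement.

# Sketch: schema-constrained generation with Outlines (pre-1.0 API assumed).
from pydantic import BaseModel
import outlines

class Weather(BaseModel):
    city: str
    temperature_c: float

# Load a small instruct model through the transformers backend (placeholder id).
model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")

# Build a generator whose decoding is constrained to valid Weather JSON;
# this constraint enforcement is the part a grammar backend like XGrammar speeds up.
generator = outlines.generate.json(model, Weather)

result = generator("Return the current weather in Bengaluru as JSON.")
print(result)  # a Weather instance parsed from schema-valid JSON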
mozilla.ai (@mozillaai) 's Twitter Profile Photo

New on the blog: our first-ever community guest post!

Baris Guler explores running AI agents fully in-browser, no APIs, no servers, powered by:
⚙️ WebLLM
🧱 WASM
🔁 WebWorkers

Multi-language, local-first, and privacy-respecting.

Full post: blog.mozilla.ai/3w-for-in-brow…

Aman Chadha (@i_amanchadha) 's Twitter Profile Photo

🧠 [Primer] On-device Transformers • od-xformer.aman.ai

- On-device transformers bring the power of LLMs and Encoder Models directly to mobile and edge hardware, overcoming constraints of latency, privacy, and connectivity while enabling real-time intelligence.
- This

Vaibhav (VB) Srivastav (@reach_vb) 's Twitter Profile Photo

🚨 Apple just released FastVLM on Hugging Face - 0.5, 1.5 and 7B real-time VLMs with WebGPU support 🤯

> 85x faster and 3.4x smaller than comparable sized VLMs
> 7.9x faster TTFT for larger models
> designed to output fewer output tokens and reduce encoding time for high

Awni Hannun (@awnihannun) 's Twitter Profile Photo

GPT-OSS uses MXFP4 quantization (which MLX now supports). 

There are two FP4 formats circulating right now: MXFP4 and NVFP4 (NV for Nvidia).

From looking at how GPT-OSS uses MXFP4, it is somewhat suboptimal. I'm thinking NVFP4 will be the more commonly used format in the
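
For readers comparing the two formats: MXFP4 packs 4-bit E2M1 values into blocks of 32 that share a single power-of-two (E8M0) scale, while NVFP4 uses blocks of 16 with an FP8 (E4M3) scale, allowing finer-grained scaling. A rough dequantization sketch in plain Python follows; it shows only the block arithmetic the format implies, not MLX's kernels, and the bit layout of the packed nibble is an assumption.

# Sketch of MXFP4 block dequantization: 32 E2M1 codes share one power-of-two scale.
# The 8 non-negative values representable by FP4 E2M1 (a sign bit mirrors this set).
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def dequant_mxfp4_block(codes, shared_exponent):
    """codes: 32 ints in [0, 15] (assumed layout: 1 sign bit + 3 magnitude bits).
    shared_exponent: the block's E8M0 scale, i.e. an integer power of two."""
    out = []
    for c in codes:
        sign = -1.0 if (c & 0x8) else 1.0
        mag = E2M1_VALUES[c & 0x7]
        out.append(sign * mag * (2.0 ** shared_exponent))
    return out

# Example: one block whose shared scale is 2**-2 = 0.25.
block = [0b0001, 0b1011, 0b0111, 0b0000] + [0] * 28   # 0.5, -1.5, 6.0, 0.0, ...
print(dequant_mxfp4_block(block, -2)[:4])             # [0.125, -0.375, 1.5, 0.0]

# NVFP4 differs mainly in block size (16) and scale type (FP8 E4M3 rather than a
# pure power of two), which is part of why its per-block scaling is finer-grained.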
Didier Lopes (@didier_lopes) 's Twitter Profile Photo

My next blog post is dropping this week, and it’s a much deeper dive than usual.

I’ll be walking through how I fine-tuned Microsoft’s Phi-3-mini-4k-instruct (3.8B) with LoRA on my Mac using MLX.

The experiment: exploring whether a 3.8B model that runs locally can be fine-tuned
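
As a rough sketch of what the local side of such an experiment can look like with mlx-lm: the snippet below loads a LoRA adapter on top of Phi-3-mini and generates from it. The adapter path and prompt are placeholders, and the assumption that training was driven by mlx_lm's LoRA tooling is a guess, not a detail from the post.

# Sketch: loading a locally trained LoRA adapter with mlx-lm (paths are placeholders).
from mlx_lm import load, generate

model, tokenizer = load(
    "microsoft/Phi-3-mini-4k-instruct",
    adapter_path="./adapters",  # directory a LoRA fine-tuning run wrote its weights to
)

# For an instruct model you would normally wrap this in the tokenizer's chat template.
prompt = "Summarize why a small local model is a good target for fine-tuning experiments."
print(generate(model, tokenizer, prompt=prompt, max_tokens=200))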
Georgi Gerganov (@ggerganov) 's Twitter Profile Photo

VS Code adds support for custom OAI-compatible endpoints

This is a big win for local AI as it allows us to use any local model provider without vendor lock-in. Big thanks to the VS Code devs and especially Isidor Nikolic for listening to the community feedback and adding this option!
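
The same OpenAI-compatible surface that VS Code now targets can be driven from any client, which is what makes it useful for local setups. A minimal sketch with the openai Python package pointed at a local llama.cpp llama-server (which serves /v1/chat/completions); the port, model name, and key below are placeholders for whatever your local server runs.

# Sketch: any OpenAI-compatible local endpoint works, e.g. llama.cpp's llama-server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # local server, so no vendor lock-in
    api_key="not-needed-locally",         # dummy value; a local server usually ignores it
)

resp = client.chat.completions.create(
    model="local-model",  # many local servers accept any name here
    messages=[{"role": "user", "content": "Say hello from a locally served model."}],
)
print(resp.choices[0].message.content)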
Shawn Lewis (@shawnup) 's Twitter Profile Photo

🧵 We acquired OpenPipe!

OpenPipe's ART framework makes it easy to beat foundation models with smaller open models on real problems, using reinforcement learning.
Josh Miller (@joshm) 's Twitter Profile Photo

The Browser Company just signed a merger agreement to be acquired. We will remain independent. Our focus is Dia.

I’ve written and rewritten this post more times than I’d like to admit, but what I keep coming back to is simple: the work continues, and we’re grateful for this