Oleksandr Besan🇺🇦 (@oleksandrbesan) 's Twitter Profile
Oleksandr Besan🇺🇦

@oleksandrbesan

👨🏻‍💼 / ⚙️ 🌍 /🐍 🐘🤖☁️ 🐳 ⚛️ 🏗️ 🐧/ 🎨 🧠 📚. 👺 ⛩️

ID: 3321066545

Link: https://github.com/OleksandrBesan · Joined: 12-06-2015 14:48:13

874 Tweets

171 Followers

357 Following

Simplifying Complexity (@simplifyinai) 's Twitter Profile Photo

Google has open-sourced the LangExtract Python library. It turns unstructured documents into structured data, with clear source references for each result. 100% open-source.

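The core idea — structured fields that each carry a reference back to where in the source they came from — can be sketched in plain Python. This is my own illustration of the concept, not LangExtract's actual API (the real library drives an LLM; here a regex stands in for the model):

```python
import re

def extract_with_sources(text, patterns):
    """Toy structured extraction: for each named field, return the matched
    value plus the character span it was found at, so every result can be
    traced back to its exact source location."""
    results = {}
    for field, pattern in patterns.items():
        m = re.search(pattern, text)
        if m:
            results[field] = {
                "value": m.group(1),
                "source_span": (m.start(1), m.end(1)),  # grounding info
            }
    return results

doc = "Invoice #4821 was issued on 2024-03-15 for a total of $1,250.00."
fields = {
    "invoice_id": r"Invoice #(\d+)",
    "date": r"issued on (\d{4}-\d{2}-\d{2})",
    "total": r"total of (\$\d[\d,]*\.\d{2})",
}
extracted = extract_with_sources(doc, fields)
```

Slicing `doc` with any returned `source_span` reproduces the extracted value, which is the "clear source references" property the tweet highlights.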
Jerry Liu (@jerryjliu0) 's Twitter Profile Photo

OCR-tuned VLMs are getting really good AND cheap, really quickly. This is a cool release from LightOn. Good and cheap models are starting to saturate OlmOCR-bench. There's a lot of greenfield opportunity in curating an even more complex, diversified benchmark (lots of complex

neuralamp (@neuralamp4ever) 's Twitter Profile Photo

AI-Induced Code Entropy: The Hidden Cost of Unchecked AI Acceleration

In the rush to adopt AI coding tools, teams celebrate 5×–10× faster feature delivery and skyrocketing velocity metrics. Yet beneath the surface, a subtler but more dangerous phenomenon is emerging: AI-induced

Google Cloud Tech (@googlecloudtech) 's Twitter Profile Photo

Recursive Language Models (RLMs) let agents manage 10M+ tokens by delegating tasks recursively. This Google Cloud Community Article explains why ADK was the perfect choice for re-implementing the original RLM codebase in a more enterprise-ready format → goo.gle/4kjT12E

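The recursive delegation pattern can be sketched in a few lines. This is a toy illustration of the idea, not the ADK re-implementation the article describes; `stub_llm` is a hypothetical stand-in for a real model call:

```python
def stub_llm(prompt: str) -> str:
    """Stand-in for a model call: 'summarizes' by keeping the first
    50 characters. A real RLM would invoke an actual LLM here."""
    return prompt[:50]

def rlm_answer(context: str, window: int = 200) -> str:
    """Recursive delegation sketch: if the context fits the model's
    window, answer directly; otherwise split it into chunks, delegate
    each chunk to a recursive sub-call, and recurse over the combined
    partial results until they fit a single window."""
    if len(context) <= window:
        return stub_llm(context)
    chunks = [context[i:i + window] for i in range(0, len(context), window)]
    partials = [rlm_answer(c, window) for c in chunks]
    return rlm_answer("\n".join(partials), window)

long_context = "token " * 5000  # ~30k characters, far beyond one window
answer = rlm_answer(long_context)
```

No single call ever sees more than `window` characters, yet the top-level call effectively processes the whole context — the mechanism that lets RLM agents handle 10M+ tokens.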
Illia (root.near) (🇺🇦, ⋈) (@ilblackdragon) 's Twitter Profile Photo

Péter Szilágyi: The answer is to not let the LLM touch secrets at all. The harness should keep secrets controlled, away from both the tools and the inference engine, with a policy around the secret <> domain mapping. Working on this here: github.com/nearai/ironclaw
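One common way to build such a harness — my own sketch, not ironclaw's actual design — is to swap real secret values for opaque placeholders before the model ever sees any text, and resolve a placeholder only when a tool call targets a domain the policy permits for that secret:

```python
SECRETS = {"GITHUB_TOKEN": "ghp_real_secret_value"}
POLICY = {"GITHUB_TOKEN": {"api.github.com"}}  # secret <-> allowed domains

def redact(text: str) -> str:
    """Replace raw secret values with placeholders before inference,
    so the LLM never sees the real credentials."""
    for name, value in SECRETS.items():
        text = text.replace(value, f"{{{{secret:{name}}}}}")
    return text

def resolve_for_tool(text: str, domain: str) -> str:
    """Substitute placeholders back only at tool-execution time, and
    only if the policy allows this secret for the target domain."""
    for name, value in SECRETS.items():
        placeholder = f"{{{{secret:{name}}}}}"
        if placeholder in text:
            if domain not in POLICY.get(name, set()):
                raise PermissionError(f"{name} not allowed for {domain}")
            text = text.replace(placeholder, value)
    return text

prompt = redact(
    "curl -H 'Authorization: ghp_real_secret_value' https://api.github.com/user"
)
# The model only ever sees "{{secret:GITHUB_TOKEN}}" in the prompt.
```

The key property: inference operates entirely on placeholders, and substitution happens outside the model, gated by the secret <> domain policy.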

7213 | Ejaaz (@cryptopunk7213) 's Twitter Profile Photo

wait this is fucking AWESOME lol

- dude fused an AI model *directly* into silicon which means this thing answers before you've even hit enter
- processes 17,000 tokens PER SECOND 😂
- for context, openai's quickest model does 1000 tokens per sec (i believe) - this is 17X faster -

Muratcan (@koylanai) 's Twitter Profile Photo

I build AI for a living. I believe in what we're building. But this kind of rhetoric makes my work harder and more dangerous. Sam Altman, comparing human development to model training is tone-deaf and strategically reckless. People are losing jobs. They're getting angry. They're

Sebastian Völkl (@basti_vkl) 's Twitter Profile Photo

built my own personal assistant device that runs OpenClaw. I was curious what the smallest form factor could be that fits in my pocket, so I wanted to use the Pi Zero W. Works via push to talk -> transcribe -> send to OpenClaw, then streams the response back.
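The described loop can be sketched as a simple pipeline. Everything here is a hypothetical stand-in — `transcribe_audio` and `openclaw_stream` are stubs I invented for illustration, not a real OpenClaw or speech-to-text API:

```python
def transcribe_audio(audio: bytes) -> str:
    """Stub speech-to-text; a real device would run Whisper or similar."""
    return audio.decode("utf-8")  # pretend the audio *is* the transcript

def openclaw_stream(prompt: str):
    """Stub assistant backend that streams its reply chunk by chunk."""
    for word in f"You said: {prompt}".split():
        yield word + " "

def push_to_talk(audio: bytes) -> str:
    """Push to talk -> transcribe -> send -> stream the reply back."""
    transcript = transcribe_audio(audio)
    reply = ""
    for chunk in openclaw_stream(transcript):
        reply += chunk  # on the device, each chunk is played as it arrives
    return reply.strip()

response = push_to_talk(b"what's the weather")
```

Streaming matters on hardware as constrained as a Pi Zero W: the device can start playing the response while the rest is still arriving, instead of waiting for the full reply.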