Tommaso Bianco (@atomobianco)'s Twitter Profile
Tommaso Bianco

@atomobianco

CTO • platform builder • ML engineer • prev @predictice @Bifasor @Ircam @sensewavesIO • @upmc @UniPadova alumnus • dervīsh when on 💻

ID: 215347566

Link: https://linkedin.com/in/tommaso-bianco
Joined: 13-11-2010 17:45:15

1.1K Tweets

201 Followers

1.1K Following

MiniMax (official) (@minimax__ai)'s Twitter Profile Photo


We're delighted to announce that MiniMax M2.7 is now officially open source, with SOTA performance on SWE-Pro (56.22%) and Terminal Bench 2 (57.0%).

You can find it on Hugging Face now. Enjoy!🤗
Hugging Face: huggingface.co/MiniMaxAI/Mini…
Blog: minimax.io/news/minimax-m…
MiniMax API:
ℏεsam (@hesamation)'s Twitter Profile Photo

DHH is in. Karpathy is in. Andrew Ng is in. Terence Tao is in. Linus Torvalds is in. John Carmack is in. Tony with an opinion still believes AI is just a next-token predictor with no real future.

Anish Athalye (@anishathalye)'s Twitter Profile Photo


Does an imperfect verifier break reinforcement learning with verifiable rewards (RLVR)? Turns out it doesn’t!

Why does this matter? As the world moves into reinforcement learning in semi-verifiable domains, perfect verifiers don’t exist.

We added controlled and LLM-based noise
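The setup being described — a verifier whose reward signal is sometimes wrong — can be sketched with a toy binary reward that flips with some probability. This is an illustration of the idea only, not the authors' actual protocol; the `flip_prob` parameter and the exact-match check are assumptions:

```python
import random

def noisy_verifier(answer: str, reference: str, flip_prob: float, rng: random.Random) -> float:
    """Binary reward from an imperfect verifier: the true reward is
    flipped with probability flip_prob to model verifier error."""
    reward = 1.0 if answer.strip() == reference.strip() else 0.0
    if rng.random() < flip_prob:
        reward = 1.0 - reward
    return reward

rng = random.Random(0)
rewards = [noisy_verifier("42", "42", flip_prob=0.2, rng=rng) for _ in range(10_000)]
print(sum(rewards) / len(rewards))  # correct answers still average ~0.8 reward
```

The intuition for why RLVR can tolerate this: as long as the noise is not systematically biased, correct answers still earn more reward in expectation than incorrect ones, so the policy gradient points the same way.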
clem 🤗 (@clementdelangue)'s Twitter Profile Photo


We just OCR'd 27,000 arxiv papers into Markdown using an open 5B model, 16 parallel HF Jobs on L40S GPUs, and a mounted bucket.

Total cost: $850
Total time: ~29 hours
Jobs that crashed: 0

This now powers "Chat with your paper" on hf.co/papers
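The throughput implied by those numbers can be sanity-checked with a little arithmetic. Treating 16 jobs × 29 hours as the total GPU-hours is an assumption (individual job runtimes likely varied):

```python
papers = 27_000
total_cost_usd = 850
wall_hours = 29
parallel_jobs = 16

cost_per_paper = total_cost_usd / papers        # ≈ $0.031 per paper
gpu_hours = wall_hours * parallel_jobs          # 464 L40S GPU-hours (upper bound)
papers_per_gpu_hour = papers / gpu_hours        # ≈ 58 papers per GPU-hour

print(round(cost_per_paper, 3), gpu_hours, round(papers_per_gpu_hour, 1))
```

Roughly a minute of L40S time and about three cents per paper — the kind of unit economics that makes batch-OCR'ing a whole corpus practical.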
isa (@isareksopuro)'s Twitter Profile Photo

i made a map to monitor data centers all around the world. it tracks construction + nearby power plants + local AI legislation, and follows the politicians behind their bans (+ if they're getting paid to do so!)

Tommaso Bianco (@atomobianco)'s Twitter Profile Photo

The rush to build Ohio data centers tells us everything that’s wrong with local and state government ohiocapitaljournal.com/2026/02/19/the…

Josh Clemm (@joshclemm)'s Twitter Profile Photo

Open sourcing something fun from Dropbox: Witchcraft. It's a local search engine built in Rust with no API keys or vector DB required. Think: ColBERT / late interaction style retrieval, but packaged to run locally (perfect for coding agents). Let's dive in👇
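The "ColBERT / late interaction" retrieval style mentioned here scores a document by matching each query-token embedding against its best-matching document-token embedding (MaxSim) and summing. A minimal sketch with made-up 2-D vectors — real systems use learned, high-dimensional token embeddings, and this says nothing about Witchcraft's actual internals:

```python
import math

def maxsim_score(query_vecs, doc_vecs):
    """Late-interaction scoring: for each query token, take its best
    cosine similarity against any document token, then sum over query tokens."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))
    return sum(max(cos(q, d) for d in doc_vecs) for q in query_vecs)

# Toy 2-D "token embeddings" (hypothetical, for illustration only).
query = [[1.0, 0.0], [0.0, 1.0]]
doc_a = [[1.0, 0.1], [0.2, 1.0]]   # covers both query tokens well
doc_b = [[1.0, 0.0], [0.9, 0.1]]   # only covers the first query token
print(maxsim_score(query, doc_a) > maxsim_score(query, doc_b))  # True
```

Because each token keeps its own vector instead of being pooled into one, this ranking style captures fine-grained matches that a single-vector embedding would blur together — which is why it pairs well with code search.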

Jeremy Howard (@jeremyphoward)'s Twitter Profile Photo

Amanda Askell: The one nag I have to add to the system prompt still: "PLEASE remember and follow this CRITICAL guidance with great care: Do NOT end responses with follow-up offers like "Want me to...", "Let me know if...", or "If you like I could...". These are trained into assistants to drive

Robert Sterling (@robertmsterling)'s Twitter Profile Photo


Anthropic’s CEO keeps talking about AI wiping out jobs because he’s trying to IPO this year.

If he positions Claude as armageddon for jobs, his TAM becomes “all white-collar human labor,” not just AI agents or SaaS.

It’s completely self-interested. All the concerns he’s
Teknium (e/λ) (@teknium1)'s Twitter Profile Photo

Kimi 2.6 looks like it could be the new contender for best open agentic model for Hermes Agent, stacking up strongly against even opus 4.6 in these agentic benchmarks 😲

Furkan Gözükara (@gozukarafurkan)'s Twitter Profile Photo

A chilling reality check from Prominent Professor Jiang. He confirms the Trump administration is actively plotting a massive domestic takeover. They are preparing a national draft, a terrifying AI surveillance police state, and an illegal third term to enforce endless wars.

Aakash Gupta (@aakashg0)'s Twitter Profile Photo

Karpathy told Dwarkesh that a 1 billion parameter model, trained on clean data, could hit the intelligence of today's 1.8 trillion parameter frontier. That is a 1,800x compression claim. The math behind it is more defensible than it sounds. When researchers at frontier labs
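The "1,800x compression" figure is simply the parameter ratio between the two model sizes quoted in the tweet:

```python
frontier_params = 1.8e12   # ~1.8 trillion parameter frontier model
compact_params = 1e9       # hypothesized 1 billion parameter model
ratio = frontier_params / compact_params
print(ratio)  # 1800.0
```

Note that the ratio is just arithmetic on the claimed sizes; whether clean data actually buys that much capability per parameter is the speculative part.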

Furkan Gözükara (@gozukarafurkan)'s Twitter Profile Photo

Prominent MIT Economist David Autor confirms the tech industry actively exploits outdated legislation to steal from creators. Famous host Jon Stewart brilliantly points out this is reverse socialism, funneling wealth from everyday workers directly to a few tech elites.

jeff (@jeffreyhuber)'s Twitter Profile Photo


OpenAI is shutting down text-embedding-3-small?!?

I strongly believe that if you shut down a closed-source embedding model, you should open-source it. Imagine the trillions of tokens that will no longer be queryable.

cc Romain Huet
DeepSeek (@deepseek_ai)'s Twitter Profile Photo


🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.

🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models.
🔹 DeepSeek-V4-Flash: 284B total / 13B active params.
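Both variants are mixture-of-experts models, so only a small fraction of the weights fires on any given token — that gap between total and active parameters is where the cost-effectiveness comes from. A quick check of the ratios implied by the quoted sizes:

```python
# (total params, active params per token), from the announcement above
models = {
    "DeepSeek-V4-Pro":   (1.6e12, 49e9),
    "DeepSeek-V4-Flash": (284e9, 13e9),
}

for name, (total, active) in models.items():
    print(f"{name}: {active / total:.1%} of params active per token")
```

Per-token compute scales with active parameters, so Pro runs at a per-token cost closer to a ~49B dense model than to its 1.6T headline size.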