Margus Pala (@marguspala) 's Twitter Profile
Margus Pala

@marguspala

Co-founder of #eID and #eSignature @ eideasy.com @e_id_easy. #digitalidentity #digitalsignature #eIDAS #E-stonia
Tweets are endorsements

ID: 85334618

https://eideasy.com · Joined 26-10-2009 15:02:23

562 Tweets

161 Followers

482 Following

Chris Murphy 🟧 (@chrismurphyct) 's Twitter Profile Photo

An insider trading scandal is brewing. Trump's 9:30am tweet makes it clear he was eager for his people to make money off the private info only he knew. So who knew ahead of time and how much money did they make?

Vishal Kapur (@figelwump) 's Twitter Profile Photo

One thing coding with agents does is expose how underbaked your thinking on the details of a product really is. They'll do some stuff that isn't what you wanted, and then you realize you never told them what you wanted, and maybe never fully understood it yourself.

Margus Pala (@marguspala) 's Twitter Profile Photo

Very often the reason for an AI's bad responses is misaligned incentives. If it did one more web search and thought a bit harder, the correct answer would be there. However, the cost of that extra work would make AI companies lose money, so agents are optimized to be cheap.

Boris Cherny (@bcherny) 's Twitter Profile Photo

I'm Boris and I created Claude Code. Lots of people have asked how I use Claude Code, so I wanted to show off my setup a bit. My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don't customize it much. There is no one correct way to

Wes Roth (@wesrothmoney) 's Twitter Profile Photo

Geoffrey Hinton explains that large language models are nothing like traditional software written line by line. Instead of explicit instructions, they rely on code that teaches them how to learn from data. What actually emerges is billions or trillions of learned connections.
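Hinton's distinction can be made concrete with a toy sketch (plain Python; the data, the rule y = 2x + 1, and all names here are invented for illustration, not taken from the tweet). The program contains no explicit instruction computing the rule, only the "code that teaches how to learn": a loop that adjusts connection weights from examples.

```python
import random

# Illustrative sketch: no line below states the rule y = 2x + 1;
# the loop recovers it from data by adjusting "connection strengths".
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(100)]
ys = [2.0 * x + 1.0 for x in xs]  # the pattern hidden in the data

w, b = 0.0, 0.0  # weights start uninformed
lr = 0.1         # learning rate
n = len(xs)

for _ in range(500):
    # Gradient of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

An LLM is this same idea scaled to billions of weights and a far richer model, but the point stands: the behavior is learned from data, not written down.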

Z.ai (@zai_org) 's Twitter Profile Photo

Introducing GLM-5: From Vibe Coding to Agentic Engineering

GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5, it scales from 355B params (32B active) to 744B (40B active), with pre-training data growing from 23T to 28.5T tokens.

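The parameter figures in the announcement imply heavily sparse activation. A quick back-of-envelope check, using only the numbers quoted in the tweet (reading "active" as active per token, the usual mixture-of-experts convention):

```python
# (total params, active params) in billions, as quoted in the tweet
models = {"GLM-4.5": (355, 32), "GLM-5": (744, 40)}

for name, (total, active) in models.items():
    print(f"{name}: {active}B / {total}B active = {active / total:.1%} per token")

# Total capacity grows ~2.1x (355B -> 744B) while the active share per
# token drops from ~9% to ~5.4%: the new model is sparser, not just bigger.
```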
MiniMax (official) (@minimax__ai) 's Twitter Profile Photo

Introducing M2.5, an open-source frontier model designed for real-world productivity.

- SOTA performance at coding (SWE-Bench Verified 80.2%), search (BrowseComp 76.3%), agentic tool-calling (BFCL 76.8%) & office work.
- Optimized for efficient execution, 37% faster at complex

OpenAI Developers (@openaidevs) 's Twitter Profile Photo

Introducing GPT-5.3-Codex-Spark, our ultra-fast model purpose-built for real-time coding. We’re rolling it out as a research preview for ChatGPT Pro users in the Codex app, Codex CLI, and IDE extension.

Margus Pala (@marguspala) 's Twitter Profile Photo

My experience shows the same. It's really hard to hit limits with Codex. With Claude Code, planning alone takes so many tokens that the implementation hits the 5h limit.

Margus Pala (@marguspala) 's Twitter Profile Photo

This is very useful. I am managing many different projects, and in every one of them I need to reiterate the same things over and over again.

Mistral AI Labs (@mistralailabs) 's Twitter Profile Photo

🔥 Meet Mistral Small 4: One model to do it all.
⚡ 128 experts, 119B total parameters, 256k context window
⚡ Configurable Reasoning
⚡ Apache 2.0
⚡ 40% faster, 3x more throughput

Our first model to unify the capabilities of our flagship models into a single, versatile model.

OpenAI (@openai) 's Twitter Profile Photo

Introducing workspace agents in ChatGPT—shared agents that can handle complex tasks and long-running workflows across tools and teams.