Miyamoto Musashi (@solminded)'s Twitter Profile
Miyamoto Musashi

@solminded

developer/builder. My twitter alter-ego for AI/LLM related stuff.

ID: 1466036914830684160

Joined: 01-12-2021 13:30:21

1.1K Tweets

165 Followers

696 Following

Miyamoto Musashi (@solminded)'s Twitter Profile Photo

I am sorry for all the startups building wrappers on LCM, feeling smart, when suddenly SDXL Turbo comes along and pulls the rug out from under them. This is why you should build on proprietary models and value-added business principles, not on first-mover advantage with the latest shiny tech.

Miyamoto Musashi (@solminded)'s Twitter Profile Photo

When I look at build-in-public communities, all I see is attention-bait posts, fake metrics, and lies about revenue, users, and marketing: the typical X grifter hustle.

AshutoshShrivastava (@ai_for_success)'s Twitter Profile Photo

Manus AI is freaking insane and I have not used anything like this before. Disclaimer: I am not paid by Manus AI to write this, I just feel lucky to have gotten access. For the given prompt, Manus AI conducted online research, generated Python code, validated all results, and

Deedy (@deedydas)'s Twitter Profile Photo

Manus, the new AI product that everyone's talking about, is worth the hype. This is the AI agent we were promised: Deep Research + Operator + Computer Use + Lovable + memory. Asked it to "Do a professional analysis of Tesla stock" and it did ~2wks of professional-level work in ~1hr!

Simon Willison (@simonw)'s Twitter Profile Photo

llama.cpp shipped new support for vision models this morning, including macOS binaries (albeit quarantined so you have to take extra steps to run them) that let you run vision models in a terminal or as a localhost web UI

JingyuanLiu (@jingyuanliu123)'s Twitter Profile Photo

I was lucky to work in both Chinese and US LLM labs, and I've been thinking about this for a while. The current values of pretraining are indeed different. US labs:
- lots of GPUs and much larger-FLOPs runs
- treat stability more seriously, and cannot tolerate spikes

swemachine (@swe_machine)'s Twitter Profile Photo


Clawd disaster incoming

if this trend of hosting ClawdBot on VPS instances keeps up, along with people not reading the docs and opening ports with zero auth...

I'm scared we're gonna have a massive credentials breach soon and it can be huge

This is just a basic scan of
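An exposed, unauthenticated port of the kind described above is found with nothing fancier than a TCP connect check. A minimal sketch in Python (the commented-out host and port are placeholders of my own, not real ClawdBot defaults):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds,
    i.e. something is listening there (whether or not it has auth)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check whether a locally hosted bot's port is reachable.
# port_open("127.0.0.1", 18789)
```

A mass scan is just this check fanned out across address ranges, which is why "opening ports with zero auth" gets discovered quickly.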
Andriy Burkov (@burkov)'s Twitter Profile Photo

LLMs process text from left to right — each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the

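The left-to-right behaviour described above comes from the causal attention mask in decoder-only transformers: token i may only attend to positions j <= i. A toy sketch (the helper name is mine, not from any library):

```python
def causal_mask(n: int) -> list[list[bool]]:
    """mask[i][j] is True iff token i may attend to token j.
    In a decoder-only LLM, position i can only look back at j <= i."""
    return [[j <= i for j in range(n)] for i in range(n)]

# For a 4-token prompt: the first token sees only itself,
# the last token sees the entire prompt.
for row in causal_mask(4):
    print(["x" if allowed else "." for allowed in row])
```

This is why putting the question at the end works: the question tokens attend back over the context, while the context tokens never see the question.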
Miyamoto Musashi (@solminded)'s Twitter Profile Photo

crewai, hermes, openclaw... if you are using any of these, or any agent framework, make sure you do not have version 1.82.8 of litellm installed.
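One way to act on this advice is a startup-time guard against the flagged release, using the standard library's importlib.metadata. A minimal sketch (the version string comes from the tweet; `is_flagged` and `check_litellm` are hypothetical helper names of my own):

```python
from importlib import metadata

BAD_VERSION = "1.82.8"  # the litellm release called out in the tweet

def is_flagged(version: str, bad: str = BAD_VERSION) -> bool:
    # Exact-match check against the flagged release.
    return version.strip() == bad

def check_litellm() -> None:
    try:
        installed = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        print("litellm is not installed")
        return
    if is_flagged(installed):
        raise RuntimeError(
            f"litellm {installed} is the flagged release; switch versions"
        )
    print(f"litellm {installed} looks OK")
```

Alternatively, exclude the release at install time, e.g. `pip install "litellm!=1.82.8"`.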

Wes Bos (@wesbos)'s Twitter Profile Photo


Claude Code leaked their source map, effectively giving you a look into the codebase.

I immediately went for the one thing that mattered: spinner verbs

There are 187