Tom (@krip_tom)'s Twitter Profile
Tom

@krip_tom

ID: 737003058996076544

Joined: 29-05-2016 19:29:56

47 Tweets

43 Followers

289 Following

Tom (@krip_tom):

someone just broke google's "unbreakable" AI watermark with 200 black images and a math transform from 1965. billions in detection research, defeated by a technique older than the internet. at some point we gotta admit pixel-level watermarking is a dead end
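the tweet doesn't name the transform or the exact attack, but the general shape is easy to sketch: if a watermark is a fixed additive pixel-level pattern, averaging many watermarked copies of a flat (black) image cancels the zero-mean per-image noise and leaves an estimate of the pattern, which can then be subtracted. a minimal numpy sketch under that assumption; all sizes and noise levels are illustrative, not google's scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed additive watermark (the "secret" pixel pattern).
H, W = 32, 32
watermark = rng.normal(0, 2.0, size=(H, W))

# Attacker collects 200 watermarked all-black images; only the noise
# differs between them, the watermark is identical in each.
samples = [watermark + rng.normal(0, 8.0, size=(H, W)) for _ in range(200)]

# Averaging cancels the zero-mean noise and recovers the watermark:
# the noise on the mean shrinks like sigma / sqrt(N).
estimate = np.mean(samples, axis=0)

err = np.abs(estimate - watermark).mean()
print(round(err, 2))
```

once `estimate` is in hand, subtracting it from any watermarked image removes the mark, which is why a fixed pixel-level pattern can't survive an attacker who controls the input images.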

Tom (@krip_tom):

marimo-pair drops agents into running python notebooks. they see your data, run code, build plots. no more copy-pasting between chat and jupyter github.com/marimo-team/ma…

Indra (@indravahan):

still got a couple of (niche?) use cases of gemini beyond the nano banana stuff:
1. dump long (10-20k+ loc) chat summaries from cursor chats and then tell gemini to summarize the core findings for handover to other agent(s)
2. dump entire pipeline logs to spot failures and
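the second use case benefits from pre-filtering before the dump: most of a pipeline log is noise, and a cheap grep-style pass can shrink it to the lines around likely failures before any model sees it. a hedged sketch of that idea; the failure patterns and log content are illustrative, not tied to any particular pipeline:

```python
import re

# Illustrative failure markers; real pipelines would tune this list.
FAILURE_PATTERNS = re.compile(r"(ERROR|FATAL|Traceback|FAILED|exit code [1-9])", re.I)

def spot_failures(log_text: str, context: int = 2) -> str:
    """Keep only lines matching a failure pattern, plus `context` lines around each."""
    lines = log_text.splitlines()
    keep = set()
    for i, line in enumerate(lines):
        if FAILURE_PATTERNS.search(line):
            keep.update(range(max(0, i - context), min(len(lines), i + context + 1)))
    return "\n".join(lines[i] for i in sorted(keep))

log = "\n".join([
    "boot",
    "step 1 ok",
    "step 2 ok",
    "ERROR: db timeout",
    "retrying",
    "step 3 ok",
    "step 4 ok",
])
print(spot_failures(log))
```

the filtered slice, rather than the raw log, is what gets handed to the long-context model.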

Tom (@krip_tom):

found archon. it's like github actions but for ai coding: you write yaml workflows that force your agent through plan, code, test, review steps. no skipping github.com/coleam00/archon

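a workflow in that spirit could look like the following sketch. the field names here are hypothetical, chosen only to illustrate the plan → code → test → review gating, not archon's actual schema:

```yaml
# Hypothetical workflow; field names are illustrative, not archon's schema.
name: feature-workflow
steps:
  - id: plan
    prompt: "Write an implementation plan before touching code."
    requires: []
  - id: code
    prompt: "Implement the plan. Touch only files named in the plan."
    requires: [plan]
  - id: test
    prompt: "Write and run tests for the new code."
    requires: [code]
  - id: review
    prompt: "Review the diff against the plan; list any deviations."
    requires: [test]
```

the `requires` chain is what makes "no skipping" enforceable: each step only unlocks once its predecessor has completed.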
Tom (@krip_tom):

MOSS-TTS-Nano. realtime text-to-speech on CPU, no GPU needed. 0.1B params, 20 languages, voice cloning. tested it locally and latency is wild for something this tiny github.com/OpenMOSS/MOSS-…

Tom (@krip_tom):

been testing caveman on claude code. it strips all filler from AI output, keeps code untouched. ~75% fewer output tokens, accuracy stays the same. one-line install github.com/JuliusBrussee/…

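the core idea, filler out, code untouched, can be sketched in a few lines. this is a hedged illustration of the technique, not caveman's implementation: split on fenced code blocks so they pass through verbatim, and strip filler sentences only from the prose in between. the filler patterns are illustrative:

```python
import re

# Illustrative filler openers; a real tool would use a larger list or a model.
FILLER = re.compile(
    r"^(Sure|Certainly|Great question|Of course|I hope this helps)[^\n]*\n?",
    re.I | re.M,
)

def strip_filler(text: str) -> str:
    # Splitting with a capturing group keeps the code fences in the result;
    # odd indices are the fenced blocks, which we leave untouched.
    parts = re.split(r"(```.*?```)", text, flags=re.S)
    return "".join(p if i % 2 else FILLER.sub("", p) for i, p in enumerate(parts))

out = strip_filler("Sure! Here you go.\n```python\nprint('hi')\n```\nI hope this helps!")
print(out)
```

the token savings come entirely from the prose segments; anything inside the fences survives byte-for-byte.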
Tom (@krip_tom):

been testing manifest for routing LLM calls. it scores each request in 2ms and picks the cheapest model that can handle it. self hosted, 300+ models. simple queries go to haiku instead of opus and you don't notice github.com/mnfst/manifest

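cheapest-capable routing is simple to sketch: score each request's rough complexity, then walk a cost-ordered model list and take the first model whose capability ceiling covers the score. this is a hedged toy version, not manifest's actual scorer; the model names, prices, and heuristics are all illustrative:

```python
# Illustrative (name, cost per 1M tokens, max complexity handled) tuples.
MODELS = [
    ("haiku", 0.25, 3),
    ("sonnet", 3.00, 7),
    ("opus", 15.00, 10),
]

def score(prompt: str) -> int:
    """Crude complexity score in 1..10; a real router would use a tiny classifier."""
    s = 1
    s += min(4, len(prompt) // 500)  # longer prompts score higher
    if any(k in prompt.lower() for k in ("prove", "refactor", "architecture")):
        s += 4                       # hard-task keywords bump the score
    return min(10, s)

def route(prompt: str) -> str:
    """Pick the cheapest model whose capability ceiling covers the score."""
    c = score(prompt)
    return next(name for name, _, cap in sorted(MODELS, key=lambda m: m[1])
                if cap >= c)

print(route("what's the capital of france"))
print(route("refactor this service and prove the migration is safe"))
```

the first query routes to the cheap model and the second to a mid-tier one; because the list is walked in cost order, the expensive model is only reached when nothing cheaper qualifies.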