Introducing DeepSeek-V3.1: our first step toward the agent era! 🚀
🧠 Hybrid inference: Think & Non-Think — one model, two modes
⚡️ Faster thinking: DeepSeek-V3.1-Think reaches answers in less time vs. DeepSeek-R1-0528
🛠️ Stronger agent skills: Post-training boosts tool use and multi-step agent tasks
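If you want to try the two modes side by side, here is a minimal sketch assuming DeepSeek's OpenAI-compatible endpoint, where the deepseek-chat and deepseek-reasoner model names select Non-Think and Think mode; the prompt and key are placeholders.

```python
# Minimal sketch: calling the same V3.1 model in Non-Think vs. Think mode.
# Assumes DeepSeek's OpenAI-compatible endpoint; replace the key with your own.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

messages = [{"role": "user", "content": "How many prime numbers are there below 50?"}]

# Non-Think mode: direct answer, lower latency.
fast = client.chat.completions.create(model="deepseek-chat", messages=messages)

# Think mode (DeepSeek-V3.1-Think): the model reasons before answering.
slow = client.chat.completions.create(model="deepseek-reasoner", messages=messages)

print(fast.choices[0].message.content)
print(slow.choices[0].message.content)
```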
Re-upping this AI startup news from Friday 👀
Meta and Midjourney held partnership talks this summer, including about a potential acquisition, joining Meta's convos with Runway, Pika, Higgsfield, HeyGen, and Krea (the latter two previously unreported).
Shortly after we published at
Workflow for this one is nano-banana to Wan 2.2 (on KREA AI).
Asked nano-banana to turn the pets into gangsters in an alleyway with grills and chains - it took creative liberty with the hat 😂
Prompt for Wan 2.2 was "light animation of dog and cat gangsters."
In the era of pretraining, what mattered was internet text. You'd primarily want a large, diverse, high-quality collection of internet documents to learn from.
In the era of supervised finetuning, it was conversations. Contract workers are hired to create answers for questions, a bit
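For concreteness, a rough sketch of what one training example looks like in each of those two eras; the field names are illustrative, not any specific dataset's schema.

```python
# Illustrative only: the shape of one training example in each era.
# Field names are hypothetical, not tied to any particular dataset.

# Pretraining era: large, diverse, high-quality collections of raw internet text.
pretraining_example = {
    "url": "https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)",
    "text": "The transformer is a deep learning architecture based on attention ...",
}

# Supervised finetuning era: conversations, with answers written by contract workers.
sft_example = {
    "messages": [
        {"role": "user", "content": "What does the attention layer in a transformer do?"},
        {"role": "assistant", "content": "It lets each token weigh the other tokens in context ..."},
    ]
}
```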
pretty crazy that people are vibe coding music now (this is dj_dave)
she uses strudel, a new live coding app to write music in your browser (open source too)
i never knew this was possible before
Tried the HunyuanWorld-Voyager and found it very impressive. A scalable “world memory” system maintains geometric stability and coherence across any camera movement.
It's the world's first ultra-long-range world model with native 3D reconstruction, redefining AI-driven spatial
It's been about a year since my team fully adopted the AI coding tools (Cursor, Claude Code)
And day to day I am feeling the added cruft in the codebase. Unit tests aren't catching regressions. Unneeded mocks and leftover comments are left behind. More refactoring is needed
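A toy example of what I mean (hypothetical code, not from our repo): when a cheap collaborator gets mocked for no reason, the test stays green even while a regression ships.

```python
# Hypothetical example: an unneeded mock that hides a regression.
from unittest.mock import patch

def apply_discount(price: float, rate: float) -> float:
    return price * (1 - rate)

def checkout(price: float) -> float:
    # Regression: the 10% discount was accidentally dropped to 0%.
    return apply_discount(price, 0.0)

def test_checkout():
    # apply_discount is cheap and deterministic, so mocking it adds nothing,
    # and the hard-coded return value means the bug above is never exercised.
    with patch(__name__ + ".apply_discount", return_value=90.0):
        assert checkout(100.0) == 90.0
```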