IF (@impactframesx) 's Twitter Profile
IF

@impactframesx

(*・‿・)ノ⌒*:・゚✧
Who has believed our report, and to whom has the arm of the Lord been revealed?

ID: 1289917113478643713

Website: http://impactframes.ai · Joined: 02-08-2020 13:33:15

6.6K Tweets

1.1K Followers

361 Following

1LittleCoder💻 (@1littlecoder) 's Twitter Profile Photo

🇨🇳 China recent days:
Kimi K2
Qwen3-235B-A22B
Qwen 3 Coder
Qwen Small+Medium Models
New StepFun MoE
ZAI GLM 4.5 and GLM 4.5 Air
InternLM Intern S1
All open weights (mostly permissive license)

🇺🇸 US:
OpenAI - Study Mode in ChatGPT
Anthropic - Claude Max Pricing

So, US is

Fake Wizard (@reallifefakewiz) 's Twitter Profile Photo

Doesn’t anyone think it’s, perhaps, slightly convenient that this censorship catastrophe is happening in multiple countries, across multiple mediums, with regard to many corporations, banks, and cultures… all at the same time? Just me?

Dreaming Tulpa 🥓👑 (@dreamingtulpa) 's Twitter Profile Photo

all the disbelievers last week who said ai will never be able to achieve the omw style and that it's just a 5 second slop video and now deepmind dropped a world model on a regular tuesday that just does it lmao

p(doom) (@prob_doom) 's Twitter Profile Photo

Inspired by today's Genie 3 release? We are open-sourcing 🧞‍♀️Jasmine🧞‍♀️, a production-ready JAX-based codebase for world modeling from unlabeled videos. Scale from single hosts to hundreds of xPUs thanks to XLA! 🧵 (1/10)
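As a rough illustration of the "single hosts to hundreds of xPUs thanks to XLA" scaling the tweet mentions, here is a minimal JAX sketch. It is not Jasmine's actual code: the next-frame loss, the linear stand-in model, and all hyperparameters are invented for the example. The pattern shown (replicate the step with pmap, all-reduce gradients with pmean) is the standard JAX data-parallel recipe.

```python
from functools import partial
import jax
import jax.numpy as jnp

def loss_fn(params, frames):
    # frames: (batch, time, dim); predict each frame from the previous one
    # with a single linear map, a stand-in for a real world model.
    pred = frames[:, :-1] @ params["w"] + params["b"]
    return jnp.mean((pred - frames[:, 1:]) ** 2)

@partial(jax.pmap, axis_name="devices")  # replicate the step on every local accelerator
def train_step(params, frames):
    loss, grads = jax.value_and_grad(loss_fn)(params, frames)
    grads = jax.lax.pmean(grads, axis_name="devices")  # all-reduce so replicas stay in sync
    params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
    return params, loss

n_dev = jax.local_device_count()
params = {"w": jnp.zeros((64, 64)), "b": jnp.zeros(64)}
params = jax.device_put_replicated(params, jax.local_devices())  # copy params to each device
frames = jnp.zeros((n_dev, 8, 16, 64))  # (devices, batch, time, dim) of unlabeled clips
params, loss = train_step(params, frames)
```

Scaling past one host follows the same shape: XLA compiles the step once and the gradient all-reduce spans however many devices are visible.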

Chongjie(CJ) Ye (@ychngji6) 's Twitter Profile Photo

🎮 Genie3 looks amazing but we can't play it yet! 😭 So we made our own version - and it's super fun to play! Thanks World Labs for providing the interactive frontend renderer & Deemos for the awesome environment map generation! #Genie3 #aigc

alex duffy (@alxai_) 's Twitter Profile Photo

GPT-5 is out. It's pretty great, steerable, & fast, BUT...
- o3 still wins
- GPT-5-mini, cheaper & as good as 2.5 Flash, developers rejoice!
- GPT-5 is super steerable! Great prompts make a big difference
- Different 'reasoning-effort' makes a big difference
Results below! AI
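For readers curious what the 'reasoning-effort' knob looks like in practice, a minimal sketch follows. It assumes the OpenAI Python SDK's `reasoning_effort` parameter on chat completions and the "gpt-5" model name; both are assumptions here, so check the current API reference.

```python
from openai import OpenAI

# Assumed: `reasoning_effort` on chat.completions and the "gpt-5" model name.
client = OpenAI()

prompt = "Explain dual chunk attention in two sentences."
for effort in ("low", "medium", "high"):
    resp = client.chat.completions.create(
        model="gpt-5",
        reasoning_effort=effort,  # trades latency and cost for answer quality
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- effort={effort} ---")
    print(resp.choices[0].message.content)
```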

Qwen (@alibaba_qwen) 's Twitter Profile Photo

🚀 Qwen3-30B-A3B-2507 and Qwen3-235B-A22B-2507 now support ultra-long context—up to 1 million tokens!

🔧 Powered by:

• Dual Chunk Attention (DCA) –  A length extrapolation method that splits long sequences into manageable chunks while preserving global coherence.  

•
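Dual Chunk Attention is only named in the tweet; as a rough, simplified illustration of the chunking idea (not Qwen's implementation, and the chunk size and window values below are made up), the sketch remaps relative positions so that cross-chunk distances never exceed the window the model was trained on.

```python
import numpy as np

def dca_relative_positions(seq_len: int, chunk_size: int, local_window: int) -> np.ndarray:
    """Simplified DCA-style position remapping.

    Within a chunk, the ordinary relative distance is kept. Across chunks,
    the distance is capped to `local_window`, so a short-context model never
    sees relative positions larger than it was trained on. Only causal
    (query >= key) entries would be used in attention.
    """
    q = np.arange(seq_len)[:, None]   # query positions
    k = np.arange(seq_len)[None, :]   # key positions
    rel = q - k                       # ordinary relative distance
    same_chunk = (q // chunk_size) == (k // chunk_size)
    capped = np.clip(rel, 0, local_window)  # cross-chunk distances are clipped
    return np.where(same_chunk, rel, capped)

# Tiny demo with illustrative values; the real setting would use chunks of
# tens of thousands of tokens over a million-token sequence.
print(dca_relative_positions(seq_len=16, chunk_size=4, local_window=6))
```
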
Ostris (@ostrisai) 's Twitter Profile Photo

I was working on my 4090 with Qwen Image. I quantized it down to 3bit and trained it on my musician character just to debug. The samples are CLEAN. So now I am thinking, training an SVDquant style adapter LoRA could make qwen trainable/runable at 3bit. Imgs are step 0 and 1500.

BennyKok (@bennykokmusic) 's Twitter Profile Photo

Ever wondered what your favorite static memes would look like in motion? We're launching a studio app that brings memes to life with animation! Check it out at comfy studio in ComfyDeploy.

Ostris (@ostrisai) 's Twitter Profile Photo

Trained a sidechain LoRA to compensate for the quantization precision loss when quantizing Qwen Image to 3 bit. It works well. This can be active during training and should allow us to fine tune Qwen Image on <24GB of VRAM. This can be done to all models.
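The tweet does not include code, so here is a minimal sketch of the idea as described: keep the quantized base weight frozen, add a trainable low-rank branch, and fit that branch so quantized-plus-LoRA matches the original full-precision layer. The class names, rank, and the fake 3-bit quantizer are illustrative assumptions, not Ostris's actual implementation.

```python
import torch
import torch.nn as nn

def fake_quantize(w: torch.Tensor, bits: int = 3) -> torch.Tensor:
    # Toy symmetric round-to-nearest quantizer, just to create precision loss.
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

class QuantCompensatedLinear(nn.Module):
    def __init__(self, weight: torch.Tensor, rank: int = 16):
        super().__init__()
        self.register_buffer("w_q", fake_quantize(weight))          # frozen quantized weight
        out_f, in_f = weight.shape
        self.lora_a = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # low-rank down-projection
        self.lora_b = nn.Parameter(torch.zeros(out_f, rank))        # starts as a no-op

    def forward(self, x):
        return x @ self.w_q.T + (x @ self.lora_a.T) @ self.lora_b.T

# Fit the adapter so the compensated layer reproduces the full-precision output.
w_full = torch.randn(256, 256)
layer = QuantCompensatedLinear(w_full)
opt = torch.optim.Adam([layer.lora_a, layer.lora_b], lr=1e-3)
for _ in range(200):
    x = torch.randn(64, 256)
    loss = torch.mean((layer(x) - x @ w_full.T) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the adapter stays active during training, the frozen base can live at 3-bit while only the small LoRA matrices need full-precision gradients, which is what makes fine-tuning under 24 GB of VRAM plausible.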

BennyKok (@bennykokmusic) 's Twitter Profile Photo

We made another amazing app that lets you upload an AI video and get a similar one back. Sort of an OSS Midjourney. Here are some comparisons: ComfyDeploy MJ/ComfyStudio