(✿◡‿◡) 🍁 (@soccerff22)'s Twitter Profile
(✿◡‿◡) 🍁

@soccerff22

ID: 80158383

Joined: 05-10-2009 23:42:01

3.3K Tweets

223 Followers

2.2K Following

(✿◡‿◡) 🍁 (@soccerff22):

🔥 Blazing fast, mythically unstoppable. Fogo burner is the Firedancer in its purest form. The fastest Layer 1, moving at the speed of legends. ⚡🔥 #Fogo #Blockchain #FastestL1

Qwen (@alibaba_qwen):

🚀 One line. A full webpage. No hassle. Introducing Web Dev – the ultimate tool for building stunning frontend webpages & apps using simple prompts in Qwen Chat. 🎨 Just say, "create a twitter website" — and boom! Instant code, ready to go. No coding required. Just your

Zed (@zeddotdev):

We run our meetings in Zed, using our collaboration tooling. We organize our projects via Markdown in our channel notes. Here, we are prioritizing work on Zed's official Windows support. windowswen.com

Qwen (@alibaba_qwen):

>>> Qwen3-Coder is here! ✅ We’re releasing Qwen3-Coder-480B-A35B-Instruct, our most powerful open agentic code model to date. This 480B-parameter Mixture-of-Experts model (35B active) natively supports 256K context and scales to 1M context with extrapolation. It achieves

(✿◡‿◡) 🍁 (@soccerff22):

My Etherscan Moment: Realizing I could check ANY wallet's balance without asking. No more 'trust me, bro'—just pure transparency. ✨ 10 years of building trust. Thank you, etherscan.eth! #10YearsofEtherscan
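The "no more 'trust me, bro'" point is simply that balances are public chain state: any Ethereum node (or an explorer like Etherscan on top of one) will answer a read-only `eth_getBalance` query for any address, no permission from the wallet's owner required. A minimal sketch of what such a lookup sends and gets back (the address below is an arbitrary illustrative string, and the conversion assumes the standard 1 ETH = 10^18 wei):

```python
import json
from decimal import Decimal

# Arbitrary example address -- any Ethereum address works the same way.
ADDRESS = "0x00000000219ab540356cBB839Cbe05303d7705Fa"

def balance_request(address: str, block: str = "latest") -> dict:
    """Build the standard JSON-RPC payload for a read-only balance
    lookup. Any Ethereum node answers this for any address."""
    return {
        "jsonrpc": "2.0",
        "method": "eth_getBalance",
        "params": [address, block],
        "id": 1,
    }

def wei_to_eth(wei_hex: str) -> Decimal:
    """Nodes return the balance as a hex-encoded wei quantity;
    1 ETH = 10**18 wei."""
    return Decimal(int(wei_hex, 16)) / Decimal(10**18)

# What you'd POST to a node's RPC endpoint:
print(json.dumps(balance_request(ADDRESS)))

# Decoding a sample response value (0xde0b6b3a7640000 wei = 1 ETH):
print(wei_to_eth("0xde0b6b3a7640000"))
```

In practice you would POST that payload to an RPC endpoint (or just paste the address into Etherscan's search box); the sketch only shows that the query itself carries nothing but the public address.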

Murad 💹🧲 (@muststopmurad):

What you should learn from yesterday:
- Don’t use Leverage
- Don’t use Margin
- Don’t use Debt
- Don’t Pair Trade
- Stop Trading

The ONLY way to make it is to:
- Do Spot
- Do HODL
- Do DCA
- Believe in Something

(✿◡‿◡) 🍁 (@soccerff22):

🚀 You’ve been invited to join the GLM Coding Plan! Enjoy full support for Claude Code, Cline, and 10+ top coding tools — starting at just $3/month. Subscribe now and grab the limited-time deal! Link: z.ai/subscribe?ic=Y…

Unsloth AI (@unslothai):

You can now train LLMs 3× faster with no accuracy loss, via our new RoPE and MLP kernels. Our Triton kernels plus smart auto packing deliver ~3× faster training & 30% less VRAM vs optimized FA3 setups. Train Qwen3-4B 3× faster on just 3.9GB VRAM. Blog: docs.unsloth.ai/new/3x-faster-…

Xiao Liu (Shaw) (@shawliu12):

Today we are open-sourcing AutoGLM 🚀 32 months → from random taps to real device agency: an AI that can use your phone and finish tasks for you 📱🤖 I’m all-in on bringing a private, controllable phone agent to everyone.