Philip (@zkphilipe)'s Twitter Profile
Philip

@zkphilipe

Working towards a Web3 future

ID: 1310636131235565572

Joined: 28-09-2020 17:43:14

12 Tweets

68 Followers

228 Following

Ola (@ola_zkzkvm):

1/5) We are thrilled to announce the release of the second edition of our technical white paper - Ola: A ZKVM-based, High-performance, and Privacy-focused Layer2 platform. shorturl.at/qrxCD

ZK Hack (@__zkhack__):

👀 #ZkHackIstanbul the first batch of bounties is out! Check 'em out here zk-hack-istanbul.devfolio.co/prizes

🚨 Applications are still open, sign up now: zkistanbul.com

🙏 And huge thanks to our amazing partners: Polygon | POL, Aleo, o1Labs, Ola, RISC Zero

Ola (@ola_zkzkvm):

🚀Ola Incentivized Pre-Alpha Testnet is live! 🌟

#Ola is advancing data ownership and ZK smart contract innovation in #blockchain.

Now's the time to get involved—devs and users alike can drive innovation and win exclusive #NFTs, #OVPs, future Ola #tokens, and mystery rewards!
Aligned (@alignedlayer):

WHITEPAPER ANNOUNCEMENT

We have been working on the first Universal Verification Layer to build the Truth infrastructure the Internet deserves. Today we want to share with you the first draft version of "Aligned Layer: universal verification layer". docsend.com/view/55fqmbwmw…

Ola (@ola_zkzkvm):

🚀 Get ready for #HKWeb3Festival Web3Festival: Bitcoin Tech Talk HK 2024 is here!

🤝 Hosted by Ola & Psy (Formerly QED), we're bringing together top #Bitcoin minds:
- Ben77 (@blapta), Founder of Discoco Labs
- Luis (@0xyilu), Co-founder of ScaleBit
- Fisher Yu
Ola (@ola_zkzkvm):

🎁 Dear #OlaMassiveMiners, 💎⛏️ Ola Massive Mining is Now Live!

📲 For Android users eager to embark on this journey, the early access version is available for download now! Get the app here: …-file.s3.ap-southeast-1.amazonaws.com/massive.releas….

iOS users, hang tight—your version is in the pipeline!

Sundar Pichai (@sundarpichai):

Introducing Willow, our new state-of-the-art quantum computing chip with a breakthrough that can reduce errors exponentially as we scale up using more qubits, cracking a 30-year challenge in the field. In benchmark tests, Willow solved a standard computation in <5 mins that would

Qwen (@alibaba_qwen):

🚀 Introducing the Qwen 3.5 Small Model Series
Qwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B

✨ More intelligence, less compute.
These small models are built on the same Qwen3.5 foundation — native multimodal, improved architecture, scaled RL:
• 0.8B / 2B → tiny, fast,
Unsloth AI (@unslothai):

Kimi K2.6 can now run on CPU, GPU and SSD setups! 🔥

We shrank the 1T model to 340GB via Dynamic GGUFs where important layers are upcasted.

Run at >40 tok/s on 350GB RAM/VRAM setups.

Run full precision on 610 GB.

Guide: unsloth.ai/docs/models/ki…
GGUF: huggingface.co/unsloth/Kimi-K…
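As a sanity check on the sizes quoted above, dividing file size by parameter count gives the implied average bits per weight. This assumes "1T" means exactly 10^12 parameters; Dynamic GGUFs mix precisions across layers, so these are averages, not any single layer's width:

```python
# Implied average bits-per-weight for the quoted file sizes,
# assuming a 1T-parameter model means exactly 1e12 parameters.
PARAMS = 1e12

def avg_bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Average bits stored per parameter for a given file size in GB."""
    return size_gb * 1e9 * 8 / params

print(avg_bits_per_weight(340))  # 2.72 bits/weight for the shrunk GGUF
print(avg_bits_per_weight(610))  # 4.88 bits/weight for the 610 GB build
```

So "shrank the 1T model to 340GB" works out to roughly 2.7 bits per weight on average, consistent with aggressive low-bit quantization on most layers plus upcast important ones.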
Qwen (@alibaba_qwen):

🚀 Meet Qwen3.6-27B, our latest dense, open-source model, packing flagship-level coding power!

Yes, 27B, and Qwen3.6-27B punches way above its weight. 👇

What's new:
🧠 Outstanding agentic coding — surpasses Qwen3.5-397B-A17B across all major coding benchmarks
💡 Strong
XiaomiMiMo (@xiaomimimo):

Xiaomi MiMo-V2.5 is now officially open-sourced!
MIT License, supporting commercial deployment, continued training, and fine-tuning - no additional authorization required.
Two models, both supporting a 1M-token context window:
• MiMo-V2.5-Pro: built for complex agent and
Qwen (@alibaba_qwen):

🚀 Introducing FlashQLA: high-performance linear attention kernels built on TileLang.

⚡ 2–3× forward speedup. 2× backward speedup.
💻 Purpose-built for agentic AI on your personal devices.

💡Key insights:
1. Gate-driven automatic intra-card CP.
2. Hardware-friendly algebraic
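The FlashQLA kernels themselves aren't shown in the thread; as context for what "linear attention" refers to, here is a minimal NumPy sketch of the causal linear-attention recurrence such kernels accelerate. The non-gated form and the elu+1 feature map are generic textbook choices, not details from the announcement. Each step updates two running sums, so the cost is O(n) in sequence length rather than the O(n²) of softmax attention:

```python
import numpy as np

def elu_plus_one(x):
    """A common positive feature map for linear attention: elu(x) + 1."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(q, k, v):
    """Causal linear attention in O(n) via running sums.

    q, k: (n, d) queries/keys; v: (n, d_v) values.
    Position t attends to all positions i <= t.
    """
    n, d = q.shape
    qf, kf = elu_plus_one(q), elu_plus_one(k)
    s = np.zeros((d, v.shape[1]))  # running sum of phi(k_i) v_i^T
    z = np.zeros(d)                # running sum of phi(k_i), for normalization
    out = np.empty_like(v)
    for t in range(n):
        s += np.outer(kf[t], v[t])
        z += kf[t]
        out[t] = qf[t] @ s / (qf[t] @ z + 1e-9)
    return out
```

The recurrence gives the same result as materializing the full n×n attention matrix with weights φ(q_t)·φ(k_i), which is why the kernels in question can trade quadratic memory traffic for a streaming state update.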
Vercel Changelog (@vercel_changes):

Introducing deepsec, an open source coding security harness.
• CLI-first
• Sandbox-based scaling
• Pluggable coding agents
• Designed for large-scale repos
• Use AI Gateway or your own subscription
After months of successful internal use, we put it to the test on some of

Google for Developers (@googledevs):

Gemma 4: Now up to 3x Faster. ⚡ Same quality, way more speed. Our new MTP drafters allow Gemma 4 to predict multiple tokens at once, effectively tripling your output speed without compromising intelligence.
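The speedup mechanism described above is speculative decoding: a cheap drafter proposes several tokens ahead, and the main model verifies them, keeping the longest agreeing prefix, so quality is unchanged while most steps emit more than one token. A toy greedy sketch with stand-in models (not Gemma's actual MTP heads, whose internals aren't described in the tweet):

```python
from typing import Callable, List

def speculative_step(target: Callable[[List[int]], int],
                     draft_tokens: List[int],
                     prefix: List[int]) -> List[int]:
    """One round of greedy draft-then-verify decoding.

    `target(seq)` returns the target model's greedy next token for `seq`.
    Drafted tokens are accepted while they match the target's choice; on
    the first mismatch the target's token is kept instead. Either way at
    least one new token is emitted per verification round.
    """
    out = list(prefix)
    for d in draft_tokens:
        t = target(out)
        out.append(t)           # target's token: equals d when they agree
        if t != d:              # mismatch: drafter was wrong from here on
            break
    else:                       # all drafts accepted: emit one bonus token
        out.append(target(out))
    return out
```

When the drafter is usually right, each round of target-model work yields several tokens instead of one, which is where the headline multiplier comes from.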

Zhijian Liu (@zhijianliu_):

DFlash for Gemma 4: Up to 6x Faster. ⚡⚡ Great to see MTP land natively in Gemma 4 today. If you want to push it further, try DFlash — open source, same quality, more speed!! github.com/z-lab/dflash

Prince Canuma (@prince_canuma):

mlx-vlm v0.5.0 is here 🚀

This is the largest release ever 🙌🏽

→ Continuous batching server + KV cache quantization

→ MTP and DFlash speculative decoding (single, batch, server)

→ Distributed inference: Qwen3.5, Kimi K2.5 & K2.6

→ Prompt caching w/ warm-disk persistence
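KV cache quantization, listed above, typically stores cached keys/values in a low-bit integer format with a scale factor and dequantizes on read. The mlx-vlm internals aren't shown in the tweet, so this is a generic symmetric int8 sketch of the idea:

```python
import numpy as np

def quantize_kv(x: np.ndarray):
    """Symmetric per-tensor int8 quantization for a KV cache block."""
    m = float(np.abs(x).max())
    scale = m / 127.0 if m > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_kv(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the quantized block."""
    return q.astype(np.float32) * scale
```

This cuts KV cache memory 4x versus fp32 (2x versus fp16) at the cost of a per-element error bounded by half the scale, which is why it pairs well with the continuous-batching server above: more concurrent sequences fit in the same memory.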