Cortensor (@cortensor) 's Twitter Profile
Cortensor

@cortensor

Pioneering Decentralized AI Inference, Democratizing Universal Access. #AI #DePIN
🔗 linktr.ee/cortensor

ID: 1802916496789929984

🔗 https://www.cortensor.network · Joined 18-06-2024 04:09:33

619 Tweets

4.4K Followers

30 Following


🛠️ DevLog – SessionPayment Flow on Testnet L3

We've deployed the latest Session & Payment contracts on Testnet L3 (COR Rollup).

🔹 Continuing refinement of tightly coupled Session ↔ SessionPayment integration
🔹 E2E flow already verified on Testnet L2 (Arbitrum Sepolia)
🔹 Now
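The Session ↔ SessionPayment coupling above can be sketched in a few lines. This is an illustrative Python model of the flow, not the actual Solidity contracts; the class names, `charge`/`run_task` methods, and settlement logic are all assumptions for explanation.

```python
# Illustrative sketch of a tightly coupled Session <-> SessionPayment flow.
# Names and logic are hypothetical; the real contracts are on Testnet L3 (COR Rollup).

class SessionPayment:
    """Escrow-style balance attached to a single session."""
    def __init__(self, deposit: int):
        self.balance = deposit

    def charge(self, amount: int) -> None:
        if self.balance < amount:
            raise ValueError("insufficient session balance")
        self.balance -= amount

class Session:
    """A session holds its payment escrow and charges it per completed task."""
    def __init__(self, deposit: int):
        self.payment = SessionPayment(deposit)
        self.completed_tasks = 0

    def run_task(self, cost: int) -> None:
        self.payment.charge(cost)   # payment and session state advance together
        self.completed_tasks += 1

# End-to-end flow: deposit, run tasks, drain the session balance
s = Session(deposit=100)
s.run_task(cost=40)
s.run_task(cost=60)
```

The point of the coupling is that a task can only complete if its charge succeeds, so session progress and payment state can never drift apart.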


💡 Community Spotlight – Decentralized Prompting Helper Bot

Built to support new AI users, this Telegram assistant teaches how to craft effective prompts – fully powered by Cortensor’s decentralized AI network.

📌 How It Works
🔹 Start with /start to access the usage guide
🔹

🛠️ DevLog – Testnet L3 (COR Rollup) Status

The latest Session & Payment modules are live - technically, Testnet L3 (COR Rollup) is now working.

🔹 Core flow verified
🔹 Some abnormalities observed during credit flow
🔹 Further manual E2E + edge case testing continues today to


🛠️ DevLog – Fixing Equality Edge Case on L3 (COR Rollup)

We found a bug where SessionPayment failed on L3 (COR Rollup) if a session balance was exactly equal to the required amount.

✔️ Worked on L2 (Arbitrum Sepolia)
✖️ Failed on L3 due to subtle EVM handling differences
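One plausible shape of this bug class is a strict comparison where an inclusive one is needed. The Python sketch below reconstructs the symptom described above (failure only when balance exactly equals the required amount); the actual fix lives in the Solidity SessionPayment contract and may differ in detail.

```python
# Sketch of the equality edge case: a strict comparison rejects a balance
# that exactly equals the required amount, while any surplus passes.
# Hypothetical function names; illustrates the bug class, not the real code.

def can_pay_buggy(balance: int, required: int) -> bool:
    return balance > required          # fails when balance == required

def can_pay_fixed(balance: int, required: int) -> bool:
    return balance >= required         # inclusive comparison handles equality

# The observed symptom: exact-balance sessions fail, surplus sessions succeed
assert can_pay_buggy(100, 100) is False
assert can_pay_buggy(101, 100) is True
assert can_pay_fixed(100, 100) is True
```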


🛠️ DevLog – NodeSpec → SessionConfig Design

Today's focus: integrating NodeSpec into SessionConfig to pave the way for larger model support. This will be mostly a mock/design phase before full coding.

🔹 Use NodeSpec to right-size models at session start
🔹 Enable Session


🛠️ DevLog – NodeSpec Categorization & Dynamic Model Switching

We're designing how NodeSpec and SessionConfig work together so users can run bigger models reliably.

🔹 Step 1 – Model Classes
We group models by memory size: 0 = 6GB, 1 = 12GB, 2 = 24GB, 3 = 48GB
🔹 Step 2 –


🛠️ DevLog – NodeSpec Categorization Progress

We're building the link between NodeSpec and Session config so sessions can correctly assign larger models to capable nodes.

🔹 Step 1 – Model Classes (6GB / 12GB / 24GB / 48GB) – ✅ Done
🔹 Step 2 – NodeSpec Attribute – adjusting


💡 Community Spotlight – Eureka Buddy

Built to provide a safe, kid-friendly AI chat experience, Eureka Buddy is a Telegram bot designed for children under 12 – fully powered by Cortensor's decentralized AI network.

📌 How It Works
🔹 Choose an AI persona – Friendly Buddy,

🛠️ DevLog – NodeSpec Attribute & Session Integration

Progress on linking NodeSpec with Session config for larger model support:

✅ Step 1 – Model Classes (6GB / 12GB / 24GB / 48GB)

✅ Step 2 – NodeSpec Attribute → category index now calculated on the fly (shown in dashboard,

⏳ 10 Days Left – Cortensor Hackathon #1

We're entering the final stretch - builders have 10 days left to ship and polish their apps.

So far we've seen:
🔹 AI chat assistants w/ custom personas
🔹 Real-time market + sentiment bots
🔹 Web2 ↔ Web3 bridges powered by

🛠️ DevLog – NodeSpec Levels Recap

We've introduced Spec Level, a categorization system that maps node memory capacity to the models it can support.

🔹 Level 0 (<6GB) – limited, only tiny models
🔹 Level 1 (6–12GB) – small / medium models
🔹 Level 2 (12–24GB) – standard medium


🛠️ DevLog – NodeSpec Memory Classification Update

We've refined NodeSpec memory classification into clear spec levels with color-coded ranges + model capability mapping:

✅ Adjusted memory ranges (0 = <6GB … 8+ = ≥96GB)
✅ NodeSpec now stores a spec level integer directly,
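Spec-level classification reduces to bucketing memory against a sorted threshold table. In the sketch below, only the endpoints come from the posts (level 0 = <6GB, level 8+ = ≥96GB, with 6–12GB as level 1 and 12–24GB as level 2); the remaining thresholds are hypothetical placeholders, not the network's actual table.

```python
# Sketch of spec-level bucketing. Boundaries between 24GB and 96GB are
# hypothetical; only <6GB (level 0), 6-12GB (1), 12-24GB (2), and >=96GB (8+)
# are stated in the DevLogs.

import bisect

SPEC_THRESHOLDS_GB = [6, 12, 24, 32, 48, 64, 80, 96]  # partly hypothetical

def spec_level(memory_gb: float) -> int:
    """Map node memory to a spec level: 0 for <6GB, up to 8 for >=96GB."""
    return bisect.bisect_right(SPEC_THRESHOLDS_GB, memory_gb)

assert spec_level(4) == 0    # <6GB -> tiny models only
assert spec_level(8) == 1    # 6-12GB -> small / medium models
assert spec_level(96) == 8   # >=96GB -> top tier
```

Storing the computed integer directly on NodeSpec, as the post describes, means the dashboard and session matcher can compare levels without re-deriving ranges.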


🛠️ DevLog – NodeSpec Spec Level → Session Integration

We've advanced NodeSpec by mapping system and GPU memory into Spec Levels (0–8+), tied to ranges, colors, and model capability.

✅ NodeSpec now stores Spec Level directly
✅ Dashboard shows classification for easier node


📄 New Docs Update

We've published 3 additions to Cortensor's technical docs - expanding on how the network scales, stays verifiable, and sustains itself:

🔹 Building Trustless AI – Proof of Inference (PoI) & Proof of Useful Work (PoUW) ensure inference is not just fast, but


🎉 Cortensor is now on Farcaster!

We've just launched our account: farcaster.xyz/cortensor

🔹 Follow us for updates in your decentralized social feed
🔹 You can also buy & sell Cortensor directly through Farcaster

Another step in making Cortensor accessible across


🛠️ DevLog – NodeSpec Memory Index & Dynamic Model Loading

We've extended NodeSpec with a memory index that categorizes each node by system/GPU memory.

✅ cortensord reports memory index per node
✅ Dashboard shows classification for visibility
✅ cortensord now uses memory
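Memory-index-driven model loading amounts to filtering a catalog against a node's capacity at boot. The sketch below illustrates the idea; the model names and per-model memory footprints are hypothetical examples, not the actual 12-model catalog or cortensord's implementation.

```python
# Sketch of memory-aware model loading: at boot, a node loads only the
# catalog models that fit its available memory. Names and GB figures
# below are hypothetical placeholders.

CATALOG_GB = {
    "default-8b": 6,
    "qwen3-4b-q8_0": 4,
    "gemma-3-12b-q4_k_m": 8,
    "deepseek-r1-14b-q4_k_m": 10,
}

def loadable_models(node_memory_gb: float) -> list[str]:
    """Return the catalog models that fit within the node's memory."""
    return sorted(name for name, gb in CATALOG_GB.items() if gb <= node_memory_gb)

assert loadable_models(5) == ["qwen3-4b-q8_0"]  # only the smallest model fits
assert len(loadable_models(16)) == 4            # roomy node loads everything
```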


🛠️ DevLog – NodeSpec Memory Index & Model Expansion

While working on memory index & dynamic model loading, we've expanded the catalog to 12 models (incl. default):

New Additional Models:
- Qwen Qwen3 4B-Q8_0
- Google gemma-3 12B Q4_K_M
- DeepSeek R1 Distill Qwen 14B Q4_K_M
-


🛠️ DevLog – Expanded Models + Memory-Aware Sessions

We've added 4 more models to the dashboard UI (total now 12 models), integrated into the NodeSpec memory index system.

Nodes dynamically load only the models they can actually fit in memory at boot, ensuring reliability across

Monthly Node Rewards – August 2025 (Initial Batch Sent)

We've just sent out the first round of $COR rewards to 30+ active nodes, as part of our monthly node reward program.

🔹 Rewards will be distributed in batches
🔹 More batches rolling out over the next few days
🔹