InferX (@inferxai)'s Twitter Profile
InferX

@inferxai

Serving 50+ LLMs per GPU with fast snapshot-based loading. Sub-2s cold starts. 90%+ GPU utilization. Building the AI-native runtime for inference.

ID: 1896703895654858752

Link: https://inferx.net/

Joined: 03-03-2025 23:27:18

178 Tweets

90 Followers

37 Following