Getting our new platform online, ensuring compatibility for our existing customers, and building our new infra automation was a tough grind, but it's now done.
While we're working on the next set of capabilities, I'd love to hear what you'd like to see us build into the platform.
If model weights are like a giant JPEG of the data on the internet, then hallucinations are compression artifacts. The more parameters the model has, the less lossy the compression.
This articulates well what it means to really lean into AI for coding. Asking the LLM for help might 2x you, but letting it build entire capabilities like another human would is where the tantalising prospect of 10x'ing your throughput lies.
Personally, I can't commit to that (yet).
One of the bottlenecks for adoption of AI chat in enterprises is that the AI doesn't have a standardized way to access business data. That's changing with MCP (the Model Context Protocol), a standard that's being rapidly adopted.
We're a few weeks away from shipping an MCP server.
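
To make that concrete, here's a minimal sketch of what exposing business data over MCP can look like, using the official Python MCP SDK's FastMCP helper. The server name, the `lookup_order` tool, and the placeholder order data are all hypothetical examples, not our actual server; a real deployment would query an internal system of record.

```python
# Minimal MCP server sketch (assumes the Python MCP SDK: pip install "mcp[cli]").
# The server name, tool, and data below are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders")  # hypothetical server name


@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return the status of an order by its ID (placeholder data)."""
    # A real server would query an internal database or API here.
    fake_orders = {"A-1001": "shipped", "A-1002": "processing"}
    return fake_orders.get(order_id, "not found")


if __name__ == "__main__":
    # Serve over stdio so an MCP-capable AI client can launch it as a subprocess.
    mcp.run()
```

Once a tool like this is registered, any MCP-capable chat client can discover it and call it on the model's behalf, which is exactly the standardized access path that's been missing.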