Phind (@phindsearch)'s Twitter Profile
Phind

@phindsearch

AI answer engine for complex questions.

ID: 1534680822803857408

Link: https://phind.com · Joined: 08-06-2022 23:36:46

102 Tweets

8.8K Followers

21 Following

Paul Graham (@paulg)'s Twitter Profile Photo

As well as being a useful tool, Phind is an example of an encouraging point for startups: you can beat the AI giants in a specific domain with orders of magnitude less resources. It would be interesting if the biggest bang for the buck in AI was ASI rather than AGI.

raymel 🚀 (@pseudokid)'s Twitter Profile Photo

Phind worked pretty well for my tiny Bash script. It even adapted to how I use a third-party tool (jq). All on the first try. See it in action:
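The tweet above mentions a Bash script that leans on jq. The actual script isn't shown, so as a minimal sketch of that pattern (the JSON shape and field names here are illustrative, not from the tweet):

```shell
#!/usr/bin/env bash
# Sketch: extract fields from a JSON document with jq in a Bash script.
set -euo pipefail

# Hypothetical input; in practice this would come from an API call or a file.
json='{"items":[{"name":"alpha","count":3},{"name":"beta","count":5}]}'

# -r emits raw strings (no quotes); \(...) interpolates values into the template.
echo "$json" | jq -r '.items[] | "\(.name)\t\(.count)"'
```

This prints one tab-separated `name`/`count` pair per item, which is the kind of glue work the tweet describes Phind generating.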

swyx (@swyx)'s Twitter Profile Photo


🆕 pod: Beating GPT-4 with Open Source LLMs

latent.space/p/phind

with Michael Royzen of Phind!

The full story of how Phind finetuned CodeLlama to:

- reach 74.7% HumanEval vs 45% base model
- 2x'ed GPT-4's context window to 16k tokens
- 5x faster than GPT-4 (100 tok/s)

Phind (@phindsearch)'s Twitter Profile Photo

🚀 While ChatGPT is pausing signups, Phind continues to be better at programming while being 5x faster. We’ve been rapidly adding capacity and it’s only getting faster. Check it out ➡️ phind.com

Phind (@phindsearch)'s Twitter Profile Photo

🚀 Introducing GPT-4 with 32K context for Phind Pro users. If you’re not yet a subscriber, join us at phind.com/plans.

Phind (@phindsearch)'s Twitter Profile Photo

Announcing much faster Phind Model inference for Pro and Plus users. Your request will be served by a dedicated cluster powered by NVIDIA H100s for the lowest latency and a generation speed of up to 100 tokens per second. If you’re not yet a Pro user, join us at

Phind (@phindsearch)'s Twitter Profile Photo

Join us for our San Francisco meetup on February 6th! We’d love to meet you and hear about how we can keep making Phind better for you. And, of course, food and drinks will be provided :) forms.gle/tEGwu2hBtYKptN…

Phind (@phindsearch)'s Twitter Profile Photo

Introducing Phind-70B, our largest and most capable model to date! We think it offers the best overall user experience for developers amongst state-of-the-art models. phind.com/blog/introduci…

elvis (@omarsar0)'s Twitter Profile Photo


Phind-70B looks like a big deal!

Phind-70B closes the code generation quality gap with GPT-4 Turbo and is 4x faster.

Phind-70B can generate 80+ token/s (GPT-4 is reported to generate ~20 tokens/s). Interesting to see that inference speed is becoming a huge factor in comparing…

Paul Graham (@paulg)'s Twitter Profile Photo

I'm very impressed by the way Phind, despite being the tiniest of startups, has managed to keep up with the giants. Phind-70B beats GPT-4 Turbo at code generation, and runs 4x faster. There is definitely still room for startups in this game.

Garry Tan (@garrytan)'s Twitter Profile Photo


"We find that Phind-70B is in the same quality realm as GPT-4 Turbo for code generation and exceeds it on some tasks. 

Phind-70B is significantly faster than GPT-4 Turbo, running at 80+ tokens per second to GPT-4 Turbo's ~20 tokens per second."

What a launch! 👀

Phind (@phindsearch)'s Twitter Profile Photo

Introducing Phind-405B, our new flagship model! Phind-405B scores 92% on HumanEval, matching Claude 3.5 Sonnet. We're particularly happy with its performance on real-world tasks, especially when it comes to designing and implementing web apps. Our focus on technical topics…

Phind (@phindsearch)'s Twitter Profile Photo

We're excited to launch Phind 2 today! The new Phind is able to go beyond text to present answers visually with inline images, diagrams, cards, and other widgets to make answers much more delightful. Phind is also now able to seek out information on its own. If it needs more…