Eddie Zhou (@eddiedzhou)'s Twitter Profile
Eddie Zhou

@eddiedzhou

Intelligence @glean -- hiring NLP eng to bring LLMs to the workplace. Prev: @GoogleAI / Brain, @Princeton

ID: 431244726

http://glean.com · Joined 08-12-2011 02:33:22

1.1K Tweets

519 Followers

812 Following

Ishaan Gulrajani (@__ishaan)'s Twitter Profile Photo

David Dohan In principle you're entirely right. In practice, RLHF is a tiny fraction of the total supervision. Most of the learning still has to happen by next-word prediction.
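The claim above can be made concrete with a back-of-envelope calculation comparing the information content of the two supervision sources. All the numbers below are rough ballpark assumptions chosen for illustration, not measurements of any particular model:

```python
import math

# Back-of-envelope sketch of why RLHF is "a tiny fraction of the total
# supervision". Every quantity here is an illustrative assumption.
pretrain_tokens = 300e9       # assumed pretraining corpus size, in tokens
preference_pairs = 1e6        # assumed number of human preference comparisons
bits_per_comparison = 1       # a binary A-vs-B choice carries at most 1 bit

# Each next-token prediction target carries up to log2(vocab) bits.
vocab_size = 50_000
bits_pretrain = pretrain_tokens * math.log2(vocab_size)
bits_rlhf = preference_pairs * bits_per_comparison

print(f"RLHF share of supervision bits: {bits_rlhf / bits_pretrain:.2e}")
```

Under these assumptions the preference data contributes well under a millionth of the total target bits, which is the intuition behind "most of the learning still has to happen by next-word prediction."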

Delip Rao e/σ (@deliprao)'s Twitter Profile Photo

1. When we say model X beat GPT-3.5 by whatever percent, we really don’t know which GPT-3.5 we are talking about. All we know is they used the API, but the model behind the API is changing all the time, and it’s impossible to reproduce those results over time.

Glean (@glean)'s Twitter Profile Photo

"The punchline [...] is that all the winners in the AI space will have data moats." "The data moat needs to be fast and queryable. This is a Search Problem!" 👋 @sourcegraph about.sourcegraph.com/blog/cheating-…

Aran Komatsuzaki (@arankomatsuzaki)'s Twitter Profile Photo

Self-Refine: Iterative Refinement with Self-Feedback Presents a novel approach that allows LLMs to iteratively refine outputs and incorporate feedback along multiple dimensions to improve performance on diverse tasks. proj: selfrefine.info abs: arxiv.org/abs/2303.17651
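The generate → feedback → refine loop the paper describes can be sketched in a few lines. Here `llm` is a hypothetical stand-in for any text-completion call, and the prompts and stopping criterion are illustrative assumptions, not the paper's exact ones:

```python
# Minimal sketch of a Self-Refine-style loop: generate, self-critique, rewrite.
# `llm` is a placeholder; swap in a real completion API. Its canned responses
# below just simulate one critique-and-improve cycle for demonstration.

def llm(prompt: str) -> str:
    """Hypothetical LLM call (canned behavior for illustration only)."""
    if "Feedback:" in prompt and "Rewrite" in prompt:
        return "improved draft"
    if "critique" in prompt.lower():
        # Approve once the draft is no longer the first attempt.
        return "Too vague." if "first draft" in prompt else "DONE"
    return "first draft"

def self_refine(task: str, max_iters: int = 3) -> str:
    output = llm(f"Task: {task}\nAnswer:")
    for _ in range(max_iters):
        feedback = llm(f"Critique this answer to '{task}':\n{output}")
        if "DONE" in feedback:   # model judges its own output acceptable
            break
        output = llm(
            f"Task: {task}\nDraft: {output}\nFeedback: {feedback}\nRewrite the draft:"
        )
    return output
```

The key idea is that the same model plays all three roles (generator, critic, refiner), so no extra training signal is needed, only extra inference calls.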

Eddie Zhou (@eddiedzhou)'s Twitter Profile Photo

Bringing a real generative experience to enterprise users is incredibly challenging, but over the past few months we've made some amazing progress. There's even more to come!

Eddie Zhou (@eddiedzhou)'s Twitter Profile Photo

Awesome to see from Ali Ghodsi and Databricks. Both the model weights and the dataset itself will be really important to bring generative product experiences to more and more people!

Sebastian Raschka (@rasbt)'s Twitter Profile Photo

Finetuning vs prompting? Here's a nice empirical insight from arxiv.org/abs/2104.08691 illustrating that finetuning outperforms prompting. (Caveat: I wish they had done a per-dataset analysis to learn how much labeled data is needed to outperform prompting)

Chau Tran (@mr_cheu)'s Twitter Profile Photo

Key design difference between ChatGPT with browsing plugin vs Perplexity Copilot (maybe BingChat too):
- ChatGPT's LLM performs a series of sequential actions: query -> click -> extract -> query ...
- Perplexity's LLM makes fewer but more parallel decisions: issue multiple
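The two designs above can be sketched side by side. `search` is a hypothetical stand-in for a web/search tool call; the step names and query fan-out are illustrative assumptions:

```python
import asyncio

async def search(query: str) -> str:
    """Hypothetical search-tool call; the sleep stands in for network latency."""
    await asyncio.sleep(0.01)
    return f"results for {query!r}"

async def sequential_agent(question: str) -> list[str]:
    # ChatGPT-browsing style: each next action is chosen after seeing the
    # previous result, so the calls cannot overlap.
    results = []
    for step in ("initial search", "follow-up", "verification"):
        results.append(await search(f"{question} ({step})"))
    return results

async def parallel_agent(question: str) -> list[str]:
    # Perplexity-Copilot style: decide several queries up front, issue them
    # concurrently, then synthesize from all results at once.
    queries = [f"{question} (angle {i})" for i in range(3)]
    return await asyncio.gather(*(search(q) for q in queries))
```

The trade-off: the sequential design can condition each action on fresh evidence, while the parallel design finishes in roughly one round-trip instead of N.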

Eddie Zhou (@eddiedzhou)'s Twitter Profile Photo

With Glean Chat, we've brought a real ChatGPT experience to the workplace, grounded in your company's language and knowledge. Check out some use cases, and sign up for a demo! glean.com/get-a-demo

Eddie Zhou (@eddiedzhou)'s Twitter Profile Photo

Check out our post on the complexity of building an enterprise-knowledge-aware AI Assistant, and why you can't just slap vector search together with some prompts!

Delip Rao e/σ (@deliprao)'s Twitter Profile Photo

One of the best pieces of advice I embodied from my advisor is “smell the data”. You pay for it in compute and other ways if you don’t do it, and from my experience working with others, most don’t. That’s one of the reasons why we have overly complicated archs, objectives, and

Eddie Zhou (@eddiedzhou)'s Twitter Profile Photo

🎉 Excited to share an awesome milestone for Glean – we raised a Series D of over $200M at a $2.2B valuation led by Kleiner Perkins and Lightspeed! bit.ly/3OYlJYi Amidst a rollercoaster year of excitement and hype, I'm really proud to be part of a team that put