Fanglin Lu (@fanglinlu) 's Twitter Profile
Fanglin Lu

@fanglinlu

Senior Software Engineer @ Google Cloud Vertex AI Gemini API

ID: 399688853

Joined: 27-10-2011 21:54:17

36 Tweets

214 Followers

787 Following

Fanglin Lu (@fanglinlu) 's Twitter Profile Photo

There is so much wisdom in this book "The Almanack of Naval Ravikant". I agree with almost every single word in it. Today I am inspired by this one: "Your real resume is just a catalog of all your suffering."

Yann LeCun (@ylecun) 's Twitter Profile Photo

A survey of LLMs with a practical guide and evolutionary tree.

Number of LLMs from Meta = 7
Number of open source LLMs from Meta = 7

The architecture nomenclature for LLMs is somewhat confusing and unfortunate.
What's called "encoder only" actually has an encoder and a decoder.

Sundar Pichai (@sundarpichai) 's Twitter Profile Photo

Today developers can start building with our first version of Gemini Pro through Google AI Studio at ai.google.dev.

Developers have a free quota and access to a full range of features including function calling, embeddings, semantic retrieval, custom knowledge

Fanglin Lu (@fanglinlu) 's Twitter Profile Photo

Thrilled about the Gemini API launch on Vertex AI today! 🎉 Proud to have contributed to the launch with the amazing team behind it.🚀 cloud.google.com/blog/products/… #GeminiAPI #VertexAI #GenAi #GoogleDeepmind

LMSYS Org (@lmsysorg) 's Twitter Profile Photo

How long have you been "planning to understand" how modern LLM inference works?

We just gave you a readable version of SGLang you can finish over the weekend.

Introducing mini-SGLang ⚡

We distilled SGLang from 300K into 5,000 lines. Kept the core design, cut the complexity.