Gavin Li (@lyo_gavin) 's Twitter Profile
Gavin Li

@lyo_gavin

Founder of Anima AI, former Airbnb and Alibaba AI senior leader. Fintech unicorn (YINGMI)'s AI advisor. Author of AirLLM. Building @crazyfaceai25

ID: 23437507

Link: https://animaai.cloud/ | Joined: 09-03-2009 13:57:23

120 Tweets

90 Followers

201 Following

David Park (@davidjpark96) 's Twitter Profile Photo

Dcai It was literally hundreds and hundreds of optimizations. The most obvious ones are to have an annual plan option and, if you have seasonality like us, some way to pause the subscription!
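
One way to implement the "pause the subscription" idea, as a rough sketch assuming Stripe (which the next tweet uses for payments): Stripe subscriptions support a pause_collection setting that stops invoicing without cancelling. The API key and subscription ID below are placeholders.

```python
# Rough sketch, assuming Stripe: pause_collection stops invoicing a
# subscription without cancelling it. Key and subscription ID are placeholders.
import stripe

stripe.api_key = "sk_test_..."  # placeholder secret key

# Pause: keep the subscription alive but stop collecting payment.
stripe.Subscription.modify("sub_...", pause_collection={"behavior": "void"})

# Resume: unset pause_collection by passing an empty string.
stripe.Subscription.modify("sub_...", pause_collection="")
```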

Marc Lou (@marc_louvion) 's Twitter Profile Photo

To send emails, I use Mailgun ($4/month). It’s easy and free to start. Payments are processed via Stripe (3.5%). Getting started is a bit daunting. I wrote this tutorial to handle subscriptions with Stripe: byedispute.com/blog/how-to-co….
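
For context, a minimal sketch of what a subscription Checkout session looks like with Stripe's official Python SDK; the API key, price ID, and URLs below are placeholders, not values from the linked tutorial.

```python
# Minimal sketch of a subscription Checkout session with Stripe's Python SDK.
# The API key, recurring price ID, and URLs are placeholders, not real values.
import stripe

stripe.api_key = "sk_test_..."  # placeholder secret key

session = stripe.checkout.Session.create(
    mode="subscription",
    line_items=[{"price": "price_...", "quantity": 1}],  # a recurring Price ID
    success_url="https://example.com/success",
    cancel_url="https://example.com/cancel",
)
print(session.url)  # redirect the customer here to complete payment
```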

歸藏(guizang.ai) (@op7418) 's Twitter Profile Photo

Meta open-sourced an end-to-end speech model, Spirit LM, last week. This is really important, and it somehow went unnoticed. The model comes in two versions: Base: suited for general speech recognition and generation, without emotional variation. Expressive: captures the emotional features of speech and can generate speech carrying emotions such as happiness, anger, or excitement. Key features: Spirit LM

Gavin Li (@lyo_gavin) 's Twitter Profile Photo

There will be 100x more of these "designed-for-short-video-hook" toys getting made. Totally useless in daily life, but they get you traffic like magic. People would BUY them.

Gavin Li (@lyo_gavin) 's Twitter Profile Photo

The most simple, most basic application, but there are actually a lot of details that are extremely hard to get perfect. Focus deeply on the tiniest thing and provide a perfect solution. There are many "last mile" problems that are a real "blue ocean"!

Gavin Li (@lyo_gavin) 's Twitter Profile Photo

Alibaba has published its o1 implementation. Marco-o1 uses Monte Carlo Tree Search (MCTS) in its fine-tuning process, highlighting a new approach that doesn't rely on massive models but enhances reasoning. Alibaba employs synthetic datasets built with MCTS to advance LLMs' inference.
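
A toy sketch of the general idea (not Marco-o1's actual code): MCTS over candidate reasoning steps, where propose_steps and score stand in for an LLM's step proposals and its confidence estimate.

```python
# Generic sketch of MCTS-guided search over reasoning steps, in the spirit of
# what the tweet describes. NOT Alibaba's code: propose_steps and score are
# stand-ins for an LLM's step proposals and its confidence scores.
import math, random

def propose_steps(state):
    # Stand-in for an LLM proposing candidate next reasoning steps.
    return [state + [random.random()] for _ in range(3)]

def score(state):
    # Stand-in for a value estimate (e.g., token-probability confidence).
    return sum(state) / (len(state) or 1)

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb(self, c=1.4):  # UCB1 balances exploitation and exploration
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def mcts(root_state, iterations=100, max_depth=5):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # Selection: descend via UCB until reaching a leaf.
        while node.children:
            node = max(node.children, key=Node.ucb)
        # Expansion: add candidate next steps if not at max depth.
        if len(node.state) < max_depth:
            node.children = [Node(s, node) for s in propose_steps(node.state)]
            node = random.choice(node.children)
        # Simulation + backpropagation of the value estimate.
        reward = score(node.state)
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda n: n.visits).state

best = mcts([])  # most-visited first step of a toy "reasoning" chain
```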

Ilias Ism (@illyism) 's Twitter Profile Photo

Drop your website URL. We'll send you ONE programmatic SEO strategy perfect for your niche (first 10 replies get a video) 👇

Gavin Li (@lyo_gavin) 's Twitter Profile Photo

The key to startup success? A killer product idea. So I'm maximizing my chances by trying as many products as possible. I built and tested 10 products in the last 6 months; most failed, but some thrived. The cost is almost negligible. Aiming for 20+ next year!

Gavin Li (@lyo_gavin) 's Twitter Profile Photo

AI will never replace Lex Fridman. Cuz he's more disciplined than AI. Even AI hallucinates.

I feel he's like a robot 😱. Isn't it too much?

Does such rigid discipline leave no room for the art and creativity that are also crucial for many forms of achievement?
Gavin Li (@lyo_gavin) 's Twitter Profile Photo

Are LoRA and full fine-tuning really the same?
This latest paper has some truly interesting findings:

"Intruder dimensions" found in LoRA (high-ranking singular vectors orthogonal to the pre-trained weights) cause forgetting of pre-training knowledge and weaker continual learning.
Gavin Li (@lyo_gavin) 's Twitter Profile Photo

Tibo The key to great content is the core idea, core data, and core material. Whether it's written by an LLM or a human doesn't actually matter too much.

Gavin Li (@lyo_gavin) 's Twitter Profile Photo

There won't be any tokenizer anymore?

"Large Concept Models" from Meta.
Humans don't think in tokens. What if we train the autoregressive prediction model on "concepts"...
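
A toy sketch of that idea (not Meta's LCM code, which works on SONAR sentence embeddings and includes diffusion variants): autoregressively regress the next sentence embedding, the "concept", instead of predicting tokens. The random embeddings and tiny transformer here are stand-ins.

```python
# Toy sketch: instead of next-token prediction, predict the NEXT SENTENCE
# EMBEDDING ("concept") from the previous ones. Random vectors and a plain
# MSE regression head stand in for real sentence embeddings and the real model.
import torch
import torch.nn as nn

dim, seq_len, batch = 256, 8, 4
concepts = torch.randn(batch, seq_len, dim)  # stand-in sentence embeddings

class TinyConceptLM(nn.Module):
    def __init__(self, dim):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, dim)  # regress the next concept vector

    def forward(self, x):
        # Causal mask so each position only attends to earlier concepts.
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        return self.head(self.encoder(x, mask=mask))

model = TinyConceptLM(dim)
pred = model(concepts[:, :-1])              # predict concept t+1 from concepts <= t
loss = nn.functional.mse_loss(pred, concepts[:, 1:])
loss.backward()
```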