RodrigoPy (@rodrigopy5) 's Twitter Profile
RodrigoPy

@rodrigopy5

Chemical Eng., Machine Learning Eng., PhD. Concept Drift and Online ML. Tech Lead at RD

ID: 1185826318887337985

Joined: 20-10-2019 07:53:40

3.3K Tweets

157 Followers

666 Following

tetsuo.ai 💹🧲 (@7etsuo) 's Twitter Profile Photo

Ryan O'Donnell from CMU explains the Word RAM model: it uses fixed-size words (e.g., 64-bit) with constant-time operations for algorithm analysis. 🧵1/2 Full CS Theory Toolkit at CMU👇
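A rough illustration of what the thread describes (my own sketch, not from the lecture): in the Word RAM model, each operation on a single w-bit word (here w = 64) is charged unit cost, and wider inputs are analyzed as arrays of words.

```python
# Word RAM sketch: operations on one 64-bit word cost O(1).
W = 64
MASK = (1 << W) - 1  # all 64 bits set

def add(a, b):
    # constant-time word addition, wrapping modulo 2^64
    return (a + b) & MASK

def popcount(x):
    # counts set bits in one word; naively O(w) word operations,
    # though some RAM variants treat it as a single O(1) primitive
    return bin(x & MASK).count("1")

print(add(MASK, 1))      # 0 (wraps around)
print(popcount(0b1011))  # 3
```

The point of the model is that the cost of an algorithm is counted in these unit word operations, so e.g. adding two n-word integers costs O(n), not O(1).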

Paulo Gala (@paulogala) 's Twitter Profile Photo

BRICS will soon total 3,727,938,000 inhabitants.

That is 46% of the world's population.

What is being ignored or neglected by the G7 -- but will be studied by historians?

Here is the full list of 13 examples: 🧵
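The percentage in the tweet can be sanity-checked with quick arithmetic. The world-population figure of roughly 8.1 billion is my assumption, not stated in the tweet:

```python
# Check the tweet's claim: 3,727,938,000 people ~= 46% of the world.
brics = 3_727_938_000
world = 8_100_000_000  # assumed ~2024 world population
share = brics / world
print(f"{share:.0%}")  # prints "46%"
```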
Manish Kumar Shah (@manishkumar_dev) 's Twitter Profile Photo

Free JavaScript Courses to learn in 2024:

1. Learn JavaScript
codecademy.com/learn/introduc…

2. JavaScript for Beginners
simplilearn.com/learn-javascri…

3. JavaScript Essentials
udemy.com/course/javascr…

4. JavaScript Fundamentals
udemy.com/course/javascr…

5. Learn to Program in JavaScript:
Aleksa Gordić (水平问题) (@gordic_aleksa) 's Twitter Profile Photo

New DeepMind paper: "A Little Help Goes a Long Way: Efficient LLM Training by Leveraging Small LMs"

They introduce SALT, a two-stage approach to pretraining LLMs:

1. Use an SLM (small LM) to bootstrap the LLM pre-training via knowledge distillation (the loss is cross-entropy against
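A minimal sketch of the distillation loss the tweet describes: the large model's next-token distribution is trained with cross-entropy against the small teacher's soft distribution. Function names and shapes here are my own illustration, not the paper's code:

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits):
    # cross-entropy of the student against the teacher's soft targets:
    # -sum_i p_teacher(i) * log p_student(i), over one vocabulary slice
    p_teacher = softmax(teacher_logits)
    log_p_student = [math.log(p) for p in softmax(student_logits)]
    return -sum(pt * lps for pt, lps in zip(p_teacher, log_p_student))

# toy usage: logits over a 3-token vocabulary
loss = distillation_loss([2.0, 0.5, -1.0], [1.5, 1.0, -0.5])
print(loss > 0)  # True: cross-entropy against a soft target is positive
```

In practice this would be computed per token position over the full vocabulary with tensors, but the loss itself is just this soft-target cross-entropy.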
tut_ml (@tut_ml) 's Twitter Profile Photo

Best Statistics Courses- mltut.com/best-course-on…

Kirk Borne
Antonio Grasso
@ronald_vanloon
#MachineLearning #DeepLearning #BigData #Datascience #ML #HealthTech #DataVisualization #ArtificialInteligence #SoftwareEngineering #GenAI #deeplearning #ChatGPT #OpenAI #python #AI #keras