Chethan (@_hey_chethan)'s Twitter Profile
Chethan

@_hey_chethan

Generative AI is taking over the world.

Learning to harness its incredible power.

ID:1587026708049252352

Link: https://chethan2106.gumroad.com/ · Joined: 31-10-2022 10:20:32

548 Tweets

136 Followers

146 Following

Chethan (@_hey_chethan)

If you want an interface-style glass panel for your web app and you are using Tailwind CSS, then use this:

rounded-xl border border-white/10 bg-white/30 px-6 py-3 shadow-lg backdrop-blur-lg

you are welcome 😄
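
A minimal sketch of how that class string might be applied, assuming a React + TypeScript project with Tailwind CSS already configured (the GlassPanel component name is illustrative, not from the tweet):

import React from "react";

// Hypothetical reusable wrapper that applies the glass-panel classes from the tweet.
type GlassPanelProps = { children: React.ReactNode };

export function GlassPanel({ children }: GlassPanelProps) {
  return (
    <div className="rounded-xl border border-white/10 bg-white/30 px-6 py-3 shadow-lg backdrop-blur-lg">
      {children}
    </div>
  );
}

The effect shows up best when the panel sits over a colorful or image background, since backdrop-blur-lg blurs whatever is rendered behind it.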

Chethan (@_hey_chethan)

If someone is trying to get it working and is not able to see the view even after a redeploy and everything else suggested, a nifty tip is to disable your ad blocker and then try.

You are welcome 😄

Chethan (@_hey_chethan)

It is awesome that it is so powerful, and being open source will make it orders of magnitude cheaper than ChatGPT.
We need this kind of competition to take LLMs forward.

Now, where is GPT-5?

jack morris (@jxmnop)

you're telling me an 8B param model was trained on fifteen trillion tokens? i didn't even know there was that much text in the world

really interesting to see how scaling laws have changed best practices; GPT-3 was 175 billion params and trained on a paltry 300 billion tokens

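As a rough illustration of the shift described above, here is the tokens-per-parameter arithmetic using only the numbers quoted in the tweet (a small TypeScript sketch; the helper name is mine, not from the tweet):

// Data-to-parameter ratio for the two models mentioned above.
const tokensPerParam = (tokens: number, params: number): number => tokens / params;

// GPT-3: 300 billion training tokens over 175 billion parameters ≈ 1.7 tokens per parameter.
console.log(tokensPerParam(300e9, 175e9).toFixed(1));

// LLaMA-3-8B: 15 trillion training tokens over 8 billion parameters ≈ 1875 tokens per parameter.
console.log(tokensPerParam(15e12, 8e9).toFixed(0));

That is roughly a thousandfold increase in the data-to-parameter ratio between the two models quoted.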
Cameron R. Wolfe, Ph.D. (@cwolferesearch)

LLaMA-3 is a prime example of why training a good LLM is almost entirely about data quality…

TL;DR. Meta released LLaMA-3-8B/70B today and 95% of the technical info we have so far is related to data quality:

- 15T tokens of pretraining data
- More code during pretraining
