AI at Meta

@AIatMeta

Together with the AI community, we are pushing the boundaries of what’s possible through open science to create a more connected world.

29-08-2018 16:45:58

1.9K Tweets

532.8K Followers

256 Following

AI at Meta (@AIatMeta):

Today’s release includes both 8B & 70B Llama 3 models that were trained on over 15T tokens. The training dataset is seven times larger than that used for Llama 2.

AI at Meta (@AIatMeta):

Our latest Llama 3 models feature double the context length of our previous Llama 2 models — additionally in the coming months, we’re working to release models with even longer context windows to enable even more unique use cases.
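For concreteness (the figures come from the public model cards, not this thread): Llama 2 shipped with a 4,096-token context window, so doubling it gives Llama 3 an 8,192-token window. A quick back-of-the-envelope check of how much text that holds, assuming a rough heuristic of about four characters per token for English:

```python
llama2_ctx = 4096            # Llama 2 context window, in tokens (per model card)
llama3_ctx = 2 * llama2_ctx  # Llama 3 doubles the context length
chars_per_token = 4          # rough heuristic for English text, not an exact figure

# 8,192 tokens corresponds to roughly 32k characters of English text.
print(llama3_ctx, llama3_ctx * chars_per_token)
```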

AI at Meta (@AIatMeta):

Llama 3 uses a new tokenizer with a vocabulary of 128k tokens. This enables it to encode language much more efficiently.

It offers notably improved token efficiency: despite being larger, the Llama 3 8B model maintains inference efficiency on par with Llama 2 7B.
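Why a bigger vocabulary encodes language more efficiently can be illustrated with a toy sketch (this is not Llama's actual BPE tokenizer): a greedy longest-match encoder run over two vocabularies, where the larger vocabulary contains longer pieces and therefore needs fewer tokens for the same string.

```python
def greedy_tokenize(text, vocab):
    """Greedy longest-match tokenization over a fixed vocabulary.

    Falls back to single characters, so any string can be encoded.
    """
    tokens, i = [], 0
    max_len = max(len(piece) for piece in vocab)
    while i < len(text):
        # Try the longest vocabulary match first, down to a single character.
        for j in range(min(len(text), i + max_len), i, -1):
            piece = text[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

# Toy vocabularies: the larger one adds longer, whole-word pieces.
small_vocab = {"en", "er", "iz"}
large_vocab = small_vocab | {"token", "izer", "encode", "language"}

text = "tokenizerencodeslanguage"
toks_small = greedy_tokenize(text, small_vocab)
toks_large = greedy_tokenize(text, large_vocab)
print(len(toks_small), len(toks_large))  # the larger vocabulary yields fewer tokens
```

The same effect at Llama 3's scale (a 128k-piece vocabulary) means fewer tokens per sentence, which offsets the compute cost of the larger 8B model at inference time.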
