Lienid (@0xLienid)'s Twitter Profile
Lienid

@0xLienid

ID: 1403525107209940992

Joined: 12-06-2021 01:30:44

6.8K Tweets

524 Followers

587 Following

Lienid (@0xLienid):

tbh feels beyond stupid that every token is weighted the same in llm training loss, particularly for instruct or task-specific use cases. most tokens are filler and thus noise relative to what you want the model to really learn.
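the idea above is easy to express as a per-token weight on the cross-entropy instead of the usual uniform average. a minimal pure-Python sketch; the `weighted_token_loss` helper and the 0.1/1.0 filler-vs-task weights are illustrative assumptions, not any framework's API:

```python
def weighted_token_loss(token_logprobs, weights):
    """Sequence cross-entropy with a per-token weight instead of the
    usual uniform weighting. token_logprobs[i] is the model's log
    probability of the gold token at position i; weights[i] downweights
    filler tokens (e.g. 0.1) and keeps task-relevant ones at 1.0.
    Hypothetical helper for illustration only."""
    total = sum(-lp * w for lp, w in zip(token_logprobs, weights))
    norm = sum(weights)
    return total / norm if norm > 0 else 0.0

# Toy example: 4 tokens, two of them "filler" (downweighted to 0.1).
logprobs = [-0.2, -1.5, -0.3, -2.0]   # log-probs of the gold tokens
weights  = [ 0.1,  1.0,  0.1,  1.0]   # filler tokens downweighted
loss = weighted_token_loss(logprobs, weights)
```

with uniform weights this reduces to the standard mean cross-entropy; the weighted version lets the easy filler positions contribute almost nothing to the gradient.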

Lienid (@0xLienid):

can anyone point me to research on using chain-of-thought (or tree-of-thought) to generate a synthetic dataset that basically converts system 2 thought (the chain or tree) into system 1 thought (just standard next token prediction)?

i assume someone has done this
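one common shape for this kind of distillation pipeline, sketched below with hypothetical field names and a simple consistency filter (assumptions for illustration, not any specific paper's method): sample chain-of-thought traces, keep only those whose reasoning ends in the recorded answer, then train on plain question → answer pairs with the chain dropped, so the student learns to emit the answer directly:

```python
def distill_cot_examples(examples):
    """Convert chain-of-thought traces into direct (prompt -> completion)
    training pairs: the intermediate reasoning ("system 2") is used only
    to produce and sanity-check the final answer, then discarded, so a
    model fine-tuned on the output learns standard next-token prediction
    of the answer ("system 1"). Keys 'question'/'chain'/'answer' are
    hypothetical."""
    dataset = []
    for ex in examples:
        # Cheap consistency filter on the synthetic data: keep only
        # traces whose chain actually terminates in the recorded answer.
        if ex["chain"] and ex["chain"][-1] == ex["answer"]:
            dataset.append({"prompt": ex["question"],
                            "completion": ex["answer"]})
    return dataset

traces = [
    {"question": "12 * 4 = ?", "chain": ["12 * 4 = 48", "48"], "answer": "48"},
    {"question": "7 + 5 = ?",  "chain": ["7 + 5 = 13", "13"], "answer": "12"},  # inconsistent, dropped
]
sft_data = distill_cot_examples(traces)
```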

Matthew Garcia (@garciacmatthew):

cora since the aughts we've all received nonstop and pervasive media/social media pipe gen activity from VCs and founders parading around as self-actualization performance narratives. this has shifted egoistic preferences, materializing in everyone thinking they need to be founders.

Lienid (@0xLienid):

friendly reminder that these things are COMMODITIES and will end up priced to reflect that

zuck has the right strategy here. openai, microsoft, and anthropic do not.

Flo Crivello (@Altimor):

Many people are hating on this video, but I actually think it's a fascinating display of the two very distinct modes of relating to reality: mimesis vs. first-principles thinking.

95% of people operate by mimesis. Truth doesn't matter to them as much as getting
