brainmatics
@brainmatics
ID: 229852080
23-12-2010 14:15:50
Stop predicting words one by one. A new paper just broke the next-token rule every LLM follows. It cuts generation steps by 4× and training compute by 44%. Next-token prediction is the bottleneck. CALM replaces word-by-word guessing with
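The claimed 4× cut in generation steps comes from replacing per-token autoregression with one prediction per chunk of tokens. A minimal sketch of that difference, assuming a hypothetical `model` callable and an autoencoder-style `decoder` that expands one continuous vector into k tokens (names and interfaces are illustrative, not the paper's actual code):

```python
def generate_next_token(model, prompt, n_tokens):
    """Standard autoregression: one forward pass per generated token."""
    tokens = list(prompt)
    for _ in range(n_tokens):
        tokens.append(model(tokens))  # predict a single discrete token
    return tokens

def generate_calm_style(model, decoder, prompt, n_tokens, k=4):
    """CALM-style idea: each autoregressive step predicts one continuous
    vector, which a decoder expands into k tokens at once, so the number
    of sequential steps drops by a factor of k."""
    tokens = list(prompt)
    for _ in range(n_tokens // k):
        z = model(tokens)          # one continuous latent vector per step
        tokens.extend(decoder(z))  # decodes to k tokens in one shot
    return tokens
```

With k=4, producing the same number of tokens takes a quarter of the sequential model calls, which is where the headline speedup comes from.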