Mattness 💀(🫗,🦉) (@0xmattness)'s Twitter Profile
Mattness 💀(🫗,🦉)

@0xmattness

Founder: @Dither_Solana
@tokens_terminal

ID: 1389754537674747904

Joined: 05-05-2021 01:31:27

3.3K Tweets

4.4K Followers

665 Following

Meaningful art transcends the tool

The most viral "AI slop" took emotionally charged moments/memes and added an emotionally charged style. The meaning was derived from cultural and personal significance.

The artistic work was performed before the AI work

This isn't the first screenshot of payouts I've seen, but it's a clear indication that CT activity has been down significantly. Everyone's on standby until something exciting happens

As much as people want it to be true, there's little fundamental difference between the leading LLM systems. It's like arguing over programming languages: I have my favorites, but the actual effect is marginal

Where does AI go from here? Wherever bottlenecks of intelligence exist. There are intelligence sinks 2x larger than software engineering that are still relatively untouched. The TAM for intelligence is kind of insane.

We're likely a couple of months away from 'niche deep research' emerging as a trend. It's the same trajectory as large general models giving way to niche small models. Time is a circle. I wonder if this one will end differently

Agent frameworks are dead. The model has eaten the framework layer. They were always a stopgap, but I didn't expect them to be on death's door so fast. Essentially, for an agent to work you needed to set up and steer the agent's thinking. You needed to use RAG to predict what was
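
A minimal sketch of the scaffolding pattern the tweet describes, assuming hypothetical `vector_store` and `llm` stand-ins rather than any specific framework's API:

```python
# Hypothetical sketch of the old framework pattern: hand-built RAG
# retrieval plus a steering prompt wrapped around the model.

def run_agent(task: str, vector_store, llm) -> str:
    # RAG step: retrieve the context we predict the agent will need.
    docs = vector_store.search(task, top_k=5)
    context = "\n".join(doc.text for doc in docs)

    # Steering step: a scaffold prompt that forces the model's thinking
    # through a fixed plan-then-answer structure.
    prompt = (
        "You are an agent. Think step by step.\n"
        f"Relevant context:\n{context}\n\n"
        f"Task: {task}\n"
        "Write a PLAN, then an ANSWER."
    )
    return llm.complete(prompt)
```

Newer models internalize most of this retrieval and steering on their own, which is the sense in which the model has eaten the framework layer.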

Feed it to reinforce what you want. When you interact with social media, remember that you are in control of the incentives you give the algorithm

Reasoning models are showing limitations in understanding human intention, especially at long context. IIRC Ilya made a point about not assuming perfect input from users. This seems to be rearing its head with longer-context agentic models (o3/Claude). They can now

Some thoughts on model size, reasoning, and noise (mostly for myself). TL;DR: we might get smarter models that are much smaller. There seems to be a trend of reasoning unlocking the abilities of smaller models. If we look at reasoning as noise minimization, then this starts to
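
One concrete reading of "reasoning as noise minimization" is self-consistency decoding: sample several reasoning traces and keep the majority answer, so noisy individual traces average out. A minimal sketch under that assumption; `sample_answer` is a hypothetical callable wrapping a small model:

```python
from collections import Counter

def self_consistent_answer(question: str, sample_answer, n: int = 9) -> str:
    # Draw n independent reasoning traces. If each trace is a noisy draw
    # around the correct answer, majority voting reduces the variance.
    answers = [sample_answer(question) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```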

Vaguepost: Pulled two really cool results. One changes how to think about LPs; I might be able to share it next week. The other is a potentially novel way to train an LLM/large time series model