Claude 2.1 (200K Tokens) - Pressure Testing Long Context Recall
We all love increasing context lengths - but what's performance like?
Anthropic reached out with early access to Claude 2.1 so I repeated the “needle in a haystack” analysis I did on GPT-4
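The setup can be sketched in a few lines: plant one out-of-place fact (the "needle") at a chosen depth inside a long filler document (the "haystack"), ask the model to retrieve it, and repeat across context lengths and depths. The names and filler text below are illustrative, not the actual test harness; the model call is left as a step the reader supplies.

```python
# Minimal needle-in-a-haystack prompt builder (illustrative sketch).
NEEDLE = "The best thing to do in San Francisco is eat a sandwich in Dolores Park."
QUESTION = "What is the best thing to do in San Francisco?"

def build_haystack(filler_paragraphs, depth_percent):
    """Insert NEEDLE at depth_percent (0 = very top, 100 = very bottom)."""
    idx = round(len(filler_paragraphs) * depth_percent / 100)
    docs = filler_paragraphs[:idx] + [NEEDLE] + filler_paragraphs[idx:]
    return "\n\n".join(docs)

def build_prompt(context):
    # Ask the model to answer only from the provided context.
    return f"{context}\n\n{QUESTION} Answer using only the context above."

# Example: a 50%-depth needle in 1,000 filler paragraphs.
filler = [f"Filler paragraph {i}: lorem ipsum dolor sit amet." for i in range(1000)]
prompt = build_prompt(build_haystack(filler, depth_percent=50))
# Send `prompt` to the model at each (context length, depth) pair and
# score whether the response surfaces the needle.
```

Sweeping `depth_percent` and the amount of filler gives the grid of recall scores the analysis reports.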
Here's what I found: