Stanislav Fort (@stanislavfort)'s Twitter Profile
Stanislav Fort

@stanislavfort

Building in AI + security | Stanford PhD in AI & Cambridge physics | ex-Anthropic and DeepMind | alignment + progress + growth | 🇺🇸🇨🇿

ID: 40285266

Link: http://stanislavfort.com · Joined: 15-05-2009 17:16:47

1.1K Tweets

13.13K Followers

7.7K Following

Stanislav Fort (@stanislavfort):

We put a basic demo Colab for Direct Ascent Synthesis (DAS) (arxiv.org/abs/2502.07753) up on GitHub: github.com/stanislavfort/…

You should be able to 1) generate images from text, 2) run "style" transfer, and 3) reconstruct images from CLIP embeddings in minutes on an A100
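
For a rough sense of the mechanics, here is a minimal sketch of the core DAS idea as summarized from the paper: parameterize the image as a sum of components at multiple resolutions and run gradient ascent on CLIP text-image similarity. This is an illustrative approximation, not the repo's code; the model choice, resolution schedule, sigmoid squashing, and hyperparameters are all assumptions.

```python
# Hypothetical sketch of multi-resolution gradient ascent on CLIP similarity.
# Not the official DAS implementation; see the paper/repo for the real method.
import torch
import torch.nn.functional as F
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, _ = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
model = model.to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

# Target text embedding.
with torch.no_grad():
    tokens = tokenizer(["a photo of a golden retriever"]).to(device)
    text_emb = F.normalize(model.encode_text(tokens), dim=-1)

# The image is a sum of learnable components at several resolutions
# (the multi-resolution prior; the schedule here is a guess).
sizes = [1, 2, 4, 8, 16, 32, 64, 128, 224]
comps = [torch.zeros(1, 3, s, s, device=device, requires_grad=True) for s in sizes]
opt = torch.optim.Adam(comps, lr=0.02)

# OpenAI CLIP normalization constants.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(200):
    # Upsample and sum the components, squash to [0, 1], normalize for CLIP.
    img = sum(F.interpolate(c, size=224, mode="bilinear", align_corners=False)
              for c in comps)
    img = torch.sigmoid(img)
    img_emb = F.normalize(model.encode_image((img - mean) / std), dim=-1)
    loss = -(img_emb * text_emb).sum()  # ascend on cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Presumably the three demo tasks differ mainly in the optimization target: a text embedding for generation, a mix of embeddings for style transfer, and a stored image embedding for reconstruction.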
Stanislav Fort (@stanislavfort):

I'm trying to build a quick and dirty semantic search over a repository. What are the best embedding models out there right now? I'm a bit out of the loop. Or should I be doing something other than comparing embedding vectors of a query to code chunks?
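
For what it's worth, the baseline being described looks roughly like this: chunk the code, embed the chunks, and rank them by cosine similarity against the query embedding. A minimal sketch with sentence-transformers; the model name, chunk size, and repo path are placeholder assumptions rather than recommendations.

```python
# Quick-and-dirty semantic search over a repo via embedding comparison.
# Model, chunking, and paths below are illustrative placeholders.
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Naive chunking: fixed-size windows of lines from each source file.
chunks, locations = [], []
for path in Path("my_repo").rglob("*.py"):
    lines = path.read_text(errors="ignore").splitlines()
    for start in range(0, len(lines), 40):
        chunk = "\n".join(lines[start:start + 40])
        if chunk.strip():
            chunks.append(chunk)
            locations.append((str(path), start + 1))

corpus_emb = model.encode(chunks, convert_to_tensor=True, normalize_embeddings=True)

def search(query, k=5):
    query_emb = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)
    hits = util.semantic_search(query_emb, corpus_emb, top_k=k)[0]
    return [(locations[h["corpus_id"]], h["score"]) for h in hits]

print(search("where is the config file parsed?"))
```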

Stanislav Fort (@stanislavfort):

This is still genuinely surprising: I use LLMs all the time to read research papers, to write code, to brainstorm ideas, and rarely see any issues at all + get a huge amount of productivity gain from them. Yet to many they're just valueless hallucinators. What's going on here?

Cameron Jones (@camrobjones):

New preprint: we evaluated LLMs in a 3-party Turing test (participants speak to a human & AI simultaneously and decide which is which).

GPT-4.5 (when prompted to adopt a humanlike persona) was judged to be the human 73% of the time, suggesting it passes the Turing test (🧵)
Stanislav Fort (@stanislavfort):

This is very much a landmark paper. An empirical demonstration that humans are able to learn concepts from superhuman AIs that were previously inaccessible to them.

Stanislav Fort (@stanislavfort):

Isn't the Strong Model Collapse paper basically impossible to be correct since synthetic data is a huge part of frontier model training already?

> results show that even the smallest fraction of synthetic data (e.g., as little as 1% [...]) can still lead to model collapse

???
Stanislav Fort (@stanislavfort):

My impression was that this was relatively widely accepted since the landmark 2017 paper on translation without parallel corpora, "Word Translation Without Parallel Data"? It's got >1k citations and spawned a small subfield.

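For reference, the workhorse of that paper (and the subfield it spawned) is an orthogonal Procrustes alignment between two monolingual embedding spaces; the paper's contribution is bootstrapping the seed dictionary adversarially instead of assuming one. A minimal sketch of just the Procrustes step, on synthetic data:

```python
# Orthogonal Procrustes: the rotation W minimizing ||X @ W - Y||_F is U @ Vt,
# where U, S, Vt is the SVD of X.T @ Y. Data here is synthetic for illustration.
import numpy as np

rng = np.random.default_rng(0)
d, n = 300, 5000
X = rng.normal(size=(n, d))                        # "source language" vectors
R_true = np.linalg.qr(rng.normal(size=(d, d)))[0]  # hidden ground-truth rotation
Y = X @ R_true + 0.01 * rng.normal(size=(n, d))    # "target language" vectors

U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt  # best orthogonal map from source space to target space

print(np.linalg.norm(X @ W - Y) / np.linalg.norm(Y))  # near-zero residual
```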
Stanislav Fort (@stanislavfort):

I have also been seeing a persistent hallucinatory citation of a paper I am allegedly the first author of, which, despite that, sadly doesn't exist
