Kojo Osei (@heykojo)'s Twitter Profile
Kojo Osei

@heykojo

Partner @matrixvc (pre-seed to Series A) // writing kojo.blog // @stanford alum

ID: 1277048863237554176

Joined: 28-06-2020 01:19:22

399 Tweets

1.1K Followers

558 Following

Kojo Osei (@heykojo):

The bear case in the GS report underestimates how much of day-to-day programming falls into the bucket of simple but tedious.

Freeing developers from that work is precisely the point.
Kojo Osei (@heykojo):

In-app search is a guessing game without a way to improve search results. Objective comes with query evals so you don't have to guess how well search works for your users.
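
For context, a query eval in this setting is just a labeled set of queries scored against live search results. Below is a minimal sketch of the idea in Python; the function and data names are hypothetical illustrations, not Objective's actual API.

```python
# Minimal sketch of a query eval: grade a search backend against labeled
# relevance judgments. run_search and the judgment data are hypothetical.

def recall_at_k(retrieved: list[str], relevant: set[str], k: int = 10) -> float:
    """Fraction of known-relevant doc ids found in the top-k results."""
    hits = sum(1 for doc_id in retrieved[:k] if doc_id in relevant)
    return hits / len(relevant) if relevant else 0.0

def run_search(query: str) -> list[str]:
    # Stand-in for a call to the real search backend; returns ranked doc ids.
    return ["sku-102", "sku-044", "sku-587"]

# Labeled eval set: query -> ids a user would consider relevant.
judgments = {
    "wireless headphones": {"sku-102", "sku-587"},
    "running shoes size 10": {"sku-311"},
}

scores = {q: recall_at_k(run_search(q), rel) for q, rel in judgments.items()}
print(sum(scores.values()) / len(scores))  # mean recall@10 across the eval set
```

Tracking a metric like this over time is what turns search tuning from guessing into measurement.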

Kojo Osei (@heykojo):

Llama's permissive license, which allows distillation, is the most consequential shift in the AI arms race. This is a neat example of lightweight prompt distillation.
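
To make the distillation point concrete, here is a rough sketch of the lightweight recipe: collect a large teacher model's answers to a prompt set, then fine-tune a smaller student on those pairs. The OpenAI-compatible client, endpoint, and model names below are assumptions, not details from the tweet.

```python
# Sketch of lightweight prompt distillation: log a teacher model's answers
# as supervised fine-tuning data for a smaller student model.

import json
from openai import OpenAI

# Hypothetical local server hosting the teacher; swap in your own endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

prompts = ["Explain RLHF in one paragraph.", "Summarize the CAP theorem."]

with open("distill.jsonl", "w") as f:
    for p in prompts:
        answer = client.chat.completions.create(
            model="llama-3.1-405b-instruct",  # teacher; model name is illustrative
            messages=[{"role": "user", "content": p}],
        ).choices[0].message.content
        # Each (prompt, teacher answer) pair becomes one training example
        # for fine-tuning the smaller student model.
        f.write(json.dumps({"messages": [
            {"role": "user", "content": p},
            {"role": "assistant", "content": answer},
        ]}) + "\n")
```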

Kojo Osei (@heykojo):

Two interesting bits from the GameNGen paper (diffusion models acting as game engines):

1) Data was generated with an RL agent — a clever way to get synthetic data.

2) Researchers used Stable Diffusion 1.4 from *2022*. You don't need the next big model to do cool work.
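
A rough sketch of that data-collection idea: roll a policy through a game environment and log (frame, action) pairs as synthetic training data for the next-frame diffusion model. The environment and the random stand-in policy below are illustrative assumptions, not the paper's setup.

```python
# Sketch of GameNGen-style data collection: an agent plays the game and we
# record frames plus the actions that produced them.

import gymnasium as gym
import numpy as np

env = gym.make("CartPole-v1", render_mode="rgb_array")
obs, _ = env.reset(seed=0)

frames, actions = [], []
for _ in range(500):
    action = env.action_space.sample()  # stand-in for the paper's trained RL agent
    obs, reward, terminated, truncated, _ = env.step(action)
    frames.append(env.render())   # RGB frame the diffusion model learns to predict
    actions.append(action)        # conditioning signal for next-frame prediction
    if terminated or truncated:
        obs, _ = env.reset()

np.savez("rollout.npz", frames=np.array(frames), actions=np.array(actions))
```
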
Patrick Malatack (@patrickmalatack):

Meet Anton, the latest release from Objective, Inc. Improve search results with an LLM as a judge. It works on any search you're already using. Watch Lance Hasson demo it: youtube.com/watch?v=geX8mw…
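
As a sketch of the LLM-as-a-judge pattern: ask a model to grade each (query, result) pair on a fixed relevance scale, then aggregate the grades. The prompt, model, and scale below are assumptions; Anton's actual implementation isn't shown in the tweet.

```python
# Sketch of "LLM as a judge" for search relevance: one graded pair at a time.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any capable model works

def judge(query: str, result_text: str) -> int:
    """Return a 0-3 relevance grade for one search result."""
    prompt = (
        f"Query: {query}\n"
        f"Result: {result_text}\n"
        "Grade the result's relevance to the query on a 0-3 scale "
        "(0 = irrelevant, 3 = perfect). Reply with the digit only."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    return int(reply.strip()[0])  # parse the leading digit of the grade

print(judge("waterproof hiking boots", "Men's leather waterproof hiking boot"))
```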

LM Studio (@lmstudioai):

LM Studio 0.3.4 ships with Apple MLX 🚢🍎

Run on-device LLMs super fast, 100% locally and offline on your Apple Silicon Mac!

Includes:
> run Llama 3.2 1B at ~250 tok/sec (!) on M3
> enforce structured JSON responses
> use via chat UI, or from your own code
> run multiple
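
Since LM Studio exposes an OpenAI-compatible local server, a structured-JSON call can look roughly like the sketch below. The port, model name, and schema are assumptions; check your own LM Studio setup for the actual values.

```python
# Sketch of requesting a structured JSON response from LM Studio's local
# OpenAI-compatible server. Port, model name, and schema are assumptions.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

schema = {
    "name": "book_info",
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "year": {"type": "integer"},
        },
        "required": ["title", "year"],
    },
}

resp = client.chat.completions.create(
    model="llama-3.2-1b-instruct",  # whatever model you've loaded locally
    messages=[{"role": "user", "content": "Name one classic sci-fi novel."}],
    response_format={"type": "json_schema", "json_schema": schema},  # enforce the schema
)
print(resp.choices[0].message.content)  # JSON text matching the schema
```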