Hrsh Venket (@eric4rthurblair)'s Twitter Profile
Hrsh Venket

@eric4rthurblair

I enjoy solving problems and building things.
Currently @ Hong Kong University of Science and Technology for a Masters in AI. TFT player and board game nerd.

ID: 1444958363926741001

Link: https://hrsh-venket.github.io
Joined: 04-10-2021 09:32:31

9 Tweets

7 Followers

218 Following

Noam Brown (@polynoamial)'s Twitter Profile Photo

I vibecoded an open-source poker river solver over the holiday break. The code is 100% written by Codex, and I also made a version with Claude Code to compare.

Overall these tools allowed me to iterate much faster in a domain I know well. But I also felt I couldn't fully trust…
Grant Slatton (@grantslatton)'s Twitter Profile Photo

me to my mom: don't download any file, don't click any links in your email, call me before touching your computer

me on my own: "to install our new Rust formatter, just run 'curl totally_random_url.com/jf713had.txt | sh' in your terminal" how convenient! ctrl+c, ctrl+v, enter

Aesah (@aesahtft)'s Twitter Profile Photo

A common sampling bias mistake I see is using data heavily correlated with level. For example, the majority of 7 Zaun boards are level 9+, while most 5 Zaun boards are not. 

Level 9+ has a 3.43 AVP without any other filters, so of course 7 Zaun "looks better" in data.
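The confound Aesah describes is easy to reproduce with synthetic data. Below is a minimal Python sketch, with all numbers invented rather than taken from real TFT stats: placement depends only on level, yet the raw 7 Zaun average placement (AVP, lower is better) still beats the 5 Zaun AVP simply because 7 Zaun boards are mostly level 9+. Stratifying by level makes the apparent difference vanish.

```python
import random

# Synthetic illustration of sampling bias (invented numbers, not real data).
# Placement is driven entirely by level; trait count is merely correlated
# with level, which makes raw "7 Zaun" AVP look better than it is.
random.seed(0)

boards = []
for _ in range(100_000):
    level9 = random.random() < 0.5
    # 7 Zaun boards are mostly level 9+; 5 Zaun boards mostly are not.
    zaun = 7 if random.random() < (0.8 if level9 else 0.2) else 5
    # Placement depends on level only, not on the Zaun count itself.
    placement = random.gauss(3.4 if level9 else 4.8, 1.5)
    boards.append((zaun, level9, placement))

def avp(rows):
    return sum(p for _, _, p in rows) / len(rows)

for z in (5, 7):
    subset = [b for b in boards if b[0] == z]
    print(f"{z} Zaun, raw AVP:           {avp(subset):.2f}")
    for lvl9 in (False, True):
        strat = [b for b in subset if b[1] == lvl9]
        print(f"{z} Zaun, level 9+ = {lvl9}: {avp(strat):.2f}")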
John Carmack (@id_aa_carmack)'s Twitter Profile Photo

It is generally frowned upon to have LLMs precisely regurgitate part of their training set, but it is an interesting question how you could use LLM training to nearly losslessly compress a huge corpus like the entirety of the Internet Archive. The Hutter Prize is for perfect…
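Carmack's framing maps onto a standard result: an arithmetic coder driven by a predictive model spends about -log2 p(next symbol) bits per symbol, so the compressed size of a corpus is essentially the model's cumulative log-loss on it. Below is a minimal Python sketch of that accounting only, with a toy adaptive character bigram standing in for an LLM's next-token distribution; the model choice and Laplace smoothing are illustrative assumptions, not anything from the tweet.

```python
import math
from collections import defaultdict

# Model-as-compressor accounting: an arithmetic coder driven by a
# predictive model spends about -log2 p(symbol) bits per symbol, so the
# total compressed size equals the model's cumulative log-loss. Here a
# trivial adaptive character bigram stands in for an LLM.

def compressed_bits(text: str) -> float:
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    bits = 0.0
    prev = ""
    for ch in text:
        # Laplace-smoothed predictive probability of the next character,
        # assuming a 256-symbol alphabet.
        p = (counts[prev][ch] + 1) / (totals[prev] + 256)
        bits += -math.log2(p)
        # Update the model *after* coding, so decoding can mirror it.
        counts[prev][ch] += 1
        totals[prev] += 1
        prev = ch
    return bits

sample = "the quick brown fox jumps over the lazy dog " * 50
bits = compressed_bits(sample)
print(f"{len(sample)} chars -> {bits / 8:.0f} bytes "
      f"({bits / len(sample):.2f} bits/char)")
```

Swapping the toy bigram for a strong LLM drives the bits-per-character figure toward the model's log-loss on the corpus, which is why the Hutter Prize treats compression and prediction as the same problem.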