ali (@sparx260)'s Twitter Profile
ali

@sparx260

may the odds be ever in your favor

ID: 1424469909879263234

Joined: 08-08-2021 20:38:28

240 Tweets

912 Followers

5.5K Following

Will Bryk (@williambryk)'s Twitter Profile Photo

On the eve of 4 years since founding Exa, I realize the most important learning was to get excited about the fires. I basically come in every week on Monday expecting a dozen fires. And this thrills me. Like I get dopamine from it. This is not a normal reaction and was honed …

Will Bryk (@williambryk)'s Twitter Profile Photo

It's honestly insane that LLMs haven't discovered something significant yet. I believe this is a skill issue. No one has tried to call these PhD-level LLMs thousands of times with the right arXiv papers streaming in, prompted in all the right ways, chaos and randomness …

Startup Archive (@startuparchive_)'s Twitter Profile Photo

“All new fails” - Zynga CEO Mark Pincus explains his favorite product principle: “All new fails. If all new worked, we’d be using new stuff all the time. But how often do you change what’s on the front of your iPhone? How often do the top 10 or 25 apps change? They haven’t …”

Alex Albert (@alexalbert__)'s Twitter Profile Photo

Right now, someone is creating the next defining UX pattern of the AI age. Something so fundamental it'll be everywhere, used by billions of people, and they might not even know it yet. Can't think of a more exciting time to build.

Wes Roth (@wesrothmoney)'s Twitter Profile Photo

Ana Bell responds to whether it’s still worth learning languages like Python when AI tools can write code for you. Her answer: trust, but verify. While generative AI can produce working code, it often makes small mistakes, especially with math or logic. Knowing how to code helps you …

Omar Waseem (@omarwasm)'s Twitter Profile Photo

A year ago, I made one of my most important investments to date: Founders Arm. Today, I'm excited to share that I've become a Co-Owner of the company. Full announcement: foundersarm.com/hello

seanpixel 🫧 (@sean_pixel)'s Twitter Profile Photo

you can now improve RL models WITHOUT ANY TRAINING. Inspired by mechanistic interpretability for LLMs (cred: Jacob Dunefsky Emmanuel Ameisen Neel Nanda), I applied sparse-transcoder methods to a CartPole policy and saw a +24% performance increase with zero additional training. (1/9)

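The tweet above describes routing a frozen policy's activations through a sparse decomposition and intervening on individual latent features, with no gradient updates. The author's actual method and +24% result are not reproduced here; the following is a self-contained toy sketch of the general idea, using a hypothetical random-weight "policy", an SVD dictionary with hard-thresholded codes as a stand-in for a trained sparse transcoder, and a single-feature ablation as the intervention.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frozen policy: 4-dim CartPole-like state -> ReLU hidden -> 2 action logits.
# (Hypothetical stand-in; the tweet's real policy and training setup are not shown.)
W1 = rng.normal(size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 2)); b2 = np.zeros(2)
W1_before, W2_before = W1.copy(), W2.copy()

def hidden(x):
    return np.maximum(0.0, x @ W1 + b1)

def logits(h):
    return h @ W2 + b2

# 1) Collect hidden activations (random states stand in for rollout states).
X = rng.normal(size=(512, 4))
H = hidden(X)
mu = H.mean(axis=0)

# 2) Fit a linear dictionary over activations. Real sparse transcoders are
#    trained with an L1/sparsity penalty; SVD directions plus top-k
#    hard-thresholding is a simplification to keep this runnable.
_, _, D = np.linalg.svd(H - mu, full_matrices=False)  # D: (16, 16) directions

def encode(h, k=4):
    c = (h - mu) @ D.T
    # Keep only the k largest-magnitude codes per sample (sparsity).
    drop = np.argsort(-np.abs(c), axis=-1)[..., k:]
    np.put_along_axis(c, drop, 0.0, axis=-1)
    return c

def decode(c):
    return c @ D + mu

# 3) Intervene with zero training: zero out one latent feature, then decode
#    back into activation space and run the unchanged output head.
def steered_logits(x, ablate=0):
    c = encode(hidden(x))
    c[..., ablate] = 0.0
    return logits(decode(c))

base = logits(hidden(X))
edited = steered_logits(X)
print("weights untouched:", np.allclose(W1, W1_before) and np.allclose(W2, W2_before))
print("mean |logit shift| from ablation:", float(np.abs(edited - base).mean()))
```

The key property the sketch demonstrates is that the policy's weights are never updated: the "edit" lives entirely in the activation pathway, so any behavior change comes from the feature-level intervention, not training.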