Peter Ince (@satoriweb) 's Twitter Profile
Peter Ince

@satoriweb

Software Engineer and Researcher; at the intersection of smart contracts and ML

ID: 636051515

Link: https://au.linkedin.com/in/peterince | Joined: 15-07-2012 09:24:49

551 Tweets

380 Followers

2.2K Following

adri.algo | Adri Belotti (@nonfungibleab) 's Twitter Profile Photo

Not part of the thread, but I thought I'd mention to the #algofam that I'll be interviewing Silvio Micali of Algorand Technologies on Day 2 at 11am AEDT. Make sure to get a virtual ticket and tune in. Later in the day, Daniel Oon of Algorand Foundation will join the "Web3 - The building blocks" panel.

Peter Shor (@petershor1) 's Twitter Profile Photo

I've put my lecture notes from my Fall 2022 Quantum Computation course online: math.mit.edu/~shor/435-LN/ I don't guarantee that there aren't any mistakes left in them, although I've tried to eradicate them all. If you find any, feel free to email me.

Peter Ince (@satoriweb) 's Twitter Profile Photo

When OpenAI dropped the cost of o3, and in some cases its TPS (tokens per second) seemed to improve, it's likely because they moved it to Google's TPUs, right? Assuming the TPUs have similar efficiency to Groq's LPUs (which make super-fast serving of models relatively affordable), it makes sense.

Peter Ince (@satoriweb) 's Twitter Profile Photo

I assume this is so they can save money and increase speed on inference by using TPUs to run ChatGPT, expanding free services and reducing costs. The only other similar and competitive architecture is Groq's, and there's no way they could scale that many chips, AFAIK.