QL (@qlanor)'s Twitter Profile
QL

@qlanor

🐸

ID: 761251171675705344

Joined: 04-08-2016 17:23:17

164 Tweets

55 Followers

873 Following

Doug Colkitt (@0xdoug)'s Twitter Profile Photo

AMM toxicity is almost entirely attributable to a tiny fraction of high-frequency algo wallets: "[O]utside of this subset of 364 wallets — i.e., the other 454,091 wallets in the dataset — are in fact immensely profitable for Uniswap LPs to the tune of positive 104 million USD"

alz (@alz_zyd_)'s Twitter Profile Photo

The key to this thing seems to be basically that LP token trades behave like call spreads. A call spread has bounded losses, hence bounded liquidation risk. The main issue with the design is that you get call-spread-like exposure to the underlying, not linear exposure.
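The call-spread analogy can be made concrete with a small sketch. Assuming the standard Uniswap v3 range-position math (function name is hypothetical; accrued fees are ignored), the position's value in the quote token is linear below the range and flat above it, the same bounded, capped shape as a bull call spread payoff:

```python
import math

def v3_position_value(p, pa, pb, L=1.0):
    """Value (in the quote token) of a Uniswap v3-style range position
    with liquidity L over the price range [pa, pb].  Illustration only;
    real positions also accrue fees, which are ignored here."""
    sa, sb = math.sqrt(pa), math.sqrt(pb)
    if p <= pa:
        # Entirely in the base token: value is linear in price.
        x = L * (1 / sa - 1 / sb)
        return x * p
    if p >= pb:
        # Entirely in the quote token: value is flat (capped).
        return L * (sb - sa)
    sp = math.sqrt(p)
    x = L * (1 / sp - 1 / sb)   # base-token amount still held
    y = L * (sp - sa)           # quote-token amount accumulated
    return x * p + y

# Linear below the range, concave inside it, flat above it:
# bounded downside, capped upside, like a bull call spread.
for p in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(p, round(v3_position_value(p, pa=1.0, pb=4.0), 4))
```

The bounded shape is why liquidation risk is bounded: the worst case in the quote numeraire is pinned by the range edges, not by the unbounded tail of a linear position.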

zkSTONKs (@zkstonks)'s Twitter Profile Photo

1/ is developing a PoW mechanism to disincentivize sybil attacks by MEV searchers on the Arbitrum sequencer relay. In this post, I describe why the proposed design is economically wasteful and suggest more efficient designs. research.arbitrum.io/t/thoughts-on-…
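For a rough sense of the mechanism being critiqued, a hashcash-style PoW gate looks like the following. This is a generic sketch, not the Arbitrum proposal's actual scheme, and the function names are hypothetical:

```python
import hashlib

def solve_pow(payload: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that sha256(payload || nonce) has at least
    `difficulty_bits` leading zero bits.  Expected cost doubles with
    each added bit, which is the sybil deterrent."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_pow(payload: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification is a single hash, regardless of difficulty."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve_pow(b"searcher-bundle", 12)
print(verify_pow(b"searcher-bundle", nonce, 12))
```

The asymmetry (expensive to solve, one hash to verify) is what rate-limits sybils; the wastefulness the post objects to is that every honest searcher burns the same compute, which is a deadweight cost.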

Uniswap Labs 🦄 (@uniswap)'s Twitter Profile Photo

1/ Today, we’re announcing our vision for Uniswap v4 🦄 We see Uniswap as core financial infrastructure & think it should be built in public with space for community feedback and contribution. An early implementation of the code can be found here: github.com/Uniswap/v4-core

davidad 🎇 (@davidad)'s Twitter Profile Photo

Deep neural networks, as you probably know, are sandwiches of linear regressions with elementwise nonlinearities between each layer. The core contribution of “Attention is All You Need,” which led directly to the LLM/GPT explosion, is to throw some *logistic* regressions in there
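The softmax step that turns attention scores into mixing weights is exactly the multinomial-logistic map being alluded to, sandwiched between ordinary linear projections. A minimal single-head scaled dot-product attention sketch in NumPy:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention.  Everything here is a
    linear map except the softmax, which is the logistic-regression-like
    step wedged between the linear layers."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # linear in the inputs
    weights = softmax(scores, axis=-1)        # the "logistic" nonlinearity
    return weights @ V                        # convex mixture of value rows

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                   # 5 tokens, model width 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Each output row is a convex combination of the value rows, with the mixing weights chosen by a per-token softmax over scores, which is why the "logistic regressions between linear layers" framing fits.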

Alex-André Dumbass (@isomorphisms)'s Twitter Profile Photo

Backprop as Functor: A compositional perspective on supervised learning. Brendan Fong, David I. Spivak, Rémy Tuyéras. arxiv.org/abs/1711.10455. Cambridge, Mass., 2017.

Dwarkesh Patel (@dwarkesh_sp)'s Twitter Profile Photo

The Ilya Sutskever episode
0:00:00 – Explaining model jaggedness
0:09:39 – Emotions and value functions
0:18:49 – What are we scaling?
0:25:13 – Why humans generalize better than models
0:35:45 – Straight-shotting superintelligence
0:46:47 – SSI’s model will learn from deployment

Jonathan Gorard (@getjonwithit)'s Twitter Profile Photo

I think one of the conclusions we should draw from the tremendous success of LLMs is how much of human knowledge and society exists at very low levels of Kolmogorov complexity. We are entering an era where the minimal representation of a human cultural artifact... (1/12)
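One crude way to see the low-complexity point: the compressed size of a string is a computable upper bound on its Kolmogorov complexity, and patterned text sits far below random text of the same length. A small sketch using zlib as the stand-in compressor (a rough proxy only; true Kolmogorov complexity is uncomputable):

```python
import random
import zlib

def compressed_bits(s: str) -> int:
    """Length of the zlib-compressed string in bits: a crude,
    computable upper bound on its Kolmogorov complexity."""
    return 8 * len(zlib.compress(s.encode("utf-8"), 9))

patterned = "the quick brown fox jumps over the lazy dog " * 50
random.seed(0)
noise = "".join(random.choice("abcdefgh ") for _ in range(len(patterned)))

# Highly patterned text compresses far more than random text of the
# same length -- the sense in which a cultural artifact can have a
# small minimal representation.
print(compressed_bits(patterned), "<", compressed_bits(noise))
```

An LLM that predicts cultural text well is, in effect, a much stronger compressor than zlib, which is the connection the thread is drawing.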