Mark S. Miller
@marksammiller
"Case dismissed." Two words that every defendant in every case yearns to hear. Today we can announce upon full Commission approval U.S. Securities and Exchange Commission is dropping our case. There will be no settlement or compromise-- a wrong will simply be made right. 1/4
Chain Abstraction Fireside: Insights From Builders:
- Kushagra (Okto 🐙)
- Vivek Gupta (Okto 🐙)
- Brendan O'Toole (Agoric)
A fireside chat featuring insights from Okto and Agoric on how Chain Abstraction and intents are reshaping blockchain infrastructure.
W T F mozilla.org/en-US/about/le… "When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of
Gal Weizman, kumavis, boneskull mcboneskullface, LeoTM: The vision continues. There's more future in front of it than there is past behind it (even if you count from when Mark S. Miller started it). Everyone who was ever properly introduced to the idea of fearless cooperation and Hardened JS is forever infected with the idea.
Neat AI reframing by K. Eric Drexler -- you'll like this too, Mark S. Miller. open.substack.com/pub/aiprospect…
Yoav Ganbar: Hardening, coined by Agoric (afaia), is the act of freezing all properties of an object: both the object's own properties and all own properties of every prototype it inherits from. That way, I can share such an object with an untrusted party, knowing it remains immutable and
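A minimal sketch of that idea in TypeScript: a transitive freeze that walks own properties, accessor functions, and prototype chains. This is an illustration of the concept only, not the production `harden` from the ses / Hardened JS packages, which handles additional edge cases; all names here are hypothetical.

```typescript
// Sketch of "hardening": deep-freeze an object together with everything
// reachable from it via own properties and prototype links.
function harden<T>(root: T): T {
  const seen = new Set<object>();
  const queue: object[] = [];

  // Schedule a value for freezing if it is a not-yet-seen object or function.
  const enqueue = (value: unknown): void => {
    if ((typeof value === "object" || typeof value === "function") &&
        value !== null && !seen.has(value)) {
      seen.add(value);
      queue.push(value);
    }
  };

  enqueue(root);
  while (queue.length > 0) {
    const obj = queue.pop()!;
    Object.freeze(obj);                  // freeze this object's own properties
    enqueue(Object.getPrototypeOf(obj)); // walk the prototype chain too
    for (const key of Reflect.ownKeys(obj)) {
      const desc = Object.getOwnPropertyDescriptor(obj, key)!;
      if ("value" in desc) enqueue(desc.value); // data property: freeze value
      enqueue(desc.get);                        // accessor property: freeze
      enqueue(desc.set);                        // its getter/setter functions
    }
  }
  return root;
}

// Usage: once hardened, the object can be shared with untrusted code.
const config = harden({ api: { retries: 3 } });
// (config.api as { retries: number }).retries = 5; // TypeError in strict mode
```

The set of already-seen objects makes the traversal safe on cyclic structures, and freezing the prototypes matters because an unfrozen prototype would let a recipient mutate inherited behavior even when the shared object itself is frozen.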
What are the best strategies for addressing risks from artificial superintelligence? In this 4-hour conversation, Eliezer Yudkowsky ⏹️ and Mark S. Miller discuss the cruxes of their disagreement. While Eliezer advocates an international treaty that bans anyone from building it, Mark argues
I hope you reserved 4 hours in your day for this singular AI debate between MIRI co-founder Eliezer Yudkowsky ⏹️ and Foresight Institute senior fellow Mark S. Miller!
It was my pleasure to set up and moderate a debate/discussion on ASI (artificial superintelligence) between Foresight Senior Fellow Mark S. Miller and MIRI founder Eliezer Yudkowsky ⏹️. Sharing similar long-term goals, they nevertheless reach opposite conclusions on the best strategy. While
🤔 What happens when you put Mark S. Miller and Eliezer Yudkowsky ⏹️ in the same room? A deep dive into AI, existential risk, and whether alignment is even possible! Thank you to Foresight Institute for the video ↓
vitalik.eth Love it! I think there’s an important complementary idea though: having trustworthy, verifiable machines that contain untrustworthy components. I think it is possible and ultimately necessary. I get this idea from Mark S. Miller’s work.