Emad
@emostaque
Open source AI @SchellingAI.
Board @AiEleuther.
Advisor @rendernetwork.
Founder @StabilityAI.
ID: 407800233
08-11-2011 15:32:17
15,15K Tweets
233,233K Followers
18 Following
Should AI be aligned with human preferences, rewards, or utility functions? Excited to finally share a preprint that Micah Carroll, Matija Franklin, Hal Ashton & I have worked on for almost 2 years, arguing that AI alignment has to move beyond the preference-reward-utility nexus!
"crypto : not your keys not your crypto :: ai : not your weights not your brain" - Andrej Karpathy