Minae Kwon
@minaekwon
Working @AnthropicAI | PhD @StanfordAILab
ID: 1075589434031001600
20-12-2018 03:11:19
114 Tweets
829 Followers
530 Following
We use gestures all the time for specifying targets! How can robots make sense of “gimme that one”? We propose GIRAF, a framework for interpreting human gesture instructions using LLMs. Paper to appear at the Conference on Robot Learning: arxiv.org/abs/2309.02721 Website: tinyurl.com/giraf23
Hungry? Let our robot twirl your spaghetti for you! 🍝🤖 Introducing VAPORS: Visual Action Planning OveR Sequences, a framework for long-horizon food acquisition. Project Page: sites.google.com/view/vaporsbot Paper: arxiv.org/abs/2309.05197 To appear at Conference on Robot Learning 1/11🧵
It's an absolute honor to be a guest on The TWIML AI Podcast! Sam Charrington and I cover everything under the sun in LLMOps, from hallucinations and RAG to LLM safety. Check out the podcast at the link below!
At #ICRA24 we have a few papers on 𝗴𝗿𝗼𝘂𝗻𝗱𝗲𝗱 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 with LLMs/VLMs. • Grounded common-sense reasoning via active perception - Minae Kwon's 🧵👇 • Physically grounding VLMs - Jensen Gao's 🧵👇 • Learning from online language corrections - @lihanzha's 🧵👇
I’m taking applications for collaborators via ML Alignment & Theory Scholars! It’s a great way for new or experienced researchers outside AI safety research labs to work with me/others in these groups: Neel Nanda, Evan Hubinger, mrinank 🍂, Nina, Fabien Roger, Rylan Schaeffer, ...🧵