Yongchao Chen (@yongchaoc)'s Twitter Profile
Yongchao Chen

@yongchaoc

PhD candidate at Harvard and MIT, working on Robotics, Foundation Models, and AI for Science. Interned at Microsoft Research and the MIT-IBM Watson AI Lab.

ID: 1166060863008063488

Link: https://yongchao98.github.io/YongchaoChen/ | Joined: 26-08-2019 18:52:36

10 Tweets

94 Followers

260 Following

Yongchao Chen (@yongchaoc)

lnkd.in/eCR_yJ5Q
Glad to share our recent work 'AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers', on applying Large Language Models (LLMs) to robot planning.

This method performs better than task planning frameworks like SayCan.
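The title suggests a translate-then-check loop: an LLM translates a natural-language instruction into a formal task specification, a checker validates it, and errors are fed back for autoregressive re-translation. Below is a minimal sketch of that loop under my own assumptions; the function names, the toy parenthesis checker, and the stubbed LLM call are all illustrative, not AutoTAMP's actual interfaces.

```python
def llm_translate(instruction, feedback=None):
    """Stand-in for an LLM call that emits a formal spec (STL-like here).

    A real system would prompt the LLM with the instruction and any
    checker feedback; this stub simulates one failed attempt and one repair.
    """
    if feedback is None:
        return "finally(reach(goal)"   # first attempt: unbalanced parens
    return "finally(reach(goal))"      # repaired after feedback

def check_spec(spec):
    """Toy checker: report unbalanced parentheses, or None if valid."""
    depth = 0
    for ch in spec:
        depth += ch == "("
        depth -= ch == ")"
        if depth < 0:
            return "unexpected ')'"
    return "unclosed '('" if depth else None

def autoregressive_translate(instruction, max_rounds=3):
    """Translate, check, and re-prompt with feedback until the spec is valid."""
    feedback = None
    for _ in range(max_rounds):
        spec = llm_translate(instruction, feedback)
        feedback = check_spec(spec)
        if feedback is None:
            return spec  # a valid spec would go on to the motion planner
    raise ValueError("could not produce a valid spec")

print(autoregressive_translate("go to the goal region"))
```

In this sketch the checker's error message is the only signal fed back to the translator, which mirrors why the title calls the LLM both "translator" and (via the feedback loop) subject to a "checker".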
Yongchao Chen (@yongchaoc)

Glad to share my intern work at Microsoft Research. Great gratitude to my mentors Chi Wang, Harsh Jhamtani, Srinagesh Sharma, and my PhD advisor Chuchu Fan.

'Steering Large Language Models between Code Execution and Textual Reasoning'

👉 Full paper: arxiv.org/abs/2410.03524
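The paper's title describes deciding, per query, whether an LLM should answer by generating and executing code or by reasoning in text. The sketch below illustrates that steering idea with a deliberately naive keyword router; the heuristic, function names, and agent stubs are my assumptions for illustration, not the paper's actual method.

```python
# Queries containing these words are routed to the code-execution path.
CODE_HINTS = ("compute", "sort", "count", "sum", "simulate", "parse")

def route(query):
    """Return 'code' when the query looks computational, else 'text'."""
    q = query.lower()
    return "code" if any(hint in q for hint in CODE_HINTS) else "text"

def solve(query):
    """Dispatch to the chosen reasoning mode (both paths stubbed here)."""
    if route(query) == "code":
        # A real system would have the LLM emit a program and run it in
        # a sandbox; here we only label which path was taken.
        return f"[code path] {query}"
    return f"[text path] {query}"

print(solve("Compute the sum of the first 100 integers"))
print(solve("Explain why the robot chose the left corridor"))
```

A learned or prompted router would replace the keyword list, but the control flow (classify, then dispatch to one of two reasoning modes) stays the same.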
elvis (@omarsar0)

Cool research paper from Google.

This is what clever context engineering looks like.

It proposes Tool-Use-Mixture (TUMIX), leveraging diverse tool-use strategies to improve reasoning.

This work shows how to get better reasoning from LLMs by running a bunch of diverse agents.
Yongchao Chen (@yongchaoc)

Thanks for sharing our work. We find that tool-augmented multi-agent test-time scaling (TTS) has great potential. I also personally hypothesize that the current OpenAI ChatGPT Agent and Grok4 use similar tool-augmented TTS techniques.