Andreea Bobu (@andreea7b)'s Twitter Profile
Andreea Bobu

@andreea7b

Assistant Professor @MITAeroAstro and @MIT_CSAIL ∙ PhD from @Berkeley_EECS ∙ machine learning, robots, humans, and alignment

ID: 2200242594

Link: https://andreea7b.github.io/ ∙ Joined: 17-11-2013 21:37:37

100 Tweets

2.2K Followers

436 Following

Bahar Irfan (@baharirfan_)'s Twitter Profile Photo

Lifelong Learning and Personalization in Long-Term Human-Robot Interaction #LEAPHRI workshop is back for the 4th year with a fantastic lineup of speakers and debaters for another giant leap in #HRI 🤯🦿

Working on these areas? Submit by 🐦Jan 12/📌Feb 16: leap-hri.github.io
Andreea Bobu (@andreea7b)'s Twitter Profile Photo

What does it mean for humans and robots to align their representations of their tasks and how do current approaches fare? Come see at #HRI2024 in the Tuesday 17:10 session! 

Paper: arxiv.org/abs/2302.01928 w/ Andi Peng, Pulkit Agrawal, Julie Shah, Anca Dragan
Andi Peng (@theandipenguin)'s Twitter Profile Photo

Can changes in user behavior tell us anything meaningful about their implicit preferences? Our #HRI2024 paper suggests yes! Paper: arxiv.org/abs/2402.03081 [1/n]

Ruairidh Battleday (@rmbattleday)'s Twitter Profile Photo

For AI Researchers, Thinkers, Founders, and VCs:

Announcing our Spring Summit on Fundamental Challenges for AI!

bit.ly/springsummitai…

We discuss core challenges & promising avenues in a day of high-profile keynotes & panels. Lunch & Reception included.

April 15th, Bay Area,
Mike Hagenow (@hagenowrobotics)'s Twitter Profile Photo

Excited to share our RSS 2024 (Robotics: Science and Systems) workshop, Mechanisms for Mapping Human Input to Robots: From Robot Learning to Shared Control/Autonomy, on July 15 in Delft! mechanisms-hri.github.io Andreea Bobu, Tesca Fitzgerald, Mario Selvaggio, Harold Soh, Julie Shah

Andi Peng (@theandipenguin)'s Twitter Profile Photo

Introducing LGA (Language-Guided Abstraction) at ICLR 2024! 🧵

📰 Paper: rb.gy/89268y
🌐 Website: rb.gy/10thlm
🗞️ MIT News: rb.gy/7ske0y

State abstraction is key to generalizable learning, but how do we know which features are task-relevant?
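For intuition only, here is a minimal Python sketch of the idea named above, not the paper's actual implementation: ask a language model which state features are relevant to a natural-language task, and build the abstract state from only those features. The names `query_language_model` and `language_guided_abstraction` are hypothetical, and the keyword match merely stands in for a real LLM call.

```python
# Minimal sketch of language-guided state abstraction (hypothetical names,
# not the LGA paper's implementation). An LLM would normally judge relevance;
# a trivial keyword match stands in for that call here.

def query_language_model(task: str, feature: str) -> bool:
    """Stand-in for an LLM query: 'is this feature relevant to the task?'"""
    return any(word in task.lower() for word in feature.lower().split("_"))

def language_guided_abstraction(state: dict, task: str) -> dict:
    """Keep only the features the language model marks as task-relevant."""
    return {name: value for name, value in state.items()
            if query_language_model(task, name)}

if __name__ == "__main__":
    full_state = {
        "cup_position": (0.4, 0.1, 0.9),
        "cup_color": "red",
        "table_texture": "wood",
        "robot_joint_angles": [0.0, 1.2, -0.5],
    }
    # Only the cup-related features survive for the task "Pick up the cup".
    print(language_guided_abstraction(full_state, "Pick up the cup"))
```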

Mike Hagenow (@hagenowrobotics)'s Twitter Profile Photo

Deadline extended to submit to our #RSS2024 (Robotics: Science and Systems) workshop, Mechanisms for Mapping Human Input to Robots: mechanisms-hri.github.io

New deadline: 5/17 AOE!

Jason Liu @CoRL (@jasonxyliu)'s Twitter Profile Photo

Submit to our #RSS2024 workshop on “Robotic Tasks and How to Specify Them? Task Specification for General-Purpose Intelligent Robots” by June 12th.

Join our discussion on what constitutes various task specifications for robots, in which scenarios they are most effective, and more!
Micah Carroll (@micahcarroll)'s Twitter Profile Photo

Excited to share a unifying formalism for the main problem I’ve tackled since starting my PhD! 🎉

Current AI Alignment techniques ignore the fact that human preferences/values can change. What would it take to account for this? 🤔

A thread 🧵⬇️
Carnegie Mellon Robotics Institute Summer Scholars (@cmu_riss)'s Twitter Profile Photo

Get ready to explore robotics with RoboLaunch! 🚀
Our next speaker is Dr. Andreea Bobu (@andreea7b), an incoming Assistant Professor at MIT. 🤖

Tune in this Wednesday 7/10/24, 11:00 AM EDT and join this conversation on our YouTube:
youtube.com/watch?v=lgt255…
Andreea Bobu (@andreea7b)'s Twitter Profile Photo

Excited to share our work on smarter inference-time compute allocation! By estimating query difficulty and focusing resources on harder problems, we cut compute by up to 50% with no performance loss on math/coding tasks. Huge shoutout to Mehul Damani for leading this!
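As a rough illustration of the mechanism described above (a sketch under assumptions, not the actual method from the paper): score each query's difficulty, then split a fixed sample budget so harder problems receive more inference-time compute. The functions `estimate_difficulty` and `allocate_samples`, and the length-based difficulty heuristic, are hypothetical placeholders.

```python
# Sketch of difficulty-aware inference-time compute allocation (hypothetical
# helper names; a real system would use a learned difficulty predictor).
import math

def estimate_difficulty(query: str) -> float:
    """Placeholder difficulty score in [0, 1]; stands in for a learned model."""
    return min(len(query) / 200.0, 1.0)

def allocate_samples(queries, total_budget: int, min_samples: int = 1) -> dict:
    """Split a fixed sample budget across queries in proportion to difficulty."""
    scores = [estimate_difficulty(q) for q in queries]
    total = sum(scores) or 1.0
    return {q: max(min_samples, math.floor(total_budget * s / total))
            for q, s in zip(queries, scores)}

if __name__ == "__main__":
    queries = ["2+2?", "Prove there are infinitely many primes.", "Sort [3, 1, 2]."]
    # Harder-looking queries get more of the 16-sample budget.
    print(allocate_samples(queries, total_budget=16))
```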

Micah Carroll (@micahcarroll)'s Twitter Profile Photo

🚨 New paper: We find that even safety-tuned LLMs learn to manipulate vulnerable users when training them further with user feedback 🤖😵‍💫

In our simulated scenarios, LLMs learn to e.g. selectively validate users' self-destructive behaviors, or deceive them into giving 👍.

🧵👇
MIT AeroAstro (@mitaeroastro)'s Twitter Profile Photo

Introducing the Collaborative Learning and Autonomy Research Lab (CLEAR Lab), led by Prof. Andreea Bobu. Part of AeroAstro and MIT CSAIL, CLEAR Lab focuses on developing autonomous agents that learn to perform tasks for, with, and around people.
clear.csail.mit.edu
Andreea Bobu (@andreea7b)'s Twitter Profile Photo

I’m at NeurIPS for the week — DM me if you want to chat about research, our lab at MIT, PhD applications or all of the above!

Chen Tang (@chentangmark)'s Twitter Profile Photo

Foundation models like LLMs and VLMs have transformed AI and our digital interactions, but what challenges arise when these models operate in physical, human-centric environments? Join us at the human-centered robot learning workshop at #ICRA2025 to explore these questions!
Mehul Damani @ ICLR (@mehuldamani2)'s Twitter Profile Photo

I am super excited to be presenting our work on adaptive inference-time compute at ICLR! Come chat with me on Thursday 4/24 at 3PM (Poster #219). I am also happy to chat about RL, reasoning, RLHF, or inference scaling (DMs are open)!