Michael Qian (@michaelqwl) 's Twitter Profile
Michael Qian

@michaelqwl

PhD student in Computer Science at the University of Southern California. Haptics, Robotics, and HCI research advised by Professor Heather Culbertson.

ID: 1433471478071169034

Joined: 02-09-2021 16:46:58

15 Tweets

23 Followers

63 Following

Gerry (@gchenfc) 's Twitter Profile Photo

Our graffiti painting robot is getting better! Slowly but steadily :)
With @florez_jd, Frank Dellaert, Seth Hutchinson, and Michael Qian.
Powered by #gtsam

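The tweet above only name-drops #gtsam (the GTSAM factor-graph optimization library); the robot's actual code is not shown. As a hedged illustration of the kind of pose-graph optimization GTSAM's Python bindings provide, here is a minimal sketch with made-up poses, noise values, and odometry measurements:

```python
# Hypothetical sketch: a tiny 2D pose-graph solved with GTSAM's Python
# bindings. Illustrative only -- NOT the graffiti robot's code; all
# measurements and noise values below are invented for the example.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()

# Noise models: a prior on the first pose and odometry between poses.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor pose 0 at the origin, then chain two 1 m forward odometry factors.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))

# Deliberately perturbed initial guesses for the three poses.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.1, -0.1, 0.02))
initial.insert(1, gtsam.Pose2(1.2, 0.1, -0.05))
initial.insert(2, gtsam.Pose2(2.1, -0.2, 0.03))

# Optimize with Levenberg-Marquardt and print the smoothed poses.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
for k in range(3):
    print(k, result.atPose2(k))
```

The factor-graph formulation is what makes libraries like GTSAM attractive for robot estimation tasks: each measurement becomes a factor, and the optimizer jointly smooths all poses rather than filtering them one at a time.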
Ken Nakagaki (@ken0324) 's Twitter Profile Photo

Graduation season is here, and AxLab - Actuated Experience Lab proudly has 8(!) members departing for their next steps! So many memories with them: building the lab, exhibiting (Ars Electronica/SXSW, etc.), writing papers, attending conferences, and having fun together (incl. today's T-shirt activity 👕).

Meta Newsroom (@metanewsroom) 's Twitter Profile Photo

We just unveiled Orion, which we believe is the most advanced pair of augmented reality glasses ever made. about.fb.com/news/2024/09/i…

Michael Qian (@michaelqwl) 's Twitter Profile Photo

Proud to announce that I’ll be joining the HARVI Lab as a PhD student this fall! I’m thrilled to work with my advisor, Heather Culbertson, and can’t wait for the exciting years ahead!

Ken Nakagaki (@ken0324) 's Twitter Profile Photo

AxLab - Actuated Experience Lab will present 3 PAPERS (talk+demo) and 2 POSTERS + be part of 1 WORKSHOP at the #UIST2024 conference next week! The image below shows them with detailed schedules! We will post the project details in the next few days. We are so excited to present them at ACM UIST!

Ken Nakagaki (@ken0324) 's Twitter Profile Photo

🚨#UIST2024 Poster: Towards AI-Infused Shape-Changing UIs 👉💬🪄 What if you could point, gesture, and speak to 'summon' any physical shape? Another exploration by AxLab, in collab with UChicagoCS's 3DL, employs Gen-AI to author shape displays via multimodal interaction. 🧵→

Ken Nakagaki (@ken0324) 's Twitter Profile Photo

Thanks to everyone who came to AxLab - Actuated Experience Lab's #UIST2024 demos! We immensely enjoyed showing our demos to the ACM UIST community! (Special thanks to Anup, who couldn't make the conference but significantly contributed to 2 papers, & Chi, who operated the shape display from Chicago!)

Ken Nakagaki (@ken0324) 's Twitter Profile Photo

In today's Poster Session A, Jesse Gao & Michael Qian will share their exploration and extended vision into 'AI-infused Shape Changing UIs' controlled by touch, gesture, & speech. #UIST2024 'Towards Multimodal Interaction with AI-Infused Shape-Changing Interfaces'

Joel Chan | 🦣: joelchan86@hci.social (@joelchan86) 's Twitter Profile Photo

In the spirit of #UIST2024, a next-level live demo: controlling a shape-changing display (live from Chicago!) via text input in Pittsburgh! This conference is awesome.

Ken Nakagaki (@ken0324) 's Twitter Profile Photo

Michael Qian + Jesse Gao gave a transformative talk, especially with a LIVE DEMO remotely controlling the shape display in our lab at UChicago using their system! Thanks to Chi for running the hardware at AxLab - Actuated Experience Lab!

Yujie Tao (@tao_yujie) 's Twitter Profile Photo

The sense of touch is innately private, but what if it could be shared across individuals? We at Stanford VR studied shared body sensations in VR and found that it increases body illusion and empathy toward others and influences social behavior. #ISMAR2024 🔗vhil.stanford.edu/publications/s…

Fei-Fei Li (@drfeifei) 's Twitter Profile Photo

Very excited to share with you what our team World Labs has been up to! No matter how one theorizes the idea, it's hard to use words to describe the experience of interacting with 3D scenes generated from a photo or a sentence. Hope you enjoy this blog! 🤩❤️‍🔥

Bowen Wen (@bowenwen_me) 's Twitter Profile Photo

📢 Time to upgrade your depth camera! Introducing **FoundationStereo**, a foundation model for zero-shot stereo depth estimation (accepted to CVPR 2025 with full scores). [1/n]
Code: github.com/NVlabs/Foundat…
Website: nvlabs.github.io/FoundationSter…
Paper: arxiv.org/abs/2501.09898