LP Morency's (@lpmorency) Twitter Profile
LP Morency

@lpmorency

Associate Professor at CMU studying multimodal and Social AI. Ice hockey goalie.

ID: 3201769601

Link: http://multicomp.cs.cmu.edu/ · Joined: 24-04-2015 17:20:02

76 Tweets

1.1K Followers

21 Following

Paul Liang (@pliang279) 's Twitter Profile Photo

Happening in ~2 hours at #ICML2023 930am @ exhibit hall 2 Also happy to chat about - understanding multimodal interactions and modeling them - models for many diverse modalities esp beyond image+text - Applications in health, robots, education, social intelligence & more DM me!

AI for Global Goals (@globalgoalsai) 's Twitter Profile Photo

🔍 Louis-Philippe Morency LP Morency breaks it down: 1️⃣ Modalities are often linked—statistically or semantically 2️⃣ Statistical links are bottom-up; Semantic links are top-down 3️⃣ Relationships can be complex—think dependencies 🧠 From a captivating talk - time to rethink how we

Leena Mathur (@lmathur_) 's Twitter Profile Photo

The Artificial Social Intelligence Workshop and Inaugural Social-IQ Challenge are happening soon #ICCV2023 in Paris next week! 🇫🇷 This event will start at 9:15 am on Monday, October 2 in Room E01 (hybrid option available). Full schedule here: sites.google.com/view/asi-iccv-…

Chaitanya Ahuja (@chahuja) 's Twitter Profile Photo

Adapting Generative models for co-speech gestures to new speakers, but without forgetting the style of previous speakers? Let’s make it more challenging by only having 2-10 minutes of data for the new speakers. Website: chahuja.com/cdiffgan Find us at #ICCV2023 1/n

Paul Liang (@pliang279) 's Twitter Profile Photo

Multimodal AI studies the info in each modality & how it relates or combines with other modalities. This past year, we've been working towards a **foundation** for multimodal AI:

I'm excited to share our progress at #NeurIPS2023 & #ICMI2023: arxiv.org/abs/2302.12247

see long 🧵:
Syeda Nahida Akter (@snat02792153) 's Twitter Profile Photo

In continued pretraining, how can we choose what to mask when the pretraining domain differs from the target domain?

In our #EMNLP2023 paper, we propose Difference-Masking to address this problem and boost downstream task performance!

Paper: arxiv.org/abs/2305.14577
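
As a rough illustration of the idea — prefer masking tokens that distinguish the target domain from the pretraining domain, rather than masking uniformly at random — here is a minimal sketch. The frequency-ratio scoring, whitespace tokenizer, and mask ratio below are placeholder assumptions, not the paper's actual method:

```python
from collections import Counter

def difference_scores(target_docs, pretrain_docs):
    """Score each token by how much more frequent it is in the target
    domain than in the pretraining domain (a crude proxy for 'what is
    different about the target domain')."""
    tgt = Counter(tok for doc in target_docs for tok in doc.split())
    pre = Counter(tok for doc in pretrain_docs for tok in doc.split())
    tgt_total = sum(tgt.values()) or 1
    pre_total = sum(pre.values()) or 1
    return {tok: (tgt[tok] / tgt_total) / (pre.get(tok, 0) / pre_total + 1e-9)
            for tok in tgt}

def choose_masks(doc, scores, mask_ratio=0.15):
    """Mask the tokens with the highest difference scores instead of
    sampling mask positions uniformly at random."""
    toks = doc.split()
    k = max(1, int(len(toks) * mask_ratio))
    ranked = sorted(range(len(toks)),
                    key=lambda i: scores.get(toks[i], 0.0),
                    reverse=True)
    masked = set(ranked[:k])
    return ["[MASK]" if i in masked else toks[i] for i in range(len(toks))]
```

With target text about chemistry and generic pretraining text, domain-specific words like "chemistry" get masked first, focusing the continued-pretraining objective on what the target domain adds.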
Leena Mathur (@lmathur_) 's Twitter Profile Photo

Check out our #EMNLP2023 paper on Difference-Masking, an approach for choosing what to mask during continued pretraining!

Paul Liang (@pliang279) 's Twitter Profile Photo

If your downstream task data is quite different from your pretraining data, make sure you check out our new approach *Difference-Masking* at #EMNLP2023 findings. Excellent results on classifying citation networks, chemistry text, social videos, TV shows etc. see thread below:

Paul Liang (@pliang279) 's Twitter Profile Photo

Despite the successes of contrastive learning (eg CLIP), it has a fundamental limitation - it can only capture *shared* info between modalities, and ignores *unique* info

To fix it, a thread for our #NeurIPS2023 paper w Zihao Martin James Zou LP Morency Russ Salakhutdinov:
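
For context on the limitation being described: a standard CLIP-style symmetric InfoNCE objective rewards agreement between paired image/text embeddings, so it is maximized by features *shared* across the pair. A minimal NumPy sketch of that standard loss (illustrative background only, not the paper's proposed method):

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.
    Each image is pulled toward its own caption and pushed away from
    every other caption in the batch - agreement (shared info) is all
    the objective sees."""
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (N, N) similarity matrix
    labels = np.arange(len(img))            # matching pairs on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the image->text and text->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Perfectly matched pairs drive this loss toward zero while modality-unique information contributes nothing, which is the gap the tweet's paper targets.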
Paul Liang (@pliang279) 's Twitter Profile Photo

Excited to attend #NeurIPS2023 this week! Find me to chat about the foundations of multimodal machine learning, multisensory foundation models, interactive multimodal agents, and their applications.

I'm also on the academic job market, you can find my statements on my website:
Leena Mathur (@lmathur_) 's Twitter Profile Photo

Agreed! Multimodal ML is not just a sub-area of various modalities. LP Morency is one of the researchers who has believed in multimodal for years. Anyone interested in multimodal ML should check out his course on this topic, co-taught with Paul Liang! cmu-multicomp-lab.github.io/mmml-course/fa…

Leena Mathur (@lmathur_) 's Twitter Profile Photo

Curious about socially-intelligent AI? Check out our paper on underlying technical challenges, open questions, and opportunities to advance social intelligence in AI agents:

Work w/ LP Morency, Paul Liang

📰Paper: arxiv.org/abs/2404.11023
💻Repo: github.com/l-mathur/socia…

🧵1/9
Russ Salakhutdinov (@rsalakhu) 's Twitter Profile Photo

#ICLR2024: Paul Liang is presenting our work on Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications.

Paper: arxiv.org/abs/2306.04539
Code: github.com/pliang279/PID

with Chun Kai Ling, Yun Cheng, Alex Obolenskiy, Yudong Liu, Rohan Pandey,
Paul Liang (@pliang279) 's Twitter Profile Photo

Excited to release HEMM (Holistic Evaluation of Multimodal Foundation Models), the largest and most comprehensive evaluation for multimodal models like Gemini, GPT-4V, BLIP-2, OpenFlamingo, and more.

HEMM contains 30 datasets carefully selected and categorized based on:

1. The
Leena Mathur (@lmathur_) 's Twitter Profile Photo

Check out HEMM, a framework for holistically evaluating multimodal models! HEMM enables models to be studied at 3 decoupled levels: basic multimodal skills (abilities such as alignment), information flow (how information transforms across modalities), and real-world use cases

Paul Liang (@pliang279) 's Twitter Profile Photo

📣 I'm thrilled to share that I'll be joining MIT as an assistant professor this fall, joint between MIT Media Lab & MIT EECS.

My group will advance the foundations of multisensory AI to enhance the human experience.

I look forward to tackling exciting challenges in multimodal AI
Leena Mathur (@lmathur_) 's Twitter Profile Photo

In a few weeks at #ECCV2024, we will have the 3rd edition of the Artificial Social Intelligence Workshop!

This workshop will occur on September 29 in Milan 🇮🇹, with an interactive hybrid option available, as well

sites.google.com/andrew.cmu.edu…
Juan Pino (@juanmiguelpino) 's Twitter Profile Photo

FAIR is hiring across Europe and North America to build the next generation of AI systems. Please apply directly below or reach out!

Postdoctoral Researcher: metacareers.com/jobs/380024285…

LP Morency Research Scientist and Postdoctoral Researcher positions in developmental AI: