Chris Rytting (@chrisrytting)'s Twitter Profile
Chris Rytting

@chrisrytting

Trying to make AI improve human qol.

Formerly @UW, @nvidia, OSPC @AEI, @NewYorkFed Macroeconomic Research. PhD in CS/NLP from @BYU.

ID: 19300634

Link: https://chrisrytting.github.io/ | Joined: 21-01-2009 18:49:40

820 Tweets

412 Followers

561 Following

Taylor Sorensen (@ma_tay_)'s Twitter Profile Photo

Everyone's focused on chatbots, but... Can AI improve our difficult conversations with other people? In new work @PNASNews, we find that receiving AI suggestions improves mutual respect in divisive conversations, without influencing views. (1/n) pnas.org/doi/10.1073/pn…

Christian Lindke 🧐🐉🎬 (@christianlindke)'s Twitter Profile Photo

My students often ask why I'm such a fan of deliberation and Political Scientists like Jane Mansbridge. This is why. I also love how deliberation scholars have become an intersection for tech and philosophy.

Chris Rytting (@chrisrytting)'s Twitter Profile Photo

Do you ever feel like "what even is AI alignment when humans are so diverse?" or "I wonder how AI can serve everyone in pluralistic societies"? You're not alone! Check out our new piece on pluralistic alignment (or Taylor's thread) and expand that mind. Taylor Sorensen does it again!

Chris Rytting (@chrisrytting)'s Twitter Profile Photo

Helping people practice key skills in situations that are/feel realistic is one of the coolest, most appropriate applications of LMs, IMO. Check out our new work (captained by the intrepid Inna Lin) on helping people communicate effectively in challenging interpersonal convos!

Mike A. Merrill (@mike_a_merrill)'s Twitter Profile Photo

The question below is pretty easy for humans. Why can't GPT-4 get it right? In our new preprint we introduce "time series reasoning" and show that modern language models are surprisingly bad at interpreting these critical data. arxiv.org/abs/2404.11757

Chris Bail (chris_bail_duke 🧵) (@chris_bail)'s Twitter Profile Photo

Science education is so vital. Check out the amazing work of "Science Journal for Kids", which recently covered our research on using AI chatbots: sciencejournalforkids.org/articles/how-c… Lots of other amazing content created by a small but mighty team. Please share or support them!

Chris Rytting (@chrisrytting)'s Twitter Profile Photo

Proud to have contributed to yet another important evaluation for our AI agent buddies, led by my friends Mike A. Merrill and Alex Shaw. The only thing I'll add to Mike's thoughts is that I personally believe improvement on evals will unlock much more meaningful AI progress.