Yixing Jiang (@jyx_su) 's Twitter Profile
Yixing Jiang

@jyx_su

PhD student at Stanford | Stanford Machine Learning Group, HealthRex Lab | National Science Scholar | Previously Student Researcher at Google DeepMind

ID: 1539636575474069505

Link: https://www.linkedin.com/in/jiangyx/ · Joined: 22-06-2022 15:49:47

15 Tweets

275 Followers

11 Following

AK (@_akhaliq) 's Twitter Profile Photo

Many-Shot In-Context Learning in Multimodal Foundation Models Large language models are well-known to be effective at few-shot in-context learning (ICL). Recent advancements in multimodal foundation models have enabled unprecedentedly long context windows, presenting an

Google (@google) 's Twitter Profile Photo

Today, we’re releasing Gemma 2 to researchers and developers globally. Available in both 9 billion and 27 billion parameter sizes, it’s much more powerful and efficient than the first generation. Learn more ↓ goo.gle/3RNC4k9

lmarena.ai (formerly lmsys.org) (@lmarena_ai) 's Twitter Profile Photo

Congrats Google DeepMind on the Gemma-2-2B release! Gemma-2-2B has been tested in the Arena under "guava-chatbot". With just 2B parameters, it achieves an impressive score of 1130, on par with models 10x its size! (For reference: GPT-3.5-Turbo-0613: 1117, Mixtral-8x7b: 1114). This

Kameron Black (@kameronblack633) 's Twitter Profile Photo

We highlight the fundamental shift from AI as a tool to AI as a teammate in our recent multi-agent benchmarking study that measures leading large language models in their ability to carry out tasks in medicine: Full study: bit.ly/41kdgFs Stanford AI Lab Stanford Medicine

Andrew Ng (@andrewyng) 's Twitter Profile Photo

Releasing a new "Agentic Reviewer" for research papers. I started coding this as a weekend project, and Yixing Jiang made it much better. I was inspired by a student who had a paper rejected 6 times over 3 years. Their feedback loop -- waiting ~6 months for feedback each time -- was

Yixing Jiang (@jyx_su) 's Twitter Profile Photo

Excited to share that "Agentic Reviewer" (developed by Andrew Ng and me) has reviewed more papers than the entire NeurIPS 2025 submission count (21,575). Thank you for the enthusiasm from 160 countries! We are glad that over 95% of you found the generated reviews useful, and

Andrew Ng (@andrewyng) 's Twitter Profile Photo

NeurIPS received 21,575 paper submissions this year. Our Agentic Reviewer, released last week, just surpassed this in number of papers submitted and reviewed. It's clear agentic paper reviewing is here to stay and will be impactful!