Saining Xie (@sainingxie) 's Twitter Profile
Saining Xie

@sainingxie

researcher in #deeplearning #computervision | assistant professor at @NYU_Courant @nyuniversity | previous: research scientist @metaai (FAIR) @UCSanDiego

ID: 1283081795890626560

Link: http://www.sainingxie.com | Joined: 14-07-2020 16:51:59

328 Tweets

15.15K Followers

1.1K Following

Peter Tong (@tongpetersb) 's Twitter Profile Photo

TLDR: We study benchmarks, data, vision, connectors, and recipes (everything other than the LLM in an MLLM), and obtain very competitive performance. We hope our project can be a cornerstone for future MLLM research. Data & Model: huggingface.co/nyu-visionx Code: github.com/cambrian-mllm/…

Georgia Gkioxari (@georgiagkioxari) 's Twitter Profile Photo

Saining Xie This is a wonderful project! And I love to see Omni3D turned into a 3D-aware VQA benchmark; 3D awareness is an attribute current VQA benchmarks are missing!!

Saining Xie (@sainingxie) 's Twitter Profile Photo

a fun collaboration with the systems group at NYU. through sparse all-to-all communication, dynamic load balancing, and a large-batch hyperparameter scaling rule, you can now finally train your large 3DGS on many GPUs🔥 without any loss in quality. led by Hexu Zhao, haoyang & Daohan Lu.

Daohan Lu (@fred_lu_443) 's Twitter Profile Photo

Extremely interesting finding: 3D Gaussian Splatting, despite being a non-neural model, responds to the same learning-rate scaling as neural nets. This enables hassle-free up-scaling of 3DGS training in speed and parameter count, much like you would when training a big NN.
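One common large-batch heuristic this kind of finding suggests is square-root learning-rate scaling per parameter group. A minimal sketch (the exact rule and base learning rates used in the paper may differ; the group names below are illustrative):

```python
import math

def scale_lr(base_lr, base_batch, new_batch):
    """Square-root large-batch scaling: lr' = lr * sqrt(B'/B).
    A common heuristic; the paper's exact rule may differ."""
    return base_lr * math.sqrt(new_batch / base_batch)

# Typical per-parameter-group 3DGS learning rates (illustrative values),
# scaled from a base batch of 1 camera to a large batch of 16.
base_lrs = {"means": 1.6e-4, "opacities": 5e-2, "scales": 5e-3}
scaled = {name: scale_lr(lr, 1, 16) for name, lr in base_lrs.items()}
```

Because the scaling is applied per group, the relative tuning between Gaussian positions, opacities, and scales is preserved while the overall step size grows with the batch.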

Tanishq Mathew Abraham, Ph.D. (@iscienceluvr) 's Twitter Profile Photo

This is very interesting: The Cambrian-1 models were trained on preemptible TPU-v4s using FSDP with PyTorch XLA! I haven't seen many examples of PyTorch XLA FSDP being used for large-scale real-world use cases, so this intrigued me. Of course, it didn't work out of the box.

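The core idea behind FSDP referenced above can be illustrated with a toy, framework-free sketch (this is *not* the torch_xla API): each rank persistently stores only a 1/world_size shard of the flattened parameters and reconstructs the full set only transiently via all-gather before compute.

```python
def shard(params, rank, world_size):
    """Return the contiguous slice of `params` owned by `rank`."""
    per = (len(params) + world_size - 1) // world_size  # ceil division
    return params[rank * per:(rank + 1) * per]

def all_gather(shards):
    """Reconstruct the full parameter list from every rank's shard."""
    full = []
    for s in shards:
        full.extend(s)
    return full

world_size = 4
params = list(range(10))  # stand-in for flattened model weights
shards = [shard(params, r, world_size) for r in range(world_size)]
# Full weights exist only transiently, right before compute:
assert all_gather(shards) == params
```

After the backward pass, real FSDP implementations reduce-scatter gradients so each rank updates only its own shard; that is what makes the per-device memory footprint shrink with world size.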
Jon Barron (@jon_barron) 's Twitter Profile Photo

The legendary Ross Girshick just posted his CVPR workshop slides about the 1.5 decades he spent ~solving object detection as it relates to the ongoing LLM singularity. Excellent read, highly recommended. drive.google.com/file/d/1VodGlj…

Lucas Beyer (bl16) (@giffmana) 's Twitter Profile Photo

✨PaliGemma report will hit arxiv tonight. We tried hard to make it interesting, and not "here model. sota results. kthxbye." So here's some of the many interesting ablations we did, check the paper tomorrow for more! 🧶

Michael Albergo (@msalbergo) 's Twitter Profile Photo

🌞 One final bit of news — excited to announce I’ll be starting as an assistant professor in applied mathematics at Harvard University and as a Kempner Institute Investigator in 2026 :) Please reach out if you may be interested in working with me. Grateful for everyone who made this possible!

Yao Qin (@yaoqin_ucsb) 's Twitter Profile Photo

🥰 Super excited to share this new work on benchmarking LLMs for carbohydrate estimation, a huge burden that patients with diabetes must deal with multiple times every day. 👏👍Proud of my students for starting to investigate the potential of LLMs in

Jiawei (Joe) Zhou (@jzhou_jz) 's Twitter Profile Photo

🚀As July winds down, we're just 1 week away from the TTIC Multimodal AI Workshop! This rare gathering features an incredible lineup of keynote speakers, Mohit Bansal, Saining Xie, Ranjay Krishna, Manling Li, Pulkit Agrawal, and Xiaolong Wang, from diverse fields. Excited buff.ly/3LaXVhF

Ruilong Li (@ruilong_li) 's Twitter Profile Photo

🌟gsplat🌟 (docs.gsplat.studio) now supports multi-GPU distributed training, which nearly linearly reduces the training time and memory footprint. Now Gaussian Splatting is ready for city-scale reconstruction! kudos to this amazing paper: daohanlu.github.io/scaling-up-3dg…

NYU Data Science (@nyudatascience) 's Twitter Profile Photo

CDS welcomes Eunsol Choi as an Assistant Professor of Computer Science (NYU Courant) and Data Science! Her research focuses on advancing how computers interpret human language in real-world contexts. nyudatascience.medium.com/meet-the-facul…

Eunsol Choi (@eunsolc) 's Twitter Profile Photo

My lab will move to NYU Data Science and NYU Courant this Fall! I’m excited to connect with the amazing researchers at CILVR and the larger ML/NLP community in NYC. I will be recruiting students this cycle at NYU. Happy to be back in the city 🗽 on the east coast as well. I had a

Chunting Zhou (@violet_zct) 's Twitter Profile Photo

Introducing *Transfusion* - a unified approach for training models that can generate both text and images. arxiv.org/pdf/2408.11039 Transfusion combines language modeling (next token prediction) with diffusion to train a single transformer over mixed-modality sequences. This

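The mixed objective described above, language-modeling loss on text tokens plus a diffusion loss on image content, can be sketched in toy form. This is not the paper's implementation; shapes, values, and the balancing coefficient are illustrative:

```python
import math

def cross_entropy(probs, target):
    """Next-token loss for one position: -log p(target)."""
    return -math.log(probs[target])

def mse(pred_noise, true_noise):
    """Diffusion loss: mean squared error on the predicted noise."""
    return sum((p - t) ** 2 for p, t in zip(pred_noise, true_noise)) / len(pred_noise)

def transfusion_loss(text_terms, image_terms, lam=1.0):
    """Combine the two losses over a mixed-modality sequence:
    L = L_LM + lam * L_diffusion (lam is a balancing coefficient)."""
    lm = sum(cross_entropy(p, t) for p, t in text_terms) / len(text_terms)
    diff = sum(mse(p, t) for p, t in image_terms) / len(image_terms)
    return lm + lam * diff

loss = transfusion_loss(
    text_terms=[([0.1, 0.7, 0.2], 1)],        # p(correct token) = 0.7
    image_terms=[([0.5, -0.5], [0.0, 0.0])],  # predicted vs. true noise
)
```

The point of the design is that a single transformer produces both outputs; only the per-position loss changes depending on whether that position is a text token or an image patch.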
batuhan taskaya (@isidentical) 's Twitter Profile Photo

i am sorry to inform you but clip, in fact, does not work. it is a deeply flawed model. sorry to be the one that is telling this

NYU Courant (@nyu_courant) 's Twitter Profile Photo

The Courant Institute is thrilled to welcome eleven new faculty members this year! The impressive group represents a wide range of backgrounds and research interests—read through their short bios and extend a warm welcome if you see them on campus: cims.nyu.edu/dynamic/news/1…
