Xiang Fu (@xiangfu_ml) 's Twitter Profile
Xiang Fu

@xiangfu_ml

research scientist at FAIR @AIatMeta/@OpenCatalyst. prev PhD @MIT_CSAIL, research intern @MSFTResearch

ID: 1145854474184904705

Website: https://xiangfu.co/ | Joined: 02-07-2019 00:39:30

309 Tweets

1.1K Followers

417 Following

Tao Chen (@taochenshh) 's Twitter Profile Photo

Introducing Vega: our first general-purpose robot developed in under 6 months at Dexmate! 🤖✨
💪 High-payload arms for heavy lifting
🖐️ Dexterous hands for challenging manipulation tasks
📏 Foldable torso & arms -- compact for transport, yet reaches up to 7'2"
🔄

Shuang Li (@shuangl13799063) 's Twitter Profile Photo

Video generation is powerful but too slow for real-world robotic tasks. How can we enable both video and action generation while ensuring real-time policy inference? Check out our work on the Unified Video Action Model (UVA) to find out! unified-video-action-model.github.io (1/7)

Max Zhdanov (@maxxxzdn) 's Twitter Profile Photo

🤹 Excited to share Erwin: A Tree-based Hierarchical Transformer for Large-scale Physical Systems, joint work with Max Welling and Jan-Willem van de Meent.

Core components of Erwin:
- hierarchical organization of data via ball trees
- localized attention for linear complexity
- hardware-efficient
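
To make the ball-tree idea concrete, here is a minimal illustrative sketch in PyTorch (an editorial example under stated assumptions, not the authors' Erwin code): points are recursively median-split into fixed-size spatial groups, and multi-head attention is applied only within each group, so the cost stays linear in the number of points for a fixed group size. All names and the toy sizes below are hypothetical.

import torch
import torch.nn as nn


def median_split_groups(points: torch.Tensor, group_size: int) -> torch.Tensor:
    """Recursively split along the widest axis until each leaf holds
    group_size points; returns a permutation so that consecutive blocks
    of group_size indices are spatially local. Assumes the point count
    is group_size times a power of two."""
    def split(ids):
        if len(ids) <= group_size:
            return [ids]
        pts = points[ids]
        axis = (pts.max(0).values - pts.min(0).values).argmax().item()
        order = pts[:, axis].argsort()
        half = len(ids) // 2
        return split(ids[order[:half]]) + split(ids[order[half:]])
    return torch.cat(split(torch.arange(points.shape[0])))


class LocalBallAttention(nn.Module):
    """Multi-head self-attention applied independently within each group."""
    def __init__(self, dim: int, heads: int, group_size: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.group_size = group_size

    def forward(self, x: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        perm = median_split_groups(points, self.group_size)
        xg = x[perm].view(-1, self.group_size, x.shape[-1])  # (groups, size, dim)
        out, _ = self.attn(xg, xg, xg)                        # attention per group only
        inv = torch.empty_like(perm)
        inv[perm] = torch.arange(perm.numel())
        return out.reshape(-1, x.shape[-1])[inv]              # restore original order


# Toy usage: 4096 points in 3D, 128-dim features, 64-point groups.
points = torch.rand(4096, 3)
features = torch.rand(4096, 128)
layer = LocalBallAttention(dim=128, heads=4, group_size=64)
print(layer(features, points).shape)  # torch.Size([4096, 128])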

Richard Sutton (@richardssutton) 's Twitter Profile Photo

awards.acm.org/about/2024-tur… Machines that learn from experience were explored by Alan Turing almost eighty years ago, which makes it particularly gratifying and humbling to receive an award in his name for reviving this essential but still nascent idea.

Andrew S. Rosen (@andrew_s_rosen) 's Twitter Profile Photo

A huge missing piece in the public discourse about federal funding at universities is that these are not handouts but are instead used to address challenges that the government considers important for the country and scientific progress. There is a massive disconnect here. 1/5

Chaitanya K. Joshi @ICLR2025 🇸🇬 (@chaitjo) 's Twitter Profile Photo

Introducing All-atom Diffusion Transformers — towards Foundation Models for generative chemistry, from my internship with the FAIR Chemistry team at AI at Meta. There are a couple of ML ideas in here which I think are new and exciting 👇

Hannes Stärk (@hannesstaerk) 's Twitter Profile Photo

New paper (and #ICLR2025 Oral :)): ProtComposer: Compositional Protein Structure Generation with 3D Ellipsoids arxiv.org/abs/2503.05025 Condition on your 3D layout (of ellipsoids) to generate proteins like this or to get better designability/diversity/novelty tradeoffs. 1/6

Xiang Fu (@xiangfu_ml) 's Twitter Profile Photo

We have released an eSEN model that is the current SOTA on Matbench-Discovery. Code/checkpoints are available for both non-commercial and commercial use:
code: github.com/facebookresear…
checkpoint: huggingface.co/facebook/OMAT24
paper (updated): arxiv.org/abs/2502.12147
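
For readers who want to try the released checkpoint, here is a minimal hedged sketch of fetching the Hugging Face repository named above with the huggingface_hub client. Whether the repository is gated (requiring a prior login or token) and how the downloaded weights are then loaded are assumptions to verify against the model card and the linked fairchem code.

# Sketch: download the checkpoint repository referenced in the post.
# The repo may be gated; if so, run `huggingface-cli login` or set HF_TOKEN first.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="facebook/OMAT24")
print("checkpoint files downloaded to:", local_dir)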

Tian Xie (@xie_tian) 's Twitter Profile Photo

Thanks for the kind words, Taylor Sparks. I am a big fan of your work and Materialism Podcast. Really enjoyed the depth of the conversation and fantastic questions. Look forward to seeing AI make real-world impact in designing materials in the coming years!

Aaron Havens (@aaronjhavens) 's Twitter Profile Photo

New paper out with FAIR(+FAIR-Chemistry): Adjoint Sampling: Highly Scalable Diffusion Samplers via Adjoint Matching We present a scalable method for sampling from unnormalized densities beyond classical force fields. 📄: arxiv.org/abs/2504.11713

Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

We are presenting 3 orals and 1 spotlight at #ICLR2025 on two primary topics: on generalizing the data-driven flow matching algorithm to jump processes, arbitrary discrete corruption processes, and beyond; and on highly scalable algorithms for reward-driven learning settings.

Gabriele Corso (@gabricorso) 's Twitter Profile Photo

🚀 Excited to release a major update to the Boltz-1 model: Boltz-1x! Boltz-1x introduces inference-time steering for much higher physical quality, CUDA kernels for faster, more memory-efficient inference and training, and more! 🔥🧵

AI at Meta (@aiatmeta) 's Twitter Profile Photo

Announcing the newest releases from Meta FAIR. We’re releasing new groundbreaking models, benchmarks, and datasets that will transform the way researchers approach molecular property prediction, language processing, and neuroscience. 1️⃣ Open Molecules 2025 (OMol25): A dataset

Muhammed Shuaibi (@mshuaibii) 's Twitter Profile Photo

Excited to share our latest releases to FAIR Chemistry's family of open datasets and models: OMol25 and UMA! (AI at Meta, FAIR Chemistry)
OMol25: huggingface.co/facebook/OMol25
UMA: huggingface.co/facebook/UMA
Blog: ai.meta.com/blog/meta-fair…
Demo: huggingface.co/spaces/faceboo…
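
As a rough illustration of where models like UMA slot in, here is a small ASE geometry-relaxation sketch. The EMT calculator is only a stand-in so the snippet runs; in practice one would construct the UMA calculator following the model card at huggingface.co/facebook/UMA (the exact loading API is not shown here and should be taken from that documentation) and assign it to atoms.calc instead.

# Sketch: the ASE workflow a machine-learning interatomic potential plugs into.
from ase.build import molecule
from ase.calculators.emt import EMT   # stand-in; replace with the UMA calculator
from ase.optimize import BFGS

atoms = molecule("H2O")
atoms.calc = EMT()                    # placeholder calculator, NOT UMA
BFGS(atoms).run(fmax=0.05)            # relax the geometry
print(atoms.get_potential_energy())   # energy (eV) from the active calculator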

Sam Blau (@sammblau) 's Twitter Profile Photo

The Open Molecules 2025 dataset is out! With >100M gold-standard ωB97M-V/def2-TZVPD calcs of biomolecules, electrolytes, metal complexes, and small molecules, OMol is by far the largest, most diverse, and highest quality molecular DFT dataset for training MLIPs ever made 1/N

Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

We've open sourced Adjoint Sampling! It's part of a bundled release showcasing FAIR's research and open source commitment to AI for science. github.com/facebookresear… x.com/AIatMeta/statu…

Brandon Wood (@bwood_m) 's Twitter Profile Photo

We released the Open Molecules 2025 (OMol25) Dataset last week! 🚀🧪 OMol25 is a large (100M+) and diverse molecular DFT dataset for training machine learning models. It was a massive collaborative and interdisciplinary effort and I’m super proud of the whole team! 🙌 1/7

Gabriele Corso (@gabricorso) 's Twitter Profile Photo

Excited to unveil Boltz-2, our new model capable of predicting not only structures but also binding affinities! Boltz-2 is the first AI model to approach the performance of FEP simulations while being more than 1000x faster! All open-sourced under the MIT license! A thread… 🤗🚀

Zhuoran Qiao / 乔卓然 (@zhuoranq) 's Twitter Profile Photo

Introducing Chai-2 - a foundation model enabling low-N de novo antibody design. We experimentally validated Chai-2 on 50+ targets with a >15% binding success rate, exceeding what a lab could previously achieve. Looking forward to scaling this progress to the entire proteome.

Brandon Wood (@bwood_m) 's Twitter Profile Photo

🚀 Exciting news! We are releasing new UMA-1.1 models (Small and Medium) today, and the UMA paper is now on arXiv! UMA represents a step-change in what's possible with a single machine learning interatomic potential (short overview in the post below). The goal was to make a model