Chen-Hao (Lance) Chao (@chenhao_chao)'s Twitter Profile
Chen-Hao (Lance) Chao

@chenhao_chao

PhD in CS @UofT

ID: 1470612229020024832

Link: https://chen-hao-chao.github.io/ · Joined: 14-12-2021 04:31:10

19 Tweets

81 Followers

135 Following

Gabriel Peyré (@gabrielpeyre)'s Twitter Profile Photo

Monte Carlo integration approximates integrals at a rate of 1/sqrt(n), independent of the dimension. en.wikipedia.org/wiki/Monte_Car…
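A minimal sketch of the idea described above: average a function at n uniformly random points in the unit hypercube, and the estimation error shrinks roughly as 1/sqrt(n) regardless of the dimension. The test function, dimension, and seed below are illustrative assumptions, not from the tweet.

```python
import random

def mc_integrate(f, dim, n, seed=0):
    """Estimate the integral of f over the unit hypercube [0, 1]^dim
    by averaging f at n uniformly random points (Monte Carlo integration)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n

# Example: integrate sum(x_i^2) over [0, 1]^10; the exact value is 10/3.
f = lambda x: sum(xi * xi for xi in x)
exact = 10 / 3
for n in (100, 10_000, 100_000):
    est = mc_integrate(f, dim=10, n=n)
    print(f"n={n:>7}  |error|={abs(est - exact):.5f}")
```

Note that the convergence rate does not depend on `dim`, which is exactly why Monte Carlo methods remain practical in high dimensions where grid-based quadrature breaks down.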

Chen-Hao (Lance) Chao (@chenhao_chao)'s Twitter Profile Photo

📍Check out our updated blog post where we redefine the DLSM objective function for discrete variables.

Read more:
Blog: chen-hao-chao.github.io/dlsm/
Paper: arxiv.org/abs/2203.14206
#generative #AI #score #diffusion
Chen-Hao (Lance) Chao (@chenhao_chao)'s Twitter Profile Photo

(1/3)

Thrilled to announce that our paper, "Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow" (MEow), has been accepted at NeurIPS 2024!

Code: github.com/ChienFeng-hub/…
Paper: arxiv.org/abs/2405.13629

#NeurIPS2024 #NVIDIA #RL #generative
Chen-Hao (Lance) Chao (@chenhao_chao)'s Twitter Profile Photo

(2/3)

MEow is the first MaxEnt RL framework that supports exact soft value function calculation and single loss function optimization. Superior performance on MuJoCo.

Code: github.com/ChienFeng-hub/…
Paper: arxiv.org/abs/2405.13629

#NeurIPS2024 #NVIDIA #RL #generative
Chen-Hao (Lance) Chao (@chenhao_chao)'s Twitter Profile Photo

(3/3)

Test-time demonstration of MEow on the NVIDIA Omniverse Isaac Gym environments.

Code: github.com/ChienFeng-hub/…
Paper: arxiv.org/abs/2405.13629

#NeurIPS2024 #NVIDIA #RL #generative

Chen-Hao (Lance) Chao (@chenhao_chao)'s Twitter Profile Photo

Excited to present a poster at #NeurIPS2024 in person. Join our session on Dec. 12, 11:00 AM–2:00 PM at West Ballroom A-D #6403. Details below:

- NeurIPS Page: neurips.cc/virtual/2024/p…
- Project Page: chienfeng-hub.github.io/meow/

#NeurIPS2024 #NVIDIA #RL
Rahul G. Krishnan (@rahulgk)'s Twitter Profile Photo

Finally, if you're interested in understanding how to leverage energy-based normalizing flows, check out Chen-Hao (Lance) Chao's work on MEow (chienfeng-hub.github.io/meow/). He'll be presenting on Dec. 12, 11:00 AM–2:00 PM at West Ballroom A-D #6403 🧵(7/7)

AK (@_akhaliq)'s Twitter Profile Photo

Large Language Diffusion Models introduces LLaDA, a diffusion model at an unprecedented 8B scale, trained entirely from scratch and rivaling LLaMA3 8B in performance. It is a text generation method different from the traditional left-to-right approach.

Prompt: Explain what artificial

Rahul G. Krishnan (@rahulgk)'s Twitter Profile Photo

🚀 Problem: Language models struggle with rapidly evolving info and context in fields like medicine & finance. We need ways to post-train LLMs to control how they absorb new knowledge.

🔍 Insight: Why not explain, and teach, LLMs how to learn?

Younwoo (Ethan) Choi will be at #ICLR2025

Vahid Balazadeh (@vahidbalazadeh)'s Twitter Profile Photo

Can neural networks learn to map from observational datasets directly onto causal effects?

YES! Introducing CausalPFN, a foundation model trained on simulated data that learns to do in-context heterogeneous causal effect estimation, based on prior-fitted networks (PFNs). Joint