Hao Wang (@hw_haowang)'s Twitter Profile
Hao Wang

@hw_haowang

Research scientist @RedHat & @MITIBMLab, PhD @Harvard. Research interests: information theory, statistical learning theory, trustworthy machine learning.

ID: 1233862322818437120

Link: https://haowang94.github.io · Joined: 29-02-2020 21:11:31

19 Tweets

124 Followers

128 Following

Akash Srivastava (@variational_i)'s Twitter Profile Photo

If you are a PhD student at the Massachusetts Institute of Technology (MIT) (sorry about this constraint) looking for an #internship and are interested in any of the topics listed in this 🧵, please get in touch with me or Hao Wang; my group is seeking talented students to join us at the MIT-IBM Watson AI Lab.

Flavio Calmon (@flaviocalmon)'s Twitter Profile Photo

Excited to announce the Workshop on Information-theoretic Methods for Trustworthy Machine Learning at the Simons Institute from May 22nd-25th! Stay tuned for more details: simons.berkeley.edu/workshops/asu-…

Flavio Calmon (@flaviocalmon)'s Twitter Profile Photo

Mario Diaz Torres, a brilliant researcher and mathematician, passed away suddenly on August 31st. Mario Diaz was a rising star in the LatAm math community and was doing exceptional work in information theory, differential privacy, and related areas. bit.ly/4d9XGPB

Isha Puri (@ishapuri101)'s Twitter Profile Photo

[1/x] can we scale small, open LMs to o1 level? Using classical probabilistic inference methods, YES! Joint MIT CSAIL / Red Hat AI Innovation Team work introduces a particle filtering approach to scaling inference w/o any training! check out …abilistic-inference-scaling.github.io
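The tweet names a particle-filtering approach to inference-time scaling without training. A minimal toy sketch of the general particle-filtering idea (not the authors' actual system): partial reasoning traces are propagated, weighted by a reward signal, and resampled so compute concentrates on promising traces. Here `step_fn` stands in for sampling one reasoning step from an LM and `score_fn` stands in for a reward model; both are hypothetical placeholders.

```python
import random

def particle_filter_inference(step_fn, score_fn, n_particles=8, n_steps=4, seed=0):
    """Toy particle-filtering sketch for inference-time scaling.

    step_fn(trace, rng) -> next step (stand-in for sampling an LM step);
    score_fn(trace)     -> nonnegative weight (stand-in for a reward model).
    """
    rng = random.Random(seed)
    particles = [[] for _ in range(n_particles)]
    for _ in range(n_steps):
        # 1. Propagate: extend every partial reasoning trace by one step.
        particles = [p + [step_fn(p, rng)] for p in particles]
        # 2. Weight: score each trace with the (stand-in) reward model.
        weights = [score_fn(p) for p in particles]
        if sum(weights) == 0:
            continue  # degenerate weights: skip resampling this round
        # 3. Resample: draw traces with probability proportional to weight.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    # Return the highest-scoring trace found.
    return max(particles, key=score_fn)
```

The key property is that resampling reallocates the fixed particle budget toward high-reward traces at every step, rather than committing to a single greedy path.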

Red Hat AI (@redhat_ai)'s Twitter Profile Photo

Join us this Friday for Random Samples, a weekly AI talk series from Red Hat AI Innovation Team. Topic: The State of LLM Compression — From Research to Production We’ll explore quantization, sparsity, academic vs. real-world benchmarks, and more. Join details in comments 👇
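Of the compression topics the talk lists, quantization is the easiest to show in a few lines. A minimal illustrative sketch (not from the talk itself): symmetric per-tensor int8 quantization, which maps float weights to 8-bit integers plus a single scale factor.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization of a weight array."""
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale
```

Real LLM compression stacks use finer-grained schemes (per-channel or per-group scales, activation-aware calibration), but the store-integers-plus-scale structure is the same.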

Hao Wang (@hw_haowang)'s Twitter Profile Photo

⚠️When using inference-time scaling, don't waste compute on reasoning steps likely to lead to dead ends. 💡In our latest work, we show that a calibrated PRM can estimate how likely each reasoning step is to reach the correct answer, enabling more efficient inference-time scaling.
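To make the pruning idea concrete, here is a minimal sketch, not the paper's method: raw process-reward-model (PRM) scores are passed through a calibration map (temperature scaling is one common choice, assumed here) so they can be read as probabilities of reaching a correct answer, and steps below a probability threshold are dropped before any further compute is spent on them. `prm_score_fn` and the threshold value are hypothetical placeholders.

```python
import math

def temperature_scale(logit, temperature=2.0):
    """One-parameter calibration map (temperature scaling on a logit)."""
    return 1.0 / (1.0 + math.exp(-logit / temperature))

def prune_steps(candidate_steps, prm_score_fn, calibrate_fn, threshold=0.2):
    """Keep only candidate reasoning steps whose *calibrated* probability
    of reaching a correct final answer exceeds `threshold`.

    prm_score_fn is a stand-in for a trained PRM; calibrate_fn is a
    stand-in for a calibration map fit on held-out data.
    """
    kept = []
    for step in candidate_steps:
        p = calibrate_fn(prm_score_fn(step))
        if p >= threshold:
            kept.append((step, p))
    # Spend compute on the most promising steps first.
    kept.sort(key=lambda sp: sp[1], reverse=True)
    return kept
```

The calibration step is what makes the threshold meaningful: an uncalibrated PRM score of 0.8 need not correspond to an 80% chance of success, so pruning on raw scores can cut good branches or keep dead ends.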