Jakob Heiss (@jakobheiss)'s Twitter Profile

Jakob Heiss
@jakobheiss

PhD student in Mathematics at ETH Zürich

ID: 76379652
Website: https://people.math.ethz.ch/~jheiss/
Joined: 22-09-2009 16:40:22

37 Tweets · 11 Followers · 37 Following

Sven Seuken (@svenseuken):

If you are at @icml2022, come to our spotlight talk: Thursday, 10:30-12:00 (talk: 11:00-11:05), Session 8, Track 10, Room 310.
Presentation: icml.cc/Conferences/20…
Paper: arxiv.org/abs/2102.13640
Joint work with Jakob Weissteiner, Hanna Wutte, Jakob Heiss, and Josef Teichmann.

Sven Seuken (@svenseuken):

1. "Monotone-Value Neural Networks: Exploiting Preference Monotonicity in Combinatorial Assignment": arxiv.org/abs/2109.15117 2. "Fourier Analysis-based Iterative Combinatorial Auctions": arxiv.org/abs/2009.10749

Sven Seuken (@svenseuken):

If you are at #IJCAI2022, you can see both talks today in the Multi-Agent Systems session, from 3:30pm to 3:42pm.
Joint work with Jakob Weissteiner, Jakob Heiss, Chris Wendler, Ben Lubin, Julien Siems, and Markus Püschel.
Universität Zürich · ETH Zurich · ETH AI Center · European Research Council (ERC)

Jakob Heiss (@jakobheiss):

#Opera 11.00 is here and it's extremely good! Stackable tabs, finally extensions, and more new features! Choosing a browser is getting hard.

Bertrand Charpentier (@bertrand_charp):

Jakob Heiss will present his ICML paper "NOMU - Neural Optimization-based Model Uncertainty" at the Uncertainty in AI reading group today at 5:30pm (Berlin time).
uncertainty-reading-group.github.io/2021-11-28-tal…
Co-authors: Jakob Weissteiner, Hanna Wutte, Sven Seuken, Josef Teichmann.

Ilia Azizi (@ilia_azizi):

ICLR 2026! 🎉

Our paper "CLEAR: Calibrated Learning for Epistemic and Aleatoric Risk", accepted <a href="/iclr_conf/">ICLR 2026</a>.

Where does prediction uncertainty come from? Aleatoric (measurement noise) or epistemic (limited data)? Most methods only address one. CLEAR addresses both.

(1/4)
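The split the thread describes can be illustrated with a toy sketch (my own illustration, not from the paper): a bootstrap ensemble disagrees where training data are scarce (epistemic), while the noise in the data itself is irreducible no matter how much data you collect (aleatoric).

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data covers only [-3, 0]; the model never sees x > 0.
x = rng.uniform(-3, 0, 300)
y = np.sin(x) + rng.normal(0.0, 0.2, 300)  # aleatoric: irreducible noise

# Bootstrap ensemble of cubic fits; disagreement across members is an
# epistemic-uncertainty proxy.
fits = []
for _ in range(30):
    idx = rng.integers(0, len(x), len(x))
    fits.append(np.polyfit(x[idx], y[idx], 3))

def epistemic_std(x_new):
    preds = np.array([np.polyval(c, x_new) for c in fits])
    return preds.std(axis=0)

in_data = epistemic_std(np.array([-1.5]))[0]   # inside the training range
off_data = epistemic_std(np.array([2.5]))[0]   # extrapolation region
print(f"epistemic std in-data: {in_data:.3f}, off-data: {off_data:.3f}")
```

Off the training range the ensemble members extrapolate in different directions, so their spread blows up, while the 0.2 noise scale stays the same everywhere.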
Ilia Azizi (@ilia_azizi):

CLEAR wraps around any base model with just two calibration parameters, no retraining.

It determines how much each uncertainty source contributes. Compatible with any pair of estimators (ensembles + quantile regression), using ideas from conformal prediction.

(2/4)
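A minimal NumPy sketch of the general idea, not the paper's actual method: the ensemble, the constant noise proxy, and the grid search below are all stand-ins. Two scalar parameters scale an epistemic estimate and an aleatoric estimate, chosen on a held-out calibration set so the combined intervals reach a target coverage, conformal-style.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heteroscedastic data: y = sin(x) + noise growing with |x|.
def sample(n):
    x = rng.uniform(-3, 3, n)
    y = np.sin(x) + rng.normal(0.0, 0.1 + 0.1 * np.abs(x), n)
    return x, y

x_tr, y_tr = sample(400)
x_cal, y_cal = sample(200)

# Epistemic proxy: disagreement across a bootstrap ensemble of degree-5 fits.
fits = []
for _ in range(20):
    idx = rng.integers(0, len(x_tr), len(x_tr))
    fits.append(np.polyfit(x_tr[idx], y_tr[idx], 5))

def mean_and_epistemic(x):
    preds = np.array([np.polyval(c, x) for c in fits])
    return preds.mean(axis=0), preds.std(axis=0)

# Aleatoric proxy: a constant residual scale from the training data.
mu_tr, _ = mean_and_epistemic(x_tr)
sigma_alea = np.std(y_tr - mu_tr)

# Two calibration parameters (g_e, g_a): pick the narrowest intervals
# that cover >= 90% of a held-out calibration set.
mu_c, ep_c = mean_and_epistemic(x_cal)
best = None
for ge in np.linspace(0.0, 4.0, 41):
    for ga in np.linspace(0.0, 4.0, 41):
        half = ge * ep_c + ga * sigma_alea
        cover = np.mean(np.abs(y_cal - mu_c) <= half)
        if cover >= 0.9 and (best is None or half.mean() < best[2]):
            best = (ge, ga, half.mean())

g_e, g_a, width = best
print(f"gamma_epistemic={g_e:.2f}, gamma_aleatoric={g_a:.2f}, width={width:.3f}")
```

The point of the two parameters is that the calibration step decides how much of the interval width each uncertainty source should contribute, without retraining either estimator.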
Ilia Azizi (@ilia_azizi):

On 17 real-world datasets with ensembles + quantile regression:
- 28.3% tighter intervals vs aleatoric-only baselines
- 17.5% tighter vs epistemic-only baselines
- Top method on 15 of 17 datasets

Similar gains with Deep Ensembles & Quantile NNs confirm generalizability.

(3/4)
Ilia Azizi (@ilia_azizi):

CLEAR is available as an open-source Python package, so feel free to try it:
🐍 pip install clear-uq
💻 unco3892.github.io/clear/
📄 openreview.net/forum?id=RY4IH…
🔗 github.com/Unco3892/clear
Joint work with my great co-authors juro bodik, Jakob Heiss, and Bin Yu. (4/4)