Roy Fox (@royf@sigmoid.social) (@roydfox) 's Twitter Profile
Roy Fox (@royf@sigmoid.social)

@roydfox

Assistant Professor @_indylab @UCIbrenICS @UCIrvine

ID: 1018861669

https://royf.org · Joined 18-12-2012 02:56:35

128 Tweets

309 Followers

171 Following

UC Irvine (@ucirvine) 's Twitter Profile Photo

UCI researchers launch first-of-its-kind #coronavirus statistics portal: Site provides comparisons between OC & neighboring counties. More: bit.ly/3adtl3V

Csaba Szepesvari (@csabaszepesvari) 's Twitter Profile Photo

Interested in hearing about the theoretical foundations of RL from a multidisciplinary perspective (CS, control, stats, OR)? If so, join us at the (all virtual) RL Theory Bootcamp at the Simons Institute next week. Lectures in the morning and the afternoon ==>

Roy Fox (@royf@sigmoid.social) (@roydfox) 's Twitter Profile Photo

This. Our top conferences are "captive regulators" of noteworthy research, and Big ML has subtly shifted their focus to experiments that have low scientific value, high environmental footprint, and — significantly — only they can run. #demeritNeurIPS

Noga Zaslavsky (@nogazaslavsky) 's Twitter Profile Photo

Come join us tomorrow (Mon) for the #NeurIPS2021 Meaning in Context workshop! We aim to advance human-machine communication by understanding how pragmatic reasoning works in humans and how it can inform AI. Website: mic-workshop.github.io NeurIPS link: neurips.cc/virtual/2021/w…

Stephen McAleer (@mcaleerstephen) 's Twitter Profile Photo

Policy Space Response Oracles (PSRO) mixes over a population of deep RL policies to approximate a Nash equilibrium, but exploitability can increase from one iteration to the next. We introduce Anytime PSRO which does not increase exploitability. Arxiv: arxiv.org/abs/2201.07700

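The population-mixing loop behind PSRO can be illustrated with its tabular ancestor, the double-oracle algorithm, on rock-paper-scissors: solve a restricted meta-game over the current policy population, then add a best response to the resulting mixture. This is a minimal sketch, not the paper's method; the fictitious-play meta-solver and all function names here are illustrative choices.

```python
import numpy as np

# Rock-paper-scissors payoff for the row player (zero-sum game).
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)

def fictitious_play(M, iters=3000):
    """Approximate Nash mixtures of a zero-sum matrix game via fictitious play."""
    n_r, n_c = M.shape
    row_counts, col_counts = np.zeros(n_r), np.zeros(n_c)
    row_counts[0] = col_counts[0] = 1.0
    for _ in range(iters):
        # Each player best-responds to the opponent's empirical mixture.
        row_counts[np.argmax(M @ (col_counts / col_counts.sum()))] += 1
        col_counts[np.argmin((row_counts / row_counts.sum()) @ M)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

def double_oracle(A, iters=5):
    rows, cols = [0], [0]          # initial one-policy "populations"
    for _ in range(iters):
        # Meta-solve the game restricted to the current populations.
        x_r, y_r = fictitious_play(A[np.ix_(rows, cols)])
        # Lift the restricted mixtures back to the full strategy space.
        x = np.zeros(A.shape[0]); x[rows] = x_r
        y = np.zeros(A.shape[1]); y[cols] = y_r
        # Oracle step: best responses in the full game grow the populations.
        br_row, br_col = int(np.argmax(A @ y)), int(np.argmin(x @ A))
        if br_row not in rows: rows.append(br_row)
        if br_col not in cols: cols.append(br_col)
    # Exploitability of the final mixtures: gain of a best responder.
    exploitability = (A @ y).max() - (x @ A).min()
    return x, y, exploitability

x, y, expl = double_oracle(A)
print(np.round(x, 2), round(expl, 3))  # x is near the uniform Nash mixture
```

The tweet's point is that in deep-RL PSRO the exploitability of the intermediate mixtures can rise between oracle iterations; Anytime PSRO is designed so it does not.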
Yevgeni Berzak (@whylikethis_) 's Twitter Profile Photo

My lab at the Technion is looking for research assistants! For more details: drive.google.com/file/d/16jetx0… We are also recruiting graduate students. Don't hesitate to get in touch! lacclab.github.io

Hugo Larochelle (@hugo_larochelle) 's Twitter Profile Photo

We (Barbara Engelhardt, Naila Murray and I) are proud to announce the creation of a Journal-to-Conference track, in collaboration with JMLR and conferences NeurIPS 2022, ICLR 2023 and ICML 2023! neurips.cc/public/Journal… iclr.cc/public/Journal… icml.cc/public/Journal…

Pieter Abbeel (@pabbeel) 's Twitter Profile Photo

Very excited for the 2022 edition of the #neurips Deep RL workshop! A few fun changes, see below. Also, submission deadline: Sep 22

Noga Zaslavsky (@nogazaslavsky) 's Twitter Profile Photo

📣 Very excited to announce our in-person #NeurIPS2022 workshop on Information-Theoretic Principles in Cognitive Systems! Check out our lineup of invited speakers and CFP, submit short papers by September 19 sites.google.com/view/infocog-n… #InfoCog2022 NeurIPS Conference

Noga Zaslavsky (@nogazaslavsky) 's Twitter Profile Photo

The deadline for our #NeurIPS2022 InfoCog workshop is approaching soon [updated 🗓️: Sep 22]. We expect to have some funding to support a few selected presenters of accepted papers, and a special issue of Open Mind associated with the workshop! More info 👇 sites.google.com/view/infocog-n…

Uri Shalit (@shalituri) 's Twitter Profile Photo

Are you interested in causality, machine learning and healthcare? Come work with Mihaela van der Schaar and me in a joint PhD or postdoc at Cambridge University, UK and the Technion, Israel. Contact via email: shalit-lab AT technion ac il

Kolby Nottingham (@kolbytn) 's Twitter Profile Photo

Excited to share our work, "Skill Set Optimization", a continual learning method for LLM actors that:
- Automatically extracts modular subgoals to use as skills
- Reinforces skills using environment reward
- Facilitates skill retrieval based on state
allenai.github.io/sso
🧵