Tom Rainforth (@tom_rainforth)'s Twitter Profile
Tom Rainforth

@tom_rainforth

Associate Professor in Machine Learning at the University of Oxford,
Head of RainML Research Lab (rainml.uk)

ID: 797888987675365377

Link: http://www.robots.ox.ac.uk/~twgr/ | Joined: 13-11-2016 19:48:53

369 Tweets

5.5K Followers

295 Following

Jin Xu (@jinxu06)'s Twitter Profile Photo

We construct neural processes by iteratively transforming a simple stochastic process into an expressive one, similar to flow/diffusion-based models, but in function space! 

Join us at our #NeurIPS2023 poster session: neurips.cc/virtual/2023/p… on Wednesday morning!
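The core idea here, iteratively transforming a simple stochastic process into one with richer structure, can be illustrated with a toy numpy sketch. This is my own illustration, not the paper's model: iid Gaussian noise over a grid of function inputs stands in for the simple process, and a fixed RBF smoothing operator stands in for a learned transformation step.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0, 1, 50)
# Start from a trivial stochastic process: iid Gaussian noise at each input.
f = rng.standard_normal((1000, 50))  # 1000 sample functions

# An RBF smoothing operator, standing in for one learned transformation step.
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.05) ** 2)
K /= K.sum(axis=1, keepdims=True)  # row-normalise so each step averages

for _ in range(5):  # iteratively transform the process
    f = f @ K.T

# The marginals started independent; after the transformations, nearby
# inputs are strongly correlated, i.e. the process has gained structure.
corr = np.corrcoef(f[:, 24], f[:, 25])[0, 1]
print(round(corr, 2))
```

In the paper the transformation is learned rather than fixed, and operates in function space so the resulting process can be conditioned on context points like a neural process.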
Christian Weilbach whilo@sigmoid.social (@wh1lo)'s Twitter Profile Photo

It is @NeurIPS time again! I am excited to present our trans-dimensional jump diffusion work with Andrew Campbell, Will Harvey, Valentin De Bortoli, Tom Rainforth and Arnaud Doucet! Come over on Thursday, 2nd poster session, neurips.cc/virtual/2023/p…. arxiv.org/abs/2305.16261 #NeurIPS2023

Yee Whye Teh (@yeewhye)'s Twitter Profile Photo

Interested in large language models? Worried about the impacts of climate change? Come join us (OxCSML, the Leverhulme Centre for Nature Recovery, University of Oxford) in pushing the frontiers in #LLMs while also helping #NatureRecovery and addressing the impacts of #ClimateChange! bit.ly/4750IBO

Tim Reichelt (@timreichelt3)'s Twitter Profile Photo

I will be presenting our work on "Beyond Bayesian Model Averaging over Paths in Probabilistic Programs with Stochastic Support" at AISTATS in Valencia tomorrow (details in thread below). If you are interested in probabilistic programming, come and say hi at poster session 1!

Freddie Bickford Smith (@fbickfordsmith)'s Twitter Profile Photo

The current default recipe for Bayesian active learning doesn’t really work beyond MNIST scale. We suggest why that is and identify a simple fix. arxiv.org/abs/2404.17249. At the AISTATS Conference, with @adamefoster and Tom Rainforth. 1/5
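The "default recipe" being critiqued is typically BALD-style acquisition with approximate posterior samples (an assumption on my part; see the linked paper for the authors' exact framing). As background, a minimal numpy sketch of the BALD score, the mutual information between a point's label and the model parameters:

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy in nats, guarding against log(0).
    return -np.sum(p * np.log(np.clip(p, 1e-12, None)), axis=axis)

def bald(probs):
    # BALD: entropy of the marginal predictive minus the expected entropy
    # of the per-sample predictives.
    # probs has shape (n_posterior_samples, n_points, n_classes).
    mean_p = probs.mean(axis=0)
    return entropy(mean_p) - entropy(probs).mean(axis=0)

# Point A: posterior samples agree and are confident -> BALD ~ 0.
agree = np.tile([[0.9, 0.1]], (10, 1))[:, None, :]
# Point B: samples are individually confident but disagree -> high BALD.
disagree = np.array([[[0.9, 0.1]], [[0.1, 0.9]]] * 5)
print(bald(agree)[0], bald(disagree)[0])
```

BALD favours points where the posterior samples disagree, which is exactly the behaviour whose large-scale failure modes the paper examines.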

Jannik Kossen (@janundnik)'s Twitter Profile Photo

Are you at ICLR? 

Have you heard that In-Context Learning in LLMs does not learn label relationships?

Well that's not true.

Visit our poster TODAY to find out how LLMs incorporate label information.

Spoiler: it's not Bayesian inference.

Poster #129, May 7, 4.30 pm
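One common way to probe whether in-context learning actually uses label information (a standard experimental design in this area, not necessarily the paper's exact setup) is to flip the demonstration labels and check whether the model's predictions move with them. A sketch of the prompt construction only, with the model call itself left abstract:

```python
# Toy sentiment demonstrations; texts and labels are made up for illustration.
examples = [("great film, loved it", "positive"),
            ("dull and far too long", "negative"),
            ("an instant classic", "positive")]

FLIP = {"positive": "negative", "negative": "positive"}

def build_prompt(demos, query, flip_labels=False):
    lines = []
    for text, label in demos:
        shown = FLIP[label] if flip_labels else label
        lines.append(f"Review: {text}\nSentiment: {shown}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

normal = build_prompt(examples, "a joyless mess")
flipped = build_prompt(examples, "a joyless mess", flip_labels=True)

# The two prompts differ only in the demonstration labels, so any systematic
# difference in the model's answers must come from label use.
print(normal.count("negative"), flipped.count("negative"))
```

If predictions were unchanged under flipping, the model would be ignoring label relationships; the poster's point is that they do change, just not in the way Bayesian inference over the demonstrations would predict.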
Tom Rainforth (@tom_rainforth)'s Twitter Profile Photo

In-context learning can learn novel input-output relationships beyond what can be picked up from the input context alone, but it doesn't behave like a conventional learning algorithm. Find out more at our ICLR poster #129 this afternoon. Paper: openreview.net/forum?id=YPIA7…, led by Jannik Kossen

Tom Rainforth (@tom_rainforth)'s Twitter Profile Photo

I'm delighted to announce that from September I will officially be an Associate Professor (remaining at the Oxford Statistics Department).

Tom Rainforth (@tom_rainforth)'s Twitter Profile Photo

I have an opening for a 2.5-year postdoc position in the RainML lab as part of my ERC grant on probabilistic machine learning and intelligent data acquisition. Application deadline 10th July 2024. See here for details and to apply: tinyurl.com/rainmlpostdoc

Tom Rainforth (@tom_rainforth)'s Twitter Profile Photo

I have an opening for a 2-year postdoc in probabilistic machine learning and/or experimental design. The application deadline is the 3rd of September. See here for details and how to apply: tinyurl.com/rainmlpostdoc2…

Jackson Atkins (@jacksonatkinsx)'s Twitter Profile Photo

Apple and Oxford just made AI 6.5x better at problem-solving.

The secret: it teaches AI agents to ask perfect questions. This rockets success rates from 14% to 91%.

No need for fine-tuning or retraining. It runs on current models.

Here's how it works:

It's a strategic loop
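The tweet leaves the loop unspecified; a natural reading, given Rainforth's work on experimental design, is that the agent repeatedly asks the question with the highest expected information gain about the user's intent. A 20-questions-style toy of that idea, entirely my own construction rather than the actual Apple/Oxford method:

```python
import math

hypotheses = {"cat", "dog", "car", "plane"}
# Ground-truth answer table: question -> set of hypotheses answering "yes".
answers = {
    "is it alive?": {"cat", "dog"},
    "does it fly?": {"plane"},
    "does it purr?": {"cat"},
}

def uniform_entropy(n):
    # Entropy (bits) of a uniform belief over n remaining candidates.
    return math.log2(n) if n else 0.0

def expected_info_gain(candidates, yes_set):
    yes = len(candidates & yes_set)
    no = len(candidates) - yes
    p_yes = yes / len(candidates)
    # EIG = prior entropy minus expected posterior entropy.
    return uniform_entropy(len(candidates)) - (
        p_yes * uniform_entropy(yes) + (1 - p_yes) * uniform_entropy(no))

truth = "cat"
candidates = set(hypotheses)
asked = []
while len(candidates) > 1:
    # The strategic loop: pick the most informative question, then update.
    q = max(answers, key=lambda q: expected_info_gain(candidates, answers[q]))
    asked.append(q)
    if truth in answers[q]:
        candidates &= answers[q]
    else:
        candidates -= answers[q]

print(asked, candidates)
```

The loop prefers the question that splits the remaining candidates most evenly, so it identifies the target in the fewest expected questions.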