InveniaLabs (@invenialabs)'s Twitter Profile
InveniaLabs

@invenialabs

Our team uses machine learning to address real-world problems. Right now, we’re putting ML into practice by optimising electricity grids to lower pollution.

ID: 882011565196816384

Link: https://www.invenia.ca/labs/ · Joined: 03-07-2017 23:01:58

65 Tweets

555 Followers

110 Following

InveniaLabs (@invenialabs):

If you are thinking of contributing to open source software, our latest blog post is a great place to start. invenia.github.io/blog/2021/01/2…

InveniaLabs (@invenialabs):

Here is a Gentle Introduction to Optimal Power Flow, crucial to understanding electricity markets. invenia.github.io/blog/2021/06/1…
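The linked post covers OPF properly; as a toy flavour of the kind of problem it solves, here is a minimal DC-style dispatch sketch (entirely an invented two-generator example, not taken from the blog post): a cheap generator is capped by a transmission limit, so the expensive one must cover the remaining load.

```python
# Toy economic-dispatch LP (a crude stand-in for DC optimal power flow):
# two generators serve a 100 MW load; generator 2 is cheaper but its
# transmission line can carry at most 60 MW.
from scipy.optimize import linprog

cost = [20.0, 10.0]          # $/MWh for generators 1 and 2
A_eq = [[1.0, 1.0]]          # power balance: g1 + g2 = load
b_eq = [100.0]
# Capacity bounds; generator 2 is also limited by the 60 MW line.
bounds = [(0.0, 80.0), (0.0, 60.0)]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
g1, g2 = res.x
print(round(g1), round(g2))  # the cheap generator runs at its 60 MW limit
```

Real OPF adds network physics (bus angles, line flows, losses), which is exactly what makes the full problem interesting.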

InveniaLabs (@invenialabs):

We’re excited to see Invenia, alongside many other companies using machine learning to fight the climate crisis, discussed in a Forbes article. It’s inspiring to read about so many initiatives! forbes.com/sites/robtoews…

InveniaLabs (@invenialabs):

The talks at #JuliaCon 2021 have been great so far! Today, keep an eye out for Frames Catherine White's lightning talk, "ExprTools: Metaprogramming from reflection", at 20:20 UTC pretalx.com/juliacon2021/t…

InveniaLabs (@invenialabs):

#JuliaCon is now over. What a great success it was! Here are the highlights of Invenia's participation at the conference. invenia.github.io/blog/2021/08/1…

InveniaLabs (@invenialabs):

Thanks to Logan Kilpatrick for facilitating a great conversation on getting a job programming in the Julia Language, and to everyone who attended. If you're interested in joining Invenia Labs, check out our positions here: joininvenia.com

Wessel (@ikwess):

The prior and posterior of BNNs are well understood as width → ∞. But what about mean-field variational inference? For odd activations, MFVI converges to the prior as width → ∞!
arxiv.org/pdf/2202.11670…
#AISTATS2022 w/ Beau Coker, @BurtDavidR, Weiwei Pan, Finale Doshi-Velez
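One ingredient behind results like this is the symmetry that odd activations introduce. A self-contained toy check (not the paper's width → ∞ argument): under a zero-mean Gaussian input, an odd activation such as tanh has expectation zero, since positive and negative contributions cancel.

```python
# Monte Carlo check that E[tanh(z)] = 0 for z ~ N(0, 1): tanh is odd and
# the standard Gaussian is symmetric about zero, so the mean vanishes.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)
mean_tanh = np.tanh(z).mean()
print(mean_tanh)  # close to zero
```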

Wessel (@ikwess):

Still using latent variables to get correlations out of your Neural Process? Then consider Gaussian Neural Processes (GNPs)!
✓ Correlated predictions
✓ Tractable likelihood
Stratis Markou, James Requeima, Wessel, Anna Vaughan & Rich Turner #ICLR2022 arxiv.org/abs/2203.08775
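To illustrate the "tractable likelihood" point with a toy sketch (not the GNP architecture itself): a model that outputs a full predictive mean and covariance has a closed-form, correlated joint log-likelihood, whereas a factorised Gaussian prediction ignores dependence between target points.

```python
# Toy comparison: correlated Gaussian predictive vs. a factorised one.
# The covariance matrix here is made up for illustration.
import numpy as np
from scipy.stats import multivariate_normal, norm

y = np.array([0.1, 0.2, 0.15])         # observed targets, close together
mean = np.zeros(3)
cov = np.array([[1.0, 0.8, 0.6],
                [0.8, 1.0, 0.8],
                [0.6, 0.8, 1.0]])      # positively correlated predictions

joint = multivariate_normal(mean, cov).logpdf(y)                 # correlated
factorised = norm(mean, np.sqrt(np.diag(cov))).logpdf(y).sum()   # independent
print(joint > factorised)  # correlated model explains nearby targets better
```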

Wessel (@ikwess):

Fitting a Gaussian process (GP)? Then why not model the kernel with another GP?! The GPCM is one such model. We propose two variants of the GPCM and inference beyond mean field.
w/ Martin Tegnér & Rich Turner
Paper: arxiv.org/abs/2203.06997
Code: github.com/wesselb/gpcm
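The generative idea can be sketched numerically (a crude discretisation for intuition only; the GPCM itself is a continuous-time model): draw a filter from a GP, taper it so it decays, then convolve white noise with it, giving a process whose kernel is itself random.

```python
# Discrete-time sketch of the GPCM's generative story (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-2.0, 2.0, 101)

# Squared-exponential Gram matrix for the filter prior, with jitter.
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 0.3 ** 2)
K += 1e-6 * np.eye(len(t))
h = rng.multivariate_normal(np.zeros(len(t)), K)  # GP draw for the filter
h *= np.exp(-t ** 2)  # taper so the filter decays

noise = rng.standard_normal(500)
f = np.convolve(noise, h, mode="valid")  # one sample path of the process
print(f.shape)  # (400,)
```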

InveniaLabs (@invenialabs):

New paper alert: A General Stochastic Optimization Framework for Convergence Bidding from Letif Mones and Sean Lovett, available here: arxiv.org/abs/2210.06543