Rob Nowak (@rdnowak) 's Twitter Profile
Rob Nowak

@rdnowak

Director of the Center for the Advancement of Progress

ID: 32713500

Joined: 18-04-2009 01:33:23

1.1K Tweets

2.2K Followers

437 Following

Gabriel Peyré (@gabrielpeyre) 's Twitter Profile Photo

Oldies but goldies: M. Belkin, P. Niyogi, Laplacian Eigenmaps for Dimensionality Reduction and Data Representation, 2003. Non-linear dimensionality reduction by embedding data points using the eigenvectors of a graph Laplacian as coordinates. en.wikipedia.org/wiki/Nonlinear…
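The recipe described above fits in a few lines of NumPy/SciPy: build a neighborhood graph on the data, form its graph Laplacian, and use the bottom non-constant generalized eigenvectors as embedding coordinates. A minimal sketch of the Belkin–Niyogi construction, assuming a symmetrized k-nearest-neighbor heat-kernel graph; the parameter choices are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmap(X, n_neighbors=5, n_components=2, sigma=1.0):
    """Embed rows of X using eigenvectors of the graph Laplacian as coordinates."""
    n = X.shape[0]
    # Pairwise squared distances and a heat-kernel kNN affinity matrix.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:n_neighbors + 1]   # nearest neighbors, skipping self
        W[i, idx] = np.exp(-d2[i, idx] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                            # symmetrize the graph
    D = np.diag(W.sum(1))                             # degree matrix
    L = D - W                                         # unnormalized graph Laplacian
    # Generalized eigenproblem L f = lambda D f; drop the constant eigenvector.
    vals, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]
```

The eigenvectors for the smallest nonzero eigenvalues vary slowly over the graph, so nearby points in the original space land near each other in the embedding.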

Kangwook Lee (@kangwook_lee) 's Twitter Profile Photo

LLMs excel at in-context learning; they identify patterns from labeled examples in the prompt and make predictions accordingly. Many believe more in-context examples are better. However, that's not always true if the early ascent phenomenon occurs.
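For concreteness, "in-context learning" means the labeled examples live in the prompt itself. A generic sketch of such a prompt; the format and function name are illustrative, not taken from the referenced work.

```python
def make_icl_prompt(examples, query):
    """Assemble a few-shot prompt: labeled (input, label) pairs, then the query."""
    lines = [f"Input: {x}\nLabel: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nLabel:")   # model is asked to fill in this label
    return "\n\n".join(lines)
```

The early-ascent observation concerns how prediction error behaves as the list of examples grows, so the number of pairs passed here is exactly the quantity being varied.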

Gabriel Peyré (@gabrielpeyre) 's Twitter Profile Photo

I have put online the content of 10 lectures of 2h on the basics maths of AI. It contains raw transcripts, so probably not super useful, but it might be helpful to give some ideas. mathematical-tours.github.io/maths-ia-cours…

Rob Nowak (@rdnowak) 's Twitter Profile Photo

in addition to coffee and cappuccino, apparently this place in Madison also sells solutions to approximation problems with multilayer perceptrons

Csaba Szepesvari (@csabaszepesvari) 's Twitter Profile Photo

I am excited for this upcoming talk by Andrew about "optimally" exploring given some offline data! Bonus: We'll hear about the gap between verifiable and unverifiable learning! I hope to see you tomorrow!

Rob Nowak (@rdnowak) 's Twitter Profile Photo

Here’s my take on the “mathematical foundations” of machine learning and AI. These course notes cover the basics of statistical learning theory, optimization, and functional analysis. nowak.ece.wisc.edu/MFML.pdf

Gabriel Peyré (@gabrielpeyre) 's Twitter Profile Photo

Oldies but goldies: Frank Rosenblatt, The Perceptron-a perceiving and recognizing automaton, 1957. The first appearance of a neural network (single layer), later generalized to multiple layers. en.wikipedia.org/wiki/Perceptron
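Rosenblatt's learning rule is simple enough to sketch: whenever an example is misclassified, add it to the weight vector scaled by its label, and repeat until nothing is misclassified. A minimal single-layer sketch; absorbing the bias as an extra input and capping the epochs are illustrative choices, not the original 1957 formulation.

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Perceptron rule: X is (n, d), y has labels in {-1, +1}.

    Returns a weight vector with the bias appended as the last entry."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # absorb bias as a constant input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:   # misclassified (or on the boundary)
                w += yi * xi         # nudge weights toward the example
                errors += 1
        if errors == 0:              # converged; guaranteed if data is separable
            break
    return w
```

On linearly separable data the number of updates is bounded, which is the classical perceptron convergence theorem.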

Gabriel Peyré (@gabrielpeyre) 's Twitter Profile Photo

Oldies but goldies: Alfred Haar, Zur Theorie der orthogonalen Funktionensysteme, 1910. The first wavelet transform. Later generalized by Ingrid Daubechies. en.wikipedia.org/wiki/Haar_wave…
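The Haar transform itself takes only a few lines: at each level, split the signal into pairwise averages (the coarse part) and pairwise differences (the details), then recurse on the averages. A minimal orthonormal sketch for signals of length 2^k.

```python
import numpy as np

def haar_transform(x):
    """Full Haar decomposition of a length-2^k signal, energy-preserving."""
    x = np.asarray(x, dtype=float)
    out = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse approximation
        diff = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
        out.append(diff)
        x = avg                                   # recurse on the coarse part
    out.append(x)                                 # overall (scaled) average
    return np.concatenate(out[::-1])              # [average, coarse->fine details]
```

Because the basis is orthonormal, the transform preserves the Euclidean norm of the signal, and a constant signal has all detail coefficients equal to zero.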

Tom Goldstein (@tomgoldsteincs) 's Twitter Profile Photo

LLMs can memorize training data, causing copyright/privacy risks. Goldfish loss is a nifty trick for training an LLM without memorizing training data. I can train a 7B model on the opening of Harry Potter for 100 gradient steps in a row, and the model still doesn't memorize.
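As a rough illustration of the idea (not the authors' implementation): exclude a pseudo-random, context-dependent subset of token positions from the next-token loss, so the model never receives complete supervision on any exact passage and cannot reproduce it verbatim. The function name, drop rate, and hashing details below are hypothetical.

```python
import numpy as np

def goldfish_mask(token_ids, k=4, seed=0):
    """Drop roughly 1/k of positions from the loss, chosen by hashing the
    local context so the same passage always drops the same tokens.

    Returns a boolean array: True = position contributes to the loss."""
    mask = np.ones(len(token_ids), dtype=bool)
    for i in range(1, len(token_ids)):
        # Hash a short window of preceding tokens (window size is illustrative).
        h = hash((seed,) + tuple(token_ids[max(0, i - 3):i]))
        if h % k == 0:
            mask[i] = False   # this token is never directly supervised
    return mask
```

Hashing the context, rather than sampling fresh randomness, makes the mask deterministic: repeated passes over the same passage keep hiding the same tokens, which is what blocks verbatim memorization.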

David Leavitt 🎲🎮🧙‍♂️🌈 (@david_leavitt) 's Twitter Profile Photo

Donald Trump’s former Sec. Def. Mark Esper: “[Trump] was suggesting that...we should bring in the troops and shoot the protesters.” Q: “The commander-in-chief was suggesting that the U.S. military shoot protesters?” Esper: “Yes, in the streets of our nation’s capital.”

Rob Nowak (@rdnowak) 's Twitter Profile Photo

We are pleased to announce plans for a special issue of Signal Processing Magazine focused on the mathematics of deep learning: signalprocessingsociety.org/blog/ieee-spm-… We look forward to your submissions!

Rob Nowak (@rdnowak) 's Twitter Profile Photo

Joe Shenouda led this project. We learned a lot about how multi-output neural nets differ from single-output nets. Interesting consequences for network compression. jmlr.org/papers/v25/23-…

Sina Alemohammad (@sinaalmd) 's Twitter Profile Photo

Does training a generative model on its own synthetic data always result in MADness/model collapse? Turns out it doesn’t! We show that a diffusion model can “self-improve” using its own synthetic data while preventing MADness/model collapse altogether! Link to paper:

Rob Nowak (@rdnowak) 's Twitter Profile Photo

We are excited to launch the first ever weekly caption contest on Toondeloo. Submit your caption at toondeloo.com over the next week and come back to vote. Please share with others, the more the merrier! Here is this week's cartoon.
