Rob Nowak (@rdnowak)'s Twitter Profile
Rob Nowak

@rdnowak

Director of the Center for the Advancement of Progress

ID: 32713500

Joined: 18-04-2009 01:33:23

1.0K Tweets

1.9K Followers

438 Following

Jifan Zhang (@jifan_zhang):

Check out our label-efficient SFT paper for examples of the clever prompting methods: x.com/rdnowak/status…

Csaba Szepesvari (@CsabaSzepesvari):

I am excited for this upcoming talk by Andrew about 'optimally' exploring given some offline data! Bonus: We'll hear about the gap between verifiable and unverifiable learning! I hope to see you tomorrow!

Rob Nowak (@rdnowak):

in addition to coffee and cappuccino, apparently this place in Madison also sells solutions to approximation problems with multilayer perceptrons

Gabriel Peyré (@gabrielpeyre):

I have put online the content of ten 2-hour lectures on the basic maths of AI. It contains raw transcripts, so it is probably not super useful, but it might help give some ideas. mathematical-tours.github.io/maths-ia-cours…

Kangwook Lee (@Kangwook_Lee):

LLMs excel at in-context learning; they identify patterns from labeled examples in the prompt and make predictions accordingly.

Many believe more in-context examples are better.

However, that's not always true if the early ascent phenomenon occurs.
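
For concreteness, here is a minimal sketch of what such a prompt looks like and how one would sweep the number of in-context examples to probe for the early ascent effect. The toy sentiment task, the labels, and the prompt format are all illustrative assumptions, not taken from the tweet or any specific paper.

```python
# Minimal sketch of an in-context learning prompt: the model sees k
# labeled examples and must infer the pattern for a new query.
# (The sentiment task and labels here are purely illustrative.)

examples = [
    ("the movie was wonderful", "positive"),
    ("a complete waste of time", "negative"),
    ("I would watch it again", "positive"),
    ("the plot made no sense", "negative"),
]

def build_icl_prompt(examples, query, k):
    """Format the first k labeled examples followed by the unlabeled query."""
    lines = [f"Input: {x}\nLabel: {y}" for x, y in examples[:k]]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

# Sweeping k is how one would look for early ascent: accuracy can rise
# with a few examples, dip, and only later recover as k keeps growing.
for k in range(1, len(examples) + 1):
    print(f"--- prompt with {k} in-context example(s) ---")
    print(build_icl_prompt(examples, "an instant classic", k))
```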

Gabriel Peyré (@gabrielpeyre):

Oldies but goldies: M. Belkin, P. Niyogi, Laplacian Eigenmaps for Dimensionality Reduction and Data Representation, 2003. Non-linear dimensionality reduction by embedding data points using the eigenvectors of a graph Laplacian as coordinates. en.wikipedia.org/wiki/Nonlinear…
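
The method is short enough to sketch directly. Below is a minimal illustration (assuming a plain k-nearest-neighbor graph with 0/1 weights and the unnormalized Laplacian; the toy circle data is an assumption, and this is not the authors' code): eigenvectors of the graph Laplacian for the smallest nonzero eigenvalues serve as the embedding coordinates.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy data: a noisy circle in 2D, to be embedded into 1D.
t = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((200, 2))

# 1. Symmetric k-nearest-neighbor adjacency matrix with 0/1 weights.
k = 10
D = cdist(X, X)
W = np.zeros_like(D)
for i in range(len(X)):
    for j in np.argsort(D[i])[1:k + 1]:  # skip self (distance 0)
        W[i, j] = W[j, i] = 1.0

# 2. Unnormalized graph Laplacian: degree matrix minus adjacency.
L = np.diag(W.sum(axis=1)) - W

# 3. Eigenvectors for the smallest eigenvalues. The first one is
#    (near-)constant with eigenvalue ~0 and is discarded; the next
#    ones are the new coordinates.
eigvals, eigvecs = eigh(L)
embedding = eigvecs[:, 1:2]  # 1D embedding; use [:, 1:m+1] for m dims

print(embedding.shape)  # (200, 1)
```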

Randall Balestriero (@randall_balestr):

Keep training your Deep Network past the point of perfect training set accuracy and its robustness will increase. Why? Because the spline partition keeps concentrating near the decision boundary ➡️the DN is affine all around the training samples!
arxiv.org/abs/2402.15555
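
To unpack the spline language, here is a toy sketch (an illustration of the piecewise-affine view, not the paper's code; the tiny random network is an assumption): a ReLU network is affine on each region of its input-space partition, and the region is identified by the on/off pattern of the ReLU units.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny random one-hidden-layer ReLU network.
W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

def activation_pattern(x):
    """Binary code of which ReLUs fire; constant within one affine region."""
    return W1 @ x + b1 > 0

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

x = np.array([0.3, -0.7])
eps = 1e-4 * rng.standard_normal(2)

# If a point and a small perturbation share the same pattern, the network
# is exactly affine on a region containing both. The tweet's claim is that
# continued training pushes region boundaries toward the decision boundary
# and away from the data, enlarging these affine zones around samples.
print("same affine region:", np.array_equal(activation_pattern(x),
                                            activation_pattern(x + eps)))
print("outputs:", forward(x), forward(x + eps))
```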

Dimitris Papailiopoulos (@DimitrisPapail):

'Looped Transformers are Better at Learning Learning Algorithms' in ICLR

Liu Yang (@Yang_Liuu) offers a simple and clean message in this paper.

When it comes to emulating learning algorithms, using a looped transformer (i.e., one where the iterative structure is hardcoded) helps a lot.
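
A minimal sketch of the contrast (illustrative only, not the paper's implementation; layer sizes and loop count are assumptions), assuming PyTorch: rather than stacking distinct layers, one shared layer is applied T times, so the iterative structure of a learning algorithm is hardcoded into the architecture.

```python
import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    """Apply ONE shared transformer block n_loops times (weight tying)."""

    def __init__(self, d_model=64, nhead=4, n_loops=12):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model, nhead, batch_first=True)  # the single shared block
        self.n_loops = n_loops

    def forward(self, x):
        for _ in range(self.n_loops):  # same weights at every iteration,
            x = self.layer(x)          # like repeated steps of an algorithm
        return x

model = LoopedTransformer()
tokens = torch.randn(2, 10, 64)  # (batch, sequence, d_model)
print(model(tokens).shape)       # torch.Size([2, 10, 64])
```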

AreaRED (@AreaRED):

Who wouldn’t want to be a Badger after watching this?!?! Thanks Sam Li for helping show the world the #1 college town in America. Big things are indeed coming… 🦡

Aditya Gopalan (@today_itself):

Aadirupa Saha: It also features a stalwart panel comprising Craig Boutilier, Rob Nowak, kamalikac & Tobias Schnabel!

sites.google.com/view/pref-lear…

We want to hear from you about topics you'd love to see covered in our main session or discussed in the panel. Write to us ASAP w/ suggestions! (2/2)
