Gabriel Peyré (@gabrielpeyre)'s Twitter Profile
Gabriel Peyré

@gabrielpeyre

@CNRS researcher at @ENS_ULM. One tweet a day on computational mathematics.

ID:3097519864

Link: http://www.gpeyre.com · Joined: 19-03-2015 19:28:29

5.6K Tweets

93.0K Followers

457 Following

Gabriel Peyré (@gabrielpeyre):

The gradient field defines the steepest descent direction. The gradient flow dynamics segment the space into the attraction basins of the local minimizers. en.wikipedia.org/wiki/Gradient en.wikipedia.org/wiki/Gradient_…
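
As a toy illustration (an example of mine, not from the tweet): discretizing the gradient flow x' = -∇f(x) on the double well f(x) = x⁴/4 - x²/2 shows that starting points on either side of the saddle at 0 flow to different local minimizers.

```python
import numpy as np

def grad_descent(f_grad, x0, step=0.05, n_iter=500):
    """Follow the discretized gradient flow x' = -grad f(x)."""
    x = x0
    for _ in range(n_iter):
        x = x - step * f_grad(x)
    return x

# Double well f(x) = x^4/4 - x^2/2 has local minimizers at -1 and +1.
grad = lambda x: x**3 - x

# Starting points on either side of the saddle at 0 land in different basins.
left = grad_descent(grad, -0.3)
right = grad_descent(grad, 0.4)
print(left, right)   # converge to -1 and +1 respectively
```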

Gabriel Peyré (@gabrielpeyre):

Oldies but goldies: S. Linnainmaa, The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors, 1970. Often credited as the first appearance of reverse mode automatic differentiation, which is at the heart of many recent

Gabriel Peyré (@gabrielpeyre):

Is there a way in Python to compute the max along the rows of a sparse matrix, but assuming non-stored entries are -inf (so that they are not taken into account)? So basically switching from the (+, ×) algebra to the (max, +) algebra.
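
One possible answer (a sketch of mine using scipy's CSR internals; the Python loop is kept simple for clarity and could be vectorized with `np.maximum.reduceat` on the non-empty rows):

```python
import numpy as np
import scipy.sparse as sp

def sparse_row_max(A):
    """Row-wise max of a sparse matrix, treating non-stored entries as -inf:
    only explicitly stored values are taken into account."""
    A = A.tocsr()
    out = np.full(A.shape[0], -np.inf)
    for i in range(A.shape[0]):
        row = A.data[A.indptr[i]:A.indptr[i + 1]]   # stored entries of row i
        if row.size:
            out[i] = row.max()
    return out

A = sp.csr_matrix(np.array([[0.0, -2.0, 0.0],
                            [0.0,  0.0, 0.0],
                            [3.0,  1.0, 0.0]]))
print(sparse_row_max(A))   # [-2., -inf, 3.]
```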

Gabriel Peyré (@gabrielpeyre):

Convolutional neural networks are shift-invariant representations obtained by iterating convolutions and pointwise non-linearities. Introduced by Yann LeCun. en.wikipedia.org/wiki/Convoluti…
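
The shift property (more precisely, equivariance: shifting the input shifts the output) can be checked numerically. A minimal sketch with a circular 1-D convolution and a ReLU, on a toy kernel of my choosing:

```python
import numpy as np

def conv1d(x, k):
    """Circular 1-D convolution (periodic boundary), the building block of a CNN layer."""
    n = len(x)
    return np.array([sum(x[(i - j) % n] * k[j] for j in range(len(k))) for i in range(n)])

relu = lambda x: np.maximum(x, 0)

x = np.array([1.0, -2.0, 3.0, 0.5, -1.0])
k = np.array([0.5, -1.0, 0.25])

layer = lambda x: relu(conv1d(x, k))   # one convolution + pointwise non-linearity

# Shifting the input shifts the output: the layer is shift-equivariant.
shift = lambda x: np.roll(x, 1)
print(np.allclose(layer(shift(x)), shift(layer(x))))   # True
```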

Rob Nowak (@rdnowak):

Here’s my take on the “mathematical foundations” of machine learning and AI. These course notes cover the basics of statistical learning theory, optimization, and functional analysis. nowak.ece.wisc.edu/MFML.pdf

Gabriel Peyré (@gabrielpeyre):

Oldies but goldies: Alan Turing, The Chemical Basis of Morphogenesis, 1952. Showed that reaction-diffusion generates complex patterns and can serve as mathematical models for biological processes. en.wikipedia.org/wiki/The_Chemi… dna.caltech.edu/courses/cs191/…

Gabriel Peyré (@gabrielpeyre):

Markov chain Monte Carlo methods are used to sample from a Gibbs distribution without knowing its normalizing constant. As the temperature epsilon gets small, samples cluster close to the minimizers. Metropolis-Hastings is the simplest provably convergent such Markov chain.
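
A minimal random-walk Metropolis-Hastings sketch on a double well of my choosing: the acceptance ratio only involves the ratio of densities, so the normalizing constant cancels, and at small epsilon the samples indeed cluster near the minimizers ±1.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_hastings(f, eps, x0=0.0, n=20000, sigma=0.5):
    """Sample p(x) proportional to exp(-f(x)/eps); the normalizing constant never appears."""
    x, samples = x0, []
    for _ in range(n):
        y = x + sigma * rng.standard_normal()   # symmetric random-walk proposal
        # Accept with probability min(1, p(y)/p(x)) = min(1, exp((f(x)-f(y))/eps)).
        if f(y) <= f(x) or rng.random() < np.exp((f(x) - f(y)) / eps):
            x = y
        samples.append(x)
    return np.array(samples)

f = lambda x: (x**2 - 1.0)**2          # double well with minimizers at +1 and -1
s = metropolis_hastings(f, eps=0.05)   # small temperature
frac_near_min = np.mean(np.abs(np.abs(s) - 1.0) < 0.3)
print(frac_near_min)   # most samples cluster near the minimizers
```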

Gabriel Peyré (@gabrielpeyre):

Oldies but goldies: L. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, 1992. Introduced total variation as an edge-preserving regularizer. Initiated lots of research on variational and PDE methods in imaging. en.wikipedia.org/wiki/Total_var…

Gabriel Peyré (@gabrielpeyre):

Strong convexity and smoothness are the two key hypotheses that make optimization well-posed and give descent schemes a linear convergence rate. They define lower and upper bounding quadratic approximants. Their ratio generalizes the condition number of linear systems. en.wikipedia.org/wiki/Condition…
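
A quadratic f(x) = x'Ax/2 makes this concrete: the strong-convexity and smoothness constants are μ = λmin(A) and L = λmax(A), and gradient descent with step 1/L contracts the error by a factor at most 1 - μ/L per iteration. A small sketch (matrix and constants chosen for illustration):

```python
import numpy as np

# f(x) = 1/2 x^T A x with mu = lambda_min(A) = 1, L = lambda_max(A) = 10.
A = np.diag([1.0, 10.0])           # condition number kappa = L/mu = 10
mu, L = 1.0, 10.0

x = np.array([1.0, 1.0])
errs = []
for _ in range(50):
    x = x - (1.0 / L) * (A @ x)    # gradient step, stepsize 1/L
    errs.append(np.linalg.norm(x))

# Error decays linearly, with per-iteration factor at most 1 - mu/L = 1 - 1/kappa.
rates = [errs[k + 1] / errs[k] for k in range(len(errs) - 1)]
print(max(rates))   # at most 1 - mu/L = 0.9
```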

Gabriel Peyré (@gabrielpeyre):

Oldies but goldies: Vladimir Marchenko and Leonid Pastur, Distribution of eigenvalues for some sets of random matrices, 1967. Describes the asymptotic behavior of singular values of large rectangular random matrices. en.wikipedia.org/wiki/Marchenko…
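
The prediction is easy to check empirically (a sketch of mine, with dimensions chosen for speed): for an n×p matrix with i.i.d. N(0, 1/n) entries and aspect ratio c = p/n, the squared singular values asymptotically fill the interval [(1-√c)², (1+√c)²].

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 500                           # aspect ratio c = p/n = 0.25
X = rng.standard_normal((n, p)) / np.sqrt(n)
s = np.linalg.svd(X, compute_uv=False)     # singular values

c = p / n
lo, hi = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2   # Marchenko-Pastur edges for s**2
print(s.min()**2, s.max()**2)   # close to the predicted edges 0.25 and 2.25
```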

Gabriel Peyré (@gabrielpeyre):

The co-area formula is the most fundamental tool of geometric measure theory. It expresses the total variation semi-norm as the integral of the perimeters of the level sets. This enables the computation of TV for non-smooth functions. en.wikipedia.org/wiki/Coarea_fo… en.wikipedia.org/wiki/Geometric…
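
In 1-D the formula can be verified numerically (a toy discrete example of mine): the sum of the jumps of f equals the integral, over the level t, of the number of crossings of t, which is the perimeter of the level set {f > t}.

```python
import numpy as np

f = np.array([0.0, 2.0, 1.0, 3.0, 0.5])

# Direct (discrete) total variation: sum of the jumps.
tv_direct = np.sum(np.abs(np.diff(f)))

# Co-area formula: integrate over the level t the perimeter of the level set
# {f > t} -- in 1-D, the number of times f crosses the level t.
ts = np.linspace(f.min(), f.max(), 100001)
mins, maxs = np.minimum(f[:-1], f[1:]), np.maximum(f[:-1], f[1:])
crossings = ((mins < ts[:, None]) & (ts[:, None] < maxs)).sum(axis=1)
tv_coarea = crossings.sum() * (ts[1] - ts[0])
print(tv_direct, tv_coarea)   # both approximately 7.5
```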

Sibylle Marcotte (@SibylleMarcotte):

Newbies but goldies 😌: “Keep the Momentum: Conservation Laws beyond Euclidean Gradient Flows” with Gabriel Peyré and Rémi Gribonval.
We study conservation laws during the (Euclidean or not) gradient or momentum flow of neural networks.
arxiv.org/abs/2405.12888

Gabriel Peyré (@gabrielpeyre):

Oldies but goldies: George Dantzig, Origins of the simplex method, 1987. Dantzig invented the simplex method in 1947, and it remains one of the workhorses of linear programming, used in countless applications. en.wikipedia.org/wiki/Simplex_a… en.wikipedia.org/wiki/George_Da…
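
Solving a linear program today is a one-liner; a toy example of mine with scipy's `linprog` (whose default HiGHS backend is a modern descendant of the simplex method, not Dantzig's original tableau implementation):

```python
import numpy as np
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-1.0, -2.0],
              A_ub=[[1.0, 1.0], [1.0, 0.0]],
              b_ub=[4.0, 3.0],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimum at the vertex (0, 4) with value 8
```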

Gabriel Peyré (@gabrielpeyre):

The distance function defines an offset of shapes and curves. Its singularities define the medial axis. en.wikipedia.org/wiki/Signed_di… en.wikipedia.org/wiki/Distance_…
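
Offsets are easy to compute on a grid via the distance transform; a sketch of mine using scipy (the shape and offset radius are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# A binary shape: a filled disk of radius 20 on a grid.
n = 101
y, x = np.mgrid[:n, :n]
disk = (x - 50)**2 + (y - 50)**2 <= 20**2

# Distance function to the shape: zero inside, Euclidean distance outside.
d = distance_transform_edt(~disk)

# An offset of the shape: all pixels within distance 10 of the disk.
offset = d <= 10
print(disk.sum(), offset.sum())   # the offset is a dilated (larger) disk
```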

Gabriel Peyré (@gabrielpeyre):

Oldies but goldies: Gaspard Monge, Mémoire sur la théorie des déblais et des remblais, 1776. Defines the optimal transport problem as an optimization over a set of maps. Intractable both in theory and practice before its reformulation by Kantorovich in 1942 and its resolution by
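
For two discrete clouds of equal size, Monge's problem reduces to finding the cost-minimizing permutation; a small sketch of mine using scipy's assignment solver. In 1-D with a quadratic cost, the optimal map is the monotone (sorting) one.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Monge's problem between two discrete point clouds of equal size:
# find the permutation minimizing the total squared transport cost.
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5)
C = (x[:, None] - y[None, :])**2          # pairwise cost matrix
rows, cols = linear_sum_assignment(C)     # optimal permutation
print(cols, C[rows, cols].sum())
```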

Gabriel Peyré (@gabrielpeyre):

The structure tensor is the local covariance matrix field of the gradient vector field. It encodes the local anisotropy of an image. At the heart of anisotropic filtering and corner detection. en.wikipedia.org/wiki/Structure…
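
A minimal implementation sketch (the image and smoothing scale are illustrative choices of mine): smooth the outer products of the gradient, then read off the anisotropy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(img, sigma=2.0):
    """Local covariance of the gradient field: T = G_sigma * (grad I grad I^T)."""
    gy, gx = np.gradient(img)                 # derivatives along rows (y) and columns (x)
    Txx = gaussian_filter(gx * gx, sigma)
    Txy = gaussian_filter(gx * gy, sigma)
    Tyy = gaussian_filter(gy * gy, sigma)
    return Txx, Txy, Tyy

# Image with a vertical edge: the gradient energy is concentrated along x.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
Txx, Txy, Tyy = structure_tensor(img)
print(Txx.sum() > Tyy.sum())   # True: the tensor detects the edge's anisotropy
```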

Gabriel Peyré (@gabrielpeyre):

Oldies but goldies: Carl de Boor, On calculating with B-splines, 1971. Introduced (with Cox) an efficient algorithm for evaluating B-splines, which revolutionized Computer Aided Geometric Design. en.wikipedia.org/wiki/De_Boor%2…
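
scipy's `BSpline` evaluates splines with de Boor's recursion internally; a small sketch with knots and coefficients of my choosing, also checking the partition-of-unity property of the B-spline basis:

```python
import numpy as np
from scipy.interpolate import BSpline

k = 2                                                  # degree (quadratic)
t = np.array([0, 0, 0, 1, 2, 3, 3, 3], dtype=float)   # clamped knots, len(t) = len(c) + k + 1
c = np.array([1.0, 2.0, 0.0, -1.0, 3.0])              # control coefficients
spl = BSpline(t, c, k)

# Clamped ends interpolate the first and last coefficients.
print(spl(0.0), spl(3.0))

# Partition of unity: all-ones coefficients give the constant function 1.
ones = BSpline(t, np.ones(5), k)
print(np.allclose(ones(np.linspace(0, 3, 31)), 1.0))   # True
```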
