Differences

Below are the differences between two revisions of the page.


mega:seminaire [2026/04/21 11:42] Raphaël BUTEZ
mega:seminaire [2026/04/22 11:26] (current version) Raphaël BUTEZ
Line 19: Line 19:
 Friday **29 May**, **joint session with the GdR IASIS, at IHP in amphi Hermite.**
  
-     * 10h30-12h00: mini-course by **[[https://brloureiro.github.io/|Bruno Loureiro]]** //Some recent developments on random matrix theory for machine learning . //\\
+     * 10h30-12h00: mini-course by **[[https://brloureiro.github.io/|Bruno Loureiro]]** //Some recent developments on random matrix theory for machine learning.//\\
 Abstract: In this mini-tutorial, I will review some recent results on the analysis of non-linear models motivated by machine learning, using tools from random matrix theory. Starting from the analysis of two-layer neural networks at initialisation (a.k.a. the random features model), we will discuss the notion of Gaussian universality, which allows one to effectively treat non-linear functions of random matrices with standard tools. We will then discuss the problem of feature learning, when the network weights are trained and develop correlations with the data, and how it can be treated with ideas that generalise universality. This will allow us to show the advantage of feature learning over kernel methods. Finally, I will discuss some of the more recent progress concerning the analysis of the spectrum of trained two-layer neural networks.
    
  
-    * 14h00-14h45: Seminar by **[[https://sites.google.com/view/vanessapiccolo/|Vanessa Piccolo]]** // .// \\
+    * 14h00-14h45: Seminar by **[[https://sites.google.com/view/vanessapiccolo/|Vanessa Piccolo]]** //Heavy-tailed random features models: new spectral phenomena.// \\
-Abstract:
+Abstract: In recent years, models from machine learning have motivated the study of nonlinear random matrices, that is, random matrices involving the entrywise application of a deterministic nonlinear function. In this talk, we will focus on matrices of the form YY* with Y = f(WX). Here, W and X are random rectangular matrices with i.i.d. centered entries, representing the weights and data in a two-layer feed-forward neural network, and f is a nonlinear activation function. This setting is commonly known as the random features model.
+When the entries of both the weights and the inputs are light-tailed, the asymptotic behavior of the eigenvalues is by now well understood and coincides with that of a simple Gaussian-equivalent model. In this talk, I will instead focus on the regime where the weights are heavy-tailed, based on recent joint work with Alice Guionnet. This regime is motivated by empirical observations in trained neural networks, where learned weights often exhibit strong correlations and heavy-tailed distributions. We will show that, in this context, the spectral behavior departs significantly from the light-tailed regime, leading to new spectral phenomena, with a richer combinatorial structure in the moment expansion.
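The random features model described in the abstract above (Y = f(WX), with the spectrum of YY*) is easy to simulate. The sketch below is purely illustrative and not taken from the talk: the dimensions, the tanh activation, the 1/sqrt(p) normalization, and the Student-t law (as a stand-in for heavy-tailed weights) are all assumptions.

```python
import numpy as np

def gram_spectrum(W, X, f=np.tanh):
    """Eigenvalues of Y Y^T / d with Y = f(W @ X), f applied entrywise."""
    Y = f(W @ X)
    d = X.shape[1]
    return np.linalg.eigvalsh(Y @ Y.T / d)  # ascending, real (Gram matrix)

rng = np.random.default_rng(0)
n, p, d = 200, 300, 400  # hidden units, input dimension, number of samples

# Light-tailed (Gaussian) vs heavy-tailed (Student-t, 2.5 degrees of freedom)
# weight matrices, with the same Gaussian data X.
W_light = rng.standard_normal((n, p)) / np.sqrt(p)
W_heavy = rng.standard_t(2.5, size=(n, p)) / np.sqrt(p)
X = rng.standard_normal((p, d))

eigs_light = gram_spectrum(W_light, X)
eigs_heavy = gram_spectrum(W_heavy, X)
# Comparing histograms of eigs_light and eigs_heavy illustrates the two
# regimes contrasted in the abstract.
```

Since YY*/d is positive semi-definite, all eigenvalues are (numerically) nonnegative in both regimes; the difference the talk addresses is in the shape of the limiting spectral distribution.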
  
    * 15h15-16h00: Seminar by **[[https://hugolebeau.github.io/|Hugo Lebeau]]** //A Random Matrix Approach to Low-Multilinear-Rank Tensor Approximation.// \\
Line 80: Line 81:
  
 * Friday **29 May**, **[[https://gdr-iasis.cnrs.fr/reunions/matrices-et-tenseurs-aleatoires-pour-linference-et-lapprentissage/|Joint MEGA & IASIS day at IHP]]**
-       * 10h30-12h00: mini-course by **[[https://brloureiro.github.io/|Bruno Loureiro]]** //.//
+       * 10h30-12h00: mini-course by **[[https://brloureiro.github.io/|Bruno Loureiro]]** //Some recent developments on random matrix theory for machine learning.//
-       * 14h00-14h45: **[[https://sites.google.com/view/vanessapiccolo/|Vanessa Piccolo]]** //.//
+       * 14h00-14h45: **[[https://sites.google.com/view/vanessapiccolo/|Vanessa Piccolo]]** //Heavy-tailed random features models: new spectral phenomena.//
-       * 14h45-15h30: **[[https://hugolebeau.github.io/|Hugo Lebeau]]** //.//
+       * 14h45-15h30: **[[https://hugolebeau.github.io/|Hugo Lebeau]]** //A Random Matrix Approach to Low-Multilinear-Rank Tensor Approximation.//
       * 16h-17h: Contributed talks
  
 • mega/seminaire.txt
 • Last modified: 2026/04/22 11:26
 • by Raphaël BUTEZ