
Laboratoire Jacques-Louis Lions



Key figures

189 people work at the LJLL

86 permanent members

80 permanent researchers and faculty members

6 engineers, technicians, and administrative staff

103 non-permanent members

74 PhD students

15 postdocs and ATER (temporary teaching and research staff)

14 emeritus members and volunteer collaborators

 

Figures as of January 2022

 

Leçons Jacques-Louis Lions 2023: Andrew Stuart

 

Leçons Jacques-Louis Lions 2023 (Andrew Stuart)

December 12-15, 2023

 

Click here for the pdf version of the program of the Leçons Jacques-Louis Lions 2023 (Andrew Stuart)

Click here for the jpg version (0.2 MB) of the poster of the Leçons Jacques-Louis Lions 2023 (Andrew Stuart)

Click here for the pdf version (10.7 MB) of the poster of the Leçons Jacques-Louis Lions 2023 (Andrew Stuart)


Given by Andrew Stuart (California Institute of Technology, Caltech), the Leçons Jacques-Louis Lions 2023 consisted of:

— a mini-course entitled
Ensemble Kalman filter: Algorithms, analysis and applications
3 sessions, Tuesday December 12, Wednesday December 13, and Thursday December 14, 2023, from 11:00 to 12:30,
Seminar room of the Laboratoire Jacques-Louis Lions,
building 15-16, 3rd floor, room 09 (15-16-3-09),
Sorbonne Université, Campus Jussieu, 4 place Jussieu, Paris 5ème,

— and a colloquium entitled
Operator learning: Acceleration and discovery of computational models
Friday December 15, 2023, from 14:00 to 15:00,
Amphithéâtre 25,
entrance facing tower 25, Jussieu deck level,
Sorbonne Université, Campus Jussieu, 4 place Jussieu, Paris 5ème.

All the lectures were given in person and streamed live via Zoom.

 

Abstract of the mini-course
Ensemble Kalman filter: Algorithms, analysis and applications
In 1960 Rudolf Kalman [1] published what is arguably the first paper to develop a systematic, principled approach to the use of data to improve the predictive capability of dynamical systems. As our ability to gather data grows at an enormous rate, the importance of this work continues to grow with it. Kalman's paper is confined to linear dynamical systems subject to Gaussian noise; the work of Geir Evensen [2] in 1994 opened up far wider applicability of Kalman's ideas by introducing the ensemble Kalman filter.
The ensemble Kalman filter applies to the setting in which nonlinear and noisy observations are used to make improved predictions of the state of a Markov chain. The algorithm results in an interacting particle system combining elements of the Markov chain and the observation process. In these lectures I will introduce a unifying mean-field perspective on the algorithm, derived in the limit of an infinite number of interacting particles. I will then describe how the methodology can be used to study inverse problems, opening up diverse applications beyond prediction in dynamical systems. Finally I will describe analysis of the methodology, in terms of both accuracy and uncertainty quantification; despite its widespread adoption in applications, a complete mathematical theory is lacking and there are many opportunities for analysis in this area.

Lecture 1: The algorithm
Lecture 2: Inverse problems and applications
Lecture 3: Analysis of accuracy and uncertainty quantification

[1] R. Kalman, A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82:35–45, 1960.
[2] G. Evensen, Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research: Oceans, 99(C5):10143–10162, 1994.
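
The analysis step of the ensemble Kalman filter described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration of the standard perturbed-observation form, not code from the lectures; the function name and interface are invented for the example:

```python
import numpy as np

def enkf_update(ensemble, h, y, R, rng):
    """One ensemble Kalman analysis step (perturbed-observation form).

    ensemble : (J, d) array of J particles approximating the state distribution
    h        : observation operator mapping a state vector to a k-vector
    y        : observed data, shape (k,)
    R        : observation noise covariance, shape (k, k)
    """
    J = ensemble.shape[0]
    H = np.array([h(x) for x in ensemble])   # (J, k) predicted observations
    dx = ensemble - ensemble.mean(axis=0)    # state anomalies
    dh = H - H.mean(axis=0)                  # predicted-observation anomalies
    C_xh = dx.T @ dh / (J - 1)               # state/observation cross-covariance
    C_hh = dh.T @ dh / (J - 1)               # predicted-observation covariance
    K = C_xh @ np.linalg.inv(C_hh + R)       # Kalman gain
    # perturbing the data keeps the analysis ensemble spread consistent with R
    perturbed = y + rng.multivariate_normal(np.zeros(y.shape[0]), R, size=J)
    return ensemble + (perturbed - H) @ K.T
```

With a linear observation operator and Gaussian noise this reproduces the classical Kalman update as the ensemble size grows; the mean-field perspective of the mini-course studies precisely that infinite-particle limit.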

Notes of Andrew Stuart's mini-course, lecture 1, December 12, 2023 (0.7 MB)

Notes of Andrew Stuart's mini-course, lecture 2, December 13, 2023 (0.9 MB)

Slides of Andrew Stuart's mini-course, lecture 3, December 14, 2023 (0.4 MB)


Abstract of the colloquium
Operator learning: Acceleration and discovery of computational models
Neural networks have shown great success at learning function approximators between spaces X and Y, in the setting where X is a finite dimensional Euclidean space and where Y is either a finite dimensional Euclidean space (regression) or a set of finite cardinality (classification); the neural networks learn the approximator from N data pairs (x_n, y_n). In many problems arising in physics it is desirable to learn maps between spaces of functions X and Y; this may be either for the purposes of scientific discovery, or to provide cheap surrogate models which accelerate computations. New ideas are needed to successfully address this learning problem in a scalable, efficient manner.
In this talk I will give an overview of the methods that have been introduced in this area and describe theoretical results underpinning the emerging methodologies. Illustrations will be given from a variety of PDE-based problems including learning the solution operator for dissipative PDEs, learning the homogenization operator in various settings, and learning the smoothing operator in data assimilation.

Slides of Andrew Stuart's colloquium, December 15, 2023 (1.9 MB)

 


 

For information on the other Leçons Jacques-Louis Lions, see
https://www.ljll.math.upmc.fr/fr/evenements/lecons-jacques-louis-lions