
Welcome - Laboratoire Jacques-Louis Lions

Faculty positions (Enseignants-Chercheurs):

Click on: Operation POSTES on the SMAI website

Click on: GALAXIE

Click on: the positions open at the Laboratoire Jacques-Louis Lions in 2017

 


Key figures

217 people work at the LJLL

83 permanent staff

47 faculty members (enseignants-chercheurs)

13 CNRS researchers

9 INRIA researchers

2 CEREMA researchers

12 engineers, technicians and administrative staff

134 non-permanent staff

85 PhD students

16 postdocs and ATER

5 chairs and delegations

12 emeritus members and volunteer collaborators

16 visitors

 

Figures as of January 2014

 

Leçons Jacques-Louis Lions 2017: Emmanuel Candès

14-17 March 2017

 

Click here for the pdf version of the announcement of the Leçons Jacques-Louis Lions 2017 (Emmanuel Candès)

Click here for the jpg version (0.4 MB) of the poster of the Leçons Jacques-Louis Lions 2017 (Emmanuel Candès)

Click here for the pdf version (16 MB) of the poster of the Leçons Jacques-Louis Lions 2017 (Emmanuel Candès)

 

Given by Emmanuel Candès (Stanford University) from 14 to 17 March 2017, the Leçons Jacques-Louis Lions 2017 will comprise
— a mini-course
Statistics for the big data era
3 sessions, Tuesday 14, Wednesday 15 and Thursday 16 March 2017, from 11:30 to 13:00
seminar room of the Laboratoire Jacques-Louis Lions
corridor 15-16, 3rd floor, room 09 (15-16-3-09)
Université Pierre et Marie Curie, Campus Jussieu, 4 place Jussieu, Paris 5ème
— and a colloquium
Around the reproducibility of scientific research in the big data era: what statistics can offer?
Friday 17 March 2017, from 14:00 to 15:00
please note the unusual venue: lecture hall 45A
Université Pierre et Marie Curie, Campus Jussieu, 4 place Jussieu, Paris 5ème
(entrance facing tower 45, Jussieu plaza level)

Abstract of the mini-course
Statistics for the big data era
For a long time, science has operated as follows: a scientific theory can only be empirically tested, and only after it has been advanced. Predictions are deduced from the theory and compared with the results of decisive experiments so that they can be falsified or corroborated. This principle, formulated by Karl Popper and operationalized by Ronald Fisher, has guided the development of scientific research and statistics for nearly a century. We have, however, entered a new world where large data sets are available prior to the formulation of scientific theories. Researchers mine these data relentlessly in search of new discoveries, and it has been observed that we have run into the problem of irreproducibility. Consider the April 23, 2013 Nature editorial: “Over the past year, Nature has published a string of articles that highlight failures in the reliability and reproducibility of published research.” The field of statistics needs to re-invent itself to adapt to the new reality where scientific hypotheses/theories are generated by data snooping. We will make the case that statistical science is taking on this great challenge and discuss exciting achievements.
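
To make the multiplicity problem behind this abstract concrete, here is a minimal Python simulation (our illustration, not material from the lecture): when 1,000 purely null hypotheses are each tested at the conventional 5% level, about 50 spurious "discoveries" appear by chance alone.

```python
import numpy as np

# Illustrative sketch: why unrestrained data mining produces spurious
# "discoveries". All 1,000 hypotheses below are null by construction,
# yet testing each at the 5% level yields roughly 50 "significant" ones.
rng = np.random.default_rng(42)
n_hypotheses, n_samples = 1000, 100

# Pure noise: no variable has any real effect.
data = rng.normal(size=(n_hypotheses, n_samples))

# One-sample t-statistic per hypothesis (H0: mean = 0).
t_stats = data.mean(axis=1) / (data.std(axis=1, ddof=1) / np.sqrt(n_samples))

# Crude two-sided test at level 0.05 using the normal approximation.
significant = np.abs(t_stats) > 1.96
print(f"{significant.sum()} 'discoveries' out of {n_hypotheses} true nulls")
# Expected output: about 50, i.e. ~5% false positives by construction.
```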

Abstract of the colloquium
Around the reproducibility of scientific research in the big data era: what statistics can offer?
The big data era has created a new scientific paradigm: collect data first, ask questions later. When the universe of scientific hypotheses that are being examined simultaneously is not taken into account, inferences are likely to be false. The consequence is that follow-up studies are often unable to reproduce earlier reported findings or discoveries. This reproducibility failure bears a substantial cost, and this talk is about new statistical tools to address this issue. In the last two decades, statisticians have developed many techniques for addressing this look-everywhere effect, whose proper use would help in alleviating the problems discussed above. This lecture will discuss some of these proposed solutions, including the Benjamini-Hochberg procedure for false discovery rate (FDR) control and the knockoff filter, a method which reliably selects which of the many potentially explanatory variables of interest (e.g. the presence or absence of a mutation) are indeed truly associated with the response under study (e.g. the log fold increase in HIV drug resistance).
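
As a companion to the abstract, here is a minimal Python sketch of the Benjamini-Hochberg step-up procedure mentioned above. This is an illustration of the standard procedure, not code from the lecture; the function name, variable names and the simulated example are ours.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure for FDR control at level q.

    Rejects the hypotheses with the k smallest p-values, where k is the
    largest (1-based) index such that p_(k) <= k * q / m.
    Returns a boolean mask marking the rejected hypotheses.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                      # indices sorting p ascending
    thresholds = np.arange(1, m + 1) * q / m   # k * q / m for k = 1..m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()         # 0-based position of largest passing k
        rejected[order[:k + 1]] = True
    return rejected

# Simulated example: 950 nulls (uniform p-values) and 50 true signals.
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(0, 0.001, 50), rng.uniform(size=950)])
print(benjamini_hochberg(p, q=0.05).sum(), "discoveries at FDR level 0.05")
```

Under independence of the p-values, this step-up rule controls the expected proportion of false discoveries among the rejections at level q, which is exactly the look-everywhere correction the abstract refers to.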