On the consistency of supervised learning with missing values.

Authors
  • JOSSE Julie
  • PROST Nicolas
  • SCORNET Erwan
  • VAROQUAUX Gael
Publication date
2019
Publication type
Other
Summary
In many application settings, the data have missing features, which makes data analysis challenging. An abundant literature addresses missing data in an inferential framework: estimating parameters and their variance from incomplete tables. Here, we consider supervised-learning settings: predicting a target when missing values appear in both training and testing data. We show the consistency of two approaches to prediction. A striking result is that the widely used method of imputing with the mean prior to learning is consistent when missing values are not informative. This contrasts with inferential settings, where mean imputation is criticized for distorting the distribution of the data. That such a simple approach can be consistent is important in practice. We also show that a predictor suited for complete observations can predict optimally on incomplete data through multiple imputation. We further analyze decision trees, which can naturally tackle empirical risk minimization with missing values thanks to their ability to handle the half-discrete nature of incomplete variables. After comparing different missing-values strategies in trees theoretically and empirically, we recommend the "missing incorporated in attribute" (MIA) method, as it can handle both non-informative and informative missing values.

[Figure: relative explained variance of the compared missing-value strategies: 0. MIA; 1. block; 2. impute mean + mask; 3. impute mean; 4. impute Gaussian + mask; 5. impute Gaussian; 6. rpart (surrogates) + mask; 7. rpart (surrogates); 8. ctree (surrogates) + mask; 9. ctree (surrogates).]
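As a rough illustration of the strategies compared above, the following minimal Python sketch (not from the paper) contrasts mean imputation, mean imputation with a missingness mask, and a tree-based learner that handles NaN values natively, on simulated data with values missing completely at random. It assumes a recent scikit-learn; the native NaN handling of HistGradientBoostingRegressor is used here only as a stand-in that is close in spirit to MIA-style splits.

```python
# Hypothetical sketch (not the authors' code): compare missing-value strategies
# on simulated data where 30% of feature values are missing completely at random.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, 2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=n)
X[rng.random(size=X.shape) < 0.3] = np.nan  # values missing completely at random

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "impute mean": make_pipeline(
        SimpleImputer(strategy="mean"), LinearRegression()
    ),
    "impute mean + mask": make_pipeline(
        # add_indicator=True appends binary missingness indicators (the "mask")
        SimpleImputer(strategy="mean", add_indicator=True), LinearRegression()
    ),
    # Trees that route NaNs to whichever child reduces the loss most,
    # roughly analogous to "missing incorporated in attribute" (MIA).
    "tree, native NaN handling": HistGradientBoostingRegressor(random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:28s} R^2 = {model.score(X_te, y_te):.3f}")
```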