Maximum likelihood estimation in partially observed Markov models with applications to count time series.

Authors
  • Tepmony Sim
  • François Roueff
  • Randal Douc
  • Valentine Genon-Catalot
  • Stéphane Robin
  • Philippe Soulier
  • François Le Gland
  • Jean-Michel Zakoïan
Publication date
2016
Publication type
Thesis
Summary Maximum likelihood estimation is a widely used method for fitting a parameterized time series model to a sample of observations. For well-specified models, it is essential that the estimator be consistent, i.e. that it converge to the true parameter as the sample size tends to infinity. For many time series models, for example hidden Markov models (HMMs), this "strong" consistency property can be difficult to establish. One may instead focus on consistency of the maximum likelihood estimator (MLE) in a weaker sense: as the sample size tends to infinity, the MLE converges to a set of parameters, all of which induce the same probability distribution of the observations as the true parameter. Consistency in this sense, which remains a desirable property in many time series applications, is called equivalence-class consistency. Establishing equivalence-class consistency generally requires two steps: 1) showing that the MLE converges to the set of parameters maximizing the asymptotic normalized log-likelihood, and 2) showing that every parameter in this set yields the same distribution of the observation process as the true parameter. The main purpose of this thesis is to establish the equivalence-class consistency of partially observed Markov models (PMMs), such as HMMs and observation-driven models (ODMs).
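For count time series, a standard example of an observation-driven model is the Poisson autoregression (INGARCH(1,1)), where the conditional intensity is driven by its own past and past counts. The following is a minimal illustrative sketch, not code from the thesis: it simulates such a model and computes the MLE numerically. The parameter values, sample size, optimizer choice, and the initialization of the intensity at the sample mean are all assumptions made for the illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate(n, omega, a, b):
    """Simulate Y_t | past ~ Poisson(lam_t), lam_t = omega + a*lam_{t-1} + b*Y_{t-1}."""
    lam = omega / (1.0 - a - b)  # start at the stationary mean intensity
    y = np.empty(n, dtype=int)
    for t in range(n):
        y[t] = rng.poisson(lam)
        lam = omega + a * lam + b * y[t]
    return y

def neg_loglik(theta, y):
    """Conditional negative log-likelihood, up to the Y_t! constant.

    The intensity recursion is started at the sample mean (an illustrative
    choice); under suitable conditions the effect of the initialization
    vanishes asymptotically.
    """
    omega, a, b = theta
    lam = y.mean()
    nll = 0.0
    for yt in y:
        nll += lam - yt * np.log(lam)
        lam = omega + a * lam + b * yt
    return nll

# Illustrative "true" parameter and sample size.
omega0, a0, b0 = 1.0, 0.3, 0.4
y = simulate(3000, omega0, a0, b0)

res = minimize(neg_loglik, x0=[0.5, 0.1, 0.1], args=(y,),
               method="L-BFGS-B",
               bounds=[(1e-4, None), (1e-4, 0.95), (1e-4, 0.95)])
omega_hat, a_hat, b_hat = res.x
print("MLE:", np.round(res.x, 3))
```

In this model the parameter is identifiable, so the equivalence class reduces to a single point and the MLE should approach the true parameter as the sample grows; in the non-identifiable PMMs studied in the thesis, the limit is instead the whole set of parameters inducing the same law of the observations.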