Fault tolerance to detect undesirable neural network behavior.

Authors
  • LUSSIER Benjamin
  • SCHON Walter
  • GERONIMI Stephane
  • RHAZALI Kaoutar
Publication date
2017
Publication type
Proceedings Article
Summary Due to the rapid progress of artificial neural networks, their use is spreading to many domains. However, they are not permitted in critical applications because their behavior is considered unpredictable and unsafe. In this paper, we present two approaches that provide software fault tolerance in neural networks in order to improve their safety. Our goal is to develop neural networks capable of detecting unknown situations, i.e., inputs that differ from those learned. The first approach uses diversified redundancy at the network level; the second adds a new output to the network that recognizes outliers.
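The first approach, diversified redundancy, can be illustrated with a minimal sketch (this is not the authors' implementation; the `predictors` interface and the disagreement rule are assumptions for illustration): several independently trained, diverse networks each classify the input, and any disagreement among them is treated as a sign of an unknown situation.

```python
def diversified_redundancy_check(predictors, x):
    """Hypothetical helper: `predictors` are diverse models, each mapping
    an input to a class label. Returns (label, is_unknown)."""
    votes = [p(x) for p in predictors]
    # Unanimous agreement -> trust the prediction; any disagreement
    # -> flag the input as an unknown situation (possible outlier).
    if all(v == votes[0] for v in votes):
        return votes[0], False
    return None, True


# Toy stand-ins for diverse networks: same task, different decision
# boundaries, so they agree on "easy" inputs and disagree near the edge.
nets = [
    lambda x: "cat" if x > 0 else "dog",
    lambda x: "cat" if x > -1 else "dog",
    lambda x: "cat" if x > 1 else "dog",
]

diversified_redundancy_check(nets, 5)    # all agree -> ("cat", False)
diversified_redundancy_check(nets, 0.5)  # disagreement -> (None, True)
```

Under this sketch, a disagreement does not identify the correct label; it only signals that the input may lie outside the learned distribution, which is the detection goal described in the summary.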
Topics of the publication
  • No themes identified
Themes detected by scanR from retrieved publications. For more information, see https://scanr.enseignementsup-recherche.gouv.fr