Study of two stochastic control problems: American put with discrete dividends, and dynamic programming principle with probability constraints.

Authors
Publication date
2013
Publication type
Thesis
Summary

In this thesis, we treat two stochastic optimal control problems, each corresponding to one part of the thesis. The first problem is very specific: the valuation of American put options in the presence of discrete dividends (Part I). The second is more general: proving a dynamic programming principle under probability constraints in a discrete-time framework (Part II). Although the two problems are quite distinct, the dynamic programming principle is at the heart of both.

The relation between the valuation of an American put and a free boundary problem was established by McKean. The boundary of this problem has a clear economic meaning: at each time, it is the upper bound of the set of asset prices at which it is preferable to exercise the right to sell immediately. The shape of this boundary in the presence of discrete dividends has not, to our knowledge, been characterized. Under the assumption that the dividend is a deterministic function of the asset price just before its payment, we study how the boundary is modified. In the vicinity of the dividend dates, and in the model of Chapter 3, we characterize the monotonicity of the boundary and, in some cases, quantify its local behavior. In Chapter 3, we also show that the smooth-fit property holds at all dates except the dividend dates. In both Chapters 3 and 4, we give conditions guaranteeing the continuity of the boundary outside the dividend dates.

Part II is originally motivated by the optimal management of a hydroelectric plant's production under a probability constraint on the water level of the dam at certain dates. Building on Balder's work on the Young-measure relaxation of optimal control problems, we focus on solving such problems by dynamic programming. In Chapter 5, we extend results of Evstigneev to the framework of Young measures. We then establish that some problems with conditional-expectation constraints can be solved by dynamic programming. Using the work of Bouchard, Elie, Soner and Touzi on stochastic target problems with controlled loss, we show in Chapter 6 that a problem with expectation constraints can be reduced to a problem with conditional-expectation constraints. As a special case, we prove that the original dam-management problem can be solved by dynamic programming.
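As a toy illustration of the Part I setting (not the model studied in the thesis), the exercise boundary of an American put with one discrete cash dividend can be traced numerically in a Cox-Ross-Rubinstein binomial tree. All parameters below are illustrative assumptions, and the dividend rule — the stock drops by min(c, S) at the dividend date — is one simple instance of "a deterministic function of the asset price just before its payment":

```python
import math

def american_put_boundary(s0=100.0, strike=100.0, r=0.05, sigma=0.2,
                          maturity=1.0, steps=200, div_time=0.5, div_cash=3.0):
    """CRR binomial tree for an American put with one discrete cash dividend.

    Toy model: the stock loses a fixed cash amount div_cash at the tree date
    nearest div_time (floored at zero).  Returns (price, boundary), where
    boundary[k] is the largest node price at step k at which immediate
    exercise is optimal (None if no node at that step is in the exercise
    region).
    """
    dt = maturity / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    div_step = round(div_time / dt)

    def stock(k, j):
        """Price at step k after j up-moves, ex-dividend once k >= div_step."""
        s = s0 * (u ** j) * (d ** (k - j))
        if k >= div_step:
            s = max(s - div_cash, 0.0)
        return s

    # Terminal payoffs, then backward induction with early-exercise check.
    values = [max(strike - stock(steps, j), 0.0) for j in range(steps + 1)]
    boundary = [None] * steps
    for k in range(steps - 1, -1, -1):
        new_values = []
        for j in range(k + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exer = max(strike - stock(k, j), 0.0)
            if exer >= cont and exer > 0.0:
                s = stock(k, j)
                if boundary[k] is None or s > boundary[k]:
                    boundary[k] = s   # highest price where exercise wins
            new_values.append(max(cont, exer))
        values = new_values
    return values[0], boundary
```

Since the put payoff is decreasing in the stock price, the dividend (which lowers all post-dividend prices) raises the put value, which gives a quick sanity check on the implementation.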