LAPEYRE Bernard

Affiliations
  • 2012 - 2020
    Centre d'Enseignement et de Recherche en Mathématiques et Calcul Scientifique
  • 2012 - 2019
    Mathematical risk handling
  • 2014 - 2015
    Centre de recherche insulaire et observatoire de l'environnement
  • 2014 - 2015
    Centre de recherche en biologie cellulaire de Montpellier
  • Neural network regression for Bermudan option pricing.

    Bernard LAPEYRE, Jerome LELONG
    Monte Carlo Methods and Applications | 2021
    No summary available.
  • Jacobi Stochastic Volatility factor for the Libor Market Model.

    Pierre edouard ARROUY, Alexandre BOUMEZOUED, Bernard LAPEYRE, Sophian MEHALLA
    2020
    We propose a new method to efficiently price swap rate derivatives under the LIBOR Market Model with Stochastic Volatility and Displaced Diffusion (DDSVLMM). This method uses polynomial processes combined with Gram-Charlier expansion techniques. The standard pricing method for this model relies on freezing the dynamics to recover a Heston-type model for which analytical formulas are available. This approach is time-consuming, and efficient approximations based on Gram-Charlier expansions have recently been proposed. In this article, we first show that for a class of stochastic volatility models, including the Heston one, the classical sufficient condition ensuring the convergence of the Gram-Charlier series cannot be satisfied. We then propose an approximating model based on a Jacobi process for which we can prove the stability of the Gram-Charlier expansion. For this approximation, we prove strong convergence toward the original model and give an estimate of the convergence rate. We also prove a new result on the convergence of the Gram-Charlier series when the volatility factor is not bounded from below. We finally illustrate our convergence results with numerical examples.
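    The Gram-Charlier A series the abstract refers to corrects a Gaussian density with Hermite-polynomial terms driven by higher moments. The following is a generic textbook sketch, not the paper's DDSVLMM implementation; truncating at the fourth moment is an assumption made for illustration.

```python
import numpy as np

def hermite_prob(n, x):
    # Probabilists' Hermite polynomials He_n via the recurrence
    # He_{k+1}(x) = x He_k(x) - k He_{k-1}(x)
    h0, h1 = np.ones_like(x), x
    if n == 0:
        return h0
    if n == 1:
        return h1
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def gram_charlier_density(x, skew, ex_kurt):
    """Gram-Charlier A expansion (truncated at order 4) around the
    standard normal density, for a standardized random variable."""
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    correction = 1 + (skew / 6) * hermite_prob(3, x) + (ex_kurt / 24) * hermite_prob(4, x)
    return phi * correction

x = np.linspace(-3, 3, 7)
# with zero skewness and zero excess kurtosis the correction vanishes
f = gram_charlier_density(x, 0.0, 0.0)
```

    As a sanity check, the expansion collapses to the standard normal density when the third and fourth cumulant corrections are zero; the convergence issues studied in the paper arise precisely when such corrections are pushed to higher orders.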
  • Weak error analysis of time and particle discretization of nonlinear stochastic differential equations in the McKean sense.

    Oumaima BENCHEIKH, Benjamin JOURDAIN, Bernard LAPEYRE, Noufel FRIKHA, Lukasz SZPRUCH, Mireille BOSSY, Jean francois CHASSAGNEUX, Stephane MENOZZI
    2020
    This thesis is devoted to the theoretical and numerical study of the weak error of time and particle discretizations of nonlinear Stochastic Differential Equations in the McKean sense. In the first part, we analyze the weak convergence speed of the time discretization of standard SDEs. More specifically, we study the convergence in total variation of the Euler-Maruyama scheme applied to d-dimensional SDEs with a measurable drift coefficient and additive noise. Assuming that the drift coefficient is bounded, we obtain a weak order of convergence 1/2. By adding more regularity on the drift, namely that its spatial divergence exists in the sense of distributions and lies in L^ρ uniformly in time for some ρ ≥ d, we reach a convergence order equal to 1 (up to a logarithmic factor) at terminal time. In dimension 1, this result is preserved when the spatial derivative of the drift is a measure in space with total mass bounded uniformly in time. In the second part of the thesis, we analyze the weak discretization error, in both time and particles, of two classes of nonlinear SDEs in the McKean sense. The first class consists of multi-dimensional SDEs with regular drift and diffusion coefficients in which the dependence on the law occurs through moments. The second class consists of one-dimensional SDEs with a constant diffusion coefficient and a singular drift coefficient where the dependence on the law occurs through the distribution function. We approximate the SDEs by the Euler-Maruyama schemes of the associated particle systems and obtain for both classes a weak order of convergence equal to 1 in time and in particles. For the second class, we also prove a propagation-of-chaos result of optimal order 1/2 in particles and a strong order of convergence equal to 1 in time and 1/2 in particles. All our theoretical results are illustrated by numerical simulations.
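    The particle discretization discussed above can be sketched on a toy McKean-Vlasov equation whose drift depends on the law only through its mean; the empirical mean of the particles stands in for the true law. This is a minimal illustration under the assumed dynamics dX_t = (E[X_t] - X_t) dt + σ dW_t, not one of the thesis's two model classes.

```python
import numpy as np

def particle_euler(n_particles, n_steps, T, sigma, rng):
    """Euler-Maruyama scheme for the interacting particle system approximating
    dX_t = (E[X_t] - X_t) dt + sigma dW_t (mean-field interaction via E[X_t])."""
    dt = T / n_steps
    x = rng.normal(1.0, 0.5, size=n_particles)   # X_0 ~ N(1, 0.25)
    for _ in range(n_steps):
        mean = x.mean()                           # empirical measure replaces the true law
        x = x + (mean - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
    return x

rng = np.random.default_rng(0)
x_T = particle_euler(20000, 100, 1.0, 0.3, rng)
# for this drift E[X_t] is constant (= 1), so the empirical mean should stay near 1
```

    The statistical fluctuation of the empirical mean around 1 is of order 1/sqrt(N), which is the particle-discretization error the thesis quantifies.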
  • A forward equation for computing derivatives exposure.

    Bernard LAPEYRE, Marouan iben TAARIT
    International Journal of Theoretical and Applied Finance | 2019
    No summary available.
  • Neural network regression for Bermudan option pricing.

    Bernard LAPEYRE, Jerome LELONG
    2019
    The pricing of Bermudan options amounts to solving a dynamic programming principle, in which the main difficulty, especially in high dimension, comes from computing the conditional expectations involved in the continuation value. These conditional expectations are classically computed by regression techniques on a finite-dimensional vector space. In this work, we study neural network approximations of conditional expectations. We prove the convergence of the well-known Longstaff and Schwartz algorithm when the standard least-squares regression is replaced by a neural network approximation.
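    For context, here is a minimal sketch of the classical Longstaff-Schwartz algorithm with polynomial least-squares regression, i.e., the baseline whose regression step the paper replaces by a neural network. The parameters and the cubic regression basis are illustrative assumptions.

```python
import numpy as np

def longstaff_schwartz_put(s0, K, r, sigma, T, n_ex, n_paths, rng, deg=3):
    """Least-squares Monte Carlo price of a Bermudan put under Black-Scholes.
    The continuation value is regressed on a polynomial basis; the paper
    studies replacing this regression by a neural network."""
    dt = T / n_ex
    # simulate GBM paths at the n_ex exercise dates
    z = rng.normal(size=(n_paths, n_ex))
    s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - s[:, -1], 0.0)       # value if held to maturity
    for t in range(n_ex - 2, -1, -1):            # backward dynamic programming
        disc = np.exp(-r * dt) * payoff
        itm = K - s[:, t] > 0                    # regress on in-the-money paths only
        if itm.sum() > deg + 1:
            coef = np.polyfit(s[itm, t], disc[itm], deg)
            continuation = np.polyval(coef, s[itm, t])
            exercise = K - s[itm, t]
            disc[itm] = np.where(exercise > continuation, exercise, disc[itm])
        payoff = disc
    return np.exp(-r * dt) * payoff.mean()

rng = np.random.default_rng(1)
price = longstaff_schwartz_put(100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000, rng)
```

    With these parameters the Bermudan put price should land close to the American put value (roughly 6 for an at-the-money option with 20% volatility and one year to maturity).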
  • Valuation of Xva adjustments: from expected exposure to adverse correlation risks.

    Marouan IBEN TAARIT, Bernard LAPEYRE, Monique JEANBLANC, Romuald ELIE, Etienne VARLOOT, Stephane CREPEY, Frederic ABERGEL
    2018
    We begin this thesis by evaluating the expected exposure, which is one of the major components of XVA adjustments. Under the assumption of independence between exposure and financing and credit costs, we derive in Chapter 3 a new representation of expected exposure as the solution of an ordinary differential equation with respect to the default observation time. For the one-dimensional case, we rely on arguments similar to those behind Dupire's local volatility equation. For the multidimensional case, we use the co-area formula. This representation allows us to explain the impact of volatility on the expected exposure: this time value involves the volatility of the underlyings as well as the first-order sensitivity of the price, evaluated on a finite set of points. Despite numerical limitations, this method is an accurate and fast approach for valuing unit XVA in dimensions 1 and 2. The following chapters are dedicated to the risk of correlation between exposure envelopes and XVA costs. We model general correlation risk through a multivariate stochastic diffusion that includes both the underlying assets of the derivatives and the default intensities. In this framework, we present a new valuation approach based on asymptotic expansions, in which the price of an XVA adjustment is the zero-correlation price plus an explicit sum of corrective terms. Chapter 4 is devoted to the technical derivation and the study of the numerical error in the context of valuing default-contingent derivatives. The quality of the numerical approximations depends solely on the regularity of the credit intensity diffusion process and is independent of the regularity of the payoff function. The valuation formulas for CVA and FVA are presented in Chapter 5. A generalization of the asymptotic expansions to the bilateral default framework is addressed in Chapter 6. We conclude this dissertation by addressing a case of specific correlation risk related to rating migration contracts. Beyond the valuation formulas, our contribution consists in presenting a robust approach to building and calibrating a rating transition model consistent with market-implied default probabilities.
  • Analysis of the dynamics of the contagion phenomenon between European sovereign bonds during recent episodes of financial crises.

    Marc henri THOUMIN, Alain GALLI, Bernard LAPEYRE, Margaret ARMSTRONG, Sandrine UNGARI, Delphine LAUTIER, Siem jan KOOPMAN
    2017
    Periods of intense risk aversion often cause significant distortions in market prices and substantial losses for investors. Each episode of financial crisis shows that generalized sell-offs in the markets have very negative consequences for the real economy. Exploring the risk aversion phenomenon and the dynamics of the propagation of panic sentiment in financial markets can thus help in understanding these periods of high volatility. In this thesis, we explore different dimensions of the risk aversion phenomenon in the context of European sovereign bond portfolios. The yield on government bonds, quoted by traders, is thought to reflect, among other things, the risk that the Treasury will default on its debt before the bond matures. This is the sovereign risk. Financial crises usually cause an important movement of yields to higher levels. This type of correction reflects an increase in sovereign risk and necessarily implies an increase in the cost of financing for national Treasuries. One objective of this thesis is therefore to provide explicit details to Treasuries on how bond yields are expected to deteriorate in periods of risk aversion. Chapter I explores sovereign risk in the context of a probabilistic model involving heavy-tailed distributions, as well as the GAS method, which allows capturing the dynamics of volatility. The fit obtained with Generalized Hyperbolic distributions is robust, and the results suggest that our approach is particularly effective during periods marked by erratic volatility. For simplicity, we describe the implementation of a time-independent volatility estimator, meant to reflect the intrinsic volatility of each bond. This estimator suggests that the volatility grows quadratically when expressed as a function of the distribution function of the yield variations. In a second step we explore a bivariate version of the model. The calibration is robust and highlights the correlations between bonds. As a general observation, our analysis confirms that heavy-tailed distributions are quite appropriate for exploring market prices during a financial crisis. Chapter II explores different ways of exploiting our probabilistic model. In order to identify the dynamics of contagion between sovereign bonds, we analyze the expected market response to a series of financial shocks. We consider a fine level of granularity in the severity of the underlying shock, which allows us to identify empirical laws assumed to generalize the behavior of the market reaction when risk aversion increases. We then incorporate our volatility and market-reaction estimators into some recognized portfolio optimization approaches and note an improvement in portfolio resilience. Finally, we develop a new portfolio optimization methodology based on the mean-reversion principle. Chapter III is dedicated to the pricing of interest rate derivatives. We now consider that risk aversion causes the emergence of discontinuities in market prices, which we simulate through jump processes. Our model focuses on Hawkes processes, which have the advantage of capturing the presence of self-excitation in volatility. We develop a calibration procedure that differs from the usual procedures. The implied volatility results are consistent with the realized volatility and suggest that the risk premium coefficients have been successfully estimated.
  • A Simple GDP-based Model for Public Investments at Risk.

    Bernard LAPEYRE, Emile QUINET
    Journal of Benefit-Cost Analysis | 2017
    Investment decision rules in risk situations have been extensively analyzed for firms. Most research focuses on financial options and the wide range of methods based on dynamic programming currently used by firms to decide whether and when to implement an irreversible investment under uncertainty. The situation is quite different for public investments, which are decided and largely funded by public authorities. These investments are assessed by public authorities, not through market criteria, but through public Cost Benefit Analysis (CBA) procedures. Strangely enough, these procedures pay little attention to risk and uncertainty. The present text aims at filling this gap. We address the classic problem of whether and when an investment should be implemented. This stopping time problem is established in a framework where the discount rate is typically linked to GDP, which follows a Brownian motion, and where the benefits and cost of implementation follow linked Brownian motions. We find that the decision rule depends on a threshold value of the First-Year Advantage/Cost ratio. This threshold can be expressed in closed form involving the means, standard deviations and correlations of the stochastic variables. Simulations with sensible current values of these parameters show that the systemic risk, coming from the correlation between the benefits of the investment and economic growth, is not that high, and that more attention should be paid to risks relating to the construction cost of the investment. Furthermore, simple rules of thumb are designed for estimating the above-mentioned threshold.
  • Infrastructure maintenance, regeneration and service quality economics: A rail example.

    Marc GAUDRY, Bernard LAPEYRE, Emile QUINET
    Transportation Research Part B: Methodological | 2016
    This paper proposes a formalized framework for the joint economic optimization of continuous maintenance and periodic regeneration of rail transport infrastructure taking into account output consisting not only in traffic levels but also in track service quality. In contrast with much optimization work pertaining to spatially contiguous maintenance works, its principal economic emphasis and objective focus are centered on the optimal allocation of current maintenance and periodic renewal expenses, on their yearly distribution among large network partitions, and on infrastructure pricing. The model equations are based on very simple assumptions of infrastructure degradation laws and on a manager's objective function optimized through optimal control procedures. Equations are tested on national French rail track segment databases using Box–Cox transformations and match rail regeneration and maintenance practices prevailing in France. More generally, the paper makes a broad contribution to capital theory, on the optimal maintenance and renewal of equipment, and defines a method applicable not only to other transport infrastructure but to a wide range of capital goods, including housing, cars and industrial machines.
  • Infrastructure maintenance, regeneration and service quality economics: A rail example.

    Marc GAUDRY, Bernard LAPEYRE, Emile QUINET
    2015
    An earlier draft of 6th October 2010 by Gaudry and Quinet, entitled Optimisation de l’entretien et de la régénération d’une infrastructure: exploration d’hypothèses, benefitted from comments by Bernard Caillaud and Matthieu de Lapparent and was presented without econometric tests at the Kuhmo Nectar Conference on Transportation Economics in Stockholm on 1st July 2011 under the title “Joint optimization of continuous maintenance and periodic renewal”. The authors thank Marc Antoni, Richard Arnott, David Meunier and Yves Puttalaz for discussions or comments, Cong-Liem Tran for computing assistance and are grateful to Société nationale des chemins de fer français (SNCF) for financial support and for allowing inclusion in this version of estimates based on databases constructed by Michel Ikonomov and Pascaline Boyer. Exploratory estimates obtained from fixed form regression specifications were presented at the Kuhmo Nectar Conference on Transportation Economics in Berlin on 21st June 2012 through David Meunier’s good offices.
  • Modeling the volatility smile for interest rate derivatives.

    Ernesto PALIDDA, Bernard LAPEYRE, Nicole EL KAROUI, Aurelien ALFONSI, Christophe MICHEL, Damiano BRIGO, Martino GRASSELLI
    2015
    The purpose of this thesis is to study a model of the dynamics of the interest rate curve for the valuation and management of derivative products. In particular, we wish to model the volatility-dependent dynamics of prices. Market practice consists in using a parametric representation of the market and constructing the hedging portfolios by computing the sensitivities with respect to the model parameters. As the model parameters are calibrated daily so that the model reproduces market prices, the self-financing property is not satisfied. Our approach is different: it consists in replacing the parameters by factors, which are assumed to be stochastic. Hedging portfolios are constructed by cancelling the price sensitivities to these factors. The portfolios thus obtained satisfy the self-financing property.
  • TRM6/61 connects PKCα with translational control through tRNAiMet stabilization: impact on tumorigenesis.

    F MACARI, Y EL HOUFI, G BOLDINA, H XU, S KHOURY HANNA, J OLLIER, L YAZDANI, G ZHENG, I BIECHE, N LEGRAND, D PAULET, S DURRIEU, A BYSTROM, S DELBECQ, B LAPEYRE, L BAUCHET, J PANNEQUIN, F HOLLANDE, T PAN, M TEICHMANN, S VAGNER, A DAVID, A CHOQUET, D JOUBERT
    Oncogene | 2015
    Accumulating evidence suggests that changes of the protein synthesis machinery alter translation of specific mRNAs and participate in malignant transformation. Here we show that protein kinase C α (PKCα) interacts with TRM61, the catalytic subunit of the TRM6/61 tRNA methyltransferase. The TRM6/61 complex is known to methylate the adenosine 58 of the initiator methionine tRNA (tRNAiMet), a nuclear post-transcriptional modification associated with the stabilization of this crucial component of the translation-initiation process. Depletion of TRM6/61 reduced proliferation and increased death of C6 glioma cells, effects that can be partially rescued by overexpression of tRNAiMet. In contrast, elevated TRM6/61 expression regulated the translation of a subset of mRNAs encoding proteins involved in the tumorigenic process and increased the ability of C6 cells to form colonies in soft agar or spheres when grown in suspension. In TRM6/61/tRNAiMet-overexpressing cells, PKCα overexpression decreased tRNAiMet expression and both colony- and sphere-forming potentials. A concomitant increase in TRM6/TRM61 mRNA and tRNAiMet expression with decreased expression of PKCα mRNA was detected in highly aggressive glioblastoma multiforme as compared with Grade II/III glioblastomas, highlighting the clinical relevance of our findings. Altogether, we suggest that PKCα tightly controls TRM6/61 activity to prevent translation deregulation that would favor neoplastic development.
  • Feedback effects in finance: applications to optimal execution and volatility models.

    Pierre BLANC, Aurelien ALFONSI, Bernard LAPEYRE, Michel CROUHY, Jean philippe BOUCHAUD, Olivier GUEANT, Mathieu ROSENBAUM, Jim GATHERAL
    2015
    In this thesis, we consider two types of applications of feedback effects in finance. These effects come into play when market participants execute sequences of trades or take part in chain reactions, which generate peaks of activity. The first part presents a dynamic optimal execution model in the presence of an exogenous stochastic market order flow. We start from the benchmark model of Obizhaeva and Wang, which defines an optimal execution framework with a mixed price impact. We add an order flow modeled using Hawkes processes, which are jump processes with a self-excitation property. Using stochastic control theory, we determine the optimal strategy analytically. Then we determine the conditions for the existence of Price Manipulation Strategies, as introduced by Huberman and Stanzl. These strategies can be excluded if the self-excitation of the order flow exactly offsets the price resilience. In a second step, we propose a calibration method for the model, which we apply to high-frequency financial data from CAC40 stock prices. On these data, we find that the model explains a non-negligible part of the price variance. A backtesting evaluation of the optimal strategy shows that it is profitable on average, but that realistic transaction costs are sufficient to prevent price manipulation. In the second part of the thesis, we focus on the modeling of intraday volatility. In the literature, most backward-looking volatility models focus on the daily time scale, i.e., on day-to-day price changes. The objective here is to extend this type of approach to shorter time scales. We first present an ARCH-type model with the particularity of separately taking into account the contributions of past intraday and overnight returns. A calibration method for this model is studied, as well as a qualitative interpretation of the results on US and European stock returns. In the next chapter, we further reduce the time scale considered. We study a high-frequency volatility model, the idea of which is to generalize the Hawkes process framework to better reproduce some empirical market characteristics. In particular, by introducing quadratic feedback effects inspired by the discrete-time QARCH model, we obtain a power-law distribution for volatility as well as time asymmetry.
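    The self-exciting order flow used in the first part can be illustrated by simulating a one-dimensional Hawkes process with exponential kernel via Ogata's thinning algorithm. This is a generic sketch with illustrative parameters; the thesis's coupled price/flow model is not reproduced here.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Ogata's thinning algorithm for a Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    t, t_last, lam_plus = 0.0, 0.0, mu   # lam_plus: intensity just after the last event
    events = []
    while True:
        # between events the intensity decays, so its current value bounds the future
        lam_bar = mu + (lam_plus - mu) * np.exp(-beta * (t - t_last))
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            break
        lam_t = mu + (lam_plus - mu) * np.exp(-beta * (t - t_last))
        if rng.uniform() * lam_bar <= lam_t:   # accept with probability lambda(t)/lam_bar
            events.append(t)
            lam_plus = lam_t + alpha           # self-excitation: intensity jumps by alpha
            t_last = t
    return np.array(events)

rng = np.random.default_rng(2)
events = simulate_hawkes(1.0, 0.5, 1.0, 1000.0, rng)
```

    With branching ratio alpha/beta = 0.5, the long-run mean intensity is mu/(1 - 0.5) = 2, so roughly 2000 events are expected over T = 1000; the clustering of event times is the property the thesis exploits.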
  • Pricing derivatives on graphics processing units using Monte Carlo simulation.

    Lokman a. ABBAS TURKI, Stephane VIALLE, Bernard LAPEYRE, Patrick MERCIER
    Concurrency and Computation: Practice and Experience | 2014
    This paper is about using the existing Monte Carlo approach for pricing European and American contracts on a state-of-the-art graphics processing unit (GPU) architecture. First, we adapt to a cluster of GPUs two different suitable paradigms of parallelizing random number generators, which were developed for CPU clusters. Because financial applications request results within seconds of simulation, sufficiently large computations should be implemented on a cluster of machines. Thus, we make the European contract comparison between CPUs and GPUs using from one up to 16 nodes of a CPU/GPU cluster. We show that using GPUs for European contracts reduces the execution time by a factor of ∼40 and diminishes the energy consumed by a factor of ∼50 during the simulation. In the second set of experiments, we investigate the benefits of GPU parallelization for pricing American options, which require solving an optimal stopping problem and which we implement using the Longstaff and Schwartz regression method. The speedup obtained for American options varies between 2 and 10 according to the number of generated paths, the dimensions, and the time discretization.
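    As a CPU-side reference for the kind of computation being parallelized, here is a minimal vectorized Monte Carlo pricer for a European call under Black-Scholes, checked against the closed-form price. The embarrassingly parallel structure of the payoff evaluation is what maps well to GPUs; this sketch is not the paper's GPU implementation.

```python
import math
import numpy as np

def mc_european_call(s0, K, r, sigma, T, n_paths, rng):
    """Plain Monte Carlo price of a European call under Black-Scholes.
    Each path is independent, so the loop body is trivially parallel."""
    z = rng.normal(size=n_paths)
    sT = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * np.maximum(sT - K, 0.0).mean()

def bs_call(s0, K, r, sigma, T):
    # closed-form Black-Scholes benchmark
    d1 = (math.log(s0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return s0 * N(d1) - K * math.exp(-r * T) * N(d2)

rng = np.random.default_rng(42)
mc = mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0, 1_000_000, rng)
exact = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

    With one million paths the statistical error is of order 0.01-0.02, so the Monte Carlo estimate agrees with the closed-form value to about two decimal places.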
  • Stochastic modeling of order books.

    Aymen JEDIDI, Frederic ABERGEL, Jim GATHERAL, Bernard LAPEYRE, Mathieu ROSEMBAUM, Emmanuel BACRY, Jean philippe BOUCHARD
    2014
    This thesis studies some aspects of stochastic modeling of order books. In the first part, we analyze a model in which the order arrival times are independent and Poissonian. We show that the order book is stable (in the sense of Markov chains) and that it converges to its stationary distribution exponentially fast. We deduce that the price generated in this framework converges to a Brownian motion at large time scales. We illustrate the results numerically and compare them to market data, highlighting the successes of the model and its limitations. In a second part, we generalize the results to a framework where arrival times are governed by self- and mutually-exciting processes, under assumptions about the memory of these processes. The last part is more applied and deals with the identification of a realistic multivariate model from the order flows. We detail two approaches, the first by likelihood maximization and the second from the covariance density, and obtain remarkable agreement with the data. We apply the estimated model to two concrete algorithmic trading problems, namely the measurement of the execution probability and of the cost of a limit order.
  • Acceleration of the Monte Carlo method for diffusion processes and applications in Finance.

    Kaouther HAJJI, Ahmed KEBAIER, Mohamed BEN ALAYA, Gilles PAGES, Jean stephane DHERSIN, Gersende FORT, Yueyun HU, Denis TALAY, Bernard LAPEYRE
    2014
    In this thesis, we focus on combining variance reduction and complexity reduction techniques for the Monte Carlo method. In the first part, we consider a continuous diffusion model for which we build an adaptive algorithm by applying importance sampling to the statistical Romberg method. We prove a Lindeberg-Feller-type central limit theorem for this algorithm. In the same framework and in the same spirit, we apply importance sampling to the Multilevel Monte Carlo method and also prove a central limit theorem for the resulting adaptive algorithm. In the second part, we develop the same type of algorithm for a non-continuous model, namely Lévy processes, and similarly prove a Lindeberg-Feller-type central limit theorem. Numerical illustrations are provided for the different algorithms obtained in both frameworks, with and without jumps.
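    A bare-bones Multilevel Monte Carlo estimator, without the thesis's adaptive importance-sampling layer, can be sketched for a geometric Brownian motion: fine and coarse Euler schemes are coupled through shared Brownian increments, and only the corrections between levels are averaged. The parameters are illustrative assumptions.

```python
import numpy as np

def mlmc_gbm_mean(s0, mu, sigma, T, L, n_per_level, rng):
    """Multilevel Monte Carlo estimate of E[S_T] for dS = mu S dt + sigma S dW,
    using coupled Euler schemes with 2^l steps at level l."""
    def euler_pair(level, n):
        nf = 2 ** level
        dtf = T / nf
        dw = rng.normal(scale=np.sqrt(dtf), size=(n, nf))
        sf = np.full(n, float(s0))
        sc = np.full(n, float(s0))
        for i in range(nf):                       # fine Euler scheme
            sf = sf * (1 + mu * dtf + sigma * dw[:, i])
        if level > 0:                             # coarse scheme reuses the increments
            dwc = dw[:, 0::2] + dw[:, 1::2]
            dtc = 2 * dtf
            for i in range(nf // 2):
                sc = sc * (1 + mu * dtc + sigma * dwc[:, i])
        return sf, sc

    est = 0.0
    for level in range(L + 1):                    # telescoping sum over levels
        sf, sc = euler_pair(level, n_per_level)
        est += sf.mean() if level == 0 else (sf - sc).mean()
    return est

rng = np.random.default_rng(7)
est = mlmc_gbm_mean(1.0, 0.05, 0.2, 1.0, 5, 40000, rng)
```

    Since E[S_T] = exp(mu*T), the estimate should be close to exp(0.05) ≈ 1.0513; the variance of the level corrections shrinks with the time step, which is the source of the complexity reduction.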
  • Using Premia and Nsp for Constructing a Risk Management Benchmark for Testing Parallel Architecture.

    Jean philippe CHANCELIER, Bernard LAPEYRE, Jerome LELONG
    Concurrency and Computation: Practice and Experience | 2014
    Financial institutions have massive overnight computations to carry out, which are very demanding in terms of CPU consumption. The challenge is to price many different products on a cluster-like architecture. We have used the Premia software to value the financial derivatives. In this work, we explain how Premia can be embedded into Nsp, a Matlab-like scientific software package, to provide a powerful tool for valuing a whole portfolio. Finally, we have integrated an MPI toolbox into Nsp that enables Premia to solve batches of pricing problems on a cluster. This unified framework can then be used to test different parallel architectures.
  • Empirical properties and modeling of high frequency assets.

    Riadh ZAATOUR, Frederic ABERGEL, Bernard LAPEYRE, Fulvio BALDOVIN, Emmanuel BACRY, Stephane TYC
    2013
    This thesis explores, theoretically and empirically, some aspects of the formation and evolution of financial asset prices observed at high frequency. We begin by studying the joint dynamics of an option and its underlying. Since high-frequency data make the realized volatility process of the underlying observable, we investigate whether this information is used to price options. We find that the market does not exploit it. Stochastic volatility models are therefore to be considered as reduced-form models. Nevertheless, this study allows us to test the relevance of an empirical hedging measure that we call the effective delta: the slope of the regression of the option price returns on those of the underlying. It provides a fairly satisfactory hedging indicator that is independent of any modeling. For price dynamics, we turn in the following chapters to more explicit models of market microstructure. One of the characteristics of market activity is its clustering. Hawkes processes, which are point processes with this characteristic, therefore provide an adequate mathematical framework for the study of this activity. The Markovian representation of these processes, as well as their affine character when the kernel is exponential, allows us to use the powerful analytical tools of the infinitesimal generator and Dynkin's formula to compute various related quantities, such as the moments or autocovariances of the number of events on a given interval. We start with a one-dimensional framework, simple enough to illuminate the approach, but rich enough to allow applications such as the clustering of order arrival times, the prediction of future market activity knowing past activity, or the characterization of unusual, but nevertheless observed, forms of the signature plot where the measured volatility decreases as the sampling frequency increases. Our calculations also make the calibration of Hawkes processes instantaneous, using the method of moments. The generalization to the multidimensional case then allows us to capture, together with clustering, the mean reversion phenomenon that also characterizes market activity observed at high frequency. General formulas for the signature plot are then obtained and allow us to link its shape to the relative importance of clustering or mean reversion. Our calculations also yield the explicit form of the volatility associated with the diffusive limit, connecting the microscopic-level dynamics to the volatility observed macroscopically, for example on a daily scale. Moreover, modeling buying and selling activities by Hawkes processes allows us to compute the impact of a meta order on the asset price. We then recover and explain the concave shape of this impact as well as its temporal relaxation. The analytical results obtained in the multidimensional case finally provide the appropriate framework for the study of correlation. We present general results on the Epps effect, as well as on the formation of correlation and lead-lag effects.
  • A Forward Solution for Computing Risk-Neutral Derivatives Exposure.

    Marouan IBEN TAARIT, Bernard LAPEYRE
    SSRN Electronic Journal | 2013
    In this paper, we derive a forward analytical formula for computing the expected exposure of financial derivatives. Under general assumptions about the underlying diffusion process, we give a convenient decomposition of the exposure into two terms: the first is an intrinsic value part, directly deduced from the term structure of the forward mark-to-market; the second expresses the variability of the future mark-to-market and represents the time value part. In the spirit of Dupire's equation for local volatility, our representation establishes a differential equation for the evolution of the expected exposure with respect to the observation dates. Our results are twofold: first, we derive analytically an integral formula for the exposure's expectation and highlight straightforward links with local times and the co-area formula. Second, we show that, from a numerical perspective, our solution can be significantly more efficient than standard numerical methods. The accuracy and time-efficiency of the forward representation are of special interest for benchmarking XVA valuation adjustments at the trade level.
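    The expected exposure profile that the forward representation computes can be approximated by brute-force Monte Carlo, which is the baseline such a forward equation improves upon. A hedged sketch for a forward contract under geometric Brownian motion; the instrument and all parameters are assumptions chosen for illustration.

```python
import numpy as np

def expected_exposure_forward(s0, K, r, sigma, T, obs_dates, n_paths, rng):
    """Discounted expected exposure EE(t) = E[exp(-r t) max(V_t, 0)] of a forward
    contract with mark-to-market V_t = S_t - K*exp(-r*(T - t)), under GBM.
    This is the plain Monte Carlo baseline, re-simulated at each observation date."""
    ee = []
    for t in obs_dates:
        z = rng.normal(size=n_paths)
        st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
        v = st - K * np.exp(-r * (T - t))     # mark-to-market at date t
        ee.append(np.exp(-r * t) * np.maximum(v, 0.0).mean())
    return np.array(ee)

rng = np.random.default_rng(3)
dates = np.array([0.25, 0.5, 1.0, 2.0])
ee = expected_exposure_forward(100.0, 100.0, 0.02, 0.25, 2.0, dates, 200000, rng)
# the exposure grows with the observation date as the mark-to-market disperses
```

    Each observation date requires a fresh simulation here, which is exactly the per-date cost that an evolution equation in the observation date avoids.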
  • Numerical methods and models applied to market risks and financial valuation.

    Jose arturo INFANTE ACEVEDO, Tony LELIEVRE, Bernard LAPEYRE, Mohamed BACCOUCHE, Aurelien ALFONSI, Frederic ABERGEL, Yves ACHDOU
    2013
    This thesis addresses two topics: (i) The use of a new numerical method for the valuation of options on a basket of assets, (ii) Liquidity risk, order book modeling and market microstructure. First topic: A greedy algorithm and its applications to solve partial differential equations. The typical example in finance is the valuation of an option on a basket of assets, which can be obtained by solving the Black-Scholes PDE having as dimension the number of assets considered. We propose to study an algorithm that has been proposed and studied recently in [ACKM06, BLM09] to solve high dimensional problems and try to circumvent the curse of dimension. The idea is to represent the solution as a sum of tensor products and to iteratively compute the terms of this sum using a gluttonous algorithm. The solution of PDEs in high dimension is strongly related to the representation of functions in high dimension. In Chapter 1, we describe different approaches to represent high-dimensional functions and introduce the high-dimensional problems in finance that are addressed in this thesis work. The method selected in this manuscript is a nonlinear approximation method called Proper Generalized Decomposition (PGD). Chapter 2 shows the application of this method for the approximation of the solution of a linear PDE (the Poisson problem) and for the approximation of an integrable square function by a sum of tensor products. A numerical study of the latter problem is presented in Chapter 3. The Poisson problem and the approximation of an integrable square function will be used as a basis in Chapter 4 to solve the Black-Scholes equation using the PGD approach. In numerical examples, we have obtained results up to dimension 10. In addition to approximating the solution of the Black-Scholes equation, we propose a variance reduction method of classical Monte Carlo methods for pricing financial options. Second topic: Liquidity risk, order book modeling, market microstructure. 
Liquidity risk and market microstructure have become very important topics in financial mathematics. The deregulation of financial markets and the competition between them to attract more investors is one possible reason. In this work, we study how to use order book information to optimally execute the sale or purchase of orders. Orders can only be placed on a price grid, and at each instant the number of pending buy (or sell) orders at each price is recorded. In [AFS10], Alfonsi, Fruth and Schied proposed a simple order book model in which the optimal strategy for buying (or selling) a given quantity of shares before a given maturity can be found explicitly. The idea is to split the large buy (or sell) order into smaller ones so as to balance the replenishment of the order book against the price impact of each trade. This thesis extends the order book model of Alfonsi, Fruth and Schied; its originality is to let the depth of the order book depend on time, a feature of real order books documented in [JJ88, GM92, HH95, KW96]. In this framework, we solve the optimal execution problem for discrete and continuous strategies. This gives, in particular, sufficient conditions excluding price manipulation in the sense of Huberman and Stanzl [HS04] as well as transaction-triggered price manipulation (see Alfonsi, Schied and Slynko).
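To make the first topic concrete, the greedy idea of approximating a function by an iteratively built sum of tensor products can be sketched in a few lines. The snippet below is a hypothetical illustration, not the thesis's code: on a two-dimensional grid, each greedy step fits the best rank-one term u ⊗ v to the current residual by alternating least-squares updates.

```python
import numpy as np

def greedy_tensor_sum(F, n_terms=5, n_inner=50):
    """Greedily approximate the matrix F (a function sampled on a 2-D grid)
    by a sum of rank-one tensor products u_k (x) v_k."""
    approx = np.zeros_like(F)
    for _ in range(n_terms):
        R = F - approx                      # current residual
        u = R[:, 0].copy()                  # crude initial guess
        v = np.ones(F.shape[1])
        for _ in range(n_inner):            # alternating least-squares updates
            u = R @ v / (v @ v)
            v = R.T @ u / (u @ u)
        approx += np.outer(u, v)            # add the new tensor-product term
    return approx

# Non-separable test function f(x, y) = 1 / (1 + x + y) on [0, 1]^2
x = np.linspace(0.0, 1.0, 50)
F = 1.0 / (1.0 + x[:, None] + x[None, :])
A = greedy_tensor_sum(F, n_terms=5)
```

For smooth functions the residual norm typically decays quickly with the number of terms, which is the point of representing high-dimensional solutions as short sums of tensor products.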
  • Matrix processes: simulation and modeling of dependency in finance.

    Abdelkoddousse AHDIDA, Bernard LAPEYRE, Nizar TOUZI, Aurelien ALFONSI, Lorenzo BERGOMI, Josef TEICHMANN, Arturo KOHATSU HIGA
    2011
    The first part of this thesis is devoted to the simulation of stochastic differential equations defined on the cone of positive semidefinite symmetric matrices. We present new high-order discretization schemes for this type of stochastic differential equation and study their weak convergence. We are particularly interested in the Wishart process, often used in financial modeling. For this process we propose both a scheme that is exact in law and high-order discretizations. To date, this method is the only one that can be used whatever the parameters involved in the definition of these models. We show, moreover, how to reduce the algorithmic complexity of these methods, and we verify the theoretical results on numerical implementations. In the second part, we are interested in processes with values in the space of correlation matrices. We propose a new class of stochastic differential equations defined on this space. This model can be seen as an extension of the Wright-Fisher model (or Jacobi process) to the space of correlation matrices. We study the weak and strong existence of solutions, then explain the links with Wishart processes and multi-allele Wright-Fisher processes. We show the ergodicity of the model and give Girsanov representations that can be used in finance. For practical use, we describe two high-order discretization schemes. This part concludes with numerical results illustrating the convergence behavior of these schemes. The last part of the thesis applies these processes to multi-dimensional modeling issues in finance. An important and still difficult modeling issue is to identify a type of model that can calibrate both the options market on an index and on its components. We propose two types of models: one with local correlation and the other with stochastic correlation. In both cases, we explain which procedure should be adopted to obtain a good calibration to market data.
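The difficulty that motivates dedicated schemes is that a naive Euler step can leave the cone of positive semidefinite matrices. A minimal hypothetical sketch (not the exact-in-law or high-order schemes of the thesis) is an Euler discretization of the Wishart dynamics dX = (α aᵀa + bX + Xbᵀ)dt + √X dW a + aᵀ dWᵀ √X, patched with a projection back onto the cone after each step:

```python
import numpy as np

def psd_project(M):
    """Symmetrize and clip negative eigenvalues: projection onto the PSD cone."""
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

def euler_wishart(x0, alpha, b, a, T, n_steps, rng):
    """Naive Euler scheme for the Wishart SDE
    dX = (alpha a^T a + b X + X b^T) dt + sqrt(X) dW a + a^T dW^T sqrt(X),
    projected onto the PSD cone after each step."""
    d = x0.shape[0]
    dt = T / n_steps
    X = x0.copy()
    for _ in range(n_steps):
        w, V = np.linalg.eigh(X)
        sqrtX = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
        dW = rng.standard_normal((d, d)) * np.sqrt(dt)
        drift = alpha * (a.T @ a) + b @ X + X @ b.T
        noise = sqrtX @ dW @ a
        X = psd_project(X + drift * dt + noise + noise.T)
    return X

rng = np.random.default_rng(0)
d = 2
X_T = euler_wishart(np.eye(d), alpha=3.0, b=-0.5 * np.eye(d),
                    a=np.eye(d), T=1.0, n_steps=200, rng=rng)
```

The projection is a crude fix; the point of schemes that are exact in law or of high order is precisely to remain in the cone by construction, for any parameter values.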
  • Asymptotic study of stochastic algorithms and computation of Parisian options prices.

    Jerome LELONG, Bernard LAPEYRE
    2007
    This thesis deals with two independent subjects. The first part is devoted to the study of stochastic algorithms. In an introductory chapter, I present stochastic approximation algorithms in parallel with Newton's algorithm for deterministic optimization. These reminders then allow me to introduce the randomly truncated stochastic algorithms that are at the heart of this thesis. The first study of this algorithm concerns its almost sure convergence, which has sometimes been established under varying sets of assumptions. This first chapter is an opportunity to clarify the assumptions for almost sure convergence and to present a simplified proof. In the second chapter, we continue the study of this algorithm, focusing this time on its speed of convergence. More precisely, we consider a moving-average version of this algorithm and prove a central limit theorem for this variant. The third chapter is devoted to two applications of these algorithms to finance: the first example presents a method for calibrating the correlation in multidimensional market models, while the second example extends earlier work by improving its results. The second part of this thesis focuses on the valuation of Parisian options, building on the work of Chesney, Jeanblanc-Picqué and Yor. The valuation method is based on obtaining closed formulas for the Laplace transforms of prices with respect to maturity. We establish these formulas for single- and double-barrier Parisian options. We then study a numerical inversion method for these transforms and establish a result on the accuracy of this very efficient numerical method. On this occasion, we also prove results on the regularity of prices and the existence of a density with respect to the Lebesgue measure for Parisian times.
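As a reminder of the kind of procedure studied here, a basic (untruncated) stochastic approximation iteration θ_{n+1} = θ_n − γ_n H(θ_n, X_{n+1}) can be sketched as below. This is a hypothetical toy example, not the thesis's algorithm: it estimates the mean of a distribution, where the choice γ_n = 1/(n+1) makes the iterates coincide with the running empirical mean.

```python
import numpy as np

def stochastic_approximation(sample, theta0, n_iter,
                             gamma=lambda n: 1.0 / (n + 1)):
    """Iterate theta_{n+1} = theta_n - gamma_n * H(theta_n, X_{n+1})
    with H(theta, x) = theta - x, whose mean-field root is E[X]."""
    theta = theta0
    for n in range(n_iter):
        x = sample()
        theta -= gamma(n) * (theta - x)   # noisy gradient-like correction
    return theta

rng = np.random.default_rng(42)
# Estimate the mean of a N(2, 1) distribution from noisy samples
theta_hat = stochastic_approximation(lambda: rng.normal(2.0, 1.0), 0.0, 10_000)
```

Randomly truncated variants additionally project the iterate back into a growing sequence of compact sets whenever it escapes, which is what allows almost sure convergence under weaker assumptions.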
  • Monte Carlo methods and option valuation.

    Nicola MORENI, Bernard LAPEYRE, Guido MONTAGNA
    2005
    No summary available.
  • Monte Carlo methods and stochastic algorithms.

    Bouhari AROUNA, Bernard LAPEYRE
    2004
    No summary available.
  • Numerical methods and probabilistic algorithms for the evaluation of exotic interest rate derivatives in the framework of Libor and Swap rate market models.

    Mouaoya NOUBIR, Bernard LAPEYRE
    2001
    This thesis addresses the problem of valuing exotic interest rate options in the context of Libor and swap rate market models. It consists of four chapters. The first chapter is devoted to the theoretical construction and calibration of the Libor and swap rate market models. The second chapter presents the valuation of exotic interest rate options using closed-form approximations, together with a theoretical and numerical study of the approximation error. The third chapter presents the valuation of exotic interest rate products using Monte Carlo and quasi-Monte Carlo methods. We numerically compare these methods using several low-discrepancy sequences, and we also study numerically the speed of convergence of the Euler and Milstein schemes and of the Richardson extrapolation method. The fourth chapter deals with the valuation of American and Bermudan interest rate options. We introduce approximations of the stochastic differential equations that govern the Libor and swap rate market models. These approximations reduce the dimension of the market models and allow the valuation of American and Bermudan options by classical techniques (partial differential equations, variational inequalities, tree methods, etc.). We evaluate this approach numerically and theoretically, and compare it to other methods, based on Monte Carlo, recently proposed for the valuation of high-dimensional American and Bermudan options.
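The comparison in the third chapter rests on replacing pseudo-random draws by a low-discrepancy sequence. A self-contained hypothetical illustration in dimension 2 uses the Halton sequence (van der Corput sequences in bases 2 and 3) to estimate the integral of f(x, y) = xy over [0, 1]², whose exact value is 1/4:

```python
def van_der_corput(n, base):
    """n-th term of the van der Corput sequence in the given base."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)   # peel off base-`base` digits of n
        q += r * bk
        bk /= base
    return q

def halton_2d(n_points):
    """First n_points of the 2-D Halton sequence (bases 2 and 3)."""
    return [(van_der_corput(i, 2), van_der_corput(i, 3))
            for i in range(1, n_points + 1)]

# Quasi-Monte Carlo estimate of the integral of f(x, y) = x * y over [0, 1]^2
pts = halton_2d(4096)
qmc_estimate = sum(x * y for x, y in pts) / len(pts)
```

For smooth integrands the error of such estimates decreases roughly like (log n)^d / n, against 1/√n for plain Monte Carlo, which is the kind of gap the numerical comparisons in this chapter quantify.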
  • Approximate hedging of exotic options: pricing of Asian options.

    Emmanuel TEMAM, Bernard LAPEYRE
    2001
    No summary available.
  • Automation of variance reduction methods for solving the transport equation.

    Jean marc DEPINAY, Bernard LAPEYRE
    2000
    Monte Carlo methods are often used to solve neutron transport problems, since the large size of the problem and the complexity of real geometries make traditional numerical methods difficult to implement. Monte Carlo methods are relatively easy to implement but converge slowly, the accuracy of the calculation being of order 1/√n, where n is the number of simulations. Many studies have sought to accelerate the convergence of this type of algorithm. This work is part of that effort and aims to describe convergence acceleration techniques that can be easily implemented and automated. In this thesis, we are interested in preferential (importance) sampling methods. These classical techniques for transport equations use parameters that are usually fixed empirically by specialists. The main originality of our work is to propose methods that are easily automated. The originality of the algorithm lies, on the one hand, in the use of preferential sampling on the angular variable (angular bias), in addition to the sampling of the position variable, and on the other hand, in an explicit technique for computing all the parameters needed for the variance reduction. This last point allows the almost complete automation of the variance reduction procedure.
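The effect of preferential (importance) sampling is easy to see on a toy example. The hypothetical sketch below estimates the rare-event probability P(X > 4) for X ~ N(0, 1) by sampling from a Gaussian shifted to the barrier and reweighting by the likelihood ratio exp(−μx + μ²/2); the shift μ is exactly the kind of biasing parameter that this thesis seeks to set automatically rather than empirically.

```python
import math
import numpy as np

def importance_sampling_tail(level, mu, n, rng):
    """Estimate P(X > level) for X ~ N(0, 1) by drawing from N(mu, 1)
    and reweighting with the likelihood ratio N(0,1)/N(mu,1)."""
    x = rng.normal(mu, 1.0, size=n)           # biased (preferential) draws
    weights = np.exp(-mu * x + 0.5 * mu**2)   # likelihood ratio exp(-mu*x + mu^2/2)
    return np.mean((x > level) * weights)

rng = np.random.default_rng(1)
est = importance_sampling_tail(level=4.0, mu=4.0, n=20_000, rng=rng)
true_p = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # exact P(X > 4), about 3.17e-5
```

With 20,000 plain Monte Carlo draws one would typically observe zero or one exceedance of the level; the shifted sampler concentrates the draws where the event happens and recovers a low-variance estimate.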
Affiliations are detected from the signatures of publications identified in scanR. An author can therefore appear to be affiliated with several structures or supervisors according to these signatures. The dates displayed correspond only to the dates of the publications found. For more information, see https://scanr.enseignementsup-recherche.gouv.fr