PAGES Gilles

Affiliations
  • 2018 - 2021
    Mondes anciens et médiévaux
  • 2012 - 2021
    Laboratoire de Probabilités, Statistique et Modélisation
  • 2012 - 2020
    Laboratoire de probabilités et modèles aléatoires
  • 2015 - 2017
    Université Paris 6 Pierre et Marie Curie
Productions
  • Risk Quantization by Magnitude and Propensity.

    Olivier p. FAUGERAS, Gilles PAGES
    2021
    We propose a novel approach to the assessment of a random risk variable $X$ by introducing magnitude-propensity risk measures $(m_X,p_X)$. This bivariate measure intends to account for the dual aspect of risk, where the magnitudes $x$ of $X$ tell how high the incurred losses are, whereas the probabilities $P(X=x)$ reveal how often one has to expect to suffer such losses. The basic idea is to simultaneously quantify both the severity $m_X$ and the propensity $p_X$ of the real-valued risk $X$. This is to be contrasted with traditional univariate risk measures, like VaR or Expected Shortfall, which typically conflate both effects. In its simplest form, $(m_X,p_X)$ is obtained by mass transportation in the Wasserstein metric of the law $P^X$ of $X$ to a two-point discrete distribution $\{0, m_X\}$ with mass $p_X$ at $m_X$. The approach can also be formulated as a constrained optimal quantization problem. This allows for an informative comparison of risks on both the magnitude and propensity scales. Several examples illustrate the proposed approach.
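    In dimension one, the magnitude-propensity pair can be computed directly from a sample: with the monotone coupling, the squared Wasserstein distance to a two-point law {0, m} with mass p = k/n at m is minimized by taking m as the mean of the top k order statistics and scanning over k. The sketch below is a plain NumPy illustration under these assumptions, not the authors' code; `magnitude_propensity` is a hypothetical helper name.

    ```python
    import numpy as np

    def magnitude_propensity(sample):
        """Empirical (m, p) minimizing the squared W2 distance between the sample's
        law and a two-point law with mass p at m and mass 1 - p at 0 (1D case)."""
        x = np.sort(np.asarray(sample, dtype=float))
        n = x.size
        best = (np.inf, 0.0, 0.0)                 # (W2^2, m, p)
        for k in range(1, n + 1):                 # k sample points transported to m
            tail = x[n - k:]
            m = tail.mean()                       # optimal location for fixed p = k/n
            w2sq = (np.sum(x[: n - k] ** 2) + np.sum((tail - m) ** 2)) / n
            if w2sq < best[0]:
                best = (w2sq, m, k / n)
        return best[1], best[2]

    rng = np.random.default_rng(0)
    losses = rng.pareto(3.0, size=5000)           # heavy-ish tailed toy losses
    m, p = magnitude_propensity(losses)
    print(f"magnitude m = {m:.3f}, propensity p = {p:.3f}")
    ```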
  • Monotone convex order for the McKean-Vlasov processes.

    Yating LIU, Gilles PAGES
    2021
    In this paper, we establish the monotone convex order between two $\mathbb{R}$-valued McKean-Vlasov processes $X=(X_t)_{t\in [0, T]}$ and $Y=(Y_t)_{t\in [0, T]}$ defined on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_{t})_{t\geq0}, \mathbb{P})$ by \begin{align} &dX_{t}=b(t, X_{t}, \mu_{t})dt+\sigma(t, X_{t}, \mu_{t})dB_{t}, \quad X_{0}\in L^{p}(\mathbb{P})\ \text{with}\ p\geq 2,\nonumber\\ &dY_{t}=\beta(t, Y_{t}, \nu_{t})dt+\theta(t, Y_{t}, \nu_{t})\,dB_{t}, \quad Y_{0}\in L^{p}(\mathbb{P}), \nonumber \end{align} where $\forall\, t\in [0, T],\: \mu_{t}=\mathbb{P}\circ X_{t}^{-1}, \:\nu_{t}=\mathbb{P}\circ Y_{t}^{-1}$. If we make the convexity and monotonicity assumption (only) on $b$ and $|\sigma|$ and if $b\leq \beta$ and $|\sigma|\leq |\theta|$, then the monotone convex order $X_0\preceq_{\,\text{mcv}} Y_0$ of the initial random variables can be propagated to the whole paths of the processes $X$ and $Y$. That is, if we consider a non-decreasing convex functional $F$ defined on the path space with polynomial growth, we have $\mathbb{E}\, F(X)\leq \mathbb{E}\, F(Y)$. Moreover, for a non-decreasing convex functional $G$ defined on the product space involving the path space and its marginal distribution space, we have $\mathbb{E}\, G(X, (\mu_{t})_{t\in [0, T]})\leq \mathbb{E}\, G(Y, (\nu_{t})_{t\in [0, T]})$ under appropriate conditions. The symmetric setting is also valid, that is, if $Y_0\preceq_{\,\text{mcv}} X_0$ and $|\theta|\leq |\sigma|$, then $\mathbb{E}\, F(Y)\leq \mathbb{E}\, F(X)$ and $\mathbb{E}\, G(Y, (\nu_{t})_{t\in [0, T]})\leq \mathbb{E}\, G(X, (\mu_{t})_{t\in [0, T]})$. The proof is based on several forward and backward dynamic programming principles and the convergence of the truncated Euler scheme of the McKean-Vlasov equation.
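    As a reminder of how such McKean-Vlasov dynamics are simulated in practice, here is a minimal interacting-particle Euler scheme in NumPy, where the unknown marginal law mu_t is replaced by the empirical measure of the particle cloud. The coefficients below are arbitrary toy choices (depending on mu_t only through its mean), and this is the standard particle scheme rather than the truncated Euler scheme used in the paper's proof.

    ```python
    import numpy as np

    def mckean_vlasov_particles(b, sigma, x0, T, n_steps, n_particles, rng):
        """Interacting-particle Euler scheme: the (unknown) marginal law mu_t is
        replaced by the empirical measure of the particle cloud."""
        h = T / n_steps
        x = np.full(n_particles, x0, dtype=float)
        for k in range(n_steps):
            t = k * h
            mean_mu = x.mean()                      # empirical proxy for mu_t
            dw = rng.normal(0.0, np.sqrt(h), size=n_particles)
            x = x + b(t, x, mean_mu) * h + sigma(t, x, mean_mu) * dw
        return x

    rng = np.random.default_rng(1)
    # toy coefficients, measure-dependent only through the mean of mu_t
    b = lambda t, x, m: -(x - m)
    sigma = lambda t, x, m: 0.4 * (1.0 + 0.1 * np.abs(m))
    cloud = mckean_vlasov_particles(b, sigma, x0=1.0, T=1.0,
                                    n_steps=200, n_particles=20_000, rng=rng)
    # E[f(X_T)] for the convex terminal functional f(x) = (x - 1)^+
    print("E[(X_T - 1)^+] =", np.mean(np.maximum(cloud - 1.0, 0.0)))
    ```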
  • Fast Hybrid Schemes for Fractional Riccati Equations (Rough Is Not So Tough).

    Giorgia CALLEGARO, Martino GRASSELLI, Gilles PAGES
    Mathematics of Operations Research | 2021
    We solve a family of fractional Riccati equations with constant (possibly complex) coefficients. These equations arise, for example, in fractional Heston stochastic volatility models, which have received great attention in the recent financial literature because of their ability to reproduce a rough volatility behavior. We first consider the case of a zero initial value, corresponding to the characteristic function of the log-price. Then we investigate the case of a general starting value, associated with a transform also involving the volatility process. The solution to the fractional Riccati equation takes the form of a power series, whose convergence domain is typically finite. This naturally suggests a hybrid numerical algorithm to explicitly obtain the solution also beyond the convergence domain of the power series. Numerical tests show that the hybrid algorithm is extremely fast and stable. When applied to option pricing, our method largely outperforms the only available alternative, based on the Adams method.
  • Modeling and optimal strategies in short-term energy markets.

    Laura TINSI, Peter TANKOV, Arnak DALALYAN, Gilles PAGES, Almut e. d. VERAART, Huyen PHAM, Olivier FERON, Marc HOFFMANN
    2021
    This thesis aims at providing theoretical tools to support the development and management of intermittent renewable energies in short-term electricity markets. In the first part, we develop an exploitable equilibrium model for price formation in intraday electricity markets. To this end, we propose a non-cooperative game between several generators interacting in the market and facing intermittent renewable generation. Using game theory and stochastic control theory, we derive explicit optimal strategies for these generators and a closed-form equilibrium price for different information structures and player characteristics. Our model is able to reproduce and explain the main stylized facts of the intraday market such as the specific time dependence of volatility and the correlation between price and renewable generation forecasts. In the second part, we study dynamic probabilistic forecasts as diffusion processes. We propose several stochastic differential equation models to capture the dynamic evolution of the uncertainty associated with a forecast, derive the associated predictive densities and calibrate the model on real weather data. We then apply it to the problem of a wind producer receiving sequential updates of probabilistic wind speed forecasts, which are used to predict its production, and make buying or selling decisions on the market. We show to what extent this method can be advantageous compared to the use of point forecasts in decision-making processes. Finally, in the last part, we propose to study the properties of aggregated shallow neural networks. We explore the PAC-Bayesian framework as an alternative to the classical empirical risk minimization approach. We focus on Gaussian priors and derive non-asymptotic risk bounds for aggregate neural networks. This analysis also provides a theoretical basis for parameter tuning and offers new perspectives for applications of aggregate neural networks to practical high-dimensional problems, which are increasingly present in energy-related decision processes involving renewable generation or storage.
  • Numerical methods by optimal quantization in finance.

    Thibaut MONTES, Gilles PAGES, Vincent LEMAIRE, Benjamin JOURDAIN, Idris KHARROUBI, Huyen PHAM, Abass SAGNA, Giorgia CALLEGARO, Benoite de SAPORTA
    2020
    This thesis is divided into four parts that can be read independently. In this manuscript, we make some contributions to the theoretical study and to the applications in finance of optimal quantization. In the first part, we recall the theoretical foundations of optimal quantization as well as the classical numerical methods used to construct optimal quantizers. The second part focuses on the numerical integration problem in dimension 1, which arises when one wishes to compute expectations numerically, such as in the valuation of derivatives. We recall the existing strong and weak error results and extend the second-order convergence results to other classes of less regular functions. In a second step, we present a weak error result in dimension 1 and a second expansion in higher dimension for a product quantizer. In the third part, we focus on a first numerical application. We introduce a stationary Heston model in which the initial condition of the volatility is assumed to be random, with the stationary distribution of the CIR SDE governing the volatility. This variant of the original Heston model produces a more pronounced implied volatility smile for European options on short maturities than the standard model. We then develop a numerical method based on product recursive quantization for the evaluation of Bermudan and barrier options. The fourth and last part deals with a second numerical application, the valuation of Bermudan options on exchange rates in a 3-factor model. These products are known in the markets as PRDCs. We propose two schemes to evaluate this type of option, both based on optimal product quantization, and establish a priori error estimates.
  • Unadjusted Langevin algorithm with multiplicative noise: Total variation and Wasserstein bounds.

    Gilles PAGES, Fabien PANLOUP
    2020
    In this paper, we focus on non-asymptotic bounds related to the Euler scheme of an ergodic diffusion with a possibly multiplicative diffusion term (non-constant diffusion coefficient). More precisely, the objective of this paper is to control the distance of the standard Euler scheme with decreasing step (usually called the Unadjusted Langevin Algorithm in the Monte Carlo literature) to the invariant distribution of such an ergodic diffusion. In an appropriate Lyapunov setting and under uniform ellipticity assumptions on the diffusion coefficient, we establish (or improve) such bounds for Total Variation and L^1-Wasserstein distances in both multiplicative and additive frameworks. These bounds rely on weak error expansions using Stochastic Analysis adapted to the decreasing-step setting.
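    For concreteness, the decreasing-step Euler recursion reads X_{n+1} = X_n + gamma_{n+1} b(X_n) + sigma(X_n) sqrt(gamma_{n+1}) xi_{n+1} with, e.g., gamma_n proportional to n^(-1/3). The toy sketch below uses an additive-noise Langevin example so the invariant law is explicit (standard Gaussian); in the multiplicative setting studied in the paper, the constant sqrt(2) is simply replaced by a state-dependent sigma(x). The step exponent and the occupation-measure average are illustrative choices, not the paper's tuning.

    ```python
    import numpy as np

    def decreasing_step_euler(drift, diff, x0, n_iter, gamma1=0.5, rng=None):
        """Euler scheme with decreasing steps gamma_n = gamma1 * n**(-1/3).
        Returns the last iterate and the step-weighted occupation average of x**2."""
        rng = rng or np.random.default_rng(0)
        x, num, den = x0, 0.0, 0.0
        for n in range(1, n_iter + 1):
            g = gamma1 * n ** (-1.0 / 3.0)
            num += g * x ** 2
            den += g
            x = x + g * drift(x) + diff(x) * np.sqrt(g) * rng.normal()
        return x, num / den

    # Toy check with additive noise: dX_t = -X_t dt + sqrt(2) dB_t, invariant law N(0, 1)
    _, second_moment = decreasing_step_euler(lambda x: -x, lambda x: np.sqrt(2.0),
                                             x0=5.0, n_iter=200_000)
    print("estimated E_pi[X^2] =", second_moment, "(exact value: 1)")
    ```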
  • New weak error bounds and expansions for optimal quantization.

    Vincent LEMAIRE, Thibaut MONTES, Gilles PAGES
    Journal of Computational and Applied Mathematics | 2020
    We propose new weak error bounds and expansions in dimension one for optimal quantization-based cubature formulas for different classes of functions, such as piecewise affine functions, Lipschitz convex functions or differentiable functions with piecewise-defined locally Lipschitz or α-Hölder derivatives. These new results rest on the local behavior of optimal quantizers, the L^r-L^s distribution mismatch problem and Zador's theorem. This new expansion supports the definition of a Richardson-Romberg extrapolation yielding a better rate of convergence for the cubature formula. An extension of this expansion is then proposed in higher dimension for the first time. We then propose a novel variance reduction method for Monte Carlo estimators, based on one-dimensional optimal quantizers.
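    The Richardson-Romberg extrapolation suggested by such an expansion combines cubature formulas at two quantization levels. The sketch below assumes a leading weak-error term proportional to N^-2 for a smooth integrand, as in the one-dimensional expansion discussed above; it builds quadratic optimal quantizers of N(0,1) by a deterministic Lloyd fixed-point iteration and extrapolates. `lloyd_gaussian` and the parameter choices are illustrative, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.stats import norm

    def lloyd_gaussian(N, n_iter=1000):
        """Lloyd fixed-point iteration for the quadratic optimal N-quantizer of N(0,1)."""
        x = norm.ppf((np.arange(N) + 0.5) / N)          # quantile initialisation
        for _ in range(n_iter):
            mid = np.concatenate(([-np.inf], (x[:-1] + x[1:]) / 2, [np.inf]))
            prob = norm.cdf(mid[1:]) - norm.cdf(mid[:-1])          # cell weights
            x = (norm.pdf(mid[:-1]) - norm.pdf(mid[1:])) / prob    # cell centroids
        return x, prob

    def cubature(f, N):
        x, w = lloyd_gaussian(N)
        return np.sum(w * f(x))

    f = np.exp                       # E[exp(X)] = exp(1/2) for X ~ N(0,1)
    exact = np.exp(0.5)
    qN, q2N = cubature(f, 20), cubature(f, 40)
    richardson = (4 * q2N - qN) / 3  # cancels the leading N**-2 error term
    print("error at N=20      :", abs(qN - exact))
    print("error at N=40      :", abs(q2N - exact))
    print("extrapolated error :", abs(richardson - exact))
    ```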
  • Market Impact in Systematic Trading and Option Pricing.

    Emilio SAID, Frederic ABERGEL, Gilles PAGES, Mathieu ROSENBAUM, Aurelien ALFONSI, Damien CHALLET, Sophie LARUELLE
    2020
    The main objective of this thesis is to understand the various aspects of market impact. It consists of four chapters in which market impact is studied in different contexts and at different scales. The first chapter presents an empirical study of the market impact of limit orders in European equity markets. In the second chapter, we extend the methodology presented for the equity markets to the options markets. This empirical study shows that our definition of an options metaorder allows us to recover all the results highlighted in the equity markets. The third chapter focuses on market impact in the context of derivatives valuation. This chapter attempts to bring a microstructure component to the valuation of options by proposing a theory of market impact disturbances during the re-hedging process. In the fourth chapter, we explore a fairly simple model for metaorder relaxation. Metaorder relaxation is treated in this section as an informational process that is transmitted to the market. Thus, starting from the premise that at the end of the execution of a metaorder the information carried by it is maximal, we propose an interpretation of the relaxation phenomenon as the result of the degradation of this information under the external noise of the market.
  • Conditional Monte Carlo Learning for Diffusions I: main methodology and application to backward stochastic differential equations.

    Lokman ABBAS TURKI, G. PAGES, B. DIALLO
    2020
    We present a new algorithm based on a One-layered Nested Monte Carlo (1NMC) to simulate functionals U of a Markov process X. The main originality of the proposed methodology comes from the fact that it provides a recipe to simulate U_{t≥s} conditionally on X_s. Because of the nested structure that allows a Taylor-like expansion, it is possible to use a very reduced basis for the regression. Although this methodology can be adapted to a large number of situations, we only apply it here to the simulation of Backward Stochastic Differential Equations (BSDEs). The generality and the stability of this algorithm, even in high dimension, are its main strengths. It is heavier than a straight Monte Carlo (MC) but it is far more accurate for simulating quantities that are almost impossible to simulate with MC. The parallel suitability of 1NMC makes it feasible in a reasonable computing time. This paper explains the main version of this algorithm and provides first error estimates. We also give various numerical examples with a dimension equal to 100 that are executed in a few seconds to a few minutes on one Graphics Processing Unit (GPU).
  • Stochastic non-Markovian differential games and mean-field Langevin dynamics.

    Kaitong HU, Nizar TOUZI, Caroline HILLAIRET, Stephane VILLENEUVE, Johannes MUHLE KARBE, Zhenjie REN, Gilles PAGES, Jean francois CHASSAGNEUX
    2020
    This thesis is composed of two independent parts, the first one grouping two distinct problems. In the first part, we first study the Principal-Agent problem in degenerate systems, which arise naturally in partially observed environments where the Agent and the Principal observe only a part of the system. We present an approach based on the stochastic maximum principle, which aims to extend existing work that uses the principle of dynamic programming in non-degenerate systems. First, we solve the Principal problem in an extended contract set given by the first-order condition of the Agent problem, in the form of a path-dependent forward-backward stochastic differential equation (FBSDE). Then we use the sufficient condition of the Agent problem to verify that the obtained optimal contract is implementable. A parallel study is devoted to the existence and uniqueness of the solution of path-dependent FBSDEs in Chapter IV. We extend the decoupling field method to cases where the coefficients of the equations can depend on the trajectory of the forward process. We also prove a stability property for such FBSDEs. Finally, we study the moral hazard problem with several Principals. The Agent can only work for one Principal at a time and thus faces an optimal switching problem. Using the randomization method we show that the Agent's value function and its optimal effort are given by an Itô process. This representation helps us to solve the Principal problem when there are infinitely many Principals in equilibrium according to a mean-field game. We justify the mean-field formulation by a propagation of chaos argument. The second part of this thesis consists of chapters V and VI. The motivation of this work is to give a rigorous theoretical foundation for the convergence of gradient descent type algorithms, which are often used in the solution of non-convex problems such as the calibration of a neural network. For non-convex problems of the hidden layer neural network type, the key idea is to transform the problem into a convex one by lifting it to the space of measures. We show that the corresponding energy function admits a unique minimizer which can be characterized by a first order condition using the derivative in the space of measures in the sense of Lions. We then present an analysis of the long term behavior of the mean-field Langevin dynamics, which has a gradient flow structure in the 2-Wasserstein metric. We show, using LaSalle's invariance principle, that the flow of marginal laws induced by the mean-field Langevin dynamics converges to a stationary law, which is the minimizer of the energy function. In the case of deep neural networks, we model them using a continuous-time optimal control problem. We first give the first order condition using Pontryagin's principle, which then helps us to introduce the system of mean-field Langevin equations, whose invariant measure corresponds to the minimizer of the optimal control problem. Finally, with the reflection coupling method we show that the marginal law of the mean-field Langevin system converges to the invariant measure at an exponential rate.
  • New approach to greedy vector quantization.

    Rancy EL NMEIR, Harald LUSCHGY, Gilles PAGES
    2020
    We extend some rate of convergence results for greedy quantization sequences already investigated in [16]. We show, for a more general class of distributions satisfying a certain control, that the quantization error of these sequences has an $n^{-\frac{1}{d}}$ rate of convergence and that the distortion mismatch property is satisfied. We give some non-asymptotic Pierce-type estimates. The recursive character of greedy vector quantization allows for improvements to the algorithms computing these sequences and for the implementation of a recursive formula for quantization-based numerical integration. Furthermore, we establish further properties of sub-optimality of greedy quantization sequences.
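    A sample-based illustration of the greedy (recursive) construction: starting from the one-point optimal quantizer (the mean), each new point is the candidate that most reduces the empirical quadratic distortion given the points already chosen. Candidate points, sample sizes and the function name `greedy_quantizer` are illustrative choices; the paper works with the true distribution rather than an empirical proxy.

    ```python
    import numpy as np

    def greedy_quantizer(sample, n_points, n_candidates=200, rng=None):
        """Greedily grow a quantization sequence: each new point is the candidate
        that most reduces the empirical quadratic distortion given the points
        already chosen (sample-based proxy for the L^2 distortion)."""
        rng = rng or np.random.default_rng(0)
        x = np.asarray(sample, dtype=float)
        a = [x.mean()]                                   # a_1: the L^2-optimal 1-quantizer
        dist_to_grid = (x - a[0]) ** 2                   # squared distance to current grid
        for _ in range(n_points - 1):
            cand = rng.choice(x, size=n_candidates, replace=False)
            # distortion if each candidate were added to the grid
            new_dist = np.minimum(dist_to_grid[:, None], (x[:, None] - cand[None, :]) ** 2)
            best = cand[np.argmin(new_dist.mean(axis=0))]
            a.append(best)
            dist_to_grid = np.minimum(dist_to_grid, (x - best) ** 2)
        return np.sort(np.array(a)), dist_to_grid.mean()

    rng = np.random.default_rng(2)
    grid, distortion = greedy_quantizer(rng.normal(size=20_000), n_points=30, rng=rng)
    print("greedy 30-point grid distortion:", distortion)  # roughly N**-2 behaviour in 1D
    ```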
  • Convergence Rate of Optimal Quantization and Application to the Clustering Performance of the Empirical Measure.

    Yating LIU, Gilles PAGES
    2020
    We study the convergence rate of the optimal quantization for a probability measure sequence $(\mu_n)_{n\in\mathbb{N}^*}$ on $\mathbb{R}^d$ converging in the Wasserstein distance in two aspects: the first one is the convergence rate of the optimal quantizers $x^{(n)} \in (\mathbb{R}^d)^K$ of $\mu_n$ at level $K$; the other one is the convergence rate of the distortion function valued at $x^{(n)}$, called the "performance" of $x^{(n)}$. Moreover, we also study the mean performance of the optimal quantization for the empirical measure of a distribution $\mu$ with finite second moment but possibly unbounded support. As an application, we show that the mean performance for the empirical measure of the multidimensional normal distribution $\mathcal{N}(m, \Sigma)$ and of distributions with hyper-exponential tails behaves like $O(\log n/\sqrt{n})$. This extends the results from [BDL08] obtained for compactly supported distributions. We also derive an upper bound which is sharper in the quantization level $K$ but suboptimal in $n$ by applying results in [FG15].
  • Stationary Heston model: Calibration and Pricing of exotics using Product Recursive Quantization.

    Vincent LEMAIRE, Thibaut MONTES, Gilles PAGES
    2020
    A major drawback of the Standard Heston model is that its implied volatility surface does not produce a steep enough smile when looking at short maturities. For that reason, we introduce the Stationary Heston model, where we replace the deterministic initial condition of the volatility by its invariant measure, and show, based on calibrated parameters, that this model produces a steeper smile for short maturities than the Standard Heston model. We also present a numerical solution based on Product Recursive Quantization for the evaluation of exotic options (Bermudan and barrier options).
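    The defining ingredient of the Stationary Heston model is the initial variance drawn from the CIR invariant law, a Gamma distribution with shape 2*kappa*theta/xi^2 and scale xi^2/(2*kappa). The sketch below prices a short-maturity call by a full-truncation Euler Monte Carlo scheme under both initializations; the parameters are arbitrary toy values (not calibrated) and the scheme is a plain Monte Carlo stand-in for the product recursive quantization method used in the paper.

    ```python
    import numpy as np

    def heston_call_mc(s0, k, t, r, kappa, theta, xi, rho, v0, n_paths, n_steps, rng):
        """Full-truncation Euler Monte Carlo price of a European call under Heston.
        v0 may be a scalar (standard model) or an array of sampled initial variances."""
        dt = t / n_steps
        s = np.full(n_paths, np.log(s0))
        v = np.broadcast_to(np.asarray(v0, dtype=float), (n_paths,)).copy()
        for _ in range(n_steps):
            z1 = rng.normal(size=n_paths)
            z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n_paths)
            vp = np.maximum(v, 0.0)
            s += (r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1
            v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
        return np.exp(-r * t) * np.mean(np.maximum(np.exp(s) - k, 0.0))

    rng = np.random.default_rng(3)
    kappa, theta, xi, rho = 2.0, 0.04, 0.3, -0.7      # toy parameters (not calibrated)
    n_paths, n_steps = 200_000, 100

    # Stationary Heston: draw the initial variance from the CIR invariant (Gamma) law
    v0_stat = rng.gamma(shape=2 * kappa * theta / xi ** 2,
                        scale=xi ** 2 / (2 * kappa), size=n_paths)

    args = dict(s0=100.0, k=105.0, t=0.1, r=0.0, kappa=kappa, theta=theta,
                xi=xi, rho=rho, n_paths=n_paths, n_steps=n_steps, rng=rng)
    print("standard Heston   (v0 = theta):", heston_call_mc(v0=theta, **args))
    print("stationary Heston (v0 ~ Gamma):", heston_call_mc(v0=v0_stat, **args))
    ```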
  • Weak Error for Nested Multilevel Monte Carlo.

    Daphne GIORGI, Vincent LEMAIRE, Gilles PAGES
    Methodology and Computing in Applied Probability | 2020
    This article discusses MLMC estimators with and without weights, applied to nested expectations of the form E[f(E[F(Y,Z)|Y])]. More precisely, we are interested in the assumptions needed to comply with the MLMC framework, depending on whether the payoff function f is smooth or not. A result which is, to our knowledge, new is given when f is not smooth: an expansion of the weak error at an order higher than 1, which is needed for a successful use of MLMC estimators with weights.
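    For reference, the single-level (plain) nested Monte Carlo estimator of E[f(E[F(Y,Z)|Y])] uses an outer sample of Y and, for each outer draw, an inner sample of Z; the MLMC estimators discussed above layer several inner sample sizes (with or without weights), which this toy sketch omits. The example uses a non-smooth f = max(., 0) with a known exact value; all names and sample sizes are illustrative.

    ```python
    import numpy as np

    def nested_mc(f, F, sample_y, sample_z, n_outer, n_inner, rng):
        """Plain nested Monte Carlo estimator of E[ f( E[ F(Y,Z) | Y ] ) ]:
        an outer sample of Y, and for each outer draw an inner sample of Z
        to estimate the conditional expectation."""
        y = sample_y(n_outer, rng)
        inner_means = np.empty(n_outer)
        for i in range(n_outer):
            z = sample_z(n_inner, rng)
            inner_means[i] = np.mean(F(y[i], z))
        return np.mean(f(inner_means))

    rng = np.random.default_rng(4)
    # Toy example: Y, Z ~ N(0,1) independent, F(y,z) = y + z, f(x) = max(x, 0).
    # Then E[F(Y,Z)|Y] = Y and the exact value is E[max(Y,0)] = 1/sqrt(2*pi).
    est = nested_mc(f=lambda x: np.maximum(x, 0.0),
                    F=lambda y, z: y + z,
                    sample_y=lambda n, r: r.normal(size=n),
                    sample_z=lambda n, r: r.normal(size=n),
                    n_outer=20_000, n_inner=100, rng=rng)
    print("nested MC estimate:", est, " exact:", 1 / np.sqrt(2 * np.pi))
    ```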
  • Stochastic Control: from Gradient Methods and Dynamic Programming to Statistical Learning.

    Gilles PAGES, Olivier PIRONNEAU
    2020
    In this article the authors wish to contribute to the evaluation of statistical learning for stochastic control. We review the well-known methods for stochastic control and compare their numerical performance to that of a neural network. This is done on a simple but practical example arising from fishing quotas designed to preserve the fish biomass.
  • Weak and strong error analysis of recursive quantization: a general approach with an application to jump diffusions.

    Gilles PAGES, Abass SAGNA
    IMA Journal of Numerical Analysis | 2020
    No summary available.
  • Some aspects of the central role of financial market microstructure: Volatility dynamics, optimal trading and market design.

    Paul JUSSELIN, Mathieu ROSENBAUM, Nicole EL KAROUI, Jean philippe BOUCHAUD, Darrell DUFFIE, Gilles PAGES, Peter TANKOV, Marc HOFFMANN, Nizar TOUZI
    2020
    This thesis is organized in three parts. The first part examines the relationship between microscopic and macroscopic market dynamics by focusing on the properties of volatility. In the second part, we focus on the stochastic optimal control of point processes. Finally, in the third part, we study two market design problems. We start this thesis by studying the links between the no-arbitrage principle and the irregularity of volatility. Using a scaling method, we show that we can effectively connect these two notions by analyzing the market impact of metaorders. More precisely, we model the market order flow using linear Hawkes processes. We then show that the no-arbitrage principle and the existence of a non-trivial market impact imply that volatility is rough and, more precisely, that it follows a rough Heston model. We then examine a class of microscopic models where the order flow is a quadratic Hawkes process. The objective is to extend the rough Heston model to continuous models allowing us to reproduce the Zumbach effect. Finally, we use one of these models, the quadratic rough Heston model, for the joint calibration of the SPX and VIX volatility surfaces. Motivated by the intensive use of point processes in the first part, we are interested in the stochastic control of point processes in the second part. Our objective is to provide theoretical results for applications in finance. We start by considering the case of Hawkes process control. We prove the existence of a solution and then propose a method to apply this control in practice. We then examine the scaling limits of stochastic control problems in the context of population dynamics models. More precisely, we consider a sequence of models of discrete population dynamics which converge to a model for a continuous population. For each model we consider a control problem. We prove that the sequence of optimal controls associated with the discrete models converges to the optimal control associated with the continuous model. This result is based on the continuity, with respect to different parameters, of the solution of a backward stochastic differential equation. In the last part we consider two market design problems. First, we examine the question of the organization of a liquid derivatives market. Focusing on an options market, we propose a two-step method that can be easily applied in practice. The first step is to select the options that will be listed on the market. For this purpose, we use a quantization algorithm that allows us to select the options most in demand by investors. We then propose a pricing incentive method to encourage market makers to offer attractive prices. We formalize this problem as a principal-agent problem that we solve explicitly. Finally, we find the optimal duration of an auction for markets organized in sequential auctions, the case of zero duration corresponding to the case of a continuous double auction. We use a model where the market takers are in competition and we consider that the optimal duration is the one corresponding to the most efficient price discovery process. After proving the existence of a Nash equilibrium for the competition between market takers, we apply our results to market data. For most assets, the optimal duration is between 2 and 10 minutes.
  • Quantization and martingale couplings.

    Benjamin JOURDAIN, Gilles PAGES
    2020
    No summary available.
  • Optimal dual quantizers of 1D log-concave distributions: uniqueness and Lloyd like algorithm.

    Benjamin JOURDAIN, Gilles PAGES
    2020
    We establish for dual quantization the counterpart of Kieffer's uniqueness result for compactly supported one-dimensional probability distributions having a $\log$-concave density (also called strongly unimodal): for such distributions, $L^r$-optimal dual quantizers are unique at each level $N$, the optimal grid being the unique critical point of the quantization error. An example of a non-strongly unimodal distribution for which uniqueness of critical points fails is exhibited. In the quadratic $r=2$ case, we propose an algorithm to compute the unique optimal dual quantizer. It provides a counterpart of Lloyd's Method I algorithm in a Voronoi framework. Finally, semi-closed forms of $L^r$-optimal dual quantizers are established for power distributions on compact intervals and truncated exponential distributions.
  • Recursive computation of invariant distributions of Feller processes.

    Gilles PAGES, Clement REY
    Stochastic Processes and their Applications | 2020
    No summary available.
  • Numerical methods and deep learning for stochastic control problems and partial differential equations.

    Come HURE, Huyen PHAM, Frederic ABERGEL, Gilles PAGES, Romuald ELIE, John g. m. SCHOENMAKERS, Charles albert LEHALLE, Emmanuel GOBET, Jean francois CHASSAGNEUX
    2019
    The thesis deals with numerical schemes for Markovian decision problems (MDPs), partial differential equations (PDEs), backward stochastic differential equations (BSDEs), as well as reflected backward stochastic differential equations (reflected BSDEs). The thesis is divided into three parts. The first part deals with numerical methods for solving MDPs, based on quantization and local or global regression. A market-making problem is proposed: it is solved theoretically by rewriting it as an MDP, and numerically by using the new algorithm. In a second step, a Markovian embedding method is proposed to reduce McKean-Vlasov-type problems with partial information to MDPs. This method is implemented on three different McKean-Vlasov-type problems with partial information, which are then solved numerically using methods based on regression and quantization. In the second part, new algorithms are proposed to solve MDPs in high dimension. They are based on neural networks, which have proven in practice to be the best at learning high-dimensional functions. The consistency of the proposed algorithms is proved, and they are tested on many stochastic control problems, which illustrates their performance. In the third part, we focus on methods based on neural networks to solve PDEs, BSDEs and reflected BSDEs. The convergence of the proposed algorithms is proved and they are compared to other recent algorithms from the literature on some examples, which illustrates their very good performance.
  • Precision of characterization of paper for recycling.

    Gilles PAGES, Victor REUTENAUER
    2019
    No summary available.
  • Quantization-based Bermudan option pricing in the FX world.

    Jean michel FAYOLLE, Vincent LEMAIRE, Thibaut MONTES, Gilles PAGES
    2019
    This paper proposes two numerical solutions based on Product Optimal Quantization for the pricing of Foreign Exchange (FX) linked long-term Bermudan options, e.g. Bermudan Power Reverse Dual Currency options, where we take into account stochastic domestic and foreign interest rates on top of the stochastic FX rate; hence we consider a 3-factor model. For these two numerical methods, we give an estimation of the $L^2$-error induced by such approximations and we illustrate them with market-based examples that highlight the speed of such methods.
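    The backward-induction step behind quantization-based Bermudan pricing can be illustrated on a one-factor Black-Scholes toy model (a stand-in for the paper's 3-factor FX model): project simulated paths on per-date grids, estimate transition weights empirically, and compare immediate exercise with the quantized continuation value. `kmeans_1d` replaces a true optimal marginal quantizer and all parameters are illustrative.

    ```python
    import numpy as np

    def kmeans_1d(sample, n, n_iter=25):
        """Plain 1D k-means (a sample-based Lloyd iteration), used here as a
        stand-in for an optimal marginal quantization grid."""
        grid = np.quantile(sample, (np.arange(n) + 0.5) / n)
        for _ in range(n_iter):
            idx = np.abs(sample[:, None] - grid[None, :]).argmin(axis=1)
            for j in range(n):
                cell = sample[idx == j]
                if cell.size:
                    grid[j] = cell.mean()
            grid = np.sort(grid)
        return grid

    def bermudan_put_quantized(s0, strike, r, sigma, T, n_dates,
                               n_grid=50, n_paths=50_000, rng=None):
        """Backward induction on a quantization tree for a Bermudan put under
        Black-Scholes; transition weights are estimated by projecting simulated
        paths onto the grids of two consecutive exercise dates."""
        rng = rng or np.random.default_rng(0)
        dt = T / n_dates
        z = rng.normal(size=(n_paths, n_dates))
        s = s0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1))
        grids = [kmeans_1d(s[:, t], n_grid) for t in range(n_dates)]
        proj = [np.abs(s[:, t][:, None] - grids[t][None, :]).argmin(axis=1)
                for t in range(n_dates)]
        value = np.maximum(strike - grids[-1], 0.0)          # payoff at the last date
        for t in range(n_dates - 2, -1, -1):
            trans = np.zeros((n_grid, n_grid))
            np.add.at(trans, (proj[t], proj[t + 1]), 1.0)    # empirical transition counts
            trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1e-12)
            continuation = np.exp(-r * dt) * trans @ value
            value = np.maximum(np.maximum(strike - grids[t], 0.0), continuation)
        return np.exp(-r * dt) * value[proj[0]].mean()       # discount back to time 0

    print("Bermudan put, quantization tree:",
          bermudan_put_quantized(s0=100.0, strike=100.0, r=0.05,
                                 sigma=0.2, T=1.0, n_dates=10))
    ```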
  • Uncertainty and robustness analysis for models with functional inputs and outputs.

    Mohamed EL AMRI, Clementine PRIEUR, Celine HELBERT, Herve MONOD, Julien BECT, Delphine SINOQUET, Miguel MUNOZ ZUNIGA, Gilles PAGES, Josselin GARNIER
    2019
    The objective of this thesis is to solve a problem of inversion under uncertainty for expensive-to-evaluate functions, within the framework of the parameterization of the control of a vehicle depollution system. The effect of these uncertainties is taken into account through the expectation of the quantity of interest. A difficulty lies in the fact that the uncertainty is partly due to a functional input known only through a given sample. We propose two approaches based on an approximation of the costly code by Gaussian processes and a reduction of the dimension of the functional variable by a Karhunen-Loève method. The first approach consists in applying a SUR (Stepwise Uncertainty Reduction) inversion method on the expectation of the quantity of interest. At each evaluation point in the control space, the expectation is estimated by a greedy functional quantization method that provides a discrete representation of the functional variable and an efficient sequential estimation from the given sample of the functional variable. The second approach consists in applying the SUR method directly on the quantity of interest in the joint space of the control variables and the uncertain variables. A strategy for enriching the design of experiments, dedicated to inversion under functional uncertainties and exploiting the properties of Gaussian processes, is proposed. These two approaches are compared on toy functions and are applied to an industrial case of after-treatment of exhaust gases of a vehicle. The problem is to determine the control settings of the system ensuring compliance with pollution control standards in the presence of uncertainties on the driving cycle.
  • Optimal Quantization : Limit Theorem, Clustering and Simulation of the McKean-Vlasov Equation.

    Yating LIU, Gilles PAGES, Marc HOFFMANN, Gerard BIAU, Francois BOLLEY, Jean francois CHASSAGNEUX, Clementine PRIEUR, Benjamin JOURDAIN, Harald LUSCHGY
    2019
    This thesis contains two parts. In the first part, we prove two limit theorems of optimal quantization. The first limit theorem is the characterization of the convergence under the Wasserstein distance of a sequence of probability measures by the pointwise convergence of the quantization error functions. These results are established in R^d and also in a separable Hilbert space. The second limit theorem gives the convergence rate of the optimal grids and the quantization performance for a sequence of probability measures which converge under the Wasserstein distance, in particular the empirical measure. The second part of this thesis focuses on the approximation and simulation of the McKean-Vlasov equation. We start this part by proving, by Feyel's method (see Bouleau (1988) [Section 7]), the existence and uniqueness of a strong solution of the McKean-Vlasov equation dXt = b(t, Xt, μt)dt + σ(t, Xt, μt)dBt under the condition that the coefficient functions b and σ are Lipschitz. Then, the convergence rate of the theoretical Euler scheme of the McKean-Vlasov equation is established, together with functional convex order results for the McKean-Vlasov equations with b(t,x,μ) = αx+β, α,β ∈ R. In the last chapter, the errors of the particle method, of several quantization-based schemes and of a hybrid particle-quantization scheme are analyzed. At the end, two example simulations are illustrated: the Burgers equation (Bossy and Talay (1997)) in dimension 1 and the FitzHugh-Nagumo neural network (Baladron et al. (2012)) in dimension 3.
  • Recursive computation of the invariant distributions of Feller processes: Revisited examples and new applications.

    Gilles PAGES, Clement REY
    Monte Carlo Methods and Applications | 2019
    No summary available.
  • New Weak Error bounds and expansions for Optimal Quantization.

    Vincent LEMAIRE, Thibaut MONTES, Gilles PAGES
    2019
    We propose new weak error bounds and expansions in dimension one for optimal quantization-based cubature formulas for different classes of functions, such as piecewise affine functions, Lipschitz convex functions or differentiable functions with piecewise-defined locally Lipschitz or α-Hölder derivatives. These new results rest on the local behavior of optimal quantizers, the L^r-L^s distribution mismatch problem and Zador's theorem. This new expansion supports the definition of a Richardson-Romberg extrapolation yielding a better rate of convergence for the cubature formula. An extension of this expansion is then proposed in higher dimension for the first time. We then propose a novel variance reduction method for Monte Carlo estimators, based on one-dimensional optimal quantizers.
  • Probability measure characterization by L^p-quantization error function.

    Yating LIU, Gilles PAGES
    Bernoulli | 2019
    We establish conditions to characterize probability measures by their L^p-quantization error functions in both R^d and Hilbert settings. This characterization is two-fold: static (identity of two distributions) and dynamic (convergence for the L^p-Wasserstein distance). We first propose a criterion on the quantization level N, valid for any norm on R^d and any order p, based on a geometrical approach involving the Voronoi diagram. Then, we prove that in the L^2 case on a (separable) Hilbert space, the condition on the level N can be reduced to N = 2, which is optimal. More quantization-based characterization results in dimension 1 and a discussion of the completeness of a distance defined by the quantization error function can be found at the end of this paper.
  • Convex order, quantization and monotone approximations of ARCH models.

    Benjamin JOURDAIN, Gilles PAGES
    2019
    We are interested in proposing approximations of a sequence of probability measures in the convex order by finitely supported probability measures still in the convex order. We propose to alternate transitions according to a martingale Markov kernel mapping a probability measure in the sequence to the next one and dual quantization steps. In the case of ARCH models, and in particular of the Euler scheme of a driftless Brownian diffusion, the noise has to be truncated to enable the dual quantization step. We analyze the error between the original ARCH model and its approximation with truncated noise and exhibit conditions under which the latter is dominated by the former in the convex order at the level of sample paths. Lastly, we analyze the error of the scheme combining the dual quantization steps with truncation of the noise according to primal quantization.
  • X-Valuation adjustments computations by nested simulation on graphics processing units.

    Babacar DIALLO, Stephane CREPEY, Agathe GUILLOUX, Aurelien ALFONSI, Lokmane ABBAS TURKI, Gilles PAGES
    2019
    This thesis deals with the computation of X-valuation adjustments, where X includes C for credit, F for funding, M for margin and K for capital. We study different approaches based on nested simulation and implemented on graphics processing units (GPUs). We first consider the problem, for an insurance company or a bank, of numerically computing its economic capital in the form of a value-at-risk or an expected shortfall over a given time horizon. Using a stochastic approximation approach on the value-at-risk or the expected shortfall, we establish the convergence of the resulting economic capital simulation schemes. Then, we present a nested Monte Carlo (NMC) approach for the computation of XVA. We show that the overall computation of XVAs involves five layers of dependence. The highest layers are run first and trigger nested simulations on the fly if needed to compute an element from a lower layer. Finally, we present a single-layer nested Monte Carlo (1NMC) based algorithm to simulate functionals U of a Markov process X. The main originality of the proposed method comes from the fact that it provides a recipe for simulating U_{t>=s} conditionally on X_s. The generality, the stability and the iterative character of this algorithm, even in high dimension, are its main strengths.
  • Optimal control, statistical learning and order book modelling.

    Othmane MOUNJID, Mathieu ROSENBAUM, Bruno BOUCHARD DENIZE, Charles albert LEHALLE, Gilles PAGES, Eric MOULINES, Sophie LARUELLE, Jean philippe BOUCHAUD, Olivier GUEANT, Xin GUO
    2019
    The main objective of this thesis is to understand the interactions between financial agents and the order book. We consider in the first chapter the control problem of an agent trying to take into account the available liquidity in the order book in order to optimize the placement of a unit order. Our strategy reduces the risk of adverse selection. Nevertheless, the added value of this approach is weakened in the presence of latency: predicting future price movements is of little use if agents' reaction time is slow. In the next chapter, we extend our study to a more general execution problem where agents trade non-unitary quantities in order to limit their impact on the price. In the third chapter, we build on the previous approach to solve, this time, market making problems rather than execution problems. This allows us to propose relevant strategies compatible with the typical actions of market makers. Then, we model the behavior of directional high frequency traders and institutional brokers in order to simulate a market where our three types of agents interact optimally with each other. We propose in the fourth chapter an agent model where the flow dynamics depend not only on the state of the order book but also on the market history. To do so, we use generalizations of nonlinear Hawkes processes. In this framework, we are able to compute several relevant indicators based on individual flows. In particular, it is possible to classify market makers according to their contribution to volatility. To solve the control problems raised in the first part of the thesis, we have developed numerical schemes. Such an approach is possible when the dynamics of the model are known. When the environment is unknown, stochastic iterative algorithms are usually used. In the fifth chapter, we propose a method to accelerate the convergence of such algorithms. The approaches considered in the previous chapters are suitable for liquid markets using the order book mechanism. However, this methodology is not necessarily relevant for markets governed by specific operating rules. To address this issue, we propose, first, to study the behavior of prices in the very specific case of the electricity market.
  • Nonlinear Randomized Urn Models: a Stochastic Approximation Viewpoint.

    Sophie LARUELLE, Gilles PAGES
    2018
    This paper extends the link between stochastic approximation (SA) theory and randomized urn models developed in Laruelle, Pagès (2013), and their applications to clinical trials introduced in Bai, Hu (1999, 2005) and Bai, Hu, Shen (2002). We no longer assume that the drawing rule is uniform among the balls of the urn (which contains d colors), but that it can be reinforced by a function f. This is a way to model risk aversion. Firstly, by considering that f is concave or convex and by reformulating the dynamics of the urn composition as an SA algorithm with remainder, we derive the a.s. convergence and the asymptotic normality (Central Limit Theorem, CLT) of the normalized procedure by calling upon the so-called ODE and SDE methods. An in-depth analysis of the case d=2 exhibits two different behaviors: a single equilibrium point when f is concave, and, when f is convex, a transition phase from a single attracting equilibrium to a system with two attracting and one repulsive equilibrium points. The latter setting is solved using results on non-convergence toward noisy and noiseless "traps" in order to deduce the a.s. convergence toward one of the attracting points. Secondly, the special case of a Polya urn (when the addition rule is the identity matrix) is analyzed, still using results from SA theory about "traps". Finally, these results are applied to a function with regular variation and to an optimal asset allocation in finance.
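    A quick simulation conveys the dichotomy between concave and convex reinforcement: below, colour i is drawn with probability f(p_i)/(f(p_1)+f(p_2)) where p_i is the current proportion of colour i, and one ball of the drawn colour is added (identity addition rule). The drawing rules f are arbitrary toy choices and this is only one simple reading of the model; the precise equilibrium structure depends on f, as analyzed in the paper.

    ```python
    import numpy as np

    def reinforced_urn(f, n_draws, init=(1.0, 1.0), rng=None):
        """Two-colour urn: colour i is drawn with probability f(p_i)/(f(p_1)+f(p_2)),
        p_i being the current proportion of colour i; one ball of the drawn colour
        is then added (identity addition rule, as in a Polya urn)."""
        rng = rng or np.random.default_rng(0)
        counts = np.array(init, dtype=float)
        for _ in range(n_draws):
            p = counts / counts.sum()
            w0 = f(p[0]) / (f(p[0]) + f(p[1]))
            counts[0 if rng.random() < w0 else 1] += 1.0
        return counts[0] / counts.sum()           # final proportion of colour 1

    rng = np.random.default_rng(5)
    concave = lambda u: np.sqrt(u)                # "risk-averse" reinforcement
    convex = lambda u: u ** 3                     # strongly reinforcing rule
    print("concave f:", [round(reinforced_urn(concave, 20_000, rng=rng), 3) for _ in range(5)])
    print("convex  f:", [round(reinforced_urn(convex, 20_000, rng=rng), 3) for _ in range(5)])
    ```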
  • Numerical methods for Stochastic differential equations: two examples.

    Paul eric chaudru DE RAYNAL, Gilles PAGES, Clement REY
    ESAIM: Proceedings and Surveys | 2018
    No summary available.
  • Seminar of Probability XLIX.

    Emmanuel BOISSARD, Patrick CATTIAUX, Arnaud GUILLIN, Laurent MICLO, Florian BOUGUET, J. BROSSARD, C. LEURIDAN, Mireille CAPITAINE, Nicolas CHAMPAGNAT, Kolehe abdoulaye COULIBALY PASQUIER, Denis VILLEMONAIS, Henri elad ALTMAN, Peter KRATZ, Etienne PARDOUX, Antoine LEJAY, Paul MCGILL, Gilles PAGES, Benedikt WILBERTZ, Pierre PETIT, B. RAJEEV, Laurent SERLET, Hiroshi TSUKADA
    Lecture Notes in Mathematics | 2018
    No summary available.
  • Biased Monte Carlo Simulation, Multilevel Paradigm.

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • Optimal Quantization Methods I: Cubatures.

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • Improved error bounds for quantization based numerical schemes for BSDE and nonlinear filtering.

    Gilles PAGES, Abass SAGNA
    Stochastic Processes and their Applications | 2018
    We take advantage of recent (see [GraLusPag1, PagWil]) and new results on optimal quantization theory to improve the quadratic optimal quantization error bounds for backward stochastic differential equations (BSDE) and nonlinear filtering problems. For both problems, a first improvement relies on a Pythagoras-like theorem for quantized conditional expectation. While allowing for locally Lipschitz conditional densities in nonlinear filtering, the analysis of the error brings into play a new robustness result about optimal quantizers, the so-called distortion mismatch property: $L^r$-quadratic optimal quantizers of size $N$ behave in $L^s$ in terms of mean error at the same rate $N^{-\frac 1d}$.
  • The Diffusion Bridge Method: Application to Path-Dependent Options (II).

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • Variance Reduction.

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • Numerical Probability.

    Gilles PAGES
    Universitext | 2018
    No summary available.
  • Quantitative Finance under rough volatility.

    Omar EL EUCH, Mathieu ROSENBAUM, Jean JACOD, Bruno BOUCHARD DENIZE, Jean philippe BOUCHAUD, Gilles PAGES, Peter TANKOV, Nizar TOUZI, Josef TEICHMANN, Walter SCHACHERMAYER
    2018
    This thesis aims at understanding several aspects of the roughness of volatility observed universally on financial assets. This is done in six steps. In the first part, we explain this property from the typical behaviors of agents in the market. More precisely, we build a microscopic price model based on Hawkes processes reproducing the important stylized facts of the market microstructure. By studying the long-run price behavior, we show the emergence of a rough version of the Heston model (called rough Heston model) with leverage. Using this original link between Hawkes processes and Heston models, we compute in the second part of this thesis the characteristic function of the log-price of the rough Heston model. This characteristic function is given in terms of a solution of a Riccati equation in the case of the classical Heston model. We show the validity of a similar formula in the case of the rough Heston model, where the Riccati equation is replaced by its fractional version. This formula allows us to overcome the technical difficulties due to the non-Markovian character of the model in order to value derivatives. In the third part, we address the issue of risk management of derivatives in the rough Heston model. We present hedging strategies using the underlying asset and the forward variance curve as instruments. This is done by specifying the infinite-dimensional Markovian structure of the model. Being able to value and hedge derivatives in the rough Heston model, we confront this model with the reality of financial markets in the fourth part. More precisely, we show that it reproduces the behavior of implied and historical volatility. We also show that it generates the Zumbach effect, which is a time-reversal asymmetry observed empirically on financial data. In the fifth part, we study the limiting behavior of the implied volatility at short maturity in the framework of a general stochastic volatility model (including the rough Bergomi model), by applying an expansion of the density of the asset price. While the approximation based on Hawkes processes has addressed several questions related to the rough Heston model, in Part 6 we consider a Markovian approximation applying to a more general class of rough volatility models. Using this approximation in the particular case of the rough Heston model, we obtain a numerical method for solving the fractional Riccati equations. Finally, we conclude this thesis by studying a problem not related to the rough volatility literature. We consider the case of a platform seeking the best make-take fee scheme to attract liquidity. Using the principal-agent framework, we describe the best contract to offer to the market maker as well as the optimal quotes displayed by the latter. We also show that this policy leads to better liquidity and lower transaction costs for investors.
  • Weighted multilevel Langevin simulation of invariant measures.

    Gilles PAGES, Fabien PANLOUP
    The Annals of Applied Probability | 2018
    We investigate a weighted Multilevel Richardson-Romberg extrapolation for the ergodic approximation of invariant distributions of diffusions, adapted from the one introduced in [Lemaire-Pagès, 2013] for regular Monte Carlo simulation. In a first result, we prove under weak confluence assumptions on the diffusion that, for any integer $R\ge2$, the procedure allows us to attain a rate $n^{\frac{R}{2R+1}}$ whereas the original algorithm converges at the weaker rate $n^{1/3}$. Furthermore, this is achieved without any explosion of the asymptotic variance. In a second part, under stronger confluence assumptions and with the help of some second order expansions of the asymptotic error, we go deeper in the study by optimizing the choice of the parameters involved in the method. In particular, for a given $\varepsilon>0$, we exhibit some semi-explicit parameters for which the number of iterations of the Euler scheme required to attain a Mean-Squared Error lower than $\varepsilon^2$ is about $\varepsilon^{-2}\log(\varepsilon^{-1})$. Finally, we numerically test this Multilevel Langevin estimator on several examples, including the simple one-dimensional Ornstein-Uhlenbeck process but also a high dimensional diffusion motivated by a statistical problem. These examples confirm the theoretical efficiency of the method.
  • Stochastic Approximation with Applications to Finance.

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • Discretization Scheme(s) of a Brownian Diffusion.

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • Sharp Rate for the Dual Quantization Problem.

    Gilles PAGES, Benedikt WILBERTZ
    Lecture Notes in Mathematics | 2018
    In this paper we establish the sharp rate of the optimal dual quantization problem. The notion of dual quantization was recently introduced in the paper [8], where it was shown that, at least in a Euclidean setting, dual quantizers are based on a Delaunay triangulation, the dual counterpart of the Voronoi tessellation on which "regular" quantization relies. Moreover, this new approach shares an intrinsic stationarity property, which makes it very valuable for numerical applications. We establish in this paper the counterpart for dual quantization of the celebrated Zador theorem, which describes the sharp asymptotics for the quantization error when the quantizer size tends to infinity. The proof of this theorem relies, among other arguments, on an extension of the so-called Pierce Lemma by means of a random quantization argument.
  • A general weak and strong error analysis of the recursive quantization with an application to jump diffusions.

    Gilles PAGES, Abass SAGNA
    2018
    Observing that the recent developments of the recursive (product) quantization method induce a family of Markov chains which includes all standard discretization schemes of diffusion processes, we propose to compute a general error bound induced by the recursive quantization schemes using this generic Markovian structure. Furthermore, we compute a marginal weak error for the recursive quantization. We also extend the recursive quantization method to the Euler scheme associated with diffusion processes with jumps, which still has this Markovian structure, and we explain how to compute the recursive quantization and the associated weights and transition weights.
  • Product Markovian Quantization of a Diffusion Process with Applications to Finance.

    Lucio FIORIN, Gilles PAGES, Abass SAGNA
    Methodology and Computing in Applied Probability | 2018
    No summary available.
  • Optimal Stopping, Multi-asset American/Bermudan Options.

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • Simulation of Random Variables.

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • Back to Sensitivity Computation.

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • The Monte Carlo Method and Applications to Option Pricing.

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • The Quasi-Monte Carlo Method.

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • Miscellany.

    Gilles PAGES
    Numerical Probability | 2018
    No summary available.
  • Non-Asymptotic Gaussian Estimates for the Recursive Approximation of the Invariant Measure of a Diffusion.

    Igor HONORE, Stephane MENOZZI, Gilles PAGES
    2018
    We obtain non-asymptotic Gaussian concentration bounds for the difference between the invariant measure ν of an ergodic Brownian diffusion process and the empirical distribution of an approximating scheme with decreasing time step along a suitable class of (smooth enough) test functions f such that f − ν(f) is a coboundary of the infinitesimal generator. We show that these bounds can still be improved when the (squared) Frobenius norm of the diffusion coefficient lies in this class. We apply these bounds to design computable non-asymptotic confidence intervals for the approximating scheme. As a theoretical application, we finally derive non-asymptotic deviation bounds for the almost sure Central Limit Theorem.
  • Supplement to "Weighted Multilevel Langevin Simulation of Invariant Measures".

    Gilles PAGES, Fabien PANLOUP
    2018
    No summary available.
  • Weak error for nested Multilevel Monte Carlo.

    Daphne GIORGI, Vincent LEMAIRE, Gilles PAGES
    2018
    This article discusses MLMC estimators with and without weights, applied to nested expectations of the form E[f(E[F(Y,Z)|Y])]. More precisely, we are interested in the assumptions needed to comply with the MLMC framework, depending on whether the payoff function f is smooth or not. A result which is, to our knowledge, new is given when f is not smooth: an expansion of the weak error at an order higher than 1, which is needed for a successful use of MLMC estimators with weights.
  • Discretization of processes with stopping times and uncertainty quantification for stochastic algorithms.

    Uladzislau STAZHYNSKI, Emmanuel GOBET, Gilles PAGES, Mathieu ROSENBAUM, Josselin GARNIER, Gersende FORT, Fabien PANLOUP, Philip e. PROTTER
    2018
    This thesis contains two parts that study two different topics. Chapters 1-4 are devoted to problems of discretization of processes with stopping times. In Chapter 1 we study the optimal discretization error for stochastic integrals with respect to a continuous multidimensional Brownian semimartingale. In this framework we establish a pathwise lower bound for the renormalized quadratic variation of the error. We provide a sequence of stopping times that gives an asymptotically optimal discretization. This sequence is defined as the exit times of random ellipsoids by the semimartingale. Compared to the previous results we allow a rather large class of semimartingales. We prove that the lower bound is exact. In Chapter 2 we study the adaptive version of the model of the optimal discretization of stochastic integrals. In Chapter 1 the construction of the optimal strategy uses the knowledge of the diffusion coefficient of the considered semimartingale. In this work we establish an asymptotically optimal discretization strategy that is adaptive to the model and does not use any information about the model. We prove the optimality for a rather general class of discretization grids based on kernel techniques for adaptive estimation. In Chapter 3 we study the convergence in law of the renormalized discretization errors of Itô processes for a concrete and rather general class of discretization grids given by stopping times. Previous works on the subject consider only the case of dimension 1. Moreover they concentrate on particular cases of grids, or prove results under abstract assumptions. In our work the limit distribution is given explicitly in a clear and simple form, and the results are shown in the multidimensional case both for the process and for the discretization error. In Chapter 4 we study the parametric estimation problem for diffusion processes based on observations at discrete times. Previous works on the subject consider deterministic, strongly predictable or random observation times independent of the process. Under weak assumptions, we construct a sequence of consistent estimators for a large class of observation grids given by stopping times. An asymptotic analysis of the estimation error is performed. Furthermore, for the parameter of dimension 1, for any sequence of estimators that satisfies an unbiased CLT, we prove a uniform lower bound for the asymptotic variance. We show that this bound is exact. Chapters 5-6 are devoted to the uncertainty quantification problem for stochastic approximation limits. In Chapter 5 we analyze the uncertainty quantification for stochastic approximation (SA) limits. In our framework the limit is defined as a zero of a function given by an expectation. This expectation is taken with respect to a random variable for which the model is supposed to depend on an uncertain parameter. We consider the limit of SA as a function of this parameter. We introduce an algorithm called USA (Uncertainty for SA). It is a procedure in increasing dimension to compute the coefficients of the chaos expansion of this function on a basis of a well-chosen Hilbert space. The convergence of USA in this Hilbert space is proved. In Chapter 6 we analyze the convergence rate in L2 of the USA algorithm developed in Chapter 5. The analysis is non-trivial because of the infinite dimension of the procedure. The rate obtained depends on the model and the parameters used in the USA algorithm. Its knowledge allows one to optimize the rate of growth of the dimension in USA.
  • Recursive computation of the invariant distribution of Markov and Feller processes.

    Gilles PAGES, Clement REY
    2017
    This paper provides a general and abstract approach to approximate ergodic regimes of Markov and Feller processes. More precisely, we show that the recursive algorithm presented by Lamberton and Pagès in 2002, based on the simulation of stochastic schemes with decreasing step, can be used to build invariant measures for general Markov and Feller processes. We also propose applications in three different configurations: approximation of the ergodic regimes of Markov switching Brownian diffusions using the Euler scheme, approximation of the ergodic regimes of Markov Brownian diffusions with the Milstein scheme, and approximation of the ergodic regimes of general diffusions with jump components.
  • Multilevel Richardson–Romberg extrapolation.

    Vincent LEMAIRE, Gilles PAGES
    Bernoulli | 2017
    We propose and analyze a Multilevel Richardson-Romberg ($MLRR$) estimator which combines the higher order bias cancellation of the Multistep Richardson-Romberg ($MSRR$) method introduced in [Pages 07] and the variance control resulting from the stratification in the Multilevel Monte Carlo ($MLMC$) method (see [Heinrich, 01] and [Giles, 08]). Thus we show that in standard frameworks like discretization schemes of diffusion processes an assigned quadratic error $\varepsilon$ can be obtained with our ($MLRR$) estimator with a global complexity of $\log(1/\varepsilon)/\varepsilon^2$ instead of $(\log(1/\varepsilon))^2/\varepsilon^2$ with the standard ($MLMC$) method, at least when the weak error $E[Y_h]-E[Y_0]$ of the biased implemented estimator $Y_h$ can be expanded at any order in $h$. We analyze and compare these estimators on two numerical problems: the classical vanilla and exotic option pricing by Monte Carlo simulation and the less classical Nested Monte Carlo simulation.
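
    For orientation, here is a sketch of the plain (unweighted) multilevel Monte Carlo estimator that the MLRR estimator refines with Richardson-Romberg weights; the geometric Brownian motion, the payoff and the crude sample allocation are illustrative assumptions and no weight or level optimization is performed.

      import numpy as np

      rng = np.random.default_rng(2)

      # Toy target: E[(S_T - K)^+] for a GBM dS = r S dt + sig S dW, approximated by
      # Euler schemes with 2^l steps and the telescopic multilevel decomposition.
      S0, r, sig, T, K = 100.0, 0.05, 0.2, 1.0, 100.0
      payoff = lambda s: np.maximum(s - K, 0.0)

      def level(l, M):
          """Monte Carlo mean of P_l - P_{l-1} over M coupled paths (P_{-1} := 0)."""
          nf, hf = 2 ** l, T / 2 ** l
          dW = rng.normal(0.0, np.sqrt(hf), size=(M, nf))
          Sf = np.full(M, S0)
          for i in range(nf):            # fine Euler path, step hf
              Sf += r * Sf * hf + sig * Sf * dW[:, i]
          if l == 0:
              return payoff(Sf).mean()
          Sc, hc = np.full(M, S0), 2 * hf
          for i in range(nf // 2):       # coarse path driven by the same Brownian increments
              Sc += r * Sc * hc + sig * Sc * (dW[:, 2 * i] + dW[:, 2 * i + 1])
          return (payoff(Sf) - payoff(Sc)).mean()

      L = 6
      estimate = sum(level(l, 200_000 // 2 ** l) for l in range(L + 1))
      print(estimate)                    # close to the Black-Scholes price, about 10.45
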
  • Limit theorems for weighted and regular Multilevel estimators.

    Daphne GIORGI, Vincent LEMAIRE, Gilles PAGES
    Monte Carlo Methods and Applications | 2017
    We aim at analyzing in terms of a.s. convergence and weak rate the performances of the Multilevel Monte Carlo estimator (MLMC) introduced in [Gil08] and of its weighted version, the Multilevel Richardson Romberg estimator (ML2R), introduced in [LP14]. These two estimators permit to compute a very accurate approximation of $I_0 = \mathbb{E}[Y_0]$ by a Monte Carlo type estimator when the (non-degenerate) random variable $Y_0 \in L^2(\mathbb{P})$ cannot be simulated (exactly) at a reasonable computational cost whereas a family of simulatable approximations $(Y_h)_{h \in \mathcal{H}}$ is available. We will carry out these investigations in an abstract framework before applying our results, mainly a Strong Law of Large Numbers and a Central Limit Theorem, to some typical fields of applications: discretization schemes of diffusions and nested Monte Carlo.
  • Product Markovian quantization of an R^d -valued Euler scheme of a diffusion process with applications to finance.

    Fiorin LUCIO, Gilles PAGES, Abass SAGNA
    2017
    We introduce a new approach to quantize the Euler scheme of an $\mathbb R^d$-valued diffusion process. This method is based on a Markovian and componentwise product quantization and allows us, from a numerical point of view, to speak of fast online quantization in dimension greater than one, since the product quantization of the Euler scheme of the diffusion process and its companion weights and transition probabilities can be computed almost instantaneously. We show that the resulting quantization process is a Markov chain, then we compute the associated companion weights and transition probabilities from (semi-)closed formulas. From the analytical point of view, we show that the induced quantization error at the $k$-th discretization step $t_k$ is a cumulated sum of the marginal quantization errors up to time $t_k$. Numerical experiments are performed for the pricing of a basket call option, for the pricing of a European call option in a Heston model and for the approximation of the solution of backward stochastic differential equations, in order to show the performance of the method.
  • Some quick algorithms for quantitative finance.

    Guillaume SALL, Gilles PAGES, Olivier PIRONNEAU, Julien BERESTYCKI, Youssef ALLAOUI, Mike GILES, Denis TALAY
    2017
    In this thesis, we focus on critical components of counterparty risk computation: the fast valuation of derivatives and of their sensitivities. We propose several mathematical and computational methods to address this problem and contribute in four different areas. The first two are an extension of the Vibrato method and the application of multilevel Monte Carlo methods to the computation of high-order Greeks (n > 1) with an automatic differentiation technique. The third contribution concerns the valuation of American products: here we use a parareal scheme to accelerate the valuation process, with an additional application to the solution of a backward stochastic differential equation. The fourth contribution is the design of a high-performance computing engine with a parallel architecture.
  • Limit theorems for Multilevel estimators with and without weights. Comparisons and applications.

    Daphne GIORGI, Gilles PAGES, Nicole EL KAROUI, Ahmed KEBAIER, Vincent LEMAIRE, Mike GILES, Benjamin JOURDAIN
    2017
    In this work, we are interested in Multilevel Monte Carlo estimators. These estimators will appear in their standard form, with weights and in a randomized form. We will recall their definitions and the existing results concerning these estimators in terms of simulation cost minimization. We will then show a strong law of large numbers and a central limit theorem. After that we will study two application frameworks. The first one is that of diffusions with antithetic discretization schemes, where we will extend the Multilevel estimators to Multilevel estimators with weights. The second is the nested framework, where we will focus on strong and weak error assumptions. We will conclude with the implementation of the randomized form of Multilevel estimators, comparing it to Multilevel estimators with and without weights.
  • New paradigms in heterogeneous population dynamics: trajectory modeling, aggregation, and empirical data.

    Sarah KAAKAI, Nicole EL KAROUI, Gilles PAGES, Ana maria DEBON AUCEJO, Romuald ELIE, Stephane LOISEL, Sylvie MELEARD, Etienne PARDOUX
    2017
    This thesis deals with the probabilistic modeling of the heterogeneity of human populations and its impact on longevity. In recent years, numerous studies have shown an alarming increase in geographic and socioeconomic mortality inequalities. This paradigm shift poses problems that traditional demographic models cannot solve, and whose formalization requires a fine observation of data in a multidisciplinary context. With population dynamics models as a guideline, this thesis proposes to illustrate this complexity from different points of view: The first one proposes to show the link between heterogeneity and nonlinearity in the presence of changes in population composition. The process called Birth Death Swap is defined by an equation directed by a Poisson measure using a trajectory comparison result. When swaps are faster than demographic events, an averaging result is established by stable convergence and comparison. In particular, the aggregate population tends towards non-linear dynamics. We then study empirically the impact of heterogeneity on aggregate mortality, using data from the English population structured by age and socioeconomic circumstances. We show through numerical simulations how heterogeneity can compensate for the reduction of a cause of mortality. The last point of view is an interdisciplinary review on the determinants of longevity, accompanied by a reflection on the evolution of the tools to analyze it and the new modeling challenges in the face of this paradigm shift.
  • Vibrato and Automatic Differentiation for High Order Derivatives and Sensitivities of Financial Options.

    Gilles PAGES, Olivier PIRONNEAU, Guillaume SALL
    Journal of Computational Finance | 2017
    This paper deals with the computation of second- or higher-order Greeks of financial securities. It combines two methods, Vibrato and automatic differentiation, and compares them with other methods. We show that this combined technique is faster than standard finite differences, more stable than automatic differentiation of second-order derivatives and more general than Malliavin calculus. We present a generic framework to compute any Greek and present several applications to different types of financial contracts: European and American options, multidimensional basket calls and stochastic volatility models such as Heston's model. We also give an algorithm to compute derivatives for the Longstaff-Schwartz Monte Carlo method for American options, and we extend automatic differentiation to second-order derivatives of options with non-twice-differentiable payoffs.
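
    Vibrato differentiates a combination of the two classical first-order estimators; the sketch below only contrasts these two building blocks, the pathwise and likelihood-ratio estimators of a Black-Scholes delta with illustrative parameters, and does not reproduce the paper's second-order construction.

      import numpy as np

      rng = np.random.default_rng(3)

      S0, K, r, sig, T, M = 100.0, 100.0, 0.05, 0.2, 1.0, 1_000_000
      Z = rng.normal(size=M)
      ST = S0 * np.exp((r - 0.5 * sig ** 2) * T + sig * np.sqrt(T) * Z)
      disc = np.exp(-r * T)

      # Pathwise estimator of delta: differentiate the discounted payoff along each path.
      delta_pw = disc * np.mean((ST > K) * ST / S0)

      # Likelihood-ratio estimator: payoff times the score of the lognormal density w.r.t. S0.
      delta_lr = disc * np.mean(np.maximum(ST - K, 0.0) * Z / (S0 * sig * np.sqrt(T)))

      print(delta_pw, delta_lr)          # both close to the Black-Scholes delta, about 0.637
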
  • Stochastic algorithms for risk management and indexing of media databases.

    Victor REUTENAUER, Denis TALAY, Gilles PAGES, Nicole EL KAROUI, Denis TALAY, Gilles PAGES, Nicole EL KAROUI, Jean francois CHASSAGNEUX, Benjamin JOURDAIN, Emmanuel GOBET, Jean francois CHASSAGNEUX, Benjamin JOURDAIN
    2017
    This thesis deals with various control and optimization problems for which only approximate solutions exist to date. On the one hand, we are interested in techniques to reduce or eliminate approximations in order to obtain more precise or even exact solutions. On the other hand, we develop new approximation methods to deal more quickly with larger scale problems. We study numerical methods for simulating stochastic differential equations and for improving expectation calculations. We implement quantization-type techniques for the construction of control variables and the stochastic gradient method for solving stochastic control problems. We are also interested in clustering methods related to quantization, as well as in information compression by neural networks. The problems studied are not only motivated by financial issues, such as stochastic control for option hedging in incomplete markets, but also by the processing of large media databases commonly referred to as Big data in Chapter 5. Theoretically, we propose different majorizations of the convergence of numerical methods on the one hand for the search of an optimal hedging strategy in incomplete market in chapter 3, on the other hand for the extension of the Beskos-Roberts technique of differential equation simulation in chapter 4. We present an original use of the Karhunen-Loève decomposition for a variance reduction of the expectation estimator in chapter 2.
  • Seminar of Probability XLVIII.

    Mathias BEIGLBOCK, Martin HUESMANN, Florian STEBEGG, Nicolas JUILLET, Gilles PAGES, Dai TAGUCHI, Alexis DEVULDER, Matyas BARCZY, Peter KERN, Ismael BAILLEUL, Jurgen ANGST, Camille TARDIF, Nicolas PRIVAULT, Anita BEHME, Alexander LINDNER, Makoto MAEJIMA, Cedric LECOUVEY, Kilian RASCHEL, Christophe PROFETA, Thomas SIMON, Oleskiy KHORUNZHIY, Songzi LI, Franck MAUNOURY, Stephane LAURENT, Anna AKSAMIT, Libo LI, David APPLEBAUM, Wendelin WERNER
    Lecture Notes in Mathematics | 2016
    In addition to its further exploration of the subject of peacocks, introduced in recent Séminaires de Probabilités, this volume continues the series’ focus on current research themes in traditional topics such as stochastic calculus, filtrations and random matrices. Also included are some particularly interesting articles involving harmonic measures, random fields and loop soups. The featured contributors are Mathias Beiglböck, Martin Huesmann and Florian Stebegg, Nicolas Juillet, Gilles Pagès, Dai Taguchi, Alexis Devulder, Mátyás Barczy and Peter Kern, I. Bailleul, Jürgen Angst and Camille Tardif, Nicolas Privault, Anita Behme, Alexander Lindner and Makoto Maejima, Cédric Lecouvey and Kilian Raschel, Christophe Profeta and Thomas Simon, O. Khorunzhiy and Songzi Li, Franck Maunoury, Stéphane Laurent, Anna Aksamit and Libo Li, David Applebaum, and Wendelin Werner.
  • The Parareal Algorithm for American Options.

    Gilles PAGES, Olivier PIRONNEAU, Guillaume SALL
    Comptes Rendus Mathématique | 2016
    This note contains a description of the parareal method, a numerical section assessing the performance of the method for American contracts in the scalar case, computed by LSMC and parallelized by parareal time decomposition with two or more levels, and a convergence proof for the two-level parareal Monte Carlo method when the coarse-grid solution is computed by an explicit Euler scheme with time step ∆t > δt, the time step used for the Euler scheme at the fine-grid level. Hence the theorem also provides a tool to analyze the multilevel parareal method.
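
    A toy parareal iteration for a scalar linear ODE, with explicit Euler both as the coarse propagator (step ∆t) and as the fine propagator (step δt = ∆t/m); the equation and parameters are illustrative and the LSMC/American-option setting of the note is not reproduced.

      import numpy as np

      # dy/dt = lam * y, y(0) = 1 on [0, T]; the fine solver is the reference here.
      lam, T, y0 = -1.0, 2.0, 1.0
      N = 10                             # number of coarse time slices
      dt = T / N                         # coarse step (one Euler step per slice)
      m = 20                             # fine sub-steps per slice, delta_t = dt / m

      def G(y):                          # coarse propagator: one explicit Euler step
          return y + dt * lam * y

      def F(y):                          # fine propagator: m explicit Euler steps
          h = dt / m
          for _ in range(m):
              y = y + h * lam * y
          return y

      U = np.empty(N + 1); U[0] = y0     # initial coarse sweep
      for n in range(N):
          U[n + 1] = G(U[n])

      for k in range(4):                 # parareal corrections
          Fk = [F(U[n]) for n in range(N)]   # fine solves, parallel across slices
          Gk = [G(U[n]) for n in range(N)]
          Unew = np.empty(N + 1); Unew[0] = y0
          for n in range(N):
              # U^{k+1}_{n+1} = G(U^{k+1}_n) + F(U^k_n) - G(U^k_n)
              Unew[n + 1] = G(Unew[n]) + Fk[n] - Gk[n]
          U = Unew

      print(U[-1], np.exp(lam * T))      # parareal endpoint vs the exact value exp(-2)
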
  • Ninomiya-Victoir scheme : strong convergence, asymptotics for the normalized error and multilevel Monte Carlo methods.

    Anis AL GERBI, Benjamin JOURDAIN, Emmanuelle CLEMENT, Gilles PAGES, Benjamin JOURDAIN, Emmanuelle CLEMENT, Pierre HENRY LABORDERE, Ahmed KEBAIER, Antoine LEJAY, Fabien PANLOUP
    2016
    This thesis is devoted to the study of the strong convergence properties of the Ninomiya and Victoir scheme. The authors of this scheme propose to approximate the solution of a stochastic differential equation (SDE), denoted $X$, by solving $d+1$ ordinary differential equations (ODEs) on each time step, where $d$ is the dimension of the Brownian motion. The aim of this study is to analyze the use of this scheme in a multilevel Monte Carlo method. Indeed, the optimal complexity of this method is driven by the order of convergence towards $0$ of the variance between the schemes used on the coarse and on the fine grid, and this order of convergence is itself related to the strong order of convergence between the two schemes. We show in Chapter $2$ that the strong order of the Ninomiya-Victoir scheme, denoted $X^{NV,\eta}$ and of time step $T/N$, is $1/2$. Recently, Giles and Szpruch proposed a multilevel Monte Carlo estimator achieving $O\left(\epsilon^{-2}\right)$ complexity using a modified Milstein scheme. In the same spirit, we propose a modified Ninomiya-Victoir scheme that can be coupled at strong order $1$ with the Giles and Szpruch scheme at the last level of a multilevel Monte Carlo method. This idea is inspired by Debrabant and Rössler, who suggest using a scheme with a high order of weak convergence at the finest discretization level. Since the optimal number of discretization levels of a multilevel Monte Carlo method is driven by the weak error of the scheme used on the fine grid of the last discretization level, this technique accelerates the convergence of the multilevel Monte Carlo method by providing an approximation of high weak order. The order-$1$ coupling with the Giles-Szpruch scheme allows us to keep a multilevel Monte Carlo estimator achieving the optimal complexity $O\left(\epsilon^{-2}\right)$ while taking advantage of the weak order $2$ of the Ninomiya-Victoir scheme. In the third chapter, we are interested in the renormalized error defined by $\sqrt{N}\left(X - X^{NV,\eta}\right)$. We show its convergence in stable law to the solution of an affine SDE whose source term is formed by the Lie brackets between the Brownian vector fields. Thus, when at least two Brownian vector fields do not commute, the limit is non-trivial, which ensures that the strong order $1/2$ is optimal. On the other hand, this result can be seen as a first step towards proving a central limit theorem for multilevel Monte Carlo estimators. To do so, one needs to analyze the stable convergence in law of the error of the scheme between two successive discretization levels. Ben Alaya and Kebaier proved such a result for the Euler scheme. When the Brownian vector fields commute, the limit process is zero, and we show that in this particular case the strong order is $1$. In Chapter 4, we study the convergence in stable law of the renormalized error $N\left(X - X^{NV}\right)$, where $X^{NV}$ is the Ninomiya-Victoir scheme when the Brownian vector fields commute. We prove the convergence of the renormalized error process to the solution of an affine SDE. When the drift vector field does not commute with at least one of the Brownian vector fields, the strong convergence rate obtained previously is optimal.
  • Vibrato and automatic differentiation for high order derivatives and sensitivities of financial options.

    Gilles PAGES, Olivier PIRONNEAU, Guillaume SALL
    2016
    This paper deals with the computation of second- or higher-order Greeks of financial securities. It combines two methods, Vibrato and automatic differentiation, and compares them with other methods. We show that this combined technique is faster than standard finite differences, more stable than automatic differentiation of second-order derivatives and more general than Malliavin calculus. We present a generic framework to compute any Greek and present several applications to different types of financial contracts: European and American options, multidimensional basket calls and stochastic volatility models such as Heston's model. We also give an algorithm to compute derivatives for the Longstaff-Schwartz Monte Carlo method for American options, and we extend automatic differentiation to second-order derivatives of options with non-twice-differentiable payoffs. 1. Introduction. Due to BASEL III regulations, banks are requested to evaluate the sensitivities of their portfolios every day (risk assessment). Some of these portfolios are huge and their sensitivities are time-consuming to compute accurately. Faced with the problem of building software for this task, and distrusting automatic differentiation for non-differentiable functions, we turned to an idea developed by Mike Giles called Vibrato. Vibrato is at its core a differentiation of a combination of the likelihood ratio method and pathwise evaluation. In Giles [12], [13], it is shown that the computing time, stability and precision are enhanced compared with numerical differentiation of the full Monte Carlo path. In many cases, double sensitivities, i.e. second derivatives with respect to parameters, are needed (e.g. for gamma hedging). Finite difference approximation of sensitivities is a very simple method but its precision is hard to control because it relies on an appropriate choice of the increment. Automatic differentiation of computer programs bypasses this difficulty and its computing cost is similar to finite differences, if not cheaper. But in finance the payoff is never twice differentiable, so generalized derivatives have to be used, requiring approximations of Dirac functions whose precision is also doubtful. The purpose of this paper is to investigate the feasibility of Vibrato for second and higher derivatives. We first compare Vibrato applied twice with the analytic differentiation of Vibrato and show that they are equivalent. As the second is easier, we propose the best compromise for second derivatives: automatic differentiation of Vibrato. In [8], Capriotti has recently investigated the coupling of different mathematical methods – namely pathwise and likelihood ratio methods – with automatic differentiation.
  • Sovereign risk modeling and applications.

    Jean francois, shanqiu LI, Jiao YING, Huyen PHAM, Gilles PAGES, Caroline HILLAIRET, Monique JEANBLANC, Idris KHARROUBI, Stephane CREPEY
    2016
    This thesis deals with the mathematical modeling of sovereign risk and its applications. In the first chapter, motivated by the Eurozone sovereign debt crisis, we propose a model of sovereign default risk. This model takes into account both the movement of sovereign creditworthiness and the impact of critical political events, and adds an idiosyncratic credit risk. We focus on the probabilities that default occurs at the dates of critical political events, for which we obtain analytical formulas in a Markovian framework, where we carefully deal with some unusual features, among them the CEV model when the elasticity parameter is β > 1. We explicitly determine the compensator process of the default and show that the intensity process does not exist, which contrasts our model with classical approaches. In the second chapter, by examining some hybrid models from the literature, we consider a class of random times with discontinuous conditional distributions for which the classical assumptions of enlargement of filtrations are not satisfied. We extend the density approach to a more general setting, where Jacod's hypothesis is relaxed, in order to deal with such random times in the framework of progressive enlargement of filtrations. We also study classical problems: the computation of the compensator, the decomposition of the Azéma supermartingale, and the characterization of martingales. The decomposition of martingales and semimartingales in the enlarged filtration shows that the H' hypothesis remains valid in this generalized setting. In the third chapter, we present applications of the models proposed in the previous chapters. The most important application of the sovereign default model and of the generalized density approach is the valuation of securities subject to default risk. The results explain the large negative jumps in the actuarial yield of the Greek long-term bond during the sovereign debt crisis. Greece's creditworthiness tends to worsen over the years and the bond yield has negative jumps at critical political events. In particular, the size of a jump depends on the severity of an exogenous shock, the time elapsed since the last political event, and the value of the recovery. The generalized density approach also makes it possible to model simultaneous defaults which, although rare, have a severe impact on the market.
  • CVaR hedging using quantization based stochastic approximation algorithm.

    G. PAGES, O. BARDOU, N. FRIKHA
    Mathematical Finance | 2016
    No summary available.
  • Recursive Marginal Quantization of the Euler Scheme of a Diffusion Process.

    G. PAGES, A. SAGNA
    Applied Mathematical Finance | 2015
    No summary available.
  • Functional quantization-based stratified sampling methods.

    Sylvain CORLAY, Gilles PAGES
    Monte Carlo Methods and Applications | 2015
    In this article, we propose several quantization-based stratified sampling methods to reduce the variance of a Monte Carlo simulation. Theoretical aspects of stratification lead to a strong link between optimal quadratic quantization and the variance reduction that can be achieved with stratified sampling. We first put the emphasis on the consistency of quantization for partitioning the state space in stratified sampling methods in both finite and infinite dimensional cases. We show that the proposed quantization-based strata design has uniform efficiency among the class of Lipschitz continuous functionals. Then a stratified sampling algorithm based on product functional quantization is proposed for path-dependent functionals of multi-factor diffusions. The method is also available for other Gaussian processes such as Brownian bridge or Ornstein-Uhlenbeck processes. We derive in detail the case of Ornstein-Uhlenbeck processes. We also study the balance between the algorithmic complexity of the simulation and the variance reduction factor.
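
    A one-dimensional sketch of the idea: strata taken as the Voronoi intervals of a small grid on the real line (a stand-in for an optimal quadratic quantizer of N(0,1)), with conditional sampling by inverse transform and proportional allocation; the grid, the functional and the allocation rule are illustrative assumptions.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(4)

      # Strata = Voronoi intervals of a small grid on R (a stand-in for an optimal
      # quadratic quantizer of N(0,1)); target E[f(X)] = exp(1/2) for f(x) = exp(x).
      grid = np.array([-1.7, -0.8, 0.0, 0.8, 1.7])
      bounds = np.concatenate(([-np.inf], (grid[:-1] + grid[1:]) / 2, [np.inf]))
      probs = np.diff(norm.cdf(bounds))                # stratum weights p_i
      f = lambda x: np.exp(x)

      M = 100_000
      alloc = np.maximum((probs * M).astype(int), 1)   # proportional allocation
      est = 0.0
      for lo, hi, p, m in zip(bounds[:-1], bounds[1:], probs, alloc):
          u = rng.uniform(norm.cdf(lo), norm.cdf(hi), size=m)
          est += p * f(norm.ppf(u)).mean()             # conditional sampling by inverse transform

      print(est, np.exp(0.5))                          # stratified estimate vs exact value

    Optimal (rather than proportional) allocation and the functional, infinite-dimensional constructions of the article are beyond this sketch.
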
  • Invariant measure of duplicated diffusions and application to Richardson–Romberg extrapolation.

    Vincent LEMAIRE, Gilles PAGES, Fabien PANLOUP
    Annales de l'Institut Henri Poincaré, Probabilités et Statistiques | 2015
    With a view to numerical applications we address the following question: given an ergodic Brownian diffusion with a unique invariant distribution, what are the invariant distributions of the duplicated system consisting of two trajectories? We mainly focus on the interesting case where the two trajectories are driven by the same Brownian path. Under this assumption, we first show that uniqueness of the invariant distribution (weak confluence) of the duplicated system is essentially always true in the one-dimensional case. In the multidimensional case, we begin by exhibiting explicit counter-examples. Then, we provide a series of weak confluence criterions (of integral type) and also of a.s. pathwise confluence, depending on the drift and diffusion coefficients through a non-infinitesimal Lyapunov exponent. As examples, we apply our criterions to some non-trivially confluent settings such as classes of gradient systems with non-convex potentials or diffusions where the confluence is generated by the diffusive component. We finally establish that the weak confluence property is connected with an optimal transport problem. As a main application, we apply our results to the optimization of the Richardson-Romberg extrapolation for the numerical approximation of the invariant measure of the initial ergodic Brownian diffusion.
  • Invariant measure of duplicated diffusions and application to Richardson–Romberg extrapolation.

    V. LEMAIRE, G. PAGES, Fabien PANLOUP
    Annales de l'IHP - Probabilités et Statistiques | 2015
    No summary available.
  • Introduction to vector quantization and its applications for numerics.

    Gilles PAGES
    ESAIM: Proceedings and Surveys | 2015
    We present an introductory survey to optimal vector quantization and its first applications to Numerical Probability and, to a lesser extent to Information Theory and Data Mining. Both theoretical results on the quantization rate of a random vector taking values in ℝd (equipped with the canonical Euclidean norm) and the learning procedures that allow to design optimal quantizers (CLVQ and Lloyd’s procedures) are presented. We also introduce and investigate the more recent notion of greedy quantization which may be seen as a sequential optimal quantization. A rate optimal result is established. A brief comparison with Quasi-Monte Carlo method is also carried out.
  • Sharp rate for the dual quantization problem.

    Gilles PAGES, Benedikt WILBERTZ
    2015
    In this paper we establish the sharp rate of the optimal dual quantization problem. The notion of dual quantization was recently introduced in the paper [8], where it was shown that, at least in an Euclidean setting, dual quantizers are based on a Delaunay triangulation, the dual counterpart of the Voronoi tessellation on which "regular" quantization relies. Moreover, this new approach shares an intrinsic stationarity property, which makes it very valuable for numerical applications. We establish in this paper the counterpart for dual quantization of the celebrated Zador theorem, which describes the sharp asymptotics for the quantization error when the quantizer size tends to infinity. The proof of this theorem relies among others on an extension of the so-called Pierce Lemma by means of a random quantization argument.
  • Recursive Marginal Quantization of the Euler Scheme of a Diffusion Process.

    Gilles PAGES, Abass SAGNA
    Applied Mathematical Finance | 2015
    We propose a new approach to quantize the marginals of the discrete Euler diffusion process. The method is built recursively and involves the conditional distribution of the marginals of the discrete Euler process. Analytically, the method raises several questions like the analysis of the induced quadratic quantization error between the marginals of the Euler process and the proposed quantizations. We show in particular that at every discretization step $t_k$ of the Euler scheme, this error is bounded by the cumulative quantization errors induced by the Euler operator, from times $t_0=0$ to time $t_k$. For numerics, we restrict our analysis to the one dimensional setting and show how to compute the optimal grids using a Newton-Raphson algorithm. We then propose a closed formula for the companion weights and the transition probabilities associated to the proposed quantizations. This allows us to quantize in particular diffusion processes in local volatility models by reducing dramatically the computational complexity of the search of optimal quantizers while increasing their computational precision with respect to the algorithms commonly proposed in this framework. Numerical tests are carried out for the Brownian motion and for the pricing of European options in a local volatility model. A comparison with the Monte Carlo simulations shows that the proposed method may sometimes be more efficient (w.r.t. both computational precision and time complexity) than the Monte Carlo method.
  • Greedy vector quantization.

    G. PAGES, H. LUSCHGY
    Journal of Approximation Theory | 2015
    No summary available.
  • Market finance.

    L. CARASSUS, G. PAGES
    2015
    No summary available.
  • Functional quantization-based stratified sampling methods.

    G. PAGES, S. CORLAY
    Monte Carlo Methods and Applications | 2015
    No summary available.
  • Study and modeling of stochastic differential equations.

    Clement REY, Aurelien ALFONSI, Gilles PAGES, Aurelien ALFONSI, Vlad BALLY, Emmanuel GOBET, Denis TALAY, Arnaud GLOTER
    2015
    During the last decades, the development of technological means, and particularly of computer science, has allowed the emergence of numerical methods for the approximation of Stochastic Differential Equations (SDEs) as well as for the estimation of their parameters. This thesis deals with these two aspects and is more specifically interested in the efficiency of these methods. The first part is devoted to the approximation of SDEs by numerical schemes while the second part deals with the estimation of parameters. In the first part, we study approximation schemes for SDEs. We assume that these schemes are defined on a time grid of size $n$. We say that the scheme $X^n$ converges weakly to the diffusion $X$ with order $h \in \mathbb{N}$ if for all $T>0$, $\vert \mathbb{E}[f(X_T)-f(X_T^n)]\vert \leqslant C_f/n^h$. Until now, except in some particular cases (Euler and Ninomiya-Victoir schemes), the research on the subject requires that $C_f$ depend on the supremum norm of $f$ but also on its derivatives, in other words $C_f = C\sum_{\vert\alpha\vert\leqslant q}\Vert\partial_\alpha f\Vert_\infty$. Our goal is to show that if the scheme converges weakly with order $h$ for such a $C_f$, then, under assumptions of non-degeneracy and regularity of the coefficients, we can obtain the same result with $C_f = C\Vert f\Vert_\infty$. Thus we prove that it is possible to estimate $\mathbb{E}[f(X_T)]$ for $f$ measurable and bounded. We then say that the scheme converges in total variation to the diffusion with order $h$. We also prove that it is possible to approximate the density of $X_T$ and its derivatives by those of $X_T^n$. In order to obtain this result, we use an adaptive Malliavin calculus approach based on the random variables used in the scheme. The interest of our approach lies in the fact that we do not treat the case of a particular scheme: our result applies both to the Euler ($h=1$) and Ninomiya-Victoir ($h=2$) schemes, but also to a generic set of schemes. Moreover, the random variables used in the scheme do not have prescribed distributions but belong to a set of laws, which leads us to view our result as an invariance principle. We also illustrate this result on a third-order scheme for one-dimensional SDEs. The second part of this thesis deals with the estimation of the parameters of an SDE. Here, we consider the particular case of the Maximum Likelihood Estimator (MLE) of the parameters that appear in the Wishart matrix model. This process is the multi-dimensional version of the Cox-Ingersoll-Ross (CIR) process and has the particularity of the presence of the square-root function in the diffusion coefficient. Thus this model allows one to generalize the Heston model to the case of a local covariance. In this thesis we construct the MLE of the Wishart parameters. We also give the convergence speed and the limit law in the ergodic case as well as in some non-ergodic cases. In order to prove these convergences, we use various methods, in this case: ergodic theorems, time-change methods, and the study of the joint Laplace transform of the Wishart process and its mean. Moreover, in this last study, we extend the domain of definition of this joint transform.
  • Improved error bounds for quantization based numerical schemes for BSDE and nonlinear filtering.

    Gilles PAGES, Abass SAGNA
    2015
    We take advantage of recent (see [GraLusPag1, PagWil]) and new results on optimal quantization theory to improve the quadratic optimal quantization error bounds for backward stochastic differential equations (BSDEs) and nonlinear filtering problems. For both problems, a first improvement relies on a Pythagoras-like theorem for quantized conditional expectation. While allowing for some locally Lipschitz conditional densities in nonlinear filtering, the analysis of the error brings into play a new robustness result about optimal quantizers, the so-called distortion mismatch property: $L^r$-optimal quantizers of size $N$ behave in $L^s$ in terms of mean error at the same rate $N^{-\frac{1}{d}}$.
  • Order book dynamics: statistical analysis, modeling and forecasting.

    Weibing HUANG, Mathieu ROSENBAUM, Charles albert LEHALLE, Frederic ABERGEL, Robert ALMGREN, Aurelien ALFONSI, Bruno BOUCHARD, Gilles PAGES
    2015
    This thesis consists of two related parts, the first on the order book and the second on tick value effects. In the first part, we present our order book modeling framework. The queue-reactive model is first introduced, in which we revise the traditional zero-intelligence approach by adding dependence on the order book state. An empirical study shows that this model is very realistic and reproduces many interesting microscopic features of the underlying asset, such as the distribution of the queues in the order book. We also show that it can be used as an efficient market simulator, allowing the evaluation of complex investment tactics. We then extend the queue-reactive model to a general Markovian framework. Ergodicity conditions are discussed in detail in this setting. In the second part of this thesis, we are interested in studying the role played by the tick value at two scales, microscopic and macroscopic. First, an empirical study of the consequences of a change in tick value is performed using data from the 2014 Japanese tick size reduction pilot program. A prediction formula for the effects of a tick value change on transaction costs is derived. Then, a multi-agent model is introduced to explain the relationships between market volume, price dynamics, bid-ask spread, tick value and the equilibrium state of the order book.
  • Modeling, optimization and estimation for the on-line control of trading algorithms in limit-order markets.

    Joaquin FERNANDEZ TAPIA, Gilles PAGES, Charles albert LEHALLE, Marc HOFFMANN, Mathieu ROSENBAUM, Emmanuel BACRY, Frederic ABERGEL
    2015
    The objective of this thesis is a quantitative study of the different mathematical problems that arise in algorithmic trading. Due to the strongly applied character of this work, we are not only interested in the mathematical rigor of our results, but we also want to place this research in the context of the different steps that are part of the practical implementation of the tools we develop, e.g. model interpretation, parameter estimation, computer implementation, etc. From the scientific point of view, the core of our work is based on two techniques borrowed from the worlds of optimization and probability: stochastic control and stochastic approximation. In particular, we present original academic results for the high-frequency market-making problem and the portfolio liquidation problem using limit orders. Similarly, we solve the market-making problem using a forward optimization approach, which is innovative in the optimal trading literature as it opens the door to machine learning techniques. From a practical point of view, this thesis seeks to create a bridge between academic research and the financial industry. Our results are constantly considered from the perspective of their practical implementation. Thus, a large part of our work is focused on studying the different factors that are important to understand when transforming our quantitative techniques into industrial value: understanding the microstructure of markets, stylized facts, data processing, model discussions, limitations of our scientific framework, etc.
  • Urn Model-Based Adaptive Multi-arm Clinical Trials: A Stochastic Approximation Approach.

    Sophie LARUELLE, Gilles PAGES
    New Economic Windows | 2014
    This paper presents the link between stochastic approximation and multi-arm clinical trials based on randomized urn models investigated in Bai et al. (J. Multivar. Anal. 81(1):1–18, 2002), where the urn updating depends on the past performances of the treatments. We reformulate the dynamics of the urn composition, of the assigned treatments and of the successes of the assigned treatments as standard stochastic approximation (SA) algorithms with remainder. Then, we derive the a.s. convergence of the normalized procedure under less stringent assumptions by calling upon the ODE method, and a new asymptotic normality result (Central Limit Theorem, CLT) by calling upon the SDE method.
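
    A classical two-arm instance of such adaptive urns is the randomized play-the-winner rule, simulated below as a toy example; the success probabilities are illustrative and the well-known limiting allocation q2/(q1+q2), with qi = 1 - pi, is printed for comparison.

      import numpy as np

      rng = np.random.default_rng(5)

      p = np.array([0.7, 0.5])           # success probabilities of the two treatments
      urn = np.array([1.0, 1.0])         # initial urn composition, one ball per color
      assigned = np.zeros(2)

      for _ in range(100_000):
          i = rng.choice(2, p=urn / urn.sum())   # draw a ball -> assign treatment i
          assigned[i] += 1
          if rng.random() < p[i]:
              urn[i] += 1                # success: reinforce the drawn color
          else:
              urn[1 - i] += 1            # failure: reinforce the other color

      q = 1.0 - p
      print(assigned / assigned.sum())   # empirical allocation frequencies
      print(q[::-1] / q.sum())           # known limit (q2, q1) / (q1 + q2) = (0.625, 0.375)
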
  • Optimization and statistical methods for high frequency finance.

    Marc HOFFMANN, Mauricio LABADIE, Charles albert LEHALLE, Gilles PAGES, Huyen PHAM, Mathieu ROSENBAUM
    ESAIM: Proceedings and Surveys | 2014
    High frequency finance has recently evolved from statistical modeling and analysis of financial data – where the initial goal was to reproduce stylized facts and develop appropriate inference tools – toward trading optimization, where an agent seeks to execute an order (or a series of orders) in a stochastic environment that may react to the trading algorithm of the agent (market impact, inventory). This context poses new scientific challenges addressed by the minisymposium OPSTAHF.
  • Acceleration of the Monte Carlo method for diffusion processes and applications in Finance.

    Kaouther HAJJI, Ahmed KEBAIER, Mohamed BEN ALAYA, Gilles PAGES, Jean stephane DHERSIN, Gersende FORT, Yueyun HU, Denis TALAY, Bernard LAPEYRE
    2014
    In this thesis, we focus on the combination of variance reduction and complexity reduction methods for the Monte Carlo method. In the first part of this thesis, we consider a continuous diffusion model for which we build an adaptive algorithm by applying importance sampling to the statistical Romberg method. We prove a Lindeberg-Feller type central limit theorem for this algorithm. In the same framework and in the same spirit, we apply importance sampling to the Multilevel Monte Carlo method and we also prove a central limit theorem for the resulting adaptive algorithm. In the second part of this thesis, we develop the same type of algorithm for a non-continuous model, namely Lévy processes, and we again prove a central limit theorem of the Lindeberg-Feller type. Numerical illustrations are carried out for the different algorithms obtained in the two frameworks, with and without jumps.
  • Introduction to optimal vector quantization and its applications for numerics.

    Gilles PAGES
    2014
    We present an introductory survey to optimal vector quantization and its first applications to Numerical Probability and, to a lesser extent to Information Theory and Data Mining. Both theoretical results on the quantization rate of a random vector taking values in R^d (equipped with the canonical Euclidean norm) and the learning procedures that allow to design optimal quantizers (CLVQ and Lloyd's I procedures) are presented. We also introduce and investigate the more recent notion of {\em greedy quantization} which may be seen as a sequential optimal quantization. A rate optimal result is established. A brief comparison with Quasi-Monte Carlo method is also carried out.
  • Greedy vector quantization.

    Harald LUSCHGY, Gilles PAGES
    2014
    We investigate the greedy version of the $L^p$-optimal vector quantization problem for an $\mathbb{R}^d$-valued random vector $X\in L^p$. We show the existence of a sequence $(a_N)$ such that $a_N$ minimizes $a\mapsto\big\|\min_{1\le i\le N-1}|X-a_i|\wedge |X-a|\big\|_{p}$, the $L^p$-mean quantization error at level $N$ induced by $(a_1,\ldots,a_{N-1},a)$. We show that this sequence produces $L^p$-rate optimal $N$-tuples $a^{(N)}=(a_1,\ldots,a_{N})$: their $L^p$-mean quantization errors at level $N$ go to $0$ at rate $N^{-\frac 1d}$. Greedy optimal sequences also satisfy, under natural additional assumptions, the distortion mismatch property: the $N$-tuples $a^{(N)}$ remain rate optimal with respect to the $L^q$-norms, if $p\le q$.
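
    A brute-force sketch of the greedy principle in the quadratic case p = 2: points are added one at a time, each chosen in a finite candidate pool so as to minimize the empirical distortion over a sample; the sampled law, the pool and the grid size are illustrative assumptions, and nothing here reproduces the paper's rate or mismatch analysis.

      import numpy as np

      rng = np.random.default_rng(6)

      X = rng.normal(size=(10_000, 2))         # sample of a 2-d standard Gaussian
      candidates = rng.normal(size=(200, 2))   # finite candidate pool for new points

      def distortion(grid):
          # Empirical quadratic distortion E[min_i |X - a_i|^2] over the sample.
          d2 = ((X[:, None, :] - grid[None, :, :]) ** 2).sum(-1)
          return d2.min(axis=1).mean()

      grid = np.empty((0, 2))
      for N in range(1, 11):                   # greedily grow the grid up to size 10
          scores = [distortion(np.vstack([grid, c])) for c in candidates]
          grid = np.vstack([grid, candidates[int(np.argmin(scores))]])
          print(N, distortion(grid))           # distortion decreases as N grows
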
  • Convex order for path-dependent derivatives: a dynamic programming approach.

    Gilles PAGES
    2014
    We investigate the (functional) convex order of various continuous martingale processes, either with respect to their diffusion coefficients for Lévy-driven SDEs or with respect to their integrands for stochastic integrals. Main results are bordered by counterexamples. Various upper and lower bounds can be derived for pathwise European option prices in local volatility models. In view of numerical applications, we adopt a systematic (and symmetric) methodology: (a) propagate the convexity in a simulatable dominating/dominated discrete time model through a backward induction (or linear dynamic programming principle); (b) apply functional weak convergence results to numerical schemes/time discretizations of the continuous time martingale satisfying (a) in order to transfer the convex order properties. Various bounds are derived for European options written on convex pathwise dependent payoffs. We retrieve and extend former results obtained by several authors since the seminal 1985 paper by Hajek. In a second part, we extend this approach to optimal stopping problems, using the fact that the Snell envelope satisfies (a') a backward dynamic programming principle to propagate convexity in discrete time and (b') abstract convergence results under non-degeneracy assumptions on filtrations. Applications to the comparison of American option prices on convex pathwise payoff processes are obtained by purely probabilistic arguments.
  • A mixed-step algorithm for the approximation of the stationary regime of a diffusion.

    Gilles PAGES, Fabien PANLOUP
    Stochastic Processes and their Applications | 2014
    In some recent papers, some procedures based on some weighted empirical measures related to decreasing-step Euler schemes have been investigated to approximate the stationary regime of a diffusion (possibly with jumps) for a class of functionals of the process. This method is efficient but needs the computation of the function at each step. To reduce the complexity of the procedure (especially for functionals), we propose in this paper to study a new scheme, called mixed-step scheme where we only keep some regularly time-spaced values of the Euler scheme. Our main result is that, when the coefficients of the diffusion are smooth enough, this alternative does not change the order of the rate of convergence of the procedure. We also investigate a Richardson-Romberg method to speed up the convergence and show that the variance of the original algorithm can be preserved under a uniqueness assumption for the invariant distribution of the ''duplicated'' diffusion, condition which is extensively discussed in the paper. Finally, we end by giving some sufficient ''asymptotic confluence'' conditions for the existence of a smooth solution to a discrete version of the associated Poisson equation, condition which is required to ensure the rate of convergence results.
  • Recursive marginal quantization of the Euler scheme of a diffusion process.

    Gilles PAGES, Abass SAGNA
    2014
    We propose a new approach to quantize the marginals of the discrete Euler diffusion process. The method is built recursively and involves the conditional distribution of the marginals of the discrete Euler process. Analytically, the method raises several questions like the analysis of the induced quadratic quantization error between the marginals of the Euler process and the proposed quantizations. We show in particular that at every discretization step $t_k$ of the Euler scheme, this error is bounded by the cumulative quantization errors induced by the Euler operator, from times $t_0=0$ to time $t_k$. For numerics, we restrict our analysis to the one dimensional setting and show how to compute the optimal grids using a Newton-Raphson algorithm. We then propose a closed formula for the companion weights and the transition probabilities associated to the proposed quantizations. This allows us to quantize in particular diffusion processes in local volatility models by reducing dramatically the computational complexity of the search of optimal quantizers while increasing their computational precision with respect to the algorithms commonly proposed in this framework. Numerical tests are carried out for the Brownian motion and for the pricing of European options in a local volatility model. A comparison with the Monte Carlo simulations shows that the proposed method may sometimes be more efficient (w.r.t. both computational precision and time complexity) than the Monte Carlo method.
  • Optimal posting price of limit orders: learning by trading.

    Sophie LARUELLE, Charles albert LEHALLE, Gilles PAGES
    Mathematics and Financial Economics | 2013
    Considering that a trader or a trading algorithm interacting with markets during continuous auctions can be modeled by an iterating procedure adjusting the price at which he posts orders at a given rhythm, this paper proposes a procedure minimizing his costs. We prove the a.s. convergence of the algorithm under assumptions on the cost function and give some practical criteria on model parameters to ensure that the conditions to use the algorithm are fulfilled (using notably the co-monotony principle). We illustrate our results with numerical experiments on both simulated data and using a financial market dataset.
  • Randomized Urn Models revisited using Stochastic Approximation.

    Sophie LARUELLE, Gilles PAGES
    Annals of Applied Probability | 2013
    This paper presents the link between stochastic approximation and clinical trials based on randomized urn models investigated in Bai and Hu (1999,2005) and Bai, Hu and Shen (2002). We reformulate the dynamics of both the urn composition and the assigned treatments as standard stochastic approximation (SA) algorithms with remainder. Then, we derive the a.s. convergence and the asymptotic normality (CLT) of the normalized procedure under less stringent assumptions by calling upon the ODE and SDE methods. As a second step, we investigate a more involved family of models, known as multi-arm clinical trials, where the urn updating depends on the past performances of the treatments. By increasing the dimension of the state vector, our SA approach provides this time a new asymptotic normality result.
  • A mixed-step algorithm for the approximation of the stationary regime of a diffusion.

    G. PAGES, F. PANLOUP
    Stochastic Processes and their Applications | 2013
    No summary available.
  • Functional Co-monotony of Processes with Applications to Peacocks and Barrier Options.

    Gilles PAGES
    Séminaire de Probabilités XLV | 2013
    We show that several general classes of stochastic processes satisfy a functional co-monotony principle, including processes with independent increments, Brownian diffusions, Liouville processes. As a first application, we recover some recent results about peacock processes obtained by Hirsch et al. which were themselves motivated by a former work of Carr et al. about the sensitivity of Asian Call options with respect to their volatility and residual maturity (seniority). We also derive semi-universal bounds for various barrier options.
  • Optimal posting distance of limit orders: a stochastic algorithm approach.

    G. PAGES, S. LARUELLE, C. a. LEHALLE
    Mathematics and Financial Economics | 2013
    No summary available.
  • Multi-asset American options and parallel quantization.

    A. BRONSTEIN, G. PAGES, J. PORTES
    Methodology and Computing in Applied Probability | 2013
    No summary available.
  • CVaR hedging using quantization-based stochastic approximation algorithm.

    O. BARDOU, N. FRIKHA, G. PAGES
    Mathematical Finance | 2013
    In this paper, we investigate a method based on risk minimization to hedge observable but non-tradable sources of risk on financial or energy markets. The optimal portfolio strategy is obtained by dynamically minimizing the Conditional Value-at-Risk (CVaR) using three main tools: a stochastic approximation algorithm, optimal quantization and variance reduction techniques (importance sampling (IS) and linear control variate (LCV)), as the quantities of interest are naturally related to rare events. As a first step, we investigate the problem of CVaR regression, which corresponds to a static portfolio strategy where the number of units of each tradable asset is fixed at time 0 and remains unchanged till time $T$. We devise a stochastic approximation algorithm and study its a.s. convergence and rate of convergence. Then, we extend our approach to the dynamic case under the assumption that the process modelling the non-tradable source of risk and the financial asset prices are Markov. Finally, we illustrate our approach by considering several portfolios in the incomplete energy market.
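
    A sketch of the bare Robbins-Monro recursion for the VaR/CVaR pair, without the importance sampling, control variate and quantization ingredients of the paper; the Gaussian loss is an illustrative assumption, chosen so that the output can be checked against closed forms.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(7)

      alpha = 0.95
      xi, C = 0.0, 0.0                      # running VaR and CVaR estimates
      for n in range(1, 500_001):
          x = rng.normal()                  # one simulated loss
          gamma = 1.0 / n ** 0.75           # decreasing step
          # VaR recursion: targets the zero of xi -> 1 - P(X >= xi) / (1 - alpha)
          xi -= gamma * (1.0 - (x >= xi) / (1.0 - alpha))
          # CVaR companion: running average of xi + (x - xi)_+ / (1 - alpha)
          C -= (1.0 / n) * (C - (xi + max(x - xi, 0.0) / (1.0 - alpha)))

      print(xi, norm.ppf(alpha))                           # VaR_0.95 of N(0,1), about 1.645
      print(C, norm.pdf(norm.ppf(alpha)) / (1 - alpha))    # CVaR_0.95, about 2.063
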
  • Optimization and statistical methods for high frequency finance.

    G. PAGES, H. PHAM, M. ROSENBAUM, M. HOFFMANN, M. LABADIE, C. a. LEHALLE
    Congrès SMAI 2013 | 2013
    No summary available.
  • Pointwise convergence of the Lloyd algorithm in higher dimension.

    Gilles PAGES, Jun YU
    2013
    We establish the pointwise convergence of the iterative Lloyd algorithm, also known as the $k$-means algorithm, when the quadratic quantization error of the starting grid (of size $N\ge 2$) is lower than the minimal quantization error at level $N-1$ with respect to the input distribution. Such a protocol is known as the splitting method and allows for convergence even when the input distribution has an unbounded support. We also show, under a very light assumption, that the resulting limiting grid still has full size $N$. These results are obtained without any continuity assumption on the input distribution. A variant of the procedure, taking advantage of the asymptotics of the optimal quantizer radius, is proposed; it always guarantees the boundedness of the iterated grids.
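
    A minimal sample-based Lloyd iteration (the fixed-point alternation between Voronoi cells and conditional means); the Gaussian sample, the grid size and the random initialization are illustrative assumptions and do not implement the splitting protocol discussed in the abstract.

      import numpy as np

      rng = np.random.default_rng(8)

      X = rng.normal(size=(50_000, 2))     # sample of the input distribution (2-d Gaussian)
      N = 8
      grid = X[rng.choice(len(X), size=N, replace=False)].copy()   # crude random start

      for _ in range(50):
          # Assignment step: nearest grid point (Voronoi cell) for every sample.
          cell = ((X[:, None, :] - grid[None, :, :]) ** 2).sum(-1).argmin(axis=1)
          # Update step: each grid point moves to the conditional mean of its cell.
          for i in range(N):
              mask = cell == i
              if mask.any():               # leave the point unchanged if its cell is empty
                  grid[i] = X[mask].mean(axis=0)

      err = ((X[:, None, :] - grid[None, :, :]) ** 2).sum(-1).min(axis=1).mean()
      print(grid)
      print(err)                           # empirical quadratic quantization error
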
  • Randomized urn models revisited using Stochastic Approximation.

    G. PAGES, S. LARUELLE
    Annals of Applied Probability | 2013
    No summary available.
  • Functional co-monotony of processes with applications to peacocks and barrier options.

    G. PAGES
    Séminaire de Probabilités XLV | 2013
    No summary available.
  • Stochastic control by quantization methods and applications to finance.

    Camille ILLAND, Gilles PAGES
    2012
    This thesis contains three parts that can be read independently. In the first part, we study the resolution of stochastic control problems by quantization methods. Quantization consists in finding the best approximation of a continuous probability distribution by a discrete probability law supported by N points. We make explicit a "generic" dynamic programming framework which makes it possible to solve many stochastic control problems, such as optimal stopping problems, utility maximization, backward stochastic differential equations (BSDEs), filtering problems, etc. In this context, we give three spatial discretization schemes associated with the quantization of a Markov chain. In the second part, we present a numerical scheme for doubly reflected BSDEs. We consider a general framework which contains jumps and path-dependent progressive processes. We use a discrete-time Euler-type approximation scheme. We prove the convergence of this scheme for BSDEs when the number n of time steps tends to infinity. We also give the convergence speed for game options. In the third part, we focus on the replication of derivatives on realized variance. We suggest a hedging strategy that is robust to the volatility model, based on dynamic positions in European options. We then extend this methodology to options on funds and to jump processes.
  • Estimates for hidden Markov models and particle approximations: Application to simultaneous mapping and localization.

    Sylvain LE CORFF, Eric MOULINES, Gersende FORT, Elisabeth GASSIAT, Jean michel MARIN, Arnaud DOUCET, Gilles PAGES
    2012
    In this thesis, we are interested in the estimation of parameters in hidden Markov chains. We first consider the problem of online estimation (without saving the observations) in the maximum likelihood sense. We propose a new method based on the Expectation Maximization algorithm, called Block Online Expectation Maximization (BOEM). This algorithm is defined for hidden Markov chains with general state space and observation space. In the case of general state spaces, the BOEM algorithm requires the introduction of sequential Monte Carlo methods to approximate expectations under the smoothing distributions. The convergence of the algorithm then requires a control of the Lp norm of the Monte Carlo approximation error that is explicit in the number of observations and particles. A second part of this thesis is devoted to obtaining such controls for several sequential Monte Carlo methods. Finally, we study applications of the BOEM algorithm to simultaneous mapping and localization problems. The last part of this thesis is related to nonparametric estimation in hidden Markov chains. The problem considered is addressed in a specific framework. We assume that (Xk) is a random walk whose increment distribution is known up to a scale factor a. We assume that, for any k, Yk is an observation of f(Xk) in an additive Gaussian noise, where f is a function we seek to estimate. We establish the identifiability of the statistical model and propose estimators of f and a based on the pairwise likelihood of the observations.
  • Numerical methods for piecewise deterministic Markovian processes.

    Adrien BRANDEJSKY, Benoite, de SAPORTA, Francois DUFOUR, Oswaldo luiz do valle COSTA, A. o. charles ELEGBEDE, Bruno GAUJAL, Gilles PAGES
    2012
    Piecewise Deterministic Markov Processes (PDMPs) were introduced in the literature by M.H.A. Davis as a general class of non-diffusive stochastic models. PDMPs are hybrid processes characterized by deterministic trajectories interspersed with random jumps. In this thesis, we develop numerical methods adapted to PDMPs, based on the quantization of a Markov chain underlying the PDMP. We successively address three problems: the approximation of expectations of functionals of a PDMP, the approximation of the moments and of the distribution of an exit time, and the partially observed optimal stopping problem. In this last part, we also address the issue of filtering a PDMP and establish the dynamic programming equation of the optimal stopping problem. We prove the convergence of all our methods (with bounds on the convergence rate) and illustrate them with numerical examples.
  • Analysis of stochastic algorithms applied to finance.

    Sophie LARUELLE, Gilles PAGES
    2011
    This thesis deals with the analysis of stochastic algorithms and their applications in finance. The first part presents a convergence result for stochastic algorithms whose innovations satisfy averaging assumptions with a certain speed. We apply it to different types of innovations and illustrate it on examples mainly motivated by finance. We then establish a "universal" convergence rate result in the framework of equidistributed innovations and compare our results with those obtained in the i.i.d. framework. The second part is devoted to applications. We first present an optimal allocation problem applied to dark pools. The execution of the maximum desired quantity leads to the construction of a constrained stochastic algorithm, studied in the framework of i.i.d. innovations and of averaging innovations. The next chapter presents a constrained stochastic optimization algorithm with projection to find the best posting distance in an order book by minimizing the execution cost of a given quantity. We then study the implementation and the calibration of parameters in financial models by stochastic algorithms and illustrate these two techniques with examples of application on the Black-Scholes, Merton and pseudo-CEV models. The last chapter deals with the application of stochastic algorithms in the framework of random urn models used in clinical trials. Using the ODE and SDE methods, we recover the convergence and rate results of Bai and Hu under weaker assumptions on the generating matrices.
  • Some aspects of optimal quantization and applications to finance.

    Sylvain CORLAY, Gilles PAGES
    2011
    This thesis is devoted to the study of optimal quantization and its applications. We deal with theoretical, algorithmic and numerical aspects. It consists of five chapters. In the first part, we study the links between variance reduction by stratification and quadratic optimal quantization. In the case where the random variable considered is a Gaussian process, a simulation scheme of linear complexity is developed for the distribution of the process in question conditional on one stratum. The second chapter is devoted to the numerical evaluation of the Karhunen-Loève basis of a Gaussian process by the Nyström method. In the third part, we propose a new approach to the quantization of SDE solutions, whose convergence we study. These results lead to a new cubature scheme for the solutions of stochastic differential equations, which is developed in the fourth chapter and which we test on option pricing problems. In the fifth chapter, we present a new fast tree-based nearest-neighbour search algorithm, relying on the quantization of the empirical distribution of the point cloud under consideration.
  • Contribution to the modeling and dynamic risk management of energy markets.

    Noufel FRIKHA, Gilles PAGES
    2010
    This thesis is devoted to probabilistic numerical problems related to modeling, control and risk management, motivated by applications to energy markets. The main tools are the theory of stochastic algorithms and simulation methods. The thesis consists of three parts. The first is devoted to the estimation of two risk measures of the loss distribution L of a portfolio: the Value-at-Risk (VaR) and the Conditional Value-at-Risk (CVaR). This estimation is performed using a stochastic algorithm combined with an adaptive variance reduction method (the underlying Robbins-Monro recursion is sketched below). The first section of this chapter deals with the finite-dimensional case, the second extends it to functionals of the trajectory of a process, and the last deals with low-discrepancy sequences. The second chapter is dedicated to CVaR hedging methods in an incomplete, discrete-time market, using stochastic algorithms and optimal vector quantization. Theoretical results on CVaR hedging are presented, then the numerical aspects are discussed in a Markovian framework. The last part is devoted to the joint modeling of spot gas and electricity prices. The multi-factor model presented is based on stationary Ornstein-Uhlenbeck processes with a parametric diffusion coefficient.
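    A bare-bones sketch of the VaR/CVaR stochastic algorithm alluded to above, built on the Rockafellar-Uryasev representation CVaR_alpha = min_xi E[xi + (L - xi)^+ / (1 - alpha)]; the adaptive variance-reduction layer of the thesis is omitted, and the step sequence and test distribution are illustrative choices.

```python
import numpy as np

def var_cvar_sa(sample_loss, alpha=0.95, n_steps=200_000, seed=0):
    """Robbins-Monro estimation of VaR_alpha and CVaR_alpha of a loss L, based on
    CVaR_alpha = min_xi E[xi + (L - xi)^+ / (1 - alpha)] (Rockafellar-Uryasev)."""
    rng = np.random.default_rng(seed)
    xi, cvar = 0.0, 0.0
    for n in range(1, n_steps + 1):
        loss = sample_loss(rng)
        gamma = 1.0 / (n + 50)                         # decreasing step
        # Stochastic gradient step on xi; its limit is the value-at-risk.
        xi -= gamma * (1.0 - (loss >= xi) / (1.0 - alpha))
        # Plain running average as the companion CVaR estimator.
        v = xi + max(loss - xi, 0.0) / (1.0 - alpha)
        cvar += (v - cvar) / n
    return xi, cvar

# Toy check on a standard Gaussian loss: VaR_0.95 ~ 1.645, CVaR_0.95 ~ 2.06.
print(var_cvar_sa(lambda rng: rng.standard_normal(), alpha=0.95))
```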
  • LNG portfolio optimization approach by stochastic programming technique.

    Zhihao CEN, Frederic BONNANS, Emmanuel GOBET, Pierre BONAMI, Thibault CHRISTEL, Michel DE LARA, Rene HENRION, Gilles PAGES
    2010
    No summary available.
  • Post-transcriptional and post-translational regulation of DUSP6, a phosphatase of ERK 1/2 MAP kinases.

    Olga BERMUDEZ, Clotilde GIMOND, Gilles PAGES
    2009
    MAP kinase phosphatases (MKPs) belong to the Dual-Specificity Phosphatase (DUSP) family and dephosphorylate the threonine and tyrosine residues of activated MAP kinases. DUSP6/MKP-3 is a cytoplasmic phosphatase that dephosphorylates, and thus specifically inactivates, the ERK1/2 MAP kinases. DUSP6 has an important role during development, particularly in the regulation of FGF-induced signaling, and its absence causes major phenotypic effects in Drosophila, chicken, zebrafish and mouse. DUSP6 could also play an important role during tumor formation and development, as its expression is altered in various cancers. For these reasons, I was interested in the molecular mechanisms involved in the regulation of its expression, both at the post-transcriptional and post-translational level. Previous data from the laboratory indicated that DUSP6 was phosphorylated and degraded after stimulation of cells with growth factors in a MEK/ERK-dependent manner (Marchetti et al., 2005). In the first part of my thesis, I studied the role of other signaling pathways in the regulation of DUSP6. We have shown that another signaling pathway, the PI3K/mTOR pathway, is responsible for part of the phosphorylation and degradation of DUSP6 induced by growth factors (Bermudez et al., 2008). However, basal MEK activity is required for mTOR-induced phosphorylation of DUSP6 to occur. Mutagenesis studies have shown that serine 159 is the residue phosphorylated by mTOR. The DUSP6 phosphatase could therefore be a new point of interaction between two major cell signaling pathways activated by growth factors, the MEK/ERK pathway and the PI3K/mTOR pathway. In the second part of my work, I focused on the regulation of dusp6 at the level of its mRNA. Other teams have shown that the MEK/ERK pathway plays a role in the transcriptional activation of dusp6. We confirmed that MEK/ERK inhibition strongly reduces dusp6 mRNA levels. To investigate the regulation of dusp6 mRNA stability, we cloned, in an expression vector, a luciferase reporter gene upstream of the dusp6 3'UTR non-coding region, which contains consensus sites for various mRNA-destabilizing/stabilizing factors. We found that the MEK/ERK pathway stabilizes dusp6 mRNA. Furthermore, hypoxic conditions, a characteristic of many tumors in vivo, induce an increase in dusp6 mRNA levels, which is dependent on HIF-1alpha. Finally, we identified two factors that destabilize dusp6 mRNA, TTP (tristetraprolin) and PUM2, a homolog of the Drosophila pumilio gene. The results presented in this thesis therefore show that the MEK/ERK pathway is involved in the regulation of DUSP6 at different levels, from the regulation of its mRNA to the post-translational level, in a feedback loop. The study of DUSP6 regulation provides additional elements for understanding the complex mechanisms involved in ERK1/2 activation within the MAPK signaling network, where positive and negative regulations contribute to a fine spatial and temporal control of ERK MAP kinase activation.
  • Financial models and price formation: applications to sports betting.

    Benoit JOTTREAU, Marie claire QUENEZ, Ruud KONING, Gilles PAGES, Monique JEANBLANC, Damien LAMBERTON, Bernard LAPEYRE, Huyen PHAM
    2009
    This thesis is composed of four chapters. The first chapter deals with the valuation of financial products in a model with a jump for the risky asset, the jump representing the bankruptcy of the corresponding firm. We study the valuation of options by utility indifference in an exponential utility framework. Using dynamic programming techniques, we show that the price of a bond solves a differential equation and that the prices of asset-dependent options solve a Hamilton-Jacobi-Bellman partial differential equation. The jump in the dynamics of the risky asset induces differences with the Merton model that we attempt to quantify. The second chapter deals with a market with jumps: soccer betting. We recall the different families of models for a soccer game and introduce a complete model allowing us to price the various products that have appeared on this market over the last ten years. The complexity of this model leads us to study a simplified model, whose implications we examine; we compute the resulting prices and compare them with reality. We observe that the implied calibration obtained gives very good results, producing prices very close to those observed. The third chapter addresses the problem of price setting by a monopolistic market maker in the binary betting market. This work is a direct extension of the problem introduced by Levitt [Lev04]. We generalize his work to the case of European bets and propose a method to estimate the pricing rule used by the bookmaker. We show that two hypotheses, which cannot be disentangled, can explain this price setting: on the one hand, the public's uncertainty about the true value and, on the other hand, the bookmaker's extreme risk aversion. The fourth chapter extends this approach to non-binary financial products. We examine different supply and demand models and derive, by dynamic programming techniques, partial differential equations governing the formation of the bid and ask prices. We finally show that the bid-ask spread does not depend on the position of the market maker in the asset under consideration, whereas the average price depends strongly on the quantity held by the market maker. A simplified approach is finally proposed in the multidimensional case.
  • Optimal quantization methods with applications to finance.

    Abass SAGNA, Gilles PAGES
    2008
    This thesis is devoted to quantization with applications to finance. Chapter 1 recalls the basics of quantization and the methods used to compute optimal quantizers. In Chapter 2 we study the asymptotic behavior, in s, of the L^s-quantization error associated with a linear transformation of a sequence of quantizers that is optimal in L^r. We show that such a transformation makes the transformed sequence L^s-rate-optimal for every s, for a large family of probability distributions. Chapter 3 studies the asymptotic behavior of the sequence of maximal radii associated with an L^r-optimal sequence of quantizers. We show that as soon as supp(P) is unbounded this sequence tends to infinity, and we give, for a large family of probability distributions, its rate of divergence. Chapter 4 is devoted to the pricing of lookback and barrier options. We write these prices in a form that allows us to estimate them by Monte Carlo, by a hybrid Monte Carlo-quantization method and by pure quantization (a crude Monte Carlo pricer for a discretely monitored barrier option is sketched below).
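    A crude Monte Carlo pricer for a discretely monitored up-and-out call under Black-Scholes, i.e. only the plain Monte Carlo estimator mentioned in the abstract, not the hybrid Monte Carlo-quantization or pure quantization schemes; all market parameters below are illustrative.

```python
import numpy as np

def up_and_out_call_mc(s0, strike, barrier, r, sigma, maturity,
                       n_dates=50, n_paths=100_000, seed=0):
    """Crude Monte Carlo price of a discretely monitored up-and-out call under
    Black-Scholes: payoff (S_T - K)^+ * 1{max_k S_{t_k} < B} on the monitoring dates."""
    rng = np.random.default_rng(seed)
    dt = maturity / n_dates
    # Simulate log-price increments for all paths and monitoring dates at once.
    increments = ((r - 0.5 * sigma**2) * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_dates)))
    log_paths = np.log(s0) + np.cumsum(increments, axis=1)
    alive = np.max(log_paths, axis=1) < np.log(barrier)   # barrier never breached
    payoff = np.maximum(np.exp(log_paths[:, -1]) - strike, 0.0) * alive
    discounted = np.exp(-r * maturity) * payoff
    return discounted.mean(), discounted.std(ddof=1) / np.sqrt(n_paths)

price, stderr = up_and_out_call_mc(s0=100.0, strike=100.0, barrier=120.0,
                                   r=0.03, sigma=0.2, maturity=1.0)
print(f"price ~ {price:.3f} +/- {1.96 * stderr:.3f}")
```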
  • Recursive approximation of the stationary regime of a stochastic differential equation with jumps.

    Fabien PANLOUP, Gilles PAGES
    2006
    This thesis is mainly devoted to the construction and study of computer-implementable methods for approximating the stationary regime of a multidimensional ergodic process, solution of an SDE driven by a Lévy process. Building on an approach developed by Lamberton & Pagès and Lemaire in the framework of Brownian diffusions, our methods, based on decreasing-step Euler schemes with "exact" or "approximate" increments, make it possible to simulate efficiently not only the invariant probability measure but also the global law of such a process in its stationary regime. This work has various theoretical and practical applications, some of which are developed here (almost sure CLT for stable laws, limit theorems for extreme values, option pricing in stationary stochastic volatility models, etc.). A toy version of the scheme, with compound Poisson jumps, is sketched below.
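    A toy version of the decreasing-step Euler scheme, here for a one-dimensional SDE driven by a Brownian motion plus a compound Poisson process; the drift, coefficients, jump law and step sequence are illustrative assumptions, and the weighted empirical sum plays the role of the approximation of the stationary regime.

```python
import numpy as np

def stationary_average(f, b, sigma, jump_rate, sample_jump,
                       n_steps=200_000, x0=0.0, seed=0):
    """Decreasing-step Euler scheme for dX = b(X)dt + sigma(X)dB + dJ (compound Poisson J),
    together with the weighted empirical estimator of the integral of f against the
    stationary distribution nu."""
    rng = np.random.default_rng(seed)
    x, num, den = x0, 0.0, 0.0
    for n in range(1, n_steps + 1):
        gamma = 1.0 / n**0.5                       # decreasing steps, sum gamma = +infinity
        num += gamma * f(x)                        # weighted empirical measure of the scheme
        den += gamma
        n_jumps = rng.poisson(jump_rate * gamma)   # jumps of the compound Poisson part on the step
        jump = sum(sample_jump(rng) for _ in range(n_jumps))
        x += gamma * b(x) + np.sqrt(gamma) * sigma(x) * rng.standard_normal() + jump
    return num / den

# Toy check: mean-reverting drift with centered Gaussian jumps; estimate the stationary mean.
est = stationary_average(f=lambda x: x, b=lambda x: -(x - 1.0), sigma=lambda x: 0.5,
                         jump_rate=1.0, sample_jump=lambda rng: 0.2 * rng.standard_normal())
print(est)  # should be close to 1.0 (jumps are centered, so the stationary mean stays 1)
```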
  • Optimal quantization methods for filtering and applications to finance.

    Afef SELLAMI, Gilles PAGES, Huyen PHAM
    2005
    We develop a grid-based numerical approach to filtering, using optimal quantization results for random variables. We implement two filter computation algorithms based on zero-order and first-order approximation techniques, propose implementable versions of these algorithms, and study the behavior of the approximation error as a function of the quantizer size, relying on the stationarity property of optimal quantizers. We position this grid approach with respect to the Monte Carlo particle approach by comparing the two methods and testing them on different state models (a generic zero-order grid-filter recursion is sketched below). In a second part, we exploit the advantage of quantization for the preprocessing of offline data to develop a filtering algorithm based on the quantization of the observations (and of the signal). The error is again studied and a convergence rate is established as a function of the quantizer size. Finally, the quantization of the filter viewed as a random variable is studied in order to solve an American option pricing problem in a market with unobserved stochastic volatility. All results are illustrated by numerical examples.
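    A generic zero-order grid-filter recursion, in which the signal has been replaced by a finite Markov chain on a quantization grid; the grid, transition matrix and observation likelihood below are toy assumptions, not the quantized dynamics produced by the thesis's algorithms.

```python
import numpy as np

def grid_filter(grid, transition, likelihood, observations, prior):
    """Zero-order approximate filter on a quantization grid: the signal is replaced by a
    finite Markov chain on `grid` with matrix `transition`, and the filter
    pi_k(j) ~ P(X_k = grid[j] | y_1..y_k) is propagated by
    pi_{k+1} proportional to likelihood(y_{k+1}, grid) * (transition^T pi_k)."""
    pi = np.asarray(prior, dtype=float)
    filters = []
    for y in observations:
        pi = likelihood(y, grid) * (transition.T @ pi)   # predict, then correct
        pi /= pi.sum()                                   # normalize
        filters.append(pi.copy())
    return np.array(filters)

# Toy example: a 3-point grid and noisy Gaussian observations of the signal.
grid = np.array([-1.0, 0.0, 1.0])
transition = np.array([[0.8, 0.2, 0.0],
                       [0.1, 0.8, 0.1],
                       [0.0, 0.2, 0.8]])
likelihood = lambda y, g: np.exp(-0.5 * (y - g) ** 2 / 0.25)  # N(signal, 0.5^2) noise
obs = [0.9, 1.1, 0.2, -0.8]
print(grid_filter(grid, transition, likelihood, obs, prior=[1/3, 1/3, 1/3]))
```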
  • Recursive estimation of the invariant measure of a diffusion process.

    Vincent LEMAIRE, Damien LAMBERTON, Gilles PAGES
    2005
    The purpose of this thesis is to study a simple, recursive, easily implementable algorithm for computing the integral of a function with respect to the invariant probability measure of a process solving a finite-dimensional stochastic differential equation. The main assumption on these solutions (diffusions) is the existence of a Lyapunov function guaranteeing a stability condition. By the ergodic theorem, the empirical measures of the diffusion converge to an invariant measure. We study a similar convergence when the diffusion is discretized by an Euler scheme with decreasing steps. We prove that the weighted empirical measures of this scheme converge to the invariant measure of the diffusion, and that exponential functions can be integrated when the diffusion coefficient is sufficiently small. Moreover, for a more restricted class of diffusions, we prove the almost sure and Lp convergence of the Euler scheme to the diffusion. We obtain rates of convergence for the weighted empirical measures and give the parameters yielding the optimal rate. We conclude with the study of this scheme when there are several invariant measures; this study is carried out in dimension 1 and highlights a link between the Feller classification and Lyapunov functions. In the last part, we present a new adaptive algorithm allowing more general problems to be considered, such as Hamiltonian systems or monotone systems. It consists in considering the empirical measures of an Euler scheme built from a sequence of adapted random steps dominated by a sequence decreasing to 0. (The basic scheme is sketched below, checked against an Ornstein-Uhlenbeck process whose invariant law is known.)
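    The same decreasing-step Euler idea as in the jump-case sketch above, specialized to a Brownian diffusion and checked on an Ornstein-Uhlenbeck process whose invariant law N(0, sigma^2/(2 theta)) is known in closed form; the step exponent and run length are illustrative choices.

```python
import numpy as np

def invariant_average(f, drift, diffusion, n_steps=300_000, x0=0.0, seed=0):
    """Weighted empirical measure of a decreasing-step Euler scheme:
    nu_n(f) = sum_k gamma_k f(X_{k-1}) / sum_k gamma_k, with
    X_k = X_{k-1} + gamma_k * drift(X_{k-1}) + sqrt(gamma_k) * diffusion(X_{k-1}) * N(0,1)."""
    rng = np.random.default_rng(seed)
    x, num, den = x0, 0.0, 0.0
    for k in range(1, n_steps + 1):
        gamma = k ** (-1.0 / 3.0)               # polynomial decreasing step, sums to +infinity
        num += gamma * f(x)
        den += gamma
        x += gamma * drift(x) + np.sqrt(gamma) * diffusion(x) * rng.standard_normal()
    return num / den

# Validation on an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dB:
# its invariant law is N(0, sigma^2 / (2*theta)), so E_nu[X^2] = 0.5 here.
theta, sigma = 1.0, 1.0
est = invariant_average(lambda x: x**2, lambda x: -theta * x, lambda x: sigma)
print(est)  # expected to be close to 0.5
```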
  • Stochastic optimization methods applied to engine tuning.

    Aurelien SCHMIED, Gilles PAGES
    2003
    This thesis deals with stochastic optimization methods applied to engine tuning, so that the engine consumes as little fuel as possible while complying with the pollution standards in force. It proposes two modifications of the methodology existing at Renault and a new approach that breaks with the currently used process. The first modification consists in reformulating the optimization problem using fuzzy-logic models; the second relies on a new stochastic optimization algorithm, "Multistosch", for which several convergence results are proved. The new approach is based on a dynamic test-planning tool (trajectories in a plane) using functional quantization, in particular constrained functional quantization. In this framework, a new form of constrained distortion is introduced, which is minimized by stochastic optimization algorithms (a classical unconstrained recursion of this type is sketched below).
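    The constrained distortion and the "Multistosch" algorithm are specific to the thesis and not reproduced here; as an illustration of the kind of stochastic optimization recursion referred to, below is the classical CLVQ scheme, a stochastic gradient descent on the unconstrained quadratic distortion of a one-dimensional distribution.

```python
import numpy as np

def clvq(sample, n_points=20, n_steps=100_000, seed=0):
    """Competitive Learning Vector Quantization: a stochastic gradient descent on the
    quadratic distortion D(x) = E[min_i |X - x_i|^2], using one sample of X per step."""
    rng = np.random.default_rng(seed)
    grid = np.sort(sample(rng, n_points))                 # initialize with a few samples
    for n in range(1, n_steps + 1):
        xi = sample(rng, 1)[0]                            # one innovation X_n
        winner = np.argmin(np.abs(grid - xi))             # index of the closest grid point
        grid[winner] += (xi - grid[winner]) / (n + 100.0) # move the winner toward X_n (step gamma_n)
    return grid

# Toy run: a 20-point quantization grid of the standard normal distribution.
print(clvq(lambda rng, k: rng.standard_normal(k)))
```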
  • On some quantization problems.

    Pierre COHORT, Gilles PAGES
    2000
    No summary available.