All talks will take place at King's College London, Bush House, Lecture Theatre 2 (4.04), Strand, London WC2R 2LS, United Kingdom.

Monday 10 September 2018

Morning session (Chair: Marcel Nutz)

         8.30 - 9.15      Registration
         9.15 - 9.30      Opening remarks
         9.30 - 10.10     Jim Gatheral: Diamonds: A quant's best friend [Slides]
We use the Alós-Itô Decomposition Formula to express certain conditional expectations as exponentials of forests of trees. Each tree represents iterated applications of a new diamond operator. As one application, we compute an exact formal expression for the leverage swap for any stochastic volatility model expressed in forward variance form. As another, we show how to extend the Bergomi-Guyon expansion to all orders in volatility of volatility. Finally, we compute exact expressions under rough volatility, obtaining in particular the fractional Riccati equation for the rough Heston characteristic function. As a corollary, we compute a closed-form expression for the leverage swap in the rough Heston model.
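For orientation, a schematic statement of the objects named above, in a commonly used normalisation (the symbols H, ν, ρ and the forward variance curve ξ_0 are generic placeholders, not necessarily the speaker's notation):

```latex
% Stochastic volatility in forward variance form (schematic):
\[
  \frac{dS_t}{S_t} = \sqrt{v_t}\,dZ_t, \qquad \xi_t(u) = \mathbb{E}_t[v_u].
\]
% Fractional Riccati equation for the rough Heston characteristic function,
% as commonly written in the literature; D^\alpha is the fractional
% derivative of order \alpha = H + 1/2:
\[
  D^{\alpha} h(a,t) = -\tfrac{1}{2}\,a(a+\mathrm{i})
    + \mathrm{i}\rho\nu a\, h(a,t) + \tfrac{\nu^2}{2}\,h(a,t)^2,
  \qquad h(a,0) = 0,
\]
% with the characteristic function recovered by integrating a functional of h
% against the initial forward variance curve \xi_0.
```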
         10.10 - 10.50   Emmanuel Bacry: Disentangling and quantifying market participant volatility contributions [Slides]
Thanks to access to labeled orders on the CAC 40 index future provided by Euronext, we are able to quantify market participants' contributions to the volatility in the diffusive limit. To achieve this result we leverage the branching properties of Hawkes point processes. We find that fast intermediaries (e.g., market-maker-type agents) have a smaller footprint on the volatility than slower, directional agents. The branching structure of Hawkes processes also allows us to examine the degree of endogeneity of each agent's behavior. We find that high-frequency traders are more endogenously driven than other types of agents.
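As a minimal, self-contained illustration of the Hawkes machinery the talk relies on (univariate with an exponential kernel only; the study itself fits multivariate processes to labelled Euronext order flow, and the parameter values below are arbitrary placeholders), the branching ratio that measures endogeneity is simply the integral of the self-excitation kernel:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Simulate a univariate Hawkes process on [0, T] by Ogata thinning.
    Intensity: lambda(t) = mu + sum_{t_i < t} alpha*beta*exp(-beta*(t - t_i)).
    The kernel integrates to alpha, so alpha is the branching ratio
    (the 'degree of endogeneity'); stationarity requires alpha < 1."""
    rng = np.random.default_rng(seed)
    events, t, excitation = [], 0.0, 0.0      # excitation = lambda(t) - mu
    while True:
        lam_bar = mu + excitation             # upper bound until the next event
        dt = rng.exponential(1.0 / lam_bar)
        if t + dt >= T:
            return np.array(events)
        excitation *= np.exp(-beta * dt)      # kernel decays over the waiting time
        t += dt
        if rng.uniform() <= (mu + excitation) / lam_bar:
            events.append(t)                  # accepted event adds a jump of alpha*beta
            excitation += alpha * beta

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, T=500.0)
print(len(events), "events; long-run rate mu/(1-alpha) =", 0.5 / (1 - 0.8))
```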

         10.50 - 11.20  COFFEE BREAK

         11.20 - 12.00   Peter Tankov: Mean field games of optimal stopping: a relaxed control approach [Slides]
We consider a mean-field game in which each agent determines the optimal time to exit the game by solving an optimal stopping problem whose reward function depends on the density of states of the agents still present in the game. Placing ourselves in the framework of relaxed optimal stopping, we prove the existence and uniqueness of the mixed Nash equilibrium. Further, we provide a criterion under which the optimal strategies are pure strategies, and we present a numerical method for computing the equilibrium. Applications to mathematical finance and financial economics will be briefly discussed.
This is joint work with Roxana Dumitrescu and Géraldine Bouveret.
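Schematically, and in generic notation rather than the authors' own, each small agent solves a stopping problem coupled to the population of agents still in the game, and the equilibrium is a fixed point of that coupling:

```latex
\[
  v(x; m) \;=\; \sup_{\tau}\; \mathbb{E}\big[\, f\big(\tau,\, X^{x}_{\tau},\, m(\tau)\big) \big],
\]
% where X^x is the state of a representative agent and m(t) is the distribution
% (density) of states of the agents that have not yet stopped. An equilibrium is
% a flow m* such that, when every agent stops optimally against m*, the induced
% distribution of surviving states is again m*; allowing randomised (relaxed)
% stopping rules is what underlies the existence and uniqueness result.
```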
         12.00 - 12.40   Yingying Li: Approaching Mean-Variance Efficiency for Large Portfolios [Slides]
This paper introduces a new approach to constructing optimal mean-variance portfolios. The approach relies on a novel unconstrained regression representation of the mean-variance optimization problem combined with high-dimensional sparse-regression methods. Under a mild sparsity assumption, our estimated portfolio controls the risk and attains the maximum expected return as both the number of assets and the number of observations grow. The superior properties of our approach are demonstrated through comprehensive simulation and empirical analysis. Notably, we find that investing in individual stocks in addition to the Fama-French three-factor portfolios using our strategy leads to substantially improved performance.
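As a loose illustration of the regression route to mean-variance weights (in the spirit of Britten-Jones, 1999, with an added l1 penalty; this is a sketch, not the authors' estimator, and the penalty level alpha is an arbitrary placeholder):

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_mv_weights(R, alpha=1e-3):
    """R: T x N matrix of excess returns.
    Regressing a constant target of ones on the returns (no intercept) gives
    coefficients proportional to (Sigma + mu mu')^{-1} mu, hence to the
    maximum-Sharpe-ratio portfolio Sigma^{-1} mu (Britten-Jones, 1999);
    the l1 penalty produces a sparse approximation when N is large."""
    y = np.ones(R.shape[0])
    fit = Lasso(alpha=alpha, fit_intercept=False, max_iter=50_000).fit(R, y)
    w = fit.coef_
    gross = np.abs(w).sum()
    return w / gross if gross > 0 else w

# Toy usage with simulated returns (placeholder numbers only).
rng = np.random.default_rng(1)
R = 0.0004 + 0.01 * rng.standard_normal((500, 200))
w = sparse_mv_weights(R)
print("non-zero positions:", int((w != 0).sum()))
```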
         12.40 - 13.10   Shri Rengasamy: Till Death do us part: How can we have a lasting relationship with our retirement savings? [Slides]
With more countries shifting to the provision of defined contribution (DC) retirement benefits instead of defined benefits, the responsibility increasingly falls to individuals to manage their finances throughout their lives. Whilst the industry has evolved to help individuals accumulate wealth, the road to decumulating wealth is less well-trodden. For the first time in known history, we will have entire generations of people retiring with just DC benefits. They will have to spread a pot of money over an unknown period of time and ensure it provides an ongoing income that is both adequate and sustainable. Yet we have very little in the way of tools to help. It is an actuarial puzzle that we haven't yet solved for ourselves; so how will individuals manage? This talk, from the perspective of an industry practitioner, discusses the challenges facing individuals at retirement and what we can do as an industry and within academia to help them achieve a lasting long-term relationship with their retirement savings.

         13.10 - 14.40   LUNCH BREAK

Afternoon session (Chair: Kostas Kardaras)

         14.40 - 15.20   Enrico Biffis: Optimal portfolio choice with path-dependent labor income: The infinite horizon case [Slides]
We consider an infinite horizon portfolio problem with borrowing constraints, in which an agent receives labor income that adjusts to financial market shocks in a path-dependent way. This path dependency is the novelty of the model, and leads to an infinite-dimensional stochastic optimal control problem. We solve the problem completely, and find the optimal controls explicitly in feedback form. This is possible because we are able to find an explicit solution to the associated infinite-dimensional Hamilton-Jacobi-Bellman (HJB) equation, even in the presence of state constraints. To the best of our knowledge, this is the first infinite-dimensional generalization of Merton's optimal portfolio problem for which explicit solutions can be found. The explicit solution allows us to study the properties of optimal strategies. In particular, we show how wage rigidity and 'learning your earning' can modulate the negative income hedging demand arising from the implicit holding of risky assets in human capital, leading to richer asset allocation predictions than in the case with persistent shocks. This is joint work with Fausto Gozzi and Cecilia Prosdocimi.
         15.20 - 16.00   Géraldine Bouveret: Optimal Control under Controlled Loss Constraints via Reachability Approach and Compactification
We study an optimal control problem under a set of controlled loss constraints holding at different deterministic dates. It is well known that the characterization of the related value function by a Hamilton-Jacobi-Bellman equation usually requires additional strong assumptions involving an interplay between the set of constraints and the dynamics of the controlled system. To treat this problem in the absence of these assumptions, we first translate it into a state-constrained stochastic target problem and then apply a level-set approach to describe the reachable set. With this approach the state constraints can be handled easily by an exact penalization. However, the target problem involves a new set of control variables that are unbounded. A 'compactification' of the problem is then performed.
This is a joint work with Athena Picarelli.

         16.00 - 16.30   COFFEE BREAK

         16.30 - 17.10   Martin Larsson: Short- and long-term relative arbitrage in stochastic portfolio theory [Slides]
Stochastic Portfolio Theory is a mathematical framework for studying large equity markets, especially the performance of long-term investments. An important focus is universal features that depend only weakly on specific modeling assumptions. A basic result of this kind states that a mild nondegeneracy condition suffices to guarantee long-term relative arbitrage, that is, the possibility of outperforming the market over sufficiently long time horizons. A longstanding open question has been whether short-term relative arbitrage is also implied. A qualitative answer, in the negative, was recently given by Fernholz, Karatzas & Ruf. In this work we settle the question by characterizing and explicitly computing the critical time horizon beyond which relative arbitrage always exists. The key tool is a previously unknown connection between the existence of relative arbitrage and a certain geometric PDE describing mean curvature flow.
         17.10 - 17.50   Xin Guo: The marriage of Bregman with Wasserstein, with applications to GANs and beyond [Slides]
In this talk, I will review some well-known distance measures that are widely used in statistics, machine learning and stochastic games. These include the Bregman divergence, the Wasserstein distance, and a recently proposed divergence function named relaxed Wasserstein. I will review some of their important properties and implications in machine learning and optimization. We will then discuss the application to GANs, a central topic in machine learning, and its potential connection with mathematical finance.
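For reference, the two classical distance measures named in the abstract (standard definitions; the relaxed Wasserstein divergence mentioned above is, roughly, a Wasserstein-type distance built on a Bregman cost):

```latex
% Bregman divergence generated by a differentiable, strictly convex \phi:
\[
  D_{\phi}(x, y) \;=\; \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y \rangle .
\]
% Examples: \phi(x)=\|x\|^2 gives the squared Euclidean distance; the negative
% entropy \phi(p)=\sum_i p_i \log p_i gives the Kullback-Leibler divergence.
% Wasserstein distance of order p between probability measures \mu and \nu:
\[
  W_{p}(\mu, \nu) \;=\; \Big( \inf_{\pi \in \Pi(\mu,\nu)} \int \|x - y\|^{p}\, d\pi(x, y) \Big)^{1/p},
\]
% where \Pi(\mu,\nu) is the set of couplings with marginals \mu and \nu.
```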

         18.00 - ...   COCKTAIL RECEPTION AND POSTER PRESENTATIONS
                 Eduardo Abi Jaber (Dauphine University, Paris): Lifting the Heston model
How to reconcile the classical Heston model with its rough counterpart? We introduce a lifted version of the Heston model with n mean-reverting factors sharing the same Brownian motion but mean reverting at different speeds. Our model nests as extreme cases the classical Heston model (when n=1) and the rough Heston model (when n goes to infinity). We show that the lifted model enjoys the best of both worlds: Markovianity and satisfactory fits of implied volatility smiles for short maturities. Further, our approach speeds up the calibration time and opens the door to time-efficient simulation schemes.
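Schematically (generic notation), the lift rests on approximating the fractional kernel of the rough Heston variance process by a finite sum of exponentials, each exponential contributing one mean-reverting factor:

```latex
\[
  K(t) \;=\; \frac{t^{H-1/2}}{\Gamma(H+1/2)} \;\approx\; \sum_{i=1}^{n} c_{i}\, e^{-x_{i} t},
\]
% so the variance is driven by n factors sharing a single Brownian motion and
% mean reverting at speeds x_1, ..., x_n; n = 1 corresponds to classical
% Heston-type dynamics, while n -> infinity (with suitable weights c_i and
% speeds x_i) recovers the rough Heston kernel.
```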
                 Sergio Alvares Rodrigues Souza Maffra (King's College London): A simulation model for longevity risk management
We describe a stochastic model for some of the most important risk factors affecting a typical pension insurer. The selection of the risk factors is motivated by the need to financially hedge defined benefit pension liabilities. On the asset side, we model the investment returns on equities and bonds. On the liability side, the risks are driven by future mortality developments as well as price and wage inflation. All the risk factors are described as a multivariate stochastic process (time series model) that captures the dynamics and the dependencies among the risk factors.
                 Davide De Santis (LSE): Mixed impulse/stopping nonzero-sum stochastic differential games
We study a two-player nonzero-sum stochastic differential game in which one player is a controller who acts via impulse controls while the other is a stopper. The main goal of this work is to establish a verification theorem that provides the appropriate system of quasi-variational inequalities for the Nash equilibrium payoff functions of the game and the related strategies. Afterwards, we present an impulse/stopping game example with a one-dimensional state variable, which behaves as a scaled Brownian motion, and linear functionals for both players. The Nash equilibrium is characterized up to numerical computation.
                 Luting Li (London School of Economics): Capital allocation under the Fundamental Review of the Trading Book
Facing the FRTB, banks need to allocate their capital to each of their business units to evaluate the capital efficiency of their strategies. This paper proposes two computationally efficient allocation methods that are weighted according to liquidity horizon and whose allocations are more stable and less negative under the FRTB than under the current regulatory framework.
                 Douglas Machado Vieira (Imperial College): High-frequency options market making and the role of stochastic volatility
Since the seminal paper by Avellaneda and Stoikov (2008), optimal market making has commonly been modelled as a strategy of posting bid and ask prices around an idealised price, called the fundamental or reference price. With the aid of small-time asymptotics and representation theorems, we are able to investigate the local behaviour of the option reference price under general Itô processes, including the Heston, free-boundary SABR and Bergomi models. We show that, locally, option prices indeed move with changes in volatility, even though the underlying price process behaves as if the volatility were constant. From this theoretical finding, we empirically estimate the impact of stochastic volatility changes on single-name stock option price movements. Finally, we derive a tractable optimal options market making strategy suitable for high-frequency markets.
                 Dean Markwick (UCL): Hierarchical nonparametric Hawkes processes with applications to Foreign Exchange markets
Hawkes processes are used for modelling the arrival of events in situations where such events appear in clusters. One example is the prediction of trade volumes in financial markets. However, in practice we often find strong day-of-the-week seasonal effects in the data; for example, the behaviour of financial markets on a Monday is quite different to their behaviour on a Friday. To this end, we have developed a nonparametric extension of the Hawkes process which uses a hierarchical Dirichlet process to learn multiple day-of-the-week seasonality functions simultaneously. We show how applying a series of latent variable transformations to the model likelihood function results in computationally efficient MCMC inference for both the nonparametric seasonal parts of the model and the Hawkes-process-based clustering.
                 Jose Pedraza-Ramirez (London School of Economics): Predicting the last zero for spectrally negative Lévy processes
Given a spectrally negative Lévy process drifting to infinity, we consider the last time g the process is below zero. We are interested in finding a stopping time which is as close as possible to g. In the L1 setting, we show that an optimal stopping time is given by a first passage time above a level based on the convolution with itself of the distribution function of minus the overall supremum of the process. The proof is based on a direct approach without the need to make use of stochastic calculus.
For some more general metrics the problem is more challenging and can be transformed into an optimal stopping problem for a three-dimensional Markov process involving the last passage time. We show that the solution of this optimal stopping problem is given by the first time that the Lévy process crosses a non-increasing, non-negative curve which depends on the time spent above zero.
This is a joint work with Erik Baurdoux.
                 Udomsak Rakwongwan (King's College London): Semi-static hedging under finite liquidity
We develop a model for semi-static hedging in derivatives markets where price quotes have bid-ask spreads and finite quantities. The model quantifies the dependence of the prices and hedging portfolios on an investor's beliefs, risk preferences and financial position, as well as on the price quotes. Computational techniques of convex optimisation allow for fast computation of optimal hedging strategies as well as sensitivities with respect to model parameters.


Tuesday 11 September 2018

Morning session (Chair: Damiano Brigo)

         9.30 - 10.10      Jean-Philippe Bouchaud: Price impact: a short review of recent theoretical and empirical results [Slides]
Monitoring and controlling the impact of trades on prices has become one of the most active domains of research in quantitative finance since the mid-nineties. A large body of empirical results has accumulated over the years concerning the dependence of impact on traded quantities, the time evolution of impact, the impact of metaorders, cross-impact, etc. Some of these results are in direct conflict with traditional approaches, such as the classic model of Kyle. In this review talk, I will present some of the most salient empirical findings, and a variety of theoretical ideas that have been proposed to rationalise them. Some remaining puzzles and open problems will be discussed as well.
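One of the salient empirical findings reviews of this kind typically highlight is the approximate 'square-root law' for metaorder impact (schematic form; Q is the metaorder size, V the contemporaneous traded volume, σ the volatility and Y a constant of order one):

```latex
\[
  \mathcal{I}(Q) \;\approx\; Y\, \sigma\, \sqrt{\frac{Q}{V}},
\]
% a concave dependence on Q that is hard to reconcile with the linear impact
% predicted by the classic Kyle model.
```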
         10.10 - 10.50    Mathieu Rosenbaum: Optimal make-take fees for market making regulation [Slides]
We consider an exchange that wishes to set suitable make-take fees to attract liquidity on its platform. Using a principal-agent approach, we are able to describe in quasi-explicit form the optimal contract to propose to a market maker. This contract depends essentially on the market maker's inventory trajectory and on the volatility of the asset. We also provide the optimal quotes that should be displayed by the market maker. The simplicity of our formulas allows us to analyze in detail the effects of optimal contracting with an exchange, compared to a situation without a contract. We show in particular that it leads to higher-quality liquidity and lower trading costs for investors. This is joint work with Omar El Euch, Thibaut Mastrolia and Nizar Touzi.

         10.50 - 11.20   COFFEE BREAK

         11.20 - 12.00   Charles-Albert Lehalle: From optimal execution in front of a background noise to mean field games [Slides]
A large number of mathematical frameworks are available to optimally control the execution of a large order (see for instance "Optimal control of trading algorithms: a general impulse control approach", SIAM J. Financial Mathematics, 2:1, 404-438, by Bouchard, Dang and Lehalle, 2011, or "General intensity shapes in optimal liquidation", Mathematical Finance, 25:3, 457-495, by Guéant and Lehalle, 2015), and some frameworks are emerging to manage the life cycle of small orders in an order book (as in "Optimal liquidity-based trading tactics", by Lehalle, Mounjid and Rosenbaum, arXiv, 2018). In all these frameworks, an isolated investor faces a background noise coming from the aggregation of other market participants' behaviours. With recent progress in mean field games (MFG), it is now possible to analyse the same problems in a closed loop, going beyond the current isolated views. I will present proposed approaches for both cases (see "Efficiency of the price formation process in presence of high frequency participants: a mean field game analysis", Mathematics and Financial Economics, 10:3, 223-262, by Lachapelle, Lasry, Lehalle and Lions, 2016, for small orders, and "Mean field game of controls and an application to trade crowding", Mathematics and Financial Economics, by Cardaliaguet and Lehalle, 2017, for large orders) and explain how MFG can address many needs in modelling liquidity in financial markets.
         12.00 - 12.40   Almut Veraart: Modeling, simulation and inference for multivariate time series of counts using trawl processes [Slides]
This talk presents a new continuous-time modeling framework for multivariate time series of counts which have an infinitely divisible marginal distribution. The model is based on a mixed moving average process driven by Lévy noise, called a trawl process, where the serial correlation and the cross-sectional dependence are modeled independently of each other. Such processes can exhibit short or long memory. We derive a stochastic simulation algorithm and a statistical inference method for such processes. The new methodology is then applied to high-frequency financial data, where we investigate the relationship between the number of limit order submissions and deletions in a limit order book.
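Schematically (generic notation), a trawl process evaluates a homogeneous Lévy basis over a translated 'trawl' set, which is what decouples the marginal law from the serial dependence:

```latex
\[
  X_{t} \;=\; L(A_{t}), \qquad A_{t} \;=\; A + (0, t),
\]
% where L is a homogeneous Lévy basis and A is a bounded trawl set. The marginal
% law of X_t is the (infinitely divisible) law of L(A), while the autocorrelation
% is governed purely by the geometry of A, e.g.
%   Cor(X_t, X_{t+h}) \propto Leb(A \cap A_h),
% so the marginal distribution and the serial dependence can be specified
% separately.
```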

         12.40 - 14.10   LUNCH BREAK

Afternoon session (Chair: Luitgard Veraart)

         14.10 - 14.50   Tobias Fissler: Elicitability and Identifiability of Measures of Systemic Risk [Slides]
We establish elicitability and identifiability results for measures of systemic risk introduced in Feinstein, Rudloff and Weber (2017). These risk measures determine the set of all capital allocations that make a financial system acceptable. Hence, they take an ex ante angle, specifying those capital allocations that prevent the system from defaulting. At the same time, they allow one to capture the dependence structure of different financial firms.
The elicitability of a risk measure or, more generally, of a statistical functional amounts to the existence of a strictly consistent scoring or loss function: a function of two arguments, a forecast and an observation, such that the expected score is minimised by the correctly specified functional value, thereby encouraging truthful forecasts. Prominent examples are the squared loss for the mean and the absolute loss for the median. Hence, the elicitability of a functional is crucial for meaningful forecast comparison and forecast ranking, but it also opens the way to M-estimation and regression. An identification function is similar to a scoring function; however, the correctly specified forecast is the zero of the expected identification function rather than its minimiser, thus giving rise to Z-estimation and to possibilities to assess the calibration of forecasts.
To allow for a rigorous treatment of elicitability of set-valued functionals, we introduce two modes of elicitability: a weak and a strong version. We show that these two modes are mutually exclusive and establish strong elicitability results for the systemic risk measures under consideration. That means we construct strictly consistent scoring functions taking sets as input arguments for forecasts.
The results turn out to be practically relevant since they open the way to comparative backtests of Diebold-Mariano type and to regression with set-valued models. At the same time, they constitute a novelty of theoretical interest in its own right.
The talk is based on joint ongoing work with Jana Hlavinova and Birgit Rudloff.
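In symbols, for a single-valued functional T (standard definitions, stated here only to fix ideas): T is elicitable if it admits a strictly consistent scoring function S, and identifiable if it admits an identification function V, i.e.

```latex
\[
  \mathbb{E}_{Y\sim F}\big[S\big(T(F), Y\big)\big] \;<\; \mathbb{E}_{Y\sim F}\big[S(x, Y)\big]
  \quad\text{for all } x \neq T(F),
  \qquad
  \mathbb{E}_{Y\sim F}\big[V\big(T(F), Y\big)\big] \;=\; 0 .
\]
% Examples: S(x,y) = (x-y)^2 and V(x,y) = x - y for the mean;
% S(x,y) = |x-y| for the median.
```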
         14.50 - 15.30   Paul Bilokon: From AI to ML, from logic to probability [Slides]
Applications of Artificial Intelligence (AI) and Machine Learning (ML) are rapidly gaining steam in quantitative finance. These terms are often used interchangeably. However, the pioneering work on AI by participants of the Dartmouth Summer Research Project --- Marvin Minsky, Nathaniel Rochester, and Claude Shannon --- was more symbolic than numerical, and often used the language of logic. Recent advances in ML --- especially Deep Learning --- are more numerical than symbolic, and often use the language of probability. In this talk we shall show how to connect these two worldviews.

         15.30 - 16.00   COFFEE BREAK

         16.00 - 16.40   Rajeeva Karandikar: Portfolio optimization under value at risk constraints [Slides]
         16.40 - 17.20   Paolo Guasoni: Options Portfolio Selection [Slides]
We develop a new method to optimize portfolios of options in a market where European calls and puts are available with many exercise prices for each of several potentially correlated underlying assets. We identify the combination of asset-specific option payoffs that maximizes the Sharpe ratio of the overall portfolio: such payoffs are the unique solution to a system of integral equations, which reduce to a linear matrix equation under suitable representations of the underlying probabilities. Even when implied volatilities are all higher than historical volatilities, it can be optimal to sell options on some assets while buying options on others, as hedging demand outweighs demand for asset-specific returns.

         18.00   CONCLUDING REMARKS