AMF 2025 – Program
May 21–23, 2025
Location: FRIAS Institute, Lecture Hall (Hörsaal Anatomie), Albertstraße 19, Freiburg im Breisgau, Germany
Online: The Zoom link has been sent to you by e-mail.
| Time | Wednesday, May 21, 2025 | Time | Thursday, May 22, 2025 | Time | Friday, May 23, 2025 |
|---|---|---|---|---|---|
| | | 9:00 | J. Jacod | 9:00 | M. Sørensen |
| 10:00 | Arrival & Registration | | | 9:45 | N. Yoshida |
| 10:30 | Welcome | 10:00 | A. Cerny | 10:30 | break |
| 11:00 | D. Madan | 10:45 | W. Runggaldier | 11:00 | A. Lipton (Online) |
| 11:45 | H. Geman | 11:30 | J. Teichmann (SmallData invited speaker) | 11:45 | J. Muhle-Karbe |
| 12:30 | lunch | 12:15 | lunch | 12:30 | end |
| 14:00 | C. Cuchiero | 14:00 | F. Delbaen | | |
| 14:45 | I. Kyriakou | 14:45 | F. Hubalek | | |
| 15:30 | break | 15:30 | break | | |
| 16:00 | J. Zubelli | 16:00 | Y. Kabanov | | |
| 16:45 | G. Stahl | 16:45 | T. Gehrig | | |
| 17:30 | no talk | 17:30 | Poster session + champagne, FRIAS | | |
Abstracts of the Talks
| Name | Title | Abstract |
|---|---|---|
| Cerny, Ales | Dynamically Optimal Portfolios for Monotone Mean-Variance Preferences | Monotone mean-variance (MMV) utility is the minimal modification of the classical Markowitz utility that respects rational ordering of investment opportunities. This paper provides, for the first time, a complete characterization of optimal dynamic portfolio choice for the MMV utility in asset price models with independent returns. The task is performed under minimal assumptions, weaker than the existence of an equivalent martingale measure and with no restrictions on the moments of asset returns. We interpret the maximal MMV utility in terms of the monotone Sharpe ratio (MSR) and show that the global squared MSR arises as the nominal yield from continuously compounding at the rate equal to the maximal local squared MSR. The paper gives simple necessary and sufficient conditions for mean-variance (MV) efficient portfolios to be MMV efficient. Several illustrative examples contrasting the MV and MMV criteria are provided. Joint work with Johannes Ruf (LSE) and Martin Schweizer (ETHZ). (A schematic formulation of the MMV criterion follows this table.) |
| Cuchiero, Christa | Data-driven Heath-Jarrow-Morton models | We develop a data-driven version of Heath-Jarrow-Morton models in the context of interest rate modeling. We consider models driven by linear functionals of the yield curve, such as a vector of representative forward rates, possibly augmented by a set of economic factors, whose characteristics can be easily estimated from market data. We then parametrize the volatility function via neural networks, thus considering the framework of neural SPDEs. Their parameters are learned by calibrating the model to past market yield curves. This results in a data-driven arbitrage-free generation/prediction of yield curves. Our setup also allows for the possibility of expected jumps, a key feature in interest rate markets, which can arise due to monetary policy decisions. We illustrate our deep learning procedure by reconstructing and forecasting the Euro area yield curves. The talk is based on joint work with Claudio Fontana and Alessandro Gnoatto. (A toy simulation in the spirit of this setup follows this table.) |
| Delbaen, Freddy | Approximation of Martingales by Gaussian Martingales | We will show that for the Brownian filtration, every L2 martingale is the sum of an L2 converging series of Gaussian martingales. An immediate consequence is that every square integrable random variable X is the sum of an L2 converging series of Gaussian random variables Yn. Furthermore, the square of the norm of X is the sum of the squares of the norms of the Yn. This does not imply that the Gaussian variables are orthogonal. (The statement is restated in symbols after this table.) |
| Geman, Helyette | Crude Oil, Natural Gas and Electricity between Geopolitics and Climate | |
| Hubalek, Friedrich | On asymptotic expansions related to tail conditional expectation and other risk measures | A first asymptotic analysis of the effects of small changes on the computation of some risk measures is connected to expansions related to the central limit theorem. In particular, we can use the classical Cornish-Fisher expansion for Value-at-Risk. This motivates a new generalization for Expected Shortfall and, possibly, other coherent measures of risk. Our main results apply to losses from the domain of attraction of the normal distribution. A concrete example involving the hyperbolic distributions allows rather explicit computations and expansions. We also discuss the problem for non-Gaussian stable distributions, where the results look different and depend on the index of stability. (The Cornish-Fisher expansion is recalled after this table.) |
| Gehrig, Thomas | Talk and the City: How Far to Trust Bankers (Not) Calling for Bailouts? | To assess a bank’s ability to cope with macroeconomic shocks or financial market stress, a banking authority often relies on private information of a peer institution or counterparty of that bank. Reliance on such private information can leave the authority in a precarious position, particularly when deciding on potentially welfare-enhancing bank bailouts. This is because the counterparty’s own interests are not clear. Against the backdrop of a canonical model of liquidity creation, we analyze communication between a counterparty, who is informed about a bank’s fundamentals, and a banking authority, who is not. Communication about the bank’s fundamentals is cheap, i.e., costless, non-binding and unverifiable. But communication between the authority and the informed counterparty is crucial, since its consequences determine the outcome of a global game played between a potentially troubled bank and its depositors. Depending on the counterparty’s own liquidity position, truthful communication is sometimes but not always in its own interest. It may overstate or understate the financial support needed, even to the extent of not communicating unfavorable information at all. The key insights are threefold: for authorities, the risks in assessing the true financial position of a bank are endogenous; not necessarily continuous; and dependent on knowledge of the financial position of the informed counterparty. |
| Jacod, Jean | Some striking facts about volatility of volatility | tba |
| Kabanov, Youri | Distributional equations and the ruin problem for the Sparre Andersen model with investments | We study a model of an insurance company where the business activity is described by the Sparre Andersen model, that is, by a compound renewal process, while the price of the risky asset follows an independent Lévy process. The asymptotics of ruin probabilities for such models were studied in the case of upward jumps of the business process by Eberlein, Kabanov, and Schmidt; the non-life insurance version (downward jumps) and a mixture of both were treated by Kabanov and Promyslov. We discuss the seemingly simple case where the price process is of bounded variation with only positive or only negative jumps. Surprisingly, it turns out to be rather delicate. We provide new sufficient conditions ensuring the asymptotic behavior of the ruin probability discovered in the previous works. Our results are based on the theory of distributional equations as presented in the book by Buraczewski, Damek, and Mikosch. This is joint work with Danil Legenkiy and Platon Promyslov. (The underlying distributional equation is recalled after this table.) |
| Kyriakou, Ioannis | Calibration risk under probabilistic parameter dependencies and model output effects | We propose a novel regression-based framework for modelling the calibration risk associated with asset price models. The models are traditionally calibrated to liquid contract quotes by minimizing an error functional. Calibration risk relates to the uncertainty in model parameter estimates (probabilistic inputs), which is transferred to other contracts (model outputs), yet it often does not receive the attention and action it warrants. We present a systematic way of detecting and alleviating calibration risk, and a coherent probabilistic approach to modelling it that accounts for parameter dependencies and their effect on the ultimate outputs. We study the global sensitivity of different contract values to the model parameters, which enables us to rank parameters by the influence their knowledge has on the likelihood of profit or loss of an investor’s position. Joint work with Gianluca Fusai and Marina Marena. |
| Lipton, Alex | Old Financial Problems, Fresh Mathematical Insights: Stochastic Processes with Boundaries | This talk demonstrates how classical mathematical methods can be leveraged to address seemingly unrelated problems in applied and financial mathematics. Specifically, we use the method of heat potentials to tackle the following challenges: the semi-analytical description of the hitting time density for an Ornstein–Uhlenbeck process; the efficient determination of the default boundary in a structural default model; the semi-analytical solution of a mean-field problem for a system of interacting banks; and the development of hedging strategies for impermanent loss in Automated Market Makers. Our results highlight classical techniques’ versatility and enduring relevance in solving contemporary problems across diverse domains. (A brute-force Monte Carlo illustration of the first problem follows this table.) |
| Madan, Dilip | Automated Financial Investment | The automated investment problem (AIP) asks for the determination of the amount to be invested in a risky asset. Convex cones of random outcomes conceptualize the set of arbitrarily acceptable risks, for which the AIP has no solution. For risky outcomes that are not arbitrarily acceptable, the acceptable risks are those with a positive expectation, for which the AIP is solved using Disciplined Saddle Point (DSP) programming. Risks with a positive expectation under a base probability remain acceptable at a larger scale provided their expected losses under alternative and adverse probabilities are bounded by rebate levels associated with these probabilities. The economic value of a random outcome is given by the amount that can be withdrawn while still maintaining risk acceptability. The AIP solution maximizes economic value. Solutions are developed and implemented into automated trading strategies in both univariate and multivariate contexts. |
| Muhle-Karbe, Johannes | Volatility and order flow – a tale of two fractional Brownian motions? | Volatility and order flow are two commonly used measures of market activity that are evidently closely related. However, smooth fractional Brownian motions have been proposed to capture the autocorrelation of the order flow, in contrast to the paradigm of rough volatility. We discuss how to resolve this apparent contradiction. Joint work in progress with Youssef Chahdi and Mathieu Rosenbaum. (The contrasting autocorrelations are illustrated after this table.) |
| Runggaldier, Wolfgang | Exploratory randomization for discrete-time linear exponential quadratic Gaussian (LEQG) problems | One line of research concerning Reinforcement Learning in stochastic control is to consider an exploratory version with a randomized feedback control. In exchange, a penalty/regularization term, expressed by Shannon’s differential entropy of the randomized control, is included in the criterion function. While such an entropic penalization may be intuitively convincing, here we try to give a conceptual justification for it. We start from a risk-sensitive linear exponential-of-quadratic randomized control problem of the investment management type which, via the duality between free energy and the Kullback-Leibler divergence, can be reduced to a corresponding risk-neutral linear quadratic Gaussian (LQG) control problem with a penalizing entropy term that is induced both by the randomized control and by the reduction from risk-sensitive to risk-neutral. Based on Dynamic Programming, a solution to this LQG problem is derived in a parametrized form, making it possible to apply a policy-gradient approach. Based on a recent joint paper with Sebastien Lleo with the same title (see arXiv:2501.06275). (The free energy/Kullback-Leibler duality is recalled after this table.) |
| Sørensen, Michael | Likelihood inference for stochastic differential equations: recent developments | The complexity of likelihood inference for stochastic differential equations based on discrete time samples often necessitates the use of approximations or computational techniques. New developments that extend the applicability of some of these methods to more challenging models are reviewed. Approximate likelihood methods for high-frequency data have often been used in financial econometrics. They usually do not perform well for strongly nonlinear models, but a new likelihood method based on splitting schemes works well for such models too. Likelihood inference based on diffusion bridge methods works well at all sampling frequencies. It is discussed how to extend these methods in various directions such as non-synchronous data from multivariate diffusion processes, stochastic differential equations with jumps and models with random effects and measurement errors. The lecture is based on joint work with Mogens Bladt, Fernando Baltazar-Larios, Susanne Ditlevsen and Adeline Samson. (A minimal pseudo-likelihood example follows this table.) |
| Stahl, Gerhard | Resiliency, Regulation and Rigorous Mathematics – looking back and looking forward | With the Market Risk Amendment in 1996, the triumphal march of stochastic models for regulatory purposes began and heralded a new era. This stochastic approach is now at risk of being transformed, and the era may soon have to be considered historic. The wake-up call for this change is the recent decision of banking supervisors to dramatically reduce the role of models. A short review highlights the rationale for the regulatory adoption of stochastic models in the 1990s. This includes the regulation of energy markets, where hyperbolic models allowed for a non-bureaucratic approach. Currently, systemic risks (political, climate, cyber, …) dominate the risk landscape, and tail risks in particular. Resiliency is the magic word for promising solutions. What a resilient regulatory framework could look like is exemplified by the concept of Minimum Capital Requirements, which is currently used under Solvency II merely as a legal construct – not yet as a stochastic concept. |
| Teichmann, Josef | Recent Advances in Affine Processes | We relate important classes of path-dependent models with signature features to affine processes in an infinite-dimensional sense. This surprising connection leads to novel representations of Fourier transforms for the whole path space measure. Joint work with Christa Cuchiero and Sara Svaluto-Ferro. (The defining affine transform property is recalled after this table.) |
| Yoshida, Nakahiro | Point processes applied to high frequency data: ratio model, deep learning and lead-lag analysis | We discuss an attempt to incorporate deep learning into point processes, motivated by studies on the modeling of the limit order book. In Muni Toke and Yoshida (QF 2019), a parametric approach was taken with a Cox-type model (the ratio model) for relative intensities of order flows in the limit order book. The Cox-type model with a nuisance baseline hazard has the advantage of cancelling non-stationary intraday trends in market data. They showed consistency and asymptotic normality of a quasi-likelihood estimator and validated model selection criteria applied to the point processes, based on the quasi-likelihood analysis (Yoshida AISM 2011, 2025). This method was applied to real data from the Paris Stock Exchange and achieves accurate prediction of market order signals, outperforming the traditional Hawkes model. It is suggested that the selection of the covariates is crucial for prediction. Subsequently, the ratio model was extended to a marked ratio model to express the hierarchical structure in market orders (Muni Toke and Yoshida JJSDS 2022). Each market order is categorized by Bid/Ask and then further classified as aggressive or non-aggressive depending on whether it causes price movements. The marked ratio model outperforms other intensity-based models, such as Hawkes-based models, in predicting the sign and aggressiveness of market orders on financial markets. However, the trials of model selection in those studies suggest the possibility of including more covariates in the model; the information criteria seem to prefer relatively large models among the many models generated by combinations of our proposed covariates. This motivates us to use deep learning to automatically generate more covariates and to enhance the expressive power of the model for more nonlinear dependencies behind the data. We investigate applications of deep neural networks to a point process having an intensity with mixing covariate processes as input. Our generic model includes Cox-type models and marked point processes as well as multivariate point processes. An oracle inequality giving a rate of convergence of the prediction error is derived. A simulation study shows that the marked point process can be superior to the simple multivariate model in prediction. We apply the marked ratio neural network model to real limit order book data. Motivated by estimating the lead-lag relationships in high-frequency financial data, we propose noisy bivariate Neyman-Scott point processes with gamma kernels (NBNSP-G). Our experiments suggest that NBNSP-G can explain the correlation of orders of two stocks well. A composite-type quasi-likelihood is employed to estimate the parameters of the model. In the proof of asymptotic properties, NBNSP-G breaks the boundedness assumption on the moment density functions commonly assumed in the literature. We show consistency and asymptotic normality for the bivariate point process model under more relaxed conditions to accommodate NBNSP-G (Shiotani and Yoshida, arXiv 2024). An application to real data is presented. This talk is based on joint work with I. Muni Toke, Y. Gyotoku and T. Shiotani. (The baseline-cancelling ratio structure is sketched after this table.) |
| Zubelli, Jorge P. | On the Calibration of Jump-Diffusion Models with Local Volatility from Option Prices | Dupire’s local volatility model is extensively used and well-recognized for hedging and option pricing in financial markets. The PDE (inverse) problem consists in recovering the time and space varying diffusion coefficient in a parabolic equation from limited data. It is known that this corresponds to an ill-posed problem. Lévy processes have been advocated since the seminal work of Professor Eberlein and collaborators for modeling financial processes. One natural class of models consists in including jump-diffusions with local volatility. This talk concerns the calibration of Dupire’s model in the presence of jumps. This leads to an integro-differential equation whose parameters have to be calibrated so as to fit market data. We present a detailed analysis and implementation of a splitting strategy to identify simultaneously the local-volatility surface and the jump-size distribution from quoted European prices. The underlying model consists of a jump-diffusion driven asset with time and price dependent volatility. The ill-posed character of local volatility surface calibration from market prices requires the use of regularization techniques, either implicitly or explicitly. Such regularization techniques have been widely studied for a while and are still a topic of intense research. We have employed convex regularization tools and recent inverse problem advances to deal with the local volatility calibration problem. Our approach uses a forward Dupire-type partial-integro-differential equation for the option prices to produce a parameter-to-solution map. The ill-posed inverse problem for such a map is then solved by means of a Tikhonov-type convex regularization. We present numerical examples that substantiate the robustness of the method both for synthetic and real data. This is joint work with Vinicius Albani (UFSC). (The forward Dupire equation is recalled after this table.) |
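As a schematic formulation of the criterion in Cerny's talk (up to sign and parameter conventions; the dual form follows the representation of Maccheroni, Marinacci, Rustichini and Taboga, quoted here from memory): the classical Markowitz utility and its minimal monotone modification read

$$ U_\theta(X) = \mathbb{E}[X] - \frac{\theta}{2}\,\mathrm{Var}(X), \qquad V_\theta(X) = \min_{Q \ll P}\Big\{ \mathbb{E}_Q[X] + \frac{1}{2\theta}\,\chi^2(Q\,\|\,P) \Big\}, $$

where the minimum runs over probability measures with square-integrable density and $\chi^2(Q\|P) = \mathbb{E}_P[(\mathrm{d}Q/\mathrm{d}P)^2] - 1$.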
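A toy numpy sketch in the spirit of Cuchiero's abstract (not the architecture of the Cuchiero-Fontana-Gnoatto paper): a randomly initialized network maps a few representative forward rates to a volatility curve, and the risk-neutral drift is then pinned down by the HJM no-arbitrage condition in the Musiela (time-to-maturity) parametrization. All features, network sizes, and parameter values are illustrative assumptions.

```python
# Toy one-factor HJM forward-curve simulation with a neural volatility map.
import numpy as np

rng = np.random.default_rng(0)
taus = np.linspace(0.0, 10.0, 41)            # time-to-maturity grid (Musiela)
dtau = taus[1] - taus[0]
f = 0.02 + 0.01 * (1.0 - np.exp(-taus))      # initial forward curve

W1, b1 = rng.normal(0.0, 0.5, (8, 3)), np.zeros(8)   # untrained 2-layer MLP
W2, b2 = rng.normal(0.0, 0.5, (1, 8)), np.zeros(1)

def vol_curve(curve):
    # features: three representative forward rates (linear functionals of f)
    feats = np.array([curve[0], curve[len(curve) // 2], curve[-1]])
    h = np.tanh(W1 @ feats + b1)
    level = 0.005 * (1.0 + np.tanh(W2 @ h + b2)[0])  # positive scalar vol level
    return level * np.exp(-0.2 * taus)               # maturity-damped shape

dt = 1.0 / 252.0
for _ in range(252):                          # simulate one year of daily steps
    s = vol_curve(f)
    # Musiela transport term plus the HJM drift sigma * integral of sigma
    drift = np.gradient(f, dtau) + s * np.cumsum(s) * dtau
    f = f + drift * dt + s * np.sqrt(dt) * rng.standard_normal()

print(np.round(f[:5], 5))                     # short end of the simulated curve
```

In a calibrated version, the network weights would be learned from past market yield curves rather than drawn at random.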
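In symbols, the statement of Delbaen's talk is that for $X \in L^2$ on the Brownian filtration there exist Gaussian random variables $Y_n$ (terminal values of Gaussian martingales) with

$$ X = \sum_{n \ge 1} Y_n \quad \text{in } L^2, \qquad \|X\|_{L^2}^2 = \sum_{n \ge 1} \|Y_n\|_{L^2}^2; $$

as the abstract stresses, the norm identity does not force the $Y_n$ to be orthogonal.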
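The classical Cornish-Fisher expansion mentioned in Hubalek's abstract adjusts the standard normal quantile $z_\alpha$ for the skewness $S$ and excess kurtosis $K$ of the standardized loss:

$$ q_\alpha \approx z_\alpha + \frac{S}{6}\big(z_\alpha^2 - 1\big) + \frac{K}{24}\big(z_\alpha^3 - 3 z_\alpha\big) - \frac{S^2}{36}\big(2 z_\alpha^3 - 5 z_\alpha\big), $$

so that $\mathrm{VaR}_\alpha \approx \mu + \sigma\, q_\alpha$; the talk's new generalization plays the analogous role for Expected Shortfall.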
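The distributional-equation machinery cited in Kabanov's abstract (as in the book by Buraczewski, Damek, and Mikosch) centers on fixed points of random affine maps:

$$ X \stackrel{d}{=} A X + B, \qquad X \text{ independent of } (A, B), $$

and, under Goldie-type conditions with $\kappa > 0$ solving $\mathbb{E}[A^{\kappa}] = 1$,

$$ \mathbb{P}(X > x) \sim c\, x^{-\kappa} \quad \text{as } x \to \infty. $$

Power tails of this type are what drive the ruin-probability asymptotics in models with investments.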
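As a brute-force counterpoint to the heat-potential method in Lipton's talk, the first of the listed problems (the hitting-time density of an Ornstein–Uhlenbeck process) can be approximated by plain Monte Carlo. All parameter values below are illustrative.

```python
# Monte Carlo estimate of the first hitting time of level b by an OU process
#   dX_t = kappa*(m - X_t) dt + s dW_t,  X_0 = x0 < b,  via Euler simulation.
import numpy as np

rng = np.random.default_rng(1)
kappa, m, s, x0, b = 1.0, 0.0, 0.3, -0.5, 0.0
dt, T, n_paths = 1e-3, 5.0, 20_000
n_steps = int(T / dt)

x = np.full(n_paths, x0)
hit = np.full(n_paths, np.nan)               # hitting times (NaN = not yet hit)
for i in range(n_steps):
    x += kappa * (m - x) * dt + s * np.sqrt(dt) * rng.standard_normal(n_paths)
    newly = np.isnan(hit) & (x >= b)
    hit[newly] = (i + 1) * dt

tau = hit[~np.isnan(hit)]
print(f"hit fraction: {tau.size / n_paths:.3f}, mean hitting time: {tau.mean():.3f}")
```

A histogram of `tau` approximates the density that the heat-potential method delivers semi-analytically.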
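The contrast in Muhle-Karbe's title can be seen in the autocovariance of fractional Gaussian noise (the increments of fractional Brownian motion): positive and slowly decaying for Hurst index H > 1/2 (persistent order flow), negative at all positive lags for H < 1/2 (rough volatility). A minimal check, with illustrative H values:

```python
# Autocovariance of fractional Gaussian noise:
#   gamma(k) = 0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}).
import numpy as np

def fgn_autocov(k, H):
    k = np.abs(k).astype(float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

lags = np.arange(0, 10)
print("H=0.7:", np.round(fgn_autocov(lags, 0.7), 4))   # positive, slow decay
print("H=0.1:", np.round(fgn_autocov(lags, 0.1), 4))   # negative at lags >= 1
```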
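The duality between free energy and Kullback-Leibler divergence invoked in Runggaldier's abstract is the classical variational formula

$$ \log \mathbb{E}_P\big[e^{f}\big] = \sup_{Q \ll P}\Big\{ \mathbb{E}_Q[f] - D_{\mathrm{KL}}(Q\,\|\,P) \Big\}, $$

which is what converts the risk-sensitive (exponential-of-quadratic) criterion into a risk-neutral LQG criterion penalized by relative entropy.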
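As baseline context for Sørensen's survey: the simplest approximate-likelihood method for discretely observed SDEs is the Euler-Maruyama Gaussian pseudo-likelihood, precisely the kind of approximation that degrades for strongly nonlinear models. A self-contained toy example for an Ornstein-Uhlenbeck model, with illustrative parameters:

```python
# Euler pseudo-likelihood estimation for dX = -a X dt + s dW from discrete data.
import numpy as np
from scipy.optimize import minimize

def euler_neg_loglik(theta, x, dt):
    """Negative Gaussian pseudo-log-likelihood of the one-step Euler scheme."""
    a, s = theta
    mean = x[:-1] - a * x[:-1] * dt          # one-step Euler mean
    var = s**2 * dt                          # one-step Euler variance
    resid = x[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

rng = np.random.default_rng(0)
dt, n, a0, s0 = 0.01, 5000, 2.0, 0.5
x = np.zeros(n)
for i in range(n - 1):                       # simulate synthetic data
    x[i + 1] = x[i] - a0 * x[i] * dt + s0 * np.sqrt(dt) * rng.standard_normal()

fit = minimize(euler_neg_loglik, x0=[1.0, 1.0], args=(x, dt),
               bounds=[(1e-4, None), (1e-4, None)])
print(np.round(fit.x, 3))                    # should be near (2.0, 0.5)
```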
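The defining property of an affine process, which Teichmann's talk lifts to an infinite-dimensional, path-dependent setting, is the exponentially affine conditional Fourier/Laplace transform

$$ \mathbb{E}\big[\, e^{\, u^{\top} X_T} \,\big|\, \mathcal{F}_t \big] = \exp\big( \phi(T - t, u) + \psi(T - t, u)^{\top} X_t \big), $$

with $\phi, \psi$ solving generalized Riccati equations; the signature-based models in the talk admit analogous transform formulas on path space.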
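Schematically (our rendering, not necessarily the notation of Muni Toke and Yoshida), the Cox-type ratio model specifies order-flow intensities with a common unobserved baseline that cancels in relative form:

$$ \lambda_k(t) = \lambda_0(t)\, \exp\big(\theta_k^{\top} X_t\big), \qquad \frac{\lambda_k(t)}{\sum_j \lambda_j(t)} = \frac{\exp(\theta_k^{\top} X_t)}{\sum_j \exp(\theta_j^{\top} X_t)}, $$

which is why the nuisance baseline hazard $\lambda_0$, and with it non-stationary intraday trends, drops out of the quasi-likelihood for $\theta$.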
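For reference, the forward equation behind Zubelli's talk: in the pure-diffusion case, call prices $C(T, K)$ satisfy Dupire's equation

$$ \partial_T C = \tfrac{1}{2}\,\sigma^2(T, K)\, K^2\, \partial_{KK} C - (r - q)\, K\, \partial_K C - q\, C $$

(with rate $r$ and dividend yield $q$); adding jumps contributes a nonlocal integral term in $K$, yielding the PIDE whose coefficients (the local volatility surface and the jump-size distribution) are calibrated in the talk.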
Poster Presentation
| Name | University or Company of Origin | Title | Abstract |
|---|---|---|---|
| Atchadé, Mintodê Nicodème | Carl von Ossietzky Universität Oldenburg | Modeling economics and finance data with the Arctan Marshall-Olkin Weibull distribution | The Arctan Marshall-Olkin family is introduced as an innovative and adaptable class of heavy-tailed distributions for modeling extreme events in financial and economic sciences. This family emerges from the combination of the Marshall-Olkin framework with the Arctan-X approach, which leverages the arctangent inverse trigonometric function. A particular case, the Arctan-Marshall-Olkin-Weibull (ATMOW) distribution, is explored in detail. Unlike the conventional two-parameter Weibull distribution, ATMOW incorporates an additional parameter, enhancing its adaptability to heavy-tailed data. Monte Carlo simulations confirm the efficiency of the maximum likelihood estimation for parameter inference. Furthermore, closed-form expressions for key actuarial risk measures, including value at risk and tail value at risk, are derived to assess extreme financial risks. Empirical applications to real financial and economic datasets demonstrate better performance of ATMOW compared to several competing multiparameter distributions. |
| Brutsche, Johannes | University of Freiburg, Math. Department | The level of self-organized criticality in oscillating Brownian motion: n-consistency and stable Poisson-type convergence of the MLE | For some discretely observed path of oscillating Brownian motion with level of self-organized criticality ρ0, we prove in the infill asymptotics that the MLE is n-consistent, where n denotes the sample size, and derive its limit distribution with respect to stable convergence. As the transition density of this homogeneous Markov process is not even continuous in ρ0, the analysis is highly non-standard. Therefore, interesting and somewhat unexpected phenomena occur: the likelihood function splits into several components, each of them contributing very differently depending on how close the argument ρ is to ρ0. Correspondingly, the MLE is successively excluded from lying outside a compact set, a 1/√n-neighborhood, and finally a 1/n-neighborhood of ρ0 asymptotically. The crucial argument to derive the stable convergence is to exploit the semimartingale structure of the sequential, suitably rescaled local log-likelihood function (as a process in time). Both sequentially and as a process in ρ, it exhibits a bivariate Poissonian behavior in the stable limit, with its intensity being a multiple of the local time at ρ0. (A toy simulation of oscillating Brownian motion follows this table.) |
| Taibi, Gabin | Bern University of Applied Sciences and University of Twente | Transformers for Financial Narrative Processing | This research introduces an innovative and holistic approach to financial narrative modeling, employing transformer architectures to explore the multifaceted nature of narratives. Our methodology goes beyond the conventional analysis of sentiment or topic trends by examining narratives through multiple dimensions, including their thematic depth, polarization, emotional consistency, and frequency. Leveraging a dataset spanning from 2010 to 2025 and comprising news articles from various financial newspapers, our approach integrates these diverse narrative elements, allowing for a richer, more detailed understanding of the dynamics that drive financial markets. This comprehensive analysis not only enhances our grasp of narrative impacts but also sets a new standard for financial data processing. |
| Knaust, Sven | University of Freiburg | Fast Bayesian calibration of option pricing models based on sequential Monte Carlo methods and deep learning | Model calibration is a challenging yet fundamental task in financial engineering. Using sequential Monte Carlo methods, we reformulate the non-convex optimization problem as a Bayesian estimation task, constructing a sequence of distributions from the prior to the posterior. This allows us to compute any statistic of the estimated parameters, mitigating the strong dependence on starting points and avoiding the troublesome local minima that often plague standard calibration methods. To accelerate computation, we incorporate Markov Chain Monte Carlo methods with delayed acceptance and a neural network-based option pricing approach that uses risk-neutral cumulants of log-returns as additional informative features. To demonstrate the strength of our approach, we calibrate an affine stochastic volatility model with price-volatility co-jumps to both simulated and real data. Our Bayesian algorithms significantly outperform the standard approach in terms of runtime, accuracy, and statistical fit. (A toy delayed-acceptance sampler follows this table.) |
| Kormaz, Yaren | Yildiz Technical University | Markowitz portfolio optimisation with group constraints according to market trend | This study aims to group financial assets based on their Beta coefficients and to develop a portfolio that investors can follow during market upturns and downturns. By integrating a group constraint into the traditional Markowitz mean-variance model, the objective is to devise a new portfolio model. The bi-objective portfolio optimization problem, which simultaneously minimizes portfolio risk and maximizes portfolio return under the established constraint, will be addressed using a heuristic algorithm. The study proposes a novel portfolio selection strategy for investors, utilizing the Beta coefficients of stocks to construct an optimal portfolio that ensures maximum return with minimum risk during both rising and falling market periods. (A toy group-constrained optimization follows this table.) |
| Kuissi Kamdem, Wilfried | University of Rwanda / AIMS Ghana | Linear pricing rule for power and Epstein-Zin utilities indifference valuation | In this paper we tackle the open problem of whether it is possible to obtain a linear pricing rule for utility indifference valuation in an incomplete market, meaning that the indifference price of λ units of the pay-off F is equal to λ times the price of one unit of the pay-off F. In a possibly non-Markovian setting, we obtain a linear pricing rule for Epstein-Zin utility as well as power utility. This is possible because we are able to provide an explicit solution to the fully coupled forward-backward stochastic differential equation derived via a backward stochastic differential equation approach. We adopt a transformation that is new to problems associated with recursive utility. The pay-off can be a function of the entire path of both the traded assets and the non-traded financial indices on the time horizon. Our results apply to the pricing and hedging of European-type options. The indifference price obtained is an equilibrium price in the sense that the seller’s price and the buyer’s price coincide. Joint work with Olivier Menoukeu Pamen and Marcel Ndengo. |
| Kunst, Philipp | Fraunhofer IAO | Transformer-based Deep Hedging (indicative) | tba (work in progress on using transformers and PPO reinforcement learning to solve hedging problems) |
| Massaria, Michele Domenico | Politecnico di Milano | The Additive Bachelier model with an application to the oil option market in the Covid period | In April 2020, the Chicago Mercantile Exchange (CME) temporarily switched the pricing formula for WTI options from the Black model to the Bachelier model. In this context, we introduce an Additive Bachelier model that provides a simple closed-form solution. This new Additive model exhibits several notable mathematical and financial properties. It ensures the no-arbitrage condition, a critical requirement in highly volatile markets, while also enabling a parsimonious synthesis of the volatility surface. The model features only three parameters, each with a clear financial interpretation: the volatility term structure, the vol-of-vol, and a skew parameter. The proposed model supports efficient pricing of path-dependent exotic options via Monte Carlo simulation, using a straightforward and computationally efficient approach. Its calibration process can follow a cascade calibration: first, it accurately replicates the term structures of forwards and At-The-Money (ATM) volatilities observed in the market; second, it fits the smile of the volatility surface. Overall, this model provides a robust and parsimonious description of the oil option market during the exceptionally volatile first period of the Covid-19 pandemic. |
| Osei, Price | Bielefeld University | Fat-tailed distribution under the Smooth Ambiguity Model | The notion that asset returns are normally distributed has been increasingly questioned. We provide an alternative explanation for the fat tails observed in the return series using the smooth ambiguity model. Within this framework, we apply the exponential-power utility to capture the impact of ambiguity on asset return distributions. The inherent ambiguity in asset returns cannot be fully described by a single probability distribution. Instead, we consider a family of normal distributions with a known mean but an uncertain variance. This variance uncertainty disrupts the conditions required for the central limit theorem (CLT), leading to its failure. Our approach incorporates the gamma distribution to account for the uncertainty in variance. The model proposes the variance-gamma distribution as the unconditional law of asset returns. |
| Pavarana, Simone | University of Freiburg, Math.Department | Bayesian learning with set-valued observations | We introduce a novel Bayesian filtering framework for set-valued observation processes. Leveraging random set theory, we demonstrate that standard Bayesian formulas hold under mild conditions satisfied by a broad class of examples. Our approach is motivated by the need to incorporate expert opinions — represented as random sets, such as confidence intervals — into estimation problems. We illustrate potential applications in credit rating and propose a revised version of the Black-Litterman model. |
| Rennspies, Jasper | University of Freiburg | Efficient Sampling for Realized Variance Estimation in Time-Changed Diffusion Models | This paper analyzes the benefits of sampling intraday returns in intrinsic time for the realized variance (RV) estimator. We show theoretically, in finite samples, that depending on the permitted sampling information, the RV estimator is most efficient either under hitting time sampling, which samples whenever the price changes by a pre-determined threshold, or under the new concept of realized business time, which samples according to a combination of observed trades and estimated tick variance. The analysis builds on the assumption that asset prices follow a diffusion that is time-changed with a jump process that separately models the transaction times, which allows for leverage specifications and Hawkes-type jump processes. This provides a flexible model that separately captures the empirically varying trading intensity and tick variance processes, which are particularly relevant for disentangling the driving forces of the sampling schemes. Extensive simulations confirm our theoretical results and show that for low levels of noise, hitting time sampling remains superior, while for increasing noise levels, realized business time becomes more efficient. This finding can be explained by the high noise-sensitivity of hitting time sampling, which samples directly on the observed asset prices, whereas noise-robust intensity estimators can be employed for realized business time sampling. An application to stock data provides empirical evidence for the benefits of using these intrinsic sampling schemes to construct more efficient RV estimators as well as for improved forecasting performance. (A toy comparison of sampling schemes follows this table.) |
| Richert, Ivo | Kiel University | Estimation of dynamically recalibrated affine models in finance | Dynamic recalibration of risk-neutral parameters in stochastic models to align with observed prices of financial derivatives is a widely used industry practice that, however, often lacks a tractable underlying mathematical framework. We address this gap by proposing a novel methodology wherein recalibrated parameters are treated as unobservable components embedded within a larger-scale affine model. The estimation of the dynamics of this background model then boils down to a two-step procedure, in which unobservable states are first calibrated to observed option prices using classic least-squares optimization techniques, and risk-neutral and physical model parameters are then jointly estimated to fit the trajectories of observed components alongside the recalibrated states. We embed this joint estimation of both measures into the framework of Heyde-optimal estimating functions and establish weak consistency and asymptotic normality of the resulting estimators. Moreover, we derive explicitly computable expressions for the asymptotic estimator covariance matrix. |
| Rockel, Markus | Universität Freiburg | Closed-form Chatterjee’s xi for approximating copulas | We provide closed-form formulas for Chatterjee’s xi, a measure of the strength of dependence between two random variables, for multiple types of approximating copulas. In particular, we obtain lower bounds on the true Chatterjee’s xi value based on approximating checkerboard copulas. Lastly, we empirically study the performance of the resulting checkerboard-based estimator for Chatterjee’s xi in a Gaussian setting. (The standard rank-based estimator is sketched after this table.) |
| Tambe Ndonfack, Felix Barrez | University of Freiburg, Math.Department | Filtering with Stochastic Discontinuities | In this work, we present a framework for filtering in stochastic systems with discontinuities. To address challenges arising from abrupt changes in observations, we model these phenomena using paired stochastic processes that represent unobserved signals and observable data streams. By leveraging a general random measure, our approach systematically characterizes discontinuities, offering a structured method for estimating hidden states in complex systems. Key theoretical contributions include the derivation of the Zakai and Kushner-Stratonovich equations, which enable precise computation of conditional distributions and enhance filtering accuracy in non-continuous dynamics. This framework has broad applications in domains where sudden state shifts are critical, such as financial risk assessment and advanced signal processing, providing a robust alternative to traditional continuous filtering models. |
| Valett, Henrik | Kiel University | Parameter estimation for polynomial models | Polynomial processes, which include affine processes as a subclass, are a class of Markov processes characterized by the property that their conditional polynomial moments can be computed in closed form. Due to their computational tractability, polynomial models are widely utilized in mathematical finance, with notable examples being the Heston model and Lévy-driven models. The aim of our research is to estimate the parameters of discretely observed polynomial models. In asymptotic statistics, the maximum likelihood estimator is highly desirable for its favorable properties, such as consistency, asymptotic normality, and minimal asymptotic error. However, in practice, the score function is often not available in closed form, which motivates the use of alternative approaches. We propose using martingale estimating functions in place of the score function, with a focus on the Heyde-optimal martingale estimating function, which minimizes the distance to the score function in an L2 sense within a specified class of estimating functions. Our framework constructs a specialized class of polynomial martingale estimating functions for general polynomial processes, requiring only the calculation of polynomial conditional moments. Specifically, we consider estimating functions of the form G_n(ϑ) = ∑_{m=1}^{n} ∑_{|α|≤k} λ_{ϑ,α}(m) (X_m^α − E_ϑ[X_m^α | F_{m−1}]), where the maximum degree k is fixed and the integrand λ_{ϑ,α}(m), measurable with respect to F_{m−1}, can be freely chosen. By applying ergodic theory for Markov processes, we establish both the consistency and asymptotic normality of these estimating functions. Additionally, we demonstrate how to explicitly compute the Heyde-optimal estimating function within this class. |
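A toy Euler simulation of the oscillating Brownian motion studied in Brutsche's poster; the threshold and volatilities below are illustrative choices, not values from the poster.

```python
# Oscillating Brownian motion: dX = sigma(X) dW with a volatility that switches
# at the threshold rho0 (the "level of self-organized criticality").
import numpy as np

rng = np.random.default_rng(4)
rho0, s_minus, s_plus = 0.0, 0.5, 1.5   # illustrative values
n, dt = 50_000, 1e-4

x = np.zeros(n)
for i in range(n - 1):
    s = s_minus if x[i] < rho0 else s_plus
    x[i + 1] = x[i] + s * np.sqrt(dt) * rng.standard_normal()

# occupation of the low-volatility regime; the limit law in the poster involves
# the local time of the process at rho0
print(f"fraction of time below rho0: {(x < rho0).mean():.3f}")
```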
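A minimal sketch of the delayed-acceptance Metropolis-Hastings idea used in Knaust's poster, with toy Gaussian stand-ins for the expensive posterior and its cheap surrogate (in the poster these would be the full pricing-model posterior and a neural-network pricer):

```python
# Delayed-acceptance MH: a cheap surrogate screens proposals before the
# expensive posterior is evaluated; stage 2 corrects so the chain stays exact.
import numpy as np

rng = np.random.default_rng(5)

def log_post_expensive(theta):          # stand-in for the true posterior
    return -0.5 * (theta - 1.0) ** 2 / 0.04

def log_post_cheap(theta):              # stand-in for a cheap surrogate
    return -0.5 * (theta - 0.9) ** 2 / 0.05

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()   # symmetric random-walk proposal
    # stage 1: accept/reject with the cheap surrogate only
    a1 = min(1.0, np.exp(log_post_cheap(prop) - log_post_cheap(theta)))
    if rng.uniform() < a1:
        # stage 2: correct with the expensive model
        a2 = min(1.0, np.exp(log_post_expensive(prop) - log_post_expensive(theta)
                             + log_post_cheap(theta) - log_post_cheap(prop)))
        if rng.uniform() < a2:
            theta = prop
    chain.append(theta)
print(f"posterior mean ~ {np.mean(chain[1000:]):.3f}")  # should be near 1.0
```

Proposals rejected in stage 1 never trigger the expensive evaluation, which is where the runtime savings come from.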
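A toy version of the group-constrained Markowitz problem from Kormaz's poster: assets are grouped by beta (defensive vs. aggressive) and each group's total weight is capped. Data, the cap, and the solver choice are all illustrative assumptions.

```python
# Mean-variance optimization with a per-group weight cap, solved with SLSQP.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 6
mu = rng.uniform(0.02, 0.12, n)                 # synthetic expected returns
A = rng.normal(size=(n, n))
cov = A @ A.T / n + 0.01 * np.eye(n)            # synthetic covariance matrix
beta = np.array([0.6, 0.8, 0.9, 1.1, 1.3, 1.5])
groups = [beta < 1.0, beta >= 1.0]              # defensive / aggressive groups
cap = 0.6                                       # max total weight per group

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
for g in groups:
    cons.append({"type": "ineq", "fun": lambda w, g=g: cap - w[g].sum()})

risk_aversion = 5.0
obj = lambda w: -(mu @ w) + 0.5 * risk_aversion * w @ cov @ w
res = minimize(obj, np.full(n, 1 / n), bounds=[(0, 1)] * n,
               constraints=cons, method="SLSQP")
print(np.round(res.x, 3), "defensive weight:", round(res.x[groups[0]].sum(), 3))
```

The poster's bi-objective heuristic would trace the risk-return frontier instead of fixing a single risk-aversion level as done here.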
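A minimal comparison, on a simulated noise-free path, of calendar-time versus hitting-time sampled realized variance as studied in Rennspies's poster; the threshold, sampling frequency, and path model are illustrative.

```python
# Calendar-time vs. hitting-time sampled realized variance (RV).
import numpy as np

rng = np.random.default_rng(6)
n, dt, sigma = 23_400, 1 / 23_400, 0.2          # one "day" of 1-second prices
logp = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))

def rv(x):
    return np.sum(np.diff(x) ** 2)

rv_cal = rv(logp[::60])                          # calendar time: 1-minute grid

# hitting time: sample whenever the log-price moves by at least eps
eps, idx, last = 0.002, [0], logp[0]
for i in range(1, n):
    if abs(logp[i] - last) >= eps:
        idx.append(i)
        last = logp[i]
rv_hit = rv(logp[np.array(idx)])

print(f"true IV: {sigma**2:.4f}, RV calendar: {rv_cal:.4f}, RV hitting: {rv_hit:.4f}")
```

With microstructure noise added to `logp`, hitting-time sampling degrades quickly, which is the regime where realized business time takes over in the poster's results.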
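The quantity in Rockel's poster admits a simple rank-based estimator (Chatterjee 2021, no-ties case); the closed-form checkerboard formulas of the poster are analytic counterparts of this statistic. A minimal implementation:

```python
# Rank-based estimator of Chatterjee's xi:
#   sort pairs by x, take the ranks r_i of the corresponding y values, and set
#   xi_n = 1 - 3 * sum |r_{i+1} - r_i| / (n^2 - 1).
import numpy as np

def chatterjee_xi(x, y):
    """~0 under independence, ~1 when y is a measurable function of x."""
    order = np.argsort(x)                        # sort pairs by x
    r = np.argsort(np.argsort(y[order])) + 1     # ranks of y in that order
    n = len(x)
    return 1 - 3 * np.sum(np.abs(np.diff(r))) / (n**2 - 1)

rng = np.random.default_rng(3)
x = rng.normal(size=10_000)
print(round(chatterjee_xi(x, x**2), 3))                      # close to 1
print(round(chatterjee_xi(x, rng.normal(size=10_000)), 3))   # close to 0
```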