Table of Contents

Dedication

Title Page

Copyright Page

Preface

List of Symbols

Chapter 1 - Fourier Pricing Methods

1.1 INTRODUCTION

1.2 A GENERAL REPRESENTATION OF OPTION PRICES

1.3 THE DYNAMICS OF ASSET PRICES

1.4 A GENERALIZED FUNCTION APPROACH TO FOURIER PRICING

1.5 HILBERT TRANSFORM

1.6 PRICING VIA FFT

1.7 RELATED LITERATURE

Chapter 2 - The Dynamics of Asset Prices

2.1 INTRODUCTION

2.2 EFFICIENT MARKETS AND LÉVY PROCESSES

2.3 CONSTRUCTION OF LÉVY MARKETS

2.4 PROPERTIES OF LÉVY PROCESSES

Chapter 3 - Non-stationary Market Dynamics

3.1 NON-STATIONARY PROCESSES

3.2 TIME CHANGES

3.3 SIMULATION OF LÉVY PROCESSES

Chapter 4 - Arbitrage-Free Pricing

4.1 INTRODUCTION

4.2 EQUILIBRIUM AND ARBITRAGE

4.3 ARBITRAGE-FREE PRICING

4.4 DERIVATIVES

4.5 LÉVY MARTINGALE PROCESSES

4.6 LÉVY MARKETS

Chapter 5 - Generalized Functions

5.1 INTRODUCTION

5.2 THE VECTOR SPACE OF TEST FUNCTIONS

5.3 DISTRIBUTIONS

5.4 THE CALCULUS OF DISTRIBUTIONS

5.5 SLOW GROWTH DISTRIBUTIONS

5.6 FUNCTION CONVOLUTION

5.7 DISTRIBUTIONAL CONVOLUTION

5.8 THE CONVOLUTION OF DISTRIBUTIONS IN S

Chapter 6 - The Fourier Transform

6.1 INTRODUCTION

6.2 THE FOURIER TRANSFORMATION OF FUNCTIONS

6.3 FOURIER TRANSFORM AND OPTION PRICING

6.4 FOURIER TRANSFORM FOR GENERALIZED FUNCTIONS

6.5 EXERCISES

6.6 FOURIER OPTION PRICING WITH GENERALIZED FUNCTIONS

Chapter 7 - Fourier Transforms at Work

7.1 INTRODUCTION

7.2 THE BLACK-SCHOLES MODEL

7.3 FINITE ACTIVITY MODELS

7.4 INFINITE ACTIVITY MODELS

7.5 STOCHASTIC VOLATILITY

7.6 FFT AT WORK

Appendices

Bibliography

Index

For other titles in the Wiley Finance series please see www.wiley.com/finance

For a trader or an expert in finance - call him Mr Hyde - it is quite clear that a call or put spread is the derivative of an option and that a butterfly spread is the derivative of a call or put spread. Perhaps, he thinks, it should be approximately so. In fact, he knows that when a client asks for a digital option, he actually approximates it by taking large positions of opposite sign in European options with strikes as close as possible. So, for him a digital payoff is the limit of a call or put spread. He may also imagine what happens to the payoff of the butterfly spread as he increases the size of the positions and moves the strike prices closer and closer. He would get a tall spike with a tiny base and, by iterating the process to infinity, he would get the *Dirac delta function*. So, gluing all the pieces together, Mr Hyde concludes that it is quite obvious that a Dirac delta function is the derivative of a digital payoff, which he knows is called the *Heaviside unit step function*.

For a mathematician, whose name could be Dr Jekyll, this conclusion is not so obvious, and for sure it is not rigorous. The digital payoff is a singular function, whose derivative is not defined everywhere. In particular, it is not defined where it is most needed - that is, where the payoff jumps from zero to one, which is exactly where all the mass of the other singular object, Dirac's delta, is concentrated. Anyway, after a first sense of natural disgust, Dr Jekyll recalls that there is a special setting in which this holds exactly true, and that is the theory of generalized functions. Then, disgust may give way to a sort of admiration for the trader, and to a will to cooperate. The mathematician proposes to recover the price in the framework of generalized functions. In this setting, the Fourier transform of the payoff of a digital option is well defined. Working out its convolution with the density is not straightforward, but something can be done. One could then retrieve the price of digital options for general densities, under very weak conditions, and in a totally consistent and, why not, elegant framework.

In this book we have arranged a meeting and a thorough discussion between Mr Hyde and Dr Jekyll. The idea is to deal with Fourier transform analysis in the framework of generalized functions. To the best of our knowledge, this is the first application of the idea to finance, and it delivers an original viewpoint on the subject, even though it reaches results consistent with the existing literature. The book is entirely devoted to the presentation of this idea; its ambition is not to provide a comprehensive and complete review of the literature, nor to address all the issues that may arise in the use of Fourier transform analysis in finance. The task is instead to develop the Fourier transform methodology in a setting that, in our judgement, may be the most appropriate for several reasons: not least, because there the intuition of Mr Hyde meets the rigor and elegance of Dr Jekyll.

For this reason, we also chose a non-standard structure for the book, one which would not have been appropriate for a textbook or a review monograph. So, just as in many detective stories, we decided to start from the murder scene, and then to develop the whole story in a flashback explaining how we got there. We may reassure the reader that in this case the murder has a happy ending, and does not involve either Dr Jekyll or Mr Hyde, who are both alive and kicking and get along very well.

Chapter 1 collects the main results of the approach, along with frontier issues in the modelling of asset prices consistently with both time series dynamics and option prices. Expert readers are advised to read this chapter first. However, remember that even the authors had to go to the chapters written by the others to find out more. Chapter 2 proposes a review of the stochastic models applied to the dynamics of asset prices under the general assumption of market efficiency: the chapter opens with Bachelier at the beginning of the twentieth century and closes with CGMY at the beginning of the twenty-first. From the chapter, it clearly emerges why the concept of the characteristic function has substituted for that of the density, drawing attention to Fourier transform methods. Chapter 3 extends the analysis to allow for non-stationary returns, introducing additive processes on one side, and time change techniques (based both on stochastic volatility and on subordinators) on the other. Chapter 4 addresses the problem of pricing contingent claims in the most general setting, well suited to cases in which the dynamics of prices is represented in terms of characteristic functions. Chapter 5 introduces the theory of generalized functions, and shows how to compute distributions and convolutions of distributions in this setting; the chapter also specifies the setting that allows us to rigorously recover the original results presented in Chapter 1. Chapter 6 extends the analysis of the previous chapter to the case of Fourier transforms. Chapter 7 concludes by presenting a sensitivity analysis of option prices and smiles for the most famous models, and a calibration exercise carried out in the current period of crisis.

That is the story of this book. Since it was born from the discussion between Dr Jekyll and Mr Hyde, the book is naturally targeted at two opposite kinds of audience. Necessarily, some readers will find parts of the book too basic and some will find them too complex, but we hope that on balance the reader will enjoy going through it and will find an original presentation of the topic. Coming to the conclusions, we would like to thank, without implicating, Prof. Marc Yor for agreeing to read and discuss the first draft of the text. We conclude with warmest thanks to our families for their infinite patience while we were writing this book, and (not necessarily warm) thanks from each author to the other three for their finite patience. And, needless to say, Mr Edward Hyde is thankful to his master, Dr Henry Jekyll.

Bologna, 1 July 2009

U. Cherubini

G. Della Lunga

S. Mulinacci

P. Rossi

Symbol | Description
---|---
c.f. | characteristic function
N(m, σ²) | Normal distribution
B(n, p) | Binomial distribution
Poi(λ) | Poisson distribution
Γ(α, λ) | Gamma distribution
ε(λ) | Exponential distribution
r | continuously compounded short rate
σ | scalar standard deviation
W_{t} | Brownian process
p.d.f. | Probability density function
p.d.e. | Partial differential equation
c.d.f. | Cumulative distribution function
SDE | Stochastic differential equation
P(x) | c.d.f. (objective measure)
Q(x) | c.d.f. (risk-neutral measure)
B(t, T) | Price at t of a risk-free zero-coupon bond expiring at T
S_{t} | Price at t of a risky asset
O | European option (call or put)
C | European call option
P | European put option
CoN | Cash-or-Nothing (subscript)
AoN | Asset-or-Nothing (subscript)
a ∧ b | MIN(a, b)
a ∨ b | MAX(a, b)
θ(x) or H(x) | Heaviside unit step function
δ(x) | Dirac delta function
F f(x) | Fourier transform
F^{−1} f(x) | Inverse Fourier transform
ϕ(x) | Testing function
f ∗ g | Convolution
p.v. ∫ f(x)/x dx | Principal value of the integral of f(x)/x

In recent years, Fourier transform methods have emerged as one of the major methodologies for the evaluation of derivative contracts. The main reason has been the need to strike a balance between extending existing pricing models beyond the traditional Black and Scholes setting and keeping a parsimonious stance for the evaluation of prices consistently with market quotes.

On the one hand, the end of the Black-Scholes world spurred more research on new models of the dynamics of asset prices and risk factors, beyond the traditional framework of normally distributed returns. On the other, restricting the search to the set of processes with independent increments pointed to the Fourier transform as a natural tool, mainly because it is directly linked to the characteristic functions identifying such processes.

This book is devoted to the use of Fourier transform methods in option pricing. With respect to the rest of the literature on this topic, we propose a new approach, based on generalized functions. The main idea is that the price of the fundamental securities in an economy - that is, digital options and Arrow-Debreu securities - may be represented as the convolution of two generalized functions, one representing the payoff and the other the pricing kernel.

In this chapter we present the main results of the book. The remaining chapters will then lead the reader through a sort of flashback story over the main steps needed to understand the rationale of Fourier transform pricing methods and the tools needed for implementation.

The market crash of 19 October 1987 may be taken as the date marking the end of the Black-Scholes era. Even though the debate on evidence that market returns are not normally distributed can be traced much further back, from the end of the 1980s departures from normality have become the usual market environment, and exploiting these departures has even suggested new business ideas to traders. Strategies designed to gain from changes in the skew or in higher moments have become usual tools in every dealing room, and concerns about exposures to changes in volatility and correlation have become a major focus for risk managers.

On the one hand, the need to address the issue of non-Gaussian returns started the quest for new models that could provide a better representation of asset price dynamics; and, on the other, that same need led to the rediscovery of an old idea. According to a model going back to Breeden and Litzenberger (1978), one may recover the risk-neutral probability from the prices of options quoted in the market. Notice that this finding only depends on the requirement to rule out arbitrage opportunities and must hold in full generality for all risk-neutral probability distributions. The idea is that the risk-neutral density can be computed as the second derivative of the price of options with respect to the strike. More precisely, we have that
f_{t,T}(x) = (1/B(t, T)) ∂²P(S_t; K, T)/∂K², evaluated at K = x,

where *P*(*S*_{t}; *K*, *T*) denotes the put option price and *B*(*t*, *T*) is the risk-free discount factor - that is, the value at time *t* of earning a unit of cash for sure at future time *T*. This is true in all option pricing models. Notice that the no-arbitrage condition immediately leads us to characterize *f*_{t,T}(*x*) as a density. First, suppose one buys a product paying a unit of cash if (*S*_{T} ∈ *dx*) and zero otherwise: the price of this product cannot be negative. Second, suppose one buys a set of products paying one unit of cash if (*S*_{T} ∈ *dx*) in such a way as to cover the whole positive real line [0, ∞): then one must earn one unit of cash for sure, so that we have

∫_{0}^{∞} f_{t,T}(x) dx = 1
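As a concrete check, not part of the book's derivation, the Breeden-Litzenberger relation can be verified numerically in the Black-Scholes model, where the risk-neutral density is lognormal in closed form; all parameter values below are arbitrary illustrations.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_put(S, K, T, r, sigma):
    # Black-Scholes European put price
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

S, T, r, sigma, K = 100.0, 1.0, 0.03, 0.2, 105.0
h = 0.01

# Breeden-Litzenberger: f(K) = (1/B(t,T)) d^2 P / dK^2, with B(t,T) = e^{-rT}
d2P = (bs_put(S, K + h, T, r, sigma) - 2.0 * bs_put(S, K, T, r, sigma)
       + bs_put(S, K - h, T, r, sigma)) / h ** 2
f_bl = math.exp(r * T) * d2P

# the model's risk-neutral density is lognormal: ln S_T ~ N(ln S + (r - sigma^2/2) T, sigma^2 T)
mean = math.log(S) + (r - 0.5 * sigma ** 2) * T
f_exact = math.exp(-(math.log(K) - mean) ** 2 / (2.0 * sigma ** 2 * T)) \
          / (K * sigma * math.sqrt(2.0 * math.pi * T))
print(f_bl, f_exact)   # the finite-difference and closed-form densities agree closely
```

The finite-difference second derivative mimics the trader's butterfly spread with a small but finite base.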

Computing option prices amounts to evaluating integrals of the density above, when it exists. Namely, consider the price of an option paying 1 unit of cash if the value of the underlying asset is lower than *K* at time *T*. The price of this option, which is called a digital *cash-or-nothing put* option, is

P_{CoN}(S_t; K, T) = B(t, T) ∫_{0}^{K} f_{t,T}(x) dx = B(t, T) Q(K)

Now consider a similar product delivering one unit of asset *S* in the event *S*_{T} ≤ *K*. This product is called an *asset-or-nothing put* option. Likewise, its price will be

P_{AoN}(S_t; K, T) = B(t, T) ∫_{0}^{K} x f_{t,T}(x) dx = B(t, T) E_t^Q[S_T 1_{S_T ≤ K}],

where E_t^Q(·) denotes the conditional expectation taken under probability measure Q with respect to the information available at time *t*. Consider now the portfolio of a short position in an *asset-or-nothing* put and a long position in *K cash-or-nothing put* options, with the same strike price *K* and the same maturity *T*. Then, at time *T*, the value of such a portfolio will be

K 1_{S_T ≤ K} − S_T 1_{S_T ≤ K} = (K − S_T) ∨ 0,
which is the payoff of a European put option. The no-arbitrage assumption then requires that the value of the put option at any time *t* < *T* be equal to

P(S_t; K, T) = K P_{CoN}(S_t; K, T) − P_{AoN}(S_t; K, T)

It is easy to check that the no-arbitrage assumption requires that a digital option paying one unit of cash if, at time *T*, the underlying asset is worth more than *K* (a *cash-or-nothing call*) must have the same value as a long position in the risk-free asset and a short position in a *cash-or-nothing put* option. Namely, we must have

C_{CoN}(S_t; K, T) = B(t, T) − P_{CoN}(S_t; K, T),

where *C*_{CoN} denotes the *cash-or-nothing call* option. By the same token, an *asset-or-nothing call* option can be replicated by buying a unit of the underlying asset spot while going short the *asset-or-nothing put*:

C_{AoN}(S_t; K, T) = S_t − P_{AoN}(S_t; K, T)

Notice that the value of an *asset-or-nothing call* option must also be equal to

C_{AoN}(S_t; K, T) = B(t, T) E_t^Q[S_T 1_{S_T > K}],

so that, adding the values of the *asset-or-nothing* put and call, we have

S_t = B(t, T) E_t^Q[S_T]

This defines the main property of the probability measure Q. Under this measure, the asset *S*, and every other asset in the economy, is expected to earn the risk-free rate. For this reason, this measure is called *risk-neutral*. Alternatively, if one defines a new variable *Z*_{t} ≡ *S*_{t}/*B*(*t*, *T*), it is evident that under measure Q we have

Z_t = E_t^Q[Z_T]
and the price of the asset *S,* and every other asset, turns out to be a martingale when measured using the risk-free asset as the numeraire. For this reason, this measure is also called an *equivalent martingale measure* (EMM), where equivalent means that it gives zero measure to the events that have zero measure under the historical measure, and only to those.

Notice that, just as for the put option, the price of a call option can be written as a long position in an *asset-or-nothing call* option and a short position in *K cash-or-nothing call* options. Formally,

C(S_t; K, T) = C_{AoN}(S_t; K, T) − K C_{CoN}(S_t; K, T)

Notice that by applying a change of numeraire, namely using *S*_{t}, we can rewrite the *asset-or-nothing* option in the form

C_{AoN}(S_t; K, T) = S_t Q*(S_T > K),

where Q* is a new probability measure. So, European options can be written in full generality as a function of two probability measures, one giving the price of the *cash-or-nothing* option and the other that of the *asset-or-nothing* one. For call options we then have

C(S_t; K, T) = S_t Q*(S_T > K) − K B(t, T) Q(S_T > K)

and for put options

P(S_t; K, T) = K B(t, T) Q(S_T ≤ K) − S_t Q*(S_T ≤ K)
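In the Black-Scholes model, treated in detail later in the book, these two measures have the familiar closed forms Q*(S_T > K) = N(d_1) and Q(S_T > K) = N(d_2). The short sketch below (not from the book; arbitrary illustrative parameters) evaluates the two-measure decomposition of call and put prices and checks it against put-call parity.

```python
import math

def N(x):
    # standard normal c.d.f.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

S, K, T, r, sigma = 100.0, 95.0, 0.5, 0.02, 0.25
B = math.exp(-r * T)   # risk-free discount factor B(t, T)

d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
q_star = N(d1)   # Q*(S_T > K): measure with the asset as numeraire
q = N(d2)        # Q(S_T > K): risk-neutral measure

call = S * q_star - K * B * q                    # C = S_t Q*(S_T>K) - K B(t,T) Q(S_T>K)
put = K * B * (1.0 - q) - S * (1.0 - q_star)     # P = K B(t,T) Q(S_T<=K) - S_t Q*(S_T<=K)

print(call, put)
print(call - put, S - K * B)   # put-call parity: the two differences coincide
```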

So, the risk-neutral density completely specifies the price of options for all strikes and maturities.

From the discussion above, pricing derivatives in an arbitrage-free setting amounts to selecting a measure endowed with the martingale property. In a complete market this measure is unique and fits all prices exactly. This implies that all financial products can be exactly replicated by a dynamic trading strategy (all assets are *attainable*). In incomplete markets, the measure must be chosen according to auxiliary criteria, such as mean-variance optimization or the expected utility framework. Concerning this choice, the current presence of liquid option markets with different strike prices and maturities has added more opportunities to replicate derivative contracts and, at the same time, more information on the shape of the risk-neutral distribution. This has brought about the problem of selecting and comparing models against the whole set of prices observed in the market - that is, the issue of calibration to market data.

By and large, two main strategies are available. One could try models with a limited number of parameters, but a sufficient number of degrees of freedom to represent the dynamics of assets as consistently as possible with the prices of options. The advantage of this route is that it allows a parsimonious arbitrage-free representation of financial prices and it directly provides dynamic replication strategies for contingent claims. This has to be weighed against the risk of model mis-specification. On the other hand, one could try to give a non-parametric representation of the dynamics, based on portfolios of cash positions and derivative contracts held to maturity. This approach is known as *static replication* and it has the advantage of providing the best possible fit to observed prices. The risk is that some products used for static replication may be illiquid, and their prices inconsistent with the no-arbitrage requirement.

This book is devoted to the first strategy - that is, the selection of a convenient, fully specified dynamics for the prices of assets. The models reviewed in this book are based on two assumptions that jointly determine what is called the *Efficient Market Hypothesis*. The first is that prices are Markovian, meaning that all the information needed to predict future price changes is included in the currently observed price, so that past information cannot produce any improvement in the forecast. The second assumption is that such forecasts are centred around zero, so that price changes are not predictable.

The above framework directly leads to modelling the dynamics of asset prices as processes with *independent increments*. The price, or more precisely its logarithm, is assumed to move according to a sequence of shocks such that no shock can be predicted from a previous shock. If one adds that all these shocks have the same distribution - that is, are identically distributed - and have finite variance, then a standard result, the central limit theorem, predicts that these log-changes, when aggregated over a reasonable number of shocks, should be normally distributed, so that prices should be log-normally distributed. This is the standard model used throughout most of the last century, named the Black-Scholes model after the famous option pricing formula that is recovered under this assumption.

In the Black-Scholes setting, the logarithm of each asset price is then assumed to be driven by a Brownian motion with constant diffusion and drift parameters. Formally, if we denote *X*_{t} ≡ ln(*S*_{t}), under the risk-neutral measure we have

dX_t = (r − σ²/2) dt + σ dW_t,

where σ is the diffusion parameter, *r* is the instantaneous risk-free rate of return and *W*_{t} is a *Wiener process*. The dynamics of the price *S* is then represented by a *geometric Brownian motion*. Notice that this model predicts that all options traded on the market should be consistent with the same volatility figure σ, for all strikes and maturity dates. As discussed before, this prediction is clearly at odds with the empirical evidence gathered from option market prices. In many option markets, prices of *at-the-money* options are consistent with volatility levels different from those implied by *out-of-the-money* and *in-the-money* option prices. Namely, in markets such as foreign exchange and interest rate options, the volatility of both *in* and *out of* the money options is higher than that of *at-the-money* options, producing a phenomenon called the *smile effect*, after the scatter plot of volatility against moneyness, which resembles a smiling mouth. In other markets, such as that of equity options, this relationship is instead generally negative, and it is called *skew*, recalling the empirical regularity that volatility tends to increase in low price scenarios. Moreover, volatility also tends to vary across maturities, generating *term structures of volatility* typical of every market.
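The claim that the Black-Scholes model admits no smile can be made concrete: inverting the pricing formula for the implied volatility at several strikes returns the same σ. A minimal sketch, not from the book, using bisection inversion and arbitrary parameter values:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes European call price
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=3.0):
    # bisection on sigma: the Black-Scholes price is strictly increasing in volatility
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

S, T, r, sigma = 100.0, 1.0, 0.03, 0.2
vols = [implied_vol(bs_call(S, K, T, r, sigma), S, K, T, r) for K in (80.0, 100.0, 120.0)]
print(vols)   # ~0.2 at every strike: the model admits no smile
```

A market smile or skew appears precisely when this inversion, applied to observed prices, returns different volatilities across strikes.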

The quest for a more flexible representation of asset price dynamics, consistent with smiles and term structures of volatility, has led to dropping either of the two assumptions underlying the Black-Scholes framework. The first is that assets follow a diffusion process, and the second is the stationarity of the increments of log-prices. So, more general models can be constructed allowing for the presence of jumps in asset price dynamics and for changes in the volatility and in the probability of such jumps - that is, their intensity. If we stick to processes with independent and stationary increments, this defines the class of *Lévy processes*. An effective way to describe these processes is to resort to their *characteristic function*. We recall that the characteristic function of a variable *X*_{t} is defined as

φ_t(λ) = E[e^{iλX_t}]

A general result holding for all Lévy processes is that this characteristic function may be written as

φ_t(λ) = e^{tψ(λ)},

where the function ψ(λ) is called the *characteristic exponent* of the process. Notice that stationarity of increments implies that the characteristic exponent is multiplied by the time *t*, so that increments of the process over time intervals of the same length have the same characteristic function and the same distribution. A fundamental result is that such a characteristic exponent can be represented in full generality by the so-called *Lévy-Khintchine formula*

ψ(λ) = iaλ − (σ²/2)λ² + ∫ (e^{iλx} − 1 − iλx 1_{|x|≤1}) ν(dx)

Every Levy process can then be represented by a triplet {*a*, σ, ν}, which uniquely defines the characteristic exponent. The first two parameters define the diffusion part of the dynamics, namely drift and diffusion. The last parameter is called the *Levy measure* and refers to jumps in the process. Loosely speaking, the Levy measure provides a synthetic representation of the contribution of jumps by the product of the instantaneous probability of such jumps, the intensity, and the probability density function of the dimension of jumps. Intuitively, keeping this measure finite requires that relatively large jumps must have finite intensity, while jumps with infinite intensity must have infinitesimal length. The former kind of jumps are denoted as *finite activity,* while the latter are called *infinite activity* and describe a kind of dynamics similar to that of diffusion processes. For further generalization, positive and negative jumps may also be endowed with different specifications.
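The Lévy-Khintchine machinery can be checked on the simplest finite activity example, a compound Poisson process with Gaussian jump sizes, for which the characteristic exponent is also available in closed form. The following sketch (ours, with illustrative parameters) compares a numerical evaluation of ∫(e^{iλx} − 1)ν(dx) with that closed form.

```python
import cmath, math

eta, mu_J, sig_J = 0.7, -0.1, 0.25   # jump intensity; Gaussian jump-size parameters
lam = 1.3                             # point at which to evaluate psi

def nu_dens(x):
    # Levy measure density of a compound Poisson process: eta times the N(mu_J, sig_J^2) pdf
    return eta * math.exp(-(x - mu_J) ** 2 / (2 * sig_J ** 2)) / (sig_J * math.sqrt(2 * math.pi))

# psi(lam) = int (e^{i lam x} - 1) nu(dx); finite activity, so no truncation term is needed
n, lo, hi = 40_000, -5.0, 5.0
h = (hi - lo) / n
psi_num = sum((cmath.exp(1j * lam * (lo + (k + 0.5) * h)) - 1.0) * nu_dens(lo + (k + 0.5) * h)
              for k in range(n)) * h

# closed form: psi(lam) = eta * (E[e^{i lam J}] - 1), with J ~ N(mu_J, sig_J^2)
psi_exact = eta * (cmath.exp(1j * lam * mu_J - 0.5 * sig_J ** 2 * lam ** 2) - 1.0)
print(psi_num, psi_exact)   # the two evaluations agree
```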

Stationarity may be a limitation of Lévy processes. As a matter of fact, it implies that the distribution of log-returns on assets over holding periods of the same length should be the same, while in the market we usually see changes in their distribution: typically, periods of very large movements followed by periods of relative calm, a phenomenon known as the *clustering* of volatility. An intuitive way of moving beyond stationary increments is to assume that both the volatility of the diffusive part and the intensity of jumps change randomly as time elapses. The economic rationale for this goes back to a very old stream of literature from the 1970s. Clark (1973) proposed a model to explain the joint dynamics of trading volume and asset prices using subordinated processes. In the field of probability theory, Monroe (1978) proved that every semi-martingale can be represented as a Brownian motion evaluated at stochastic times. Heuristically, this means that one can always represent a general process by sampling a Brownian motion at random times. Several *stochastic clocks* may be used to switch from the non-Gaussian process observed in *calendar time* to a Brownian motion. If the stochastic clock is taken to be a continuous process, then the required change of time is its quadratic variation. As an alternative, a stochastic clock can be constructed from any strictly increasing Lévy process: such processes are called *subordinators*. One could also use other variables as proxies for the level of activity of the market. The main idea is in fact to model the process of information arrival to the market: in periods in which the market is hectic and plenty of information flows in, business time moves more quickly, but when the market is illiquid or closed, the pace of time slows down.
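As an illustration of subordination, not taken from the book, sampling a Brownian motion at a gamma-distributed business time yields the variance gamma law, whose fat tails show up in the sample kurtosis; parameter values are arbitrary.

```python
import math, random

random.seed(7)
t, nu, n = 1.0, 0.4, 200_000   # horizon, subordinator variance rate, sample size

samples = []
for _ in range(n):
    # gamma subordinator: G_t ~ Gamma(shape=t/nu, scale=nu), so E[G_t] = t and Var[G_t] = nu*t
    G = random.gammavariate(t / nu, nu)
    # Brownian motion sampled at the random business time G: W_G ~ N(0, G)
    samples.append(random.gauss(0.0, math.sqrt(G)))

m2 = sum(x * x for x in samples) / n
kurt = (sum(x ** 4 for x in samples) / n) / m2 ** 2
print(m2, kurt)   # variance ~ t = 1; kurtosis ~ 3*(1 + nu) = 4.2 > 3: fat tails
```

Mixing the Gaussian over a random clock leaves the variance equal to the expected business time but raises the kurtosis above the Gaussian value of 3.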

In the time change approach, the characteristic function is obtained by composing the characteristic exponent of the stochastic clock process with that of the subordinated process. The result follows directly from the assumption that the subordinator is independent of the time-changed process. As an alternative approach, it is possible to remain within the realm of stochastic processes with independent increments by extending the Lévy-Khintchine representation. In this case, the characteristic function becomes

φ_t(λ) = e^{ψ_t(λ)}

with characteristic exponent

ψ_t(λ) = ia_tλ − (σ_t²/2)λ² + ∫ (e^{iλx} − 1 − iλx 1_{|x|≤1}) ν_t(dx)

Notice that, unlike the case of Lévy processes, ψ_{t}(λ) is no longer linear in *t*. Technical requirements must be imposed on the processes governing the volatility and the Lévy measure (heuristically, they must not decrease with the time horizon).

From what we have seen above, a pricing system can be completely represented by a *pricing kernel*, which is the price of a set of digital options at each time *t*. We now formally define the payoff of such options, for all maturities *T* > *t*. We start by denoting *m* ≡ (*B*(*t*, *T*)*K*)/*S*_{t} the scaled value of the strike price, where the forward price is used as the scaling variable. This is a natural measure of the moneyness of the option. Now, define *k* ≡ ln(*m*) as our key variable representing the strike. We omit the subscript *t* on the strike for ease of notation, but notice that at time *T*, *k* = ln(*K*/*S*_{T}). Let *X*_{t} = ln(*S*_{t}/*B*(*t*, *T*)). Then the *Heaviside function* θ(ω(X_T − X_t − k)), where ω = −1, defines the event {*S*_{T} ≤ *K*}, and ω = 1 refers to the complementary event. So, in what follows we will refer to the probability measure of the variable X_T − X_t - that is, the increment of the process between time *t* and time *T* - rather than its level at the terminal date. Anyway, since we are concerned with pricing a set of contingent claims at time *t*, when *X*_{t} is observed, this only amounts to a rescaling by a known constant.

As for the function θ(*x*), we recall its formal definition as

θ(x) = 1 for x ≥ 0 and θ(x) = 0 for x < 0

In financial terms, the *cash-or-nothing* product can be considered as the limit of a sequence of bull/bear spreads. This limit leads to the derivative of the call option pricing formula with respect to the strike price. It is also easy to check that - in financial terms—just as the digital option is the limit of a sequence of call spreads, the derivative of this option is the limit of a sequence of *butterfly spreads.* In fact, it may be verified by heuristic arguments that the payoff of such a product is a *Dirac delta* function assigning infinite value to the case *S*_{T}*= K* and zero to all other events. Not surprisingly, the price of such a limit product, computed as the expected value under the equivalent martingale measure, is the density, when it exists, of the *pricing kernel,* and it is considered to be the equivalent of Arrow-Debreu prices for asset prices that are continuous variables.

Then, from a financial viewpoint, it is quite natural to consider the *Dirac delta* function as the derivative of the *Heaviside step* function. It is not so from a mathematical viewpoint, unless we introduce the concept of *generalized functions*. Loosely speaking, a generalized function may be defined as a linear functional from an assigned set of functions, called *testing functions*, to the set of complex numbers. This set of functions is chosen to consist of infinitely smooth functions with compact support, or with some particular regularity condition on their speed of descent. Formally, if we denote by ϕ(*x*) a testing function, a generalized function *f*(*x*) is defined through the operator assigning a complex number to the function:

⟨f, ϕ⟩ = ∫_{−∞}^{+∞} f(x)ϕ(x) dx

Notice that by the main property of the *Dirac delta* function we have that

⟨δ, ϕ⟩ = ϕ(0)

Furthermore, by a straightforward application of integration by parts, one may prove that the derivative of the distribution *f*(*x*) is defined by

⟨f′, ϕ⟩ = −⟨f, ϕ′⟩

Now notice what happens if we compute the derivative of the *Heaviside step function* θ(*x*). We have

⟨θ′, ϕ⟩ = −⟨θ, ϕ′⟩ = −∫_{0}^{∞} ϕ′(x) dx = ϕ(0),

where we have used the bounded support or the rapid descent property of the testing functions. We have then that

θ′ = δ,
and the conjecture based on financial arguments is rigorously proved: in the realm of generalized functions, the derivative of the *Heaviside step function* is actually the *Dirac delta function.*
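Both halves of the argument - the integration-by-parts pairing and Mr Hyde's shrinking butterfly - can be checked numerically. The sketch below (ours, with an arbitrary Gaussian test function) approximates ⟨θ′, ϕ⟩ and the butterfly limit.

```python
import math

phi = lambda x: math.exp(-x * x)              # a smooth, rapidly decreasing test function
dphi = lambda x: -2.0 * x * math.exp(-x * x)  # its classical derivative

# pairing <theta', phi> := -<theta, phi'> = -int_0^inf phi'(x) dx, which should equal phi(0)
n, upper = 100_000, 10.0
h = upper / n
pairing = -sum(dphi((k + 0.5) * h) for k in range(n)) * h
print(pairing, phi(0.0))                      # both ~ 1.0

# Mr Hyde's butterfly: second difference of call payoffs, base 2*eps, scaled by 1/eps^2
relu = lambda x: max(x, 0.0)

def butterfly_pairing(eps, m=40_000, lo=-1.0, hi=1.0):
    """Integrate the butterfly spike against phi; it tends to phi(0) as eps -> 0."""
    step = (hi - lo) / m
    total = 0.0
    for k in range(m):
        x = lo + (k + 0.5) * step
        spike = (relu(x + eps) - 2.0 * relu(x) + relu(x - eps)) / eps ** 2
        total += spike * phi(x)
    return total * step

for eps in (0.5, 0.05, 0.005):
    print(eps, butterfly_pairing(eps))        # approaches phi(0) = 1
```

The spike is exactly the trader's butterfly of call spreads; its action on any smooth test function converges to evaluation at the strike, which is the defining property of the delta.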

The strategy followed throughout this book is to remain in the realm of generalized functions so as to consistently recover the price of options in terms of Fourier transforms.

The starting point of our approach is to recover the Fourier transform of the payoff of digital options. This is clearly not defined if the Fourier transform is applied to functions, but it is well defined in the setting of generalized functions.

For a start, we will denote by *F* the Fourier transform operator, and by *F*^{−1} its inverse, and write
following the convention:

We report here the main result concerning the Fourier transform of the digital option that is fully developed and explained in Chapter 5. Let us introduce
where

We are now going to show that *F*^{−1}[δ^{+}] = θ, from which *F*[θ] = δ^{+}. Since
it follows that:

Now, it is possible to compute that the distributional value of *g*^{+}(*x*) is p.v. 1/*x* − iπδ(*x*) (see Example 5.4.3), so that we conclude
where p.v. denotes the principal value and δ is the *Dirac delta* function.

We are now going to recover the price of digital *cash-or-nothing* options. We shall treat both the probability distribution Q and the payoff as generalized functions, and the pricing formula as a convolution of distributions. In this setting, we have already computed the Fourier transform of the payoff. As for the distribution, we assume that we only know its *characteristic function,* which we redefine in a slightly different way, which is useful for computational purposes:

φ_{t,T}(λ) = E^{Q}[e^{2πiλ(X_T − X_t)}]   (1.1)

Notice that with respect to the usual definition we have simply multiplied the exponent by 2π.

The maths concerning these assumptions is thoroughly discussed in the main body of this book, namely Chapters 5 and 6, so here we stick to essential definitions for the reader who is already familiar with the technique.

Let *f* and *g* be two generalized functions. Their convolution will be denoted as:

If Q is a (probability) measure, we shall write:

We are interested in the convolution, in a generalized function sense, of the density and the digital payoff function θ(*x*)*.*

(1.2)

Notice that the main pillar of our approach is the requirement that this convolution of generalized functions be well defined. In Chapter 5, section 8, we give a proof under very weak conditions, which amount to the existence of the first moment of the probability distribution. We now apply the Fourier transform to the convolution and obtain:

*F*[Q ∗ θ] = *F*[Q] *F*[θ]   (1.3)

and
We now use equation (1.3) to compute (1.2):

(1.4)

Replacing the value for δ^{+} in equation (1.4) and applying a result that may be found in Chapter 5, Example 5.4.2, we end up with

(1.5)

The above formula is certainly not new (see, for example, Kendall and Stuart, 1977, vol. III). It provides the relationship between the characteristic function and the cumulative probability distribution, which in our case is the pricing kernel of the economy.
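The inversion relating the characteristic function to the cumulative distribution can be sketched numerically. The code below (ours, not the book's; it uses the standard i-convention of the Gil-Pelaez form, whereas the book's 2π convention amounts to a rescaling of the integration variable) recovers the Gaussian c.d.f. from its characteristic function.

```python
import cmath, math

mu, sig = 0.05, 0.2
# characteristic function of N(mu, sig^2) in the standard convention E[e^{i u X}]
cf = lambda u: cmath.exp(1j * u * mu - 0.5 * sig ** 2 * u ** 2)

def cdf_from_cf(x, n=20_000, umax=60.0):
    # Gil-Pelaez inversion: F(x) = 1/2 - (1/pi) int_0^inf Im[e^{-iux} cf(u)] / u du
    # (umax must be large enough for the integrand to have decayed for the cf at hand)
    h = umax / n
    total = sum((cmath.exp(-1j * u * x) * cf(u)).imag / u
                for u in ((k + 0.5) * h for k in range(n)))
    return 0.5 - total * h / math.pi

x = 0.1
exact = 0.5 * (1.0 + math.erf((x - mu) / (sig * math.sqrt(2.0))))
print(cdf_from_cf(x), exact)   # the inverted and closed-form c.d.f. values agree closely
```

The midpoint rule conveniently avoids the integrable singularity at u = 0.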

The value of a *cash-or-nothing put* option is then given by

(1.6)

The price of the corresponding cash-or-nothing call option now follows immediately. Namely, we have

(1.7)

and we immediately obtain

(1.8)
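The put-call relationship for digitals can be checked in any concrete model. A minimal sketch (ours, using the Black-Scholes closed forms e^{-rT}N(±d₂) rather than the Fourier machinery; the symbols S, K, r, σ, T are our own notation): holding both digitals pays one unit at maturity with certainty, so the portfolio must be worth the discount bond.

```python
import numpy as np
from scipy.stats import norm

def d2(S, K, r, sigma, T):
    return (np.log(S / K) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))

def digital_put(S, K, r, sigma, T):
    """Black-Scholes cash-or-nothing put: discounted probability of S_T <= K."""
    return np.exp(-r * T) * norm.cdf(-d2(S, K, r, sigma, T))

def digital_call(S, K, r, sigma, T):
    """Black-Scholes cash-or-nothing call: discounted probability of S_T > K."""
    return np.exp(-r * T) * norm.cdf(d2(S, K, r, sigma, T))

S, K, r, sigma, T = 1.0, 1.05, 0.02, 0.2, 1.0
put = digital_put(S, K, r, sigma, T)
call = digital_call(S, K, r, sigma, T)
# Parity: the two digitals together replicate the discount bond B(t, T).
print(put + call, np.exp(-r * T))
```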

We now extend the analysis to *asset-or-nothing* options. The whole analysis above would of course lead to a result analogous to that obtained for *cash-or-nothing* options. As a matter of fact, we saw before that the two prices are linked by a change of measure. Namely,

Under our notation, which is based on the forward price rescaled with respect to the price at time *t* (that is, *S*_{t} = 1), the Radon-Nikodym derivative linking the two measures is *S*_{T}, so that we may write

(1.9)

We may now denote the characteristic function of measure Q* as

(1.10)

and a straightforward computation gives the relationship between the characteristic function of measure Q* and that of measure Q:

(1.11)
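This relationship can be verified numerically. A sketch under our own assumptions: with the 2π convention and the martingale normalization S_t = 1 (so E[S_T] = 1), the change of measure dQ*/dQ = S_T = e^X amounts to evaluating φ at a shifted argument, φ*(λ) = φ(λ - i/2π), since e^{2πi(λ-i/2π)x} = e^{2πiλx}e^x. The sign of the shift depends on the convention adopted, so this is an illustration, not a restatement of the book's formula. For a lognormal martingale the check is exact:

```python
import numpy as np

sigma = 0.3
mu = -0.5 * sigma**2          # makes S_T = exp(X) a martingale, E[S_T] = 1

def phi_normal(lam, m, s):
    """Characteristic function with the 2*pi convention,
    E[exp(2*pi*1j*lam*X)], for X ~ N(m, s^2); lam may be complex."""
    return np.exp(2j * np.pi * lam * m - 2 * np.pi**2 * lam**2 * s**2)

lam = 0.9
# Under the share measure Q*, dQ*/dQ = S_T shifts the normal mean by sigma^2.
phi_star_direct = phi_normal(lam, mu + sigma**2, sigma)
# The same value follows from evaluating phi_Q at the shifted argument:
phi_star_shift = phi_normal(lam - 1j / (2 * np.pi), mu, sigma)
print(abs(phi_star_direct - phi_star_shift))
```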

(1.12)

for call options and

(1.13)

for put options.

It is now possible to derive a general pricing formula for European options that will be used to calibrate pricing models to market data. Notice that all the information content concerning the dynamics of the risk factor *S,* the underlying asset of our options, is summarized in the function

(1.14)

We call this function the **characteristic integral** of asset *S.* The probability distribution used in the pricing of all *cash-or-nothing* and *asset-or-nothing* options for all maturities can be summarized in the compact notation:

(1.15)

Clearly the *cash-or-nothing* case corresponds to α = 0, while the *asset-or-nothing* case is covered by α = *i/2π.* Furthermore, as stated before, ω = 1 denotes call options, while ω = -1 denotes put options.

Adopting this notation for European options, the prices for call or put can be written as:

(1.16)

which only depends on the **characteristic integral.** In order to highlight that, the European option pricing formula can be rewritten as

(1.17)

where we recall that *m* ≡ *B(t, T*)*K/S*_{t} denotes *moneyness* (in the forward price sense). Notice that the **characteristic integral** enters the formula with the same sign for both call and put options. The shape of the smile could then be recovered by using the statistics

(1.18)

where *C* and *P* denote call and put options as usual.

Finally, notice that for the *at-the-money forward* option *(m* = 1) we have

(1.19)

which may be useful to calibrate the term structure of volatility around the most liquid option quotes.

With this general structure we are then ready not only to price options, but also to use option prices to back out, in a synthetic way, all the relevant information concerning the dynamics of the underlying assets.

We are now going to show that the **characteristic integral** defined above can be represented in an alternative way, resorting to what is known as the *Hilbert transform.* This technique was recently applied to the option pricing problem by Feng and Linetsky (2008).

The Hilbert transform *Hf* of a function *f* is obtained by convolving the function with the distribution p.v. 1/*x*; in symbols:

If we call *h(x)* the tempered distribution:
we may define the Hilbert transform by the alternative notation:

We can immediately see that the characteristic integral defined above, which yields the prices of options, can be written in terms of the Hilbert transform

(1.20)

In order to compute the Hilbert transforms of the quantities of interest in this chapter, we anticipate some relations that will be presented in Chapter 5:

We then get:
as a general rule to compute the Hilbert transform.
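The general rule lends itself to a direct FFT implementation. A minimal sketch (ours), assuming the rule takes the familiar frequency-domain form (Hf)^(λ) = -i sgn(λ) f̂(λ); sign and normalization conventions vary across texts, so we cross-check against scipy's analytic-signal routine:

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_fft(f):
    """Hilbert transform via the frequency-domain multiplier
    -1j * sign(lam): transform, multiply, transform back."""
    fhat = np.fft.fft(f)
    lam = np.fft.fftfreq(len(f))
    return np.real(np.fft.ifft(-1j * np.sign(lam) * fhat))

# Periodic test signal: cos over an exact number of periods.
t = np.linspace(0, 20 * np.pi, 4096, endpoint=False)
h1 = hilbert_fft(np.cos(t))
h2 = np.imag(hilbert(np.cos(t)))   # scipy's analytic-signal construction

print(np.max(np.abs(h1 - h2)))
```

For a band-limited periodic signal sampled over an exact number of periods, the two constructions coincide up to floating-point error.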

Adopting the usual *hat* notation for the Fourier transform we can write:

(1.21)

where
A result that will be needed in what follows is the Fourier transform of p.v.(1/u).

We now provide a set of examples that should (a) illustrate how to compute the Hilbert transform of functions and (b) lead to a formula that will be paramount in the development of the numerical implementation.

We can exploit the linearity of the Hilbert transform and the result in the example above to recover the transform of trigonometric functions.
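As a concrete illustration (ours): with the classical normalization Hf(x) = p.v. (1/π) ∫ f(y)/(x - y) dy, one finds H[cos] = sin and H[sin] = -cos. The text's convolution with p.v. 1/*x* differs from this by the factor 1/π, so the sketch below uses scipy's convention:

```python
import numpy as np
from scipy.signal import hilbert

# scipy.signal.hilbert returns the analytic signal f + 1j*H[f],
# so the Hilbert transform is its imaginary part.
t = np.linspace(0, 16 * np.pi, 2048, endpoint=False)
a = hilbert(np.cos(t))     # analytic signal of cos: cos(t) + 1j*sin(t)
b = hilbert(np.sin(t))     # analytic signal of sin: sin(t) - 1j*cos(t)

print(np.max(np.abs(np.imag(a) - np.sin(t))))   # H[cos] = sin
print(np.max(np.abs(np.imag(b) + np.cos(t))))   # H[sin] = -cos
```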