Qedit — Credit Risk Analysis

Estimating risk measures using QAE with a quadratic speed-up over classical Monte Carlo simulation.

Alex K
13 min read · Jun 14, 2021

Financial Crisis

As a consequence of the global financial crisis, American households lost an estimated $16 trillion in net worth between late 2007 and early 2009. In the worst economic downturn since the Great Depression, roughly 8.7 million jobs were shed from February 2008 to 2010, and approximately 3.8 million Americans lost their homes to foreclosures.

A homeless man, perhaps hit hard by the global financial crisis, sleeps on a sidewalk in Los Angeles, California.

This global financial crash took virtually every economist and notable policymaker by surprise. In the run-up to the crash, the International Monetary Fund concluded that “global economic risks [had] declined” since September 2006, and that “the overall U.S. economy is holding up well”. Indeed, just three days before the crash started, J.P. Morgan projected that the US GDP growth rate would accelerate in Q1 and Q2 of 2009.

“The best Wall Street minds and their best risk-management tools failed to see the crash coming” − New York Times, January 2, 2019

Fundamentally, the cause of the crash can be attributed to risk management failures, many of which originate from an over-reliance on unrealistically simplistic risk models — although expeditious, these shortcut-ridden models weren’t equipped to deal with the complexities of structured credit products.

The objective of these risk assessment models is to determine a borrower’s ability to meet their debt obligations, as well as the expected loss incurred by the financier should debtors renege on their debt payments. Simplistically, this credit default risk can be defined as the product of the probability of the debtor failing to fulfil their loan obligations, the total amount the lender was supposed to have been paid (typically amount borrowed + interest), and the proportion of the total amount that is not recoverable, as depicted below:
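In standard shorthand (PD for the probability of default, EAD for the exposure at default, and LGD for the loss rate given default; these abbreviations are assumed here for convenience), the product described above reads:

\mathrm{Credit\ default\ risk} \;=\; PD \times EAD \times LGD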

Indeed, in the case of the Great Recession, financial practitioners underestimated both the default probability and the loss rate, and by consequence, they undervalued the credit risk they were facing.

Value at Risk

In practice, more complicated measurement techniques are used to gauge the level of financial risk; chief among them is the Value at Risk — acronymized as ‘VaR’, and dubbed the “new science of risk management”. In a nutshell, since we don’t know a portfolio’s future value ¹P, we can’t know its loss ¹L. However, as both are random variables, ¹P can be assigned a probability distribution, enabling the calculation of the desired quantile of ¹L.

This is illustrated below for a 90% VaR metric — the amount of money such that there is a 90% probability of the portfolio losing less than that amount of money. The 90% quantile of ¹L can thus be calculated as the portfolio’s current value ⁰p minus the 10% quantile of ¹P, as seen here:
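In symbols, with q_{0.10} denoting the 10% quantile of the future portfolio value, the relationship just described is:

\mathrm{VaR}_{90\%}\!\left({}^{1}L\right) \;=\; {}^{0}p \;-\; q_{0.10}\!\left({}^{1}P\right)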

The components common to all practical Value at Risk measures are depicted in the following exhibit. The measure accepts two inputs (historical data and the portfolio's holdings), which are processed by inference and mapping procedures respectively. The resulting outputs correspond to the two components of risk (uncertainty and exposure), and are combined by a transformation procedure into a single measure of the portfolio's market risk:

Conditional Value at Risk

In spite of VaR's commendable generality and reliability, it suffers from a very serious shortcoming: it provides no handle on the extent of losses that might be suffered beyond a certain cut-off point; that is, it gives only a lower bound for losses in the tail of the loss distribution, lending the measure an inherently optimistic disposition, an undesirable trait in the domain of risk management.

This limitation is resolved through the use of an alternative risk assessment measure — the Conditional Value at Risk (CVaR) — which does quantify a portfolio’s tail risk. Simply put, the CVaR is an average of the values that fall beyond the VaR. Accordingly, calculating the CVaR is relatively trivial once the VaR has been ascertained, with the following formula being used (where p(x) dx is the probability density of obtaining a return with value x, c is the VaR cut-off point, and VaR is the previously calculated VaR level):
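The formula in question is the standard tail-expectation integral; one common way of writing it (conventions for the lower integration limit vary between texts) is:

\mathrm{CVaR} \;=\; \frac{1}{1-c}\int_{-\infty}^{\mathrm{VaR}} x\, p(x)\, dx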

This definition can also be depicted graphically, as seen in the diagram below:

Monte Carlo simulation

Monte Carlo (MC) simulation is widely used to evaluate these two risk measures (VaR and CVaR): the simpler “variance-covariance” approach relies on the potentially erroneous assumption that a portfolio's value varies linearly or quadratically as market risk factors fluctuate, while closed-form expressions for the quantities used in loss models are not always available.

This computerized mathematical technique builds models of possible results by substituting random sets of values, drawn from the relevant probability distributions, for every factor that has inherent uncertainty. The process is repeated, and the result recalculated, thousands of times, using a completely different set of random numbers each time, thereby producing a distribution over a large number of possible outcome values, as seen here:
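As a complementary illustration, a minimal classical Monte Carlo estimate of VaR and CVaR for a toy loss distribution might look like the following sketch; the lognormal loss model and all parameter values here are assumptions chosen purely for demonstration.

import numpy as np

rng = np.random.default_rng(seed=42)

# Toy model: draw portfolio losses from a heavy-tailed distribution.
# The lognormal choice is purely illustrative.
n_samples = 100_000
losses = rng.lognormal(mean=0.0, sigma=1.0, size=n_samples)

alpha = 0.90  # confidence level

# VaR: the alpha-quantile of the simulated loss distribution.
var = np.quantile(losses, alpha)

# CVaR: the average of the losses that fall beyond the VaR.
cvar = losses[losses >= var].mean()

print(f"VaR({alpha:.0%})  = {var:.4f}")
print(f"CVaR({alpha:.0%}) = {cvar:.4f}")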

Quantum Amplitude Estimation

Crucially, MC simulation, especially for rare-event simulation problems such as credit risk estimation, is quite computationally intensive. Indeed, today's classical computers struggle to run the trillions of simulations mandated by even relatively simple credit risk assessment scenarios (with, say, just a dozen input parameters). The advent and development of quantum computers has accordingly opened up novel ways of addressing computational tasks, including the analysis of financial risk measures.

In particular, the applicability of the Quantum Amplitude Estimation (QAE) algorithm in the field of Quantum Finance has recently been demonstrated, notably including the use of QAE to price financial derivatives under the Black-Scholes model. The main advantage of this algorithm is that its estimation error converges as O(1/M) in the number of samples M, a quadratic speed-up over the O(1/√M) convergence rate of classical Monte Carlo simulation.

The quantum circuit used by QAE, shown here with m ancilla qubits and n + 1 state qubits.

This project therefore leverages QAE in the context of estimating credit risk measures. The resulting model, named Qedit, computes both the VaR and CVaR of a two-asset portfolio by running simulations with IBM's Qiskit.

Special Thanks

This project is inspired by a Qiskit tutorial, which provided the code used here and informed much of the corresponding theory; I'm grateful to its authors.

Problem Definition

Rather than using a dataset containing “real” data (such as, say, US Treasury debt), we mathematically define a portfolio of K assets in which the default probability of every asset k follows a Gaussian Conditional Independence model, as given by the following relationship, where F denotes the cumulative distribution function of Z, pₖ⁰ is the default probability of asset k for z = 0, and ρₖ is the sensitivity of the default probability of asset k with respect to Z:
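A common Vasicek-style form of this conditional default probability, written here as a sketch (the sign in front of the \sqrt{\rho_k}\,z term depends on the orientation chosen for Z), is:

p_k(z) \;=\; F\!\left( \frac{F^{-1}\!\left(p_k^{0}\right) + \sqrt{\rho_k}\, z}{\sqrt{1-\rho_k}} \right)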

Our ambition is to analyze risk measures of the total loss, which is mathematically defined as follows, where λₖ denotes the loss given default of asset k, and Xₖ(Z) denotes a Bernoulli variable (the simplest random variable) representing the default event of asset k given Z:
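In this notation, the total loss is simply the sum of the individual losses weighted by the default indicators:

L \;=\; \sum_{k=1}^{K} \lambda_k\, X_k(Z)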

More precisely, we are interested in finding the Value at Risk and Conditional Value at Risk of L, with confidence level α ∈ [0, 1], as defined below:
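These are the standard definitions:

\mathrm{VaR}_{\alpha}(L) \;=\; \inf\{\, x \ge 0 \mid \mathbb{P}[L \le x] \ge \alpha \,\}

\mathrm{CVaR}_{\alpha}(L) \;=\; \mathbb{E}\big[\, L \mid L \ge \mathrm{VaR}_{\alpha}(L) \,\big]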

The following code defines the problem parameters, including the number of qubits used to represent the model, the truncation value for Z, the base default probabilities and their sensitivities for every asset, the loss given default for asset k, and the confidence level for VaR and CVaR:
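A sketch of these parameter definitions, with values taken from the Qiskit tutorial that this project follows, might look like this:

import numpy as np

# Problem parameters (values from the Qiskit credit risk tutorial; treat as a sketch)
n_z = 2                 # number of qubits used to represent the latent variable Z
z_max = 2               # truncation value: Z is represented on [-z_max, +z_max]
z_values = np.linspace(-z_max, z_max, 2**n_z)
p_zeros = [0.15, 0.25]  # base default probabilities p_k^0 for the two assets
rhos = [0.1, 0.05]      # sensitivities of the default probabilities with respect to Z
lgd = [1, 2]            # loss given default for each asset
K = len(p_zeros)        # number of assets in the portfolio
alpha = 0.05            # confidence level for VaR / CVaR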

Uncertainty Model

Next, we construct a quantum circuit in order to load the uncertainty model. This is achieved by creating a quantum state in a register of n_z qubits, representing a standard normal distribution of Z, which is used to control single-qubit rotations about the y-axis on a second qubit register of K qubits, where a |1⟩ state of qubit k corresponds to the default event of asset k. This quantum state, |Ψ⟩, is written as follows, where zᵢ denotes the i-th value of the discretized and truncated latent random variable Z:
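A sketch of this state, consistent with the description above (with p_{z_i} the probability assigned to the discretized value z_i), is:

|\Psi\rangle \;=\; \sum_{i=0}^{2^{n_z}-1} \sqrt{p_{z_i}}\; |z_i\rangle \, \bigotimes_{k=1}^{K} \Big( \sqrt{1 - p_k(z_i)}\,|0\rangle + \sqrt{p_k(z_i)}\,|1\rangle \Big)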

We subsequently instruct the program to draw the circuit that constructs |Ψ⟩:
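With the qiskit-finance library, building and drawing this uncertainty model looks roughly like the following sketch (the class name and argument order are taken from qiskit-finance and may differ between releases):

from qiskit_finance.circuit.library import GaussianConditionalIndependenceModel as GCI

# Load |Psi>: a discretized normal distribution on n_z qubits controlling
# Y-rotations on K default-indicator qubits.
u = GCI(n_z, z_max, p_zeros, rhos)
print(u.draw())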

A classical simulation of this circuit can then be used to validate its construction, as well as to compute exact values for the expected loss E[L], the probability density function PDF and cumulative distribution function CDF of L, the Value at Risk VaR(L) alongside the corresponding probability, and finally the Conditional Value at Risk CVaR(L), as presented below:

Expected Loss E[L]:                0.6409
Value at Risk VaR[L]:              2.0000
P[L <= VaR[L]]:                    0.9591
Conditional Value at Risk CVaR[L]: 3.0000

We can then plot the loss distribution, depicting the expected loss, exact Value at Risk and exact Conditional Value at Risk as green, yellow, and red vertical dashed lines respectively.

Moreover, we can also plot the distribution of variable Z, as shown here:

Finally, we can also graph the results for the individual probabilities of the default event of asset k, producing the following distribution:

Expected Loss

Our next priority is to produce an estimate for the expected loss. Our first step to this end is to apply a weighted sum operator, which adds up all the individual losses, giving the total loss, as described mathematically below:
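As a sketch consistent with the description above, the weighted sum operator S acts on the K default-indicator qubits and an n_S-qubit sum register as:

S:\; |x_1, \ldots, x_K\rangle\,|0\rangle_{n_S} \;\mapsto\; |x_1, \ldots, x_K\rangle\,\big|\lambda_1 x_1 + \cdots + \lambda_K x_K\big\rangle_{n_S}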

Indeed, the following expression describes the number of qubits needed to represent the above result:
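Since the largest possible total loss is λ₁ + ⋯ + λ_K, representing every integer value up to it requires:

n_S \;=\; \big\lfloor \log_2\!\left(\lambda_1 + \cdots + \lambda_K\right) \big\rfloor + 1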

The total loss distribution in a quantum register L ∈ {0, …, 2^(n_S) − 1} can subsequently be mapped to the amplitude of an objective qubit by the following operator:
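A sketch of this linear amplitude mapping (normalizing by 2^(n_S) − 1 so that the rotation angle stays well defined) is:

|L\rangle_{n_S}\,|0\rangle \;\mapsto\; |L\rangle_{n_S} \left( \sqrt{1 - \tfrac{L}{2^{n_S}-1}}\; |0\rangle + \sqrt{\tfrac{L}{2^{n_S}-1}}\; |1\rangle \right)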

This enables the execution of a Quantum Amplitude Estimation algorithm to evaluate the originally sought-after expected loss. First, though, we create the state preparation circuit with which we will shortly run QAE:

We must then validate this quantum circuit representing the objective function; we do so by simulating it directly, and analyzing the probability of the objective qubit being in a |1⟩ state — the value QAE will ultimately approximate. The resulting state vector is evaluated with the following code:
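A minimal sketch of this idea, assuming the circuit is called state_preparation and that its objective qubit is the last (most significant) qubit, is:

from qiskit.quantum_info import Statevector

# Simulate the statevector of the state-preparation circuit directly.
statevector = Statevector(state_preparation)

# Sum the probability of every basis state whose objective qubit is |1>.
p_one = 0.0
for i, amplitude in enumerate(statevector.data):
    if (i >> (state_preparation.num_qubits - 1)) & 1:  # objective qubit (assumed last) is |1>
        p_one += abs(amplitude) ** 2

print(f"P(objective qubit = |1>) = {p_one:.4f}")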

The following results are accordingly produced and printed:

Exact Expected Loss:   0.6409
Exact Operator Value: 0.3906
Mapped Operator value: 0.6640

Finally, we run QAE to estimate the expected loss by leveraging a quadratic speed-up over classical Monte Carlo simulation. The following lines of code set the target precision and confidence level, before constructing amplitude estimation:
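With the interfaces available in Qiskit around the time of writing, setting up iterative QAE looks roughly like the following sketch (the epsilon and alpha values, the objective-qubit index, and the backend choice are illustrative assumptions):

from qiskit import Aer
from qiskit.utils import QuantumInstance
from qiskit.algorithms import IterativeAmplitudeEstimation, EstimationProblem

epsilon = 0.01   # target precision of the amplitude estimate
alpha_ae = 0.05  # confidence level of the estimate

problem = EstimationProblem(
    state_preparation=state_preparation,                   # circuit built above
    objective_qubits=[state_preparation.num_qubits - 1],   # assumed objective qubit
)

ae = IterativeAmplitudeEstimation(
    epsilon_target=epsilon,
    alpha=alpha_ae,
    quantum_instance=QuantumInstance(Aer.get_backend("aer_simulator")),
)

result = ae.estimate(problem)
# result.estimation is the raw amplitude; mapping it back to a loss value
# requires the problem's post-processing function.
print("Estimated amplitude:", result.estimation)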

The resulting estimate for the expected loss and the confidence interval to which the approximation was made, alongside the exact loss value, are subsequently outputted:

Exact value:         0.6409
Estimated value:     0.6913
Confidence interval: [0.6224, 0.7601]

Cumulative Distribution Function

The cumulative distribution function (CDF) of the loss, rather than the total expected loss, can also be efficiently estimated via the use of QAE; classically, by contrast, this typically involves evaluating all the possible combinations for which assets are defaulted, or generating many samples in classical Monte Carlo simulation — slow processes relative to the benefits yielded by QAE.

In estimating the CDF, that is, the probability P[L ≤ x], we make use of the weighted sum operator defined previously to add up the individual losses into the total loss, before applying a comparator that acts as follows for a given value x:
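As a sketch consistent with the description above, the comparator C flips an objective qubit exactly when the summed loss does not exceed x:

C:\; |L\rangle\,|0\rangle \;\mapsto\; \begin{cases} |L\rangle\,|1\rangle & \text{if } L \le x \\ |L\rangle\,|0\rangle & \text{if } L > x \end{cases}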

The resulting quantum state (written in terms of the summed loss values and their corresponding probabilities) can subsequently be expressed as follows:
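In terms of the possible total losses L and their probabilities p_L, this state reads:

\sum_{L \le x} \sqrt{p_L}\, |L\rangle\,|1\rangle \;+\; \sum_{L > x} \sqrt{p_L}\, |L\rangle\,|0\rangle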

Since the CDF(x) is equal to the probability of obtaining a |1⟩ state in the objective qubit, we are able to directly estimate the CDF with QAE. First, we instruct the program to draw the aforementioned comparator:
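With Qiskit's circuit library, such a comparator can be built and drawn along these lines (the register width n_s and the evaluation point x_eval are assumptions chosen to match the two-asset example):

from qiskit.circuit.library import IntegerComparator

n_s = 2     # width of the summed-loss register (enough for a maximum loss of 3)
x_eval = 2  # loss level at which the CDF is evaluated

# With geq=False the comparator flips its target qubit when the register encodes a value
# strictly below `value`, so value = x_eval + 1 marks exactly the states with L <= x_eval.
comparator = IntegerComparator(num_state_qubits=n_s, value=x_eval + 1, geq=False)
print(comparator.draw())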

We subsequently define the quantum circuit with which we will run QAE, remembering to validate it via quantum simulation. The following lines of code define the circuit, load the random variable, define the comparator objective function post-aggregation, and lastly “uncompute” aggregation:

Calling state_preparation.draw() renders this quantum circuit, which is illustrated below:

Akin to the procedure followed earlier, we evaluate the resulting state vector with the following code (slightly different from the first application):

As expected, the Operator’s evaluation of the CDF and the exact value are equal, each with a value of 0.9591, as seen here:

Operator CDF(2) = 0.9591
Exact CDF(2) = 0.9591

Finally, we run QAE to estimate the CDF for a given value x, as originally specified; the results of this process are outlined below:

Exact value:         0.9591
Estimated value:     0.9595
Confidence interval: [0.9584, 0.9605]

Value at Risk

To estimate the VaR, we use the QAE-based CDF evaluation inside a bisection search, a logarithmic search algorithm that locates a target value within a monotone (pre-sorted) range. Accordingly, after constructing amplitude estimation as usual, we start a bisection search for our target value, which begins by checking whether low and high values are given (if not, the algorithm determines them itself).

The process continues by checking whether the low value already satisfies the specified condition and whether the high value lies above the target. The bisection search itself is then performed, returning the high value upon completion; in this case, we run the algorithm to determine the Value at Risk, and the sketch below outlines the idea.
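A minimal sketch of the bisection logic, assuming an objective callable (here it would be the QAE-estimated CDF of the loss) that is monotonically increasing in the level, is:

def bisection_search(objective, target, low, high):
    # Returns the smallest integer level whose objective value reaches the target.
    # For objective = estimated CDF and target = the confidence level, the
    # returned level is the estimated VaR.
    assert objective(low) < target <= objective(high), "target not bracketed by [low, high]"
    while high - low > 1:
        mid = (low + high) // 2
        if objective(mid) >= target:
            high = mid   # condition satisfied: tighten the upper bound
        else:
            low = mid    # condition not yet satisfied: raise the lower bound
    return high

# Hypothetical usage (estimated_cdf and max_loss are assumed names):
# var_level = bisection_search(estimated_cdf, 1 - alpha, low=0, high=max_loss)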

Finally, we print out the results, which indicate the model's capability of finding the VaR and its associated probability with great precision:

Estimated Value at Risk:  2
Exact Value at Risk: 2
Estimated Probability: 0.960
Exact Probability: 0.959

Conditional Value at Risk

Our final task is the computation of the CVaR, that is, the expected value of the loss conditional on it being greater than or equal to the VaR. In order to do this, we evaluate a piecewise linear objective function f(L), which depends on the total loss L; this function is defined as follows:
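A sketch consistent with this description (the function is additionally normalized before being loaded into an amplitude) is:

f(L) \;=\; \max\{0,\; L - \mathrm{VaR}\}

so that the CVaR can be recovered from its expectation via:

\mathrm{CVaR}_{\alpha}(L) \;=\; \mathrm{VaR} \;+\; \frac{\mathbb{E}\!\left[f(L)\right]}{\mathbb{P}\!\left[L \ge \mathrm{VaR}\right]}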

First, we define the linear objective: the VaR is subtracted from the loss inside the objective and added back to the estimate afterwards. The program is subsequently instructed to create this now-defined linear objective, as shown below:

After our routine validation of the circuit with quantum simulation, we evaluate and normalize the resulting state vector, before adding the VaR back to the estimate. Before we run QAE, our results are as follows:

Estimated CVaR: 3.3121
Exact CVaR: 3.0000

After running QAE to estimate the CVaR in a manner almost identical to our prior executions of the algorithm, we print our final set of post-QAE results:

Exact CVaR:     3.0000 
Estimated CVaR: 3.2474

As shown above, Amplitude Estimation has brought our estimated CVaR 2.16% closer to its exact value than the pre-QAE approximation. Moreover, the model achieves roughly 91.7% accuracy in evaluating the CVaR post-QAE.

Discussion

In summary, a quantum algorithm was developed to estimate credit risk, leveraging a quadratic speed-up over classical Monte Carlo methods. Although the retrieved results do indicate a huge potential for quantum computing in this domain, much further hardware development will be necessary before commercial use; in particular, quantum systems with a greater number of qubits are needed to model and cope with the complexities of real-world scenarios. Moreover, the error rate will also need to be reduced, and qubit coherence will have to be increased by a significant degree, in order for a truly practical quantum advantage to be realized.

With such rich interest in the development of quantum systems, it is reasonable to expect rapid progress toward these improvements in the coming decade. Indeed, quantum algorithms for credit risk analysis, upon commercialization, will be game-changing, insofar as they will significantly reduce the likelihood of financial practitioners underestimating credit risk, and by consequence, the probability of disastrous financial crashes that can devastate millions of lives. Qedit has the potential to revolutionize credit risk management, providing greater computational efficiency and calculation speed than today's classical solutions.

Special thanks once again to the team at IBM’s Qiskit for their code and informative tutorial!


Alex K

17 y/o researcher in Machine Learning & Computational Biology.