
Statistical Arbitrage with MGARCH Model



Introduction


The modeling of financial time series is crucial for understanding market dynamics, risk management, and portfolio optimization. Exchange rates, in particular, exhibit complex dependencies and time-varying volatilities, making them an ideal subject for advanced econometric models. Traditional univariate models, such as the ARCH and GARCH frameworks, effectively capture time-varying volatility within individual assets. However, real-world financial markets involve multiple interconnected assets, where volatility spillovers and correlations play a critical role in pricing, risk assessment, and investment decision-making. To address these interdependencies, multivariate GARCH (MGARCH) models provide a more sophisticated approach by incorporating both volatility and co-volatility structures across multiple assets.


Among the various MGARCH specifications, the Dynamic Conditional Correlation Multivariate Generalized Autoregressive Conditional Heteroskedasticity (DCC-MGARCH) model stands out for its ability to estimate time-varying correlations while maintaining computational feasibility. Unlike the Constant Conditional Correlation (CCC) model, which assumes static relationships between assets, the DCC model allows correlations to evolve dynamically over time. This flexibility makes it a valuable tool for analyzing exchange rate dynamics, particularly in currency pairs where relationships fluctuate due to macroeconomic events, monetary policy shifts, and geopolitical influences.


In this study, we apply the DCC-MGARCH model to examine the joint volatility structure of three major currency pairs: EUR/USD, GBP/USD, and USD/JPY. Using historical time series data sourced from Bloomberg, we estimate time-varying volatilities and correlations among these exchange rates. Our goals are twofold: first, to identify the most appropriate distribution for modeling returns, as normality assumptions often fail to capture the empirical skewness and kurtosis observed in financial data; second, to use the estimated correlations for predictive trading strategies. Specifically, we explore a mean-reversion approach when correlations fall below expected levels and a momentum strategy when correlations exceed anticipated thresholds.


The remainder of this research is structured as follows: We first review the theoretical foundation of MGARCH models, highlighting the advantages of the DCC framework. Next, we present the mathematical formulation of the model, detailing its estimation procedure and parameterization. Following this, we conduct an empirical analysis using historical exchange rate data. Finally, we conclude with insights on the effectiveness of DCC-MGARCH in exchange rate modeling.




Literature Review


Exchange rate volatility and co-movement have been key topics in financial econometrics for decades. Engle (1982) and Bollerslev (1986) introduced ARCH and GARCH models to capture time-varying volatility, but these did not account for cross-asset dependencies, leading to MGARCH frameworks (Bollerslev et al., 1988).


During crises like the 2007–2009 financial downturn and the European debt crisis, market interdependence became more apparent, raising concerns among policymakers and investors about volatility linkages across countries and asset classes. MGARCH models have since been widely used to analyze risk transmission through variance impulse response functions, parameter restrictions, and forecast error variance decompositions.


The CCC model (Bollerslev, 1990) assumed static correlation structures, but Engle (2002) improved this with the DCC model, allowing correlations to adjust dynamically—an essential feature for forex markets where relationships shift due to global economic conditions.


The DCC-MGARCH model effectively captures both time-varying volatility and dynamic correlations between currency pairs, addressing limitations of univariate models. Exchange rates are influenced by macroeconomic factors, monetary policies, and geopolitical events, leading to evolving interdependencies. DCC-MGARCH allows correlation structures to adjust dynamically, making it a powerful tool for forex market analysis.


A key advantage is its ability to capture volatility spillovers across currency pairs. Exchange rate movements are interconnected: shocks in one currency often propagate to others. For instance, U.S. Federal Reserve policy changes impact not only USD-based pairs like EUR/USD and GBP/USD but also influence non-USD currency pairs through global financial linkages. By modeling these spillovers, DCC-MGARCH provides a comprehensive view of risk transmission and systemic volatility in forex markets.


Another strength is its flexibility in modeling time-varying correlations, crucial for currency relationships. Unlike CCC, which assumes static dependencies, DCC-MGARCH allows correlations to fluctuate, reflecting economic cycles, central bank actions, and geopolitical shifts. For example, EUR/USD and GBP/USD correlations strengthen during European economic stability but weaken during Brexit negotiations or diverging ECB and Bank of England policies. Capturing these variations enhances risk assessments and investment decisions.


Additionally, the model is robust against non-normal return distributions, a critical aspect of financial modeling. Exchange rate returns often exhibit fat tails and asymmetry, experiencing extreme fluctuations that standard normal distribution models fail to capture. Incorporating alternative distributions like the skew-t improves real-world accuracy, especially during financial turmoil when sharp currency depreciations or appreciations occur. This makes DCC-MGARCH a valuable tool for central banks, multinational corporations, and hedge funds managing forex exposure.



Recent Studies


Recent studies highlight the model’s relevance. Aielli (2013) proposed a corrected DCC estimator, while Kenourgios & Samitas (2011) examined financial contagion in currency markets, demonstrating the importance of time-varying correlation models for risk transmission and diversification in forex portfolios.


Building on this, our research applies DCC-MGARCH to major currency pairs and explores its implications for predictive trading strategies. By incorporating alternative return distributions and visualizing correlation structures over time, we aim to provide a robust framework for exchange rate modeling and risk management. The model remains a crucial tool for understanding volatility interdependencies in global currency markets, offering valuable insights for risk management, portfolio optimization, and strategic forex decision-making.




Mathematical Framework


The most relevant applications in real-life finance are multivariate exercises, meaning that they involve several assets and securities that might influence each other over time. Consider, for instance, a portfolio of \( N \) assets. It is possible to construct an \( N \times 1 \) vector \( R_t \equiv [R_{1t}, R_{2t}, \dots, R_{Nt}]' \) which collects the returns of the securities in the portfolio. When selecting and analyzing these assets, a crucial step is considering both the first moment, i.e. the expected value \( E[R_t] \), and the second moment, i.e. \( \text{Var}[R_t] \), which in this case consists of a matrix of the variances and covariances of the selected assets. These second statistical moments provide a measure to quantify the volatilities of financial instruments and their relationships. In particular, given that empirical data exhibit time-varying characteristics, variances and covariances are often modeled as conditional: their forecasts are not constant but depend on past information. Therefore, we need to develop a multivariate time series method to model conditional second moments, including dynamic variances, covariances, and correlations. After evaluating different models, our research will ultimately adopt the multivariate Dynamic Conditional Correlation (DCC) GARCH model.


In order to understand the mathematical framework behind the DCC GARCH model, we proceed as follows. Firstly, we focus on the univariate case considering the volatility of a single variable, and show how, after considering another method, the univariate GARCH model provides the best forecasts. Secondly, we extend the tools used for modeling conditional volatility to conditional covariances, showing the characteristics of the multivariate relations and their time-varying nature.


The first model that we consider when estimating the conditional variance of the residuals of some conditional mean model is the ARCH model (Autoregressive Conditional Heteroskedasticity). As mentioned before, conditional forecasts are generally superior to unconditional ones, as they respond to recent market fluctuations and consider relevant past information. This explains the autoregressive (AR) nature of volatility: its current value is assumed to be a function of past information. Furthermore, since empirical data show patterns of time-varying variance, this phenomenon is referred to as conditional heteroskedasticity (CH).


In the ARCH(p) model for conditional variance, forecasts depend on a weighted sum of past squared residuals:

\[ \sigma_{t|t-1}^2 = \alpha_0 + \sum_{i=1}^{p} \alpha_i \epsilon_{t-i}^2 \]

The model assumes that tomorrow’s expected volatility \(\sigma_{t|t-1}\) depends on past shocks, which are mathematically defined as the squared residual errors \(\epsilon_t^2\), i.e. the squared deviations from the expected value of a given conditional mean model. From an economic perspective, such shocks are unexpected changes in financial time-series and are often driven by new information entering the market. In particular, large shocks typically lead to an increase in volatility, which is subsequently followed by future larger volatility, a phenomenon known as volatility clustering.

In this framework, the ARCH(p) variance predictions only depend on a finite number of past squared shocks, meaning that the model might fail to capture long-term volatility persistence. Considering only recent \(\epsilon_t^2\) shocks implies that the model might overreact to temporary spikes and then revert to lower estimates of volatility, thus showing unrealistic sudden bursts.

A more efficient way of modelling the conditional volatility of a variable is provided by the GARCH(p, q) model (Generalized ARCH). Instead of focusing only on past squared residuals, this model introduces an ARMA-type structure combining a weighted sum of past squared residuals over \(p\) periods (as in ARCH) and past variance forecasts over \(q\) periods:

\[ \sigma_{t|t-1}^2 = \alpha_0 + \sum_{i=1}^{p} \alpha_i \epsilon_{t-i}^2 + \sum_{j=1}^{q} \beta_j \sigma_{t-j|t-j}^2 \]

Where:

\[\sum_{i=1}^{p} \alpha_i + \sum_{j=1}^{q} \beta_j < 1\]

is a necessary and sufficient condition for ensuring the stationarity of the model, meaning that in the long-term, the model’s conditional variance converges to the average unconditional variance \(\sigma^2\):

\[ \sigma^2 = \frac{\alpha_0}{1 - \sum_{i=1}^{p} \alpha_i - \sum_{j=1}^{q} \beta_j} \]
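To fix magnitudes, consider a hypothetical daily GARCH(1,1) with illustrative parameters (not estimated from our data): with \( \alpha_0 = 2 \times 10^{-6} \), \( \alpha_1 = 0.05 \) and \( \beta_1 = 0.90 \), the condition above holds and

\[ \sigma^2 = \frac{2 \times 10^{-6}}{1 - 0.05 - 0.90} = 4 \times 10^{-5}, \]

i.e. a long-run daily volatility of \( \sqrt{4 \times 10^{-5}} \approx 0.63\% \).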

It follows that the GARCH model can be considered an improvement of the ARCH, allowing the variance predictions to be dynamically influenced not only by market shocks, but also by their previous volatility levels. Indeed, by incorporating past predicted variances (the last term of the GARCH equation above), the model effectively captures the persistent nature of volatility, a feature commonly observed in empirical data.


Higher \( \alpha_i \) values give greater weight to new information (\( \epsilon_{t-i} \)), while larger \( \beta_j \) values imply a longer memory of past volatility forecasts.
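To see the recursion in action, here is a minimal numpy sketch that traces the GARCH(1,1) conditional variance path for a series of residuals; the parameter values and the simulated residuals are illustrative only, not estimates from our data:

```python
import numpy as np

def garch11_variance(eps, alpha0, alpha1, beta1):
    """Conditional variance recursion of a GARCH(1,1):
    sigma2[t] = alpha0 + alpha1 * eps[t-1]**2 + beta1 * sigma2[t-1]."""
    sigma2 = np.empty_like(eps)
    # initialize at the unconditional variance (requires alpha1 + beta1 < 1)
    sigma2[0] = alpha0 / (1.0 - alpha1 - beta1)
    for t in range(1, len(eps)):
        sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2 + beta1 * sigma2[t - 1]
    return sigma2

# illustrative parameters and stand-in residuals
rng = np.random.default_rng(0)
eps = rng.standard_normal(1_000) * 0.006
sigma2 = garch11_variance(eps, alpha0=2e-6, alpha1=0.05, beta1=0.90)
print(sigma2[-1] ** 0.5)   # last conditional volatility estimate
```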


After introducing and explaining the univariate structure of conditional volatility, we now transition to more realistic multivariate methods. In this framework, we analyse multiple time-series, modelling not only their volatilities but also the dynamic relationships between variables, primarily through conditional covariances and correlations.


In order to do so, we introduce the DCC GARCH model, a multivariate extension of the GARCH model previously analysed. This model is estimated in two steps: first, it applies the univariate GARCH to estimate the conditional variances of each asset; second, it employs time-varying conditional correlations between standardized residuals \( z_{t,i} \) to construct the conditional covariance matrix \( \Sigma_t \).

In the first step, for each variable \( i \) (where \( i = 1, 2, \dots, N \)) we compute the expected conditional variance today \( \sigma_{t,i}^2 \) using the univariate GARCH(1,1) model introduced above:

\[ \sigma_{t,i}^2 = \alpha_0 + \alpha_1 \epsilon_{t-1,i}^2 + \beta_1 \sigma_{t-1,i}^2 \]

After estimating all \( N \) univariate GARCH models, the time series of conditional standard deviation is obtained by:

\[ \sigma_{t,i} = \sqrt{\sigma_{t,i}^2} \]

In the second step, we model the Dynamic Conditional Correlation to capture the time-varying correlation structure between the standardized residuals \( z_{t,i} \), defined as:

\[ z_{t,i} = \frac{\epsilon_{t,i}}{\sigma_{t,i}} \]

As mentioned before, a key element of the DCC model is the conditional covariance matrix \( \Sigma_t \), which allows us to assess how volatilities and correlations between assets change over time. This matrix is mathematically decomposed as follows:

\[ \Sigma_t = D_t R_t D_t \]

Where \( D_t = \text{diag}(\sigma_{t,1}, \sigma_{t,2}, \dots, \sigma_{t,N}) \) is the diagonal matrix of the time-varying standard deviations obtained in the first step using the univariate GARCH, and stores the volatility information, while \( R_t \) is the time-varying correlation matrix, whose estimation is reported below. This process ensures that the estimated covariance matrix \( \Sigma_t \) is always positive semi-definite (PSD), which is a necessary condition for its economic meaningfulness.

The DCC model constructs the dynamic correlation matrix \( R_t \) by first defining an intermediate covariance matrix \( Q_t \), modeled using a GARCH-type process:

\[ Q_t = (1 - \alpha - \beta) Q_0 + \alpha (z_{t-1} z_{t-1}^T) + \beta Q_{t-1} \]

Where:

  • \( Q_0 \) is the unconditional correlation matrix of the standardized residuals.
  • \( z_{t-1} z_{t-1}^T \) is an \( N \times N \) matrix containing the outer products of standardized residuals, allowing correlation to change dynamically.
  • \( Q_{t-1} \) ensures the persistence of past correlations.
  • \( \alpha \) and \( \beta \) are the parameters that respectively weight the recent correlation shocks and long-term dynamics of correlation.

Introducing \( Q_t \) is a fundamental step to ensure that correlations remain bounded within \([-1, 1]\). However, \( Q_t \) is not necessarily a valid correlation matrix (i.e., its diagonal entries are not necessarily equal to 1). Hence, we compute the dynamic correlation matrix \( R_t \) by standardizing the covariances in \( Q_t \), thus ensuring that \( R_t \) has unit entries on the diagonal and reflects the correlations among variables:

\[ R_t = \text{diag}(Q_t)^{-1/2} Q_t \text{diag}(Q_t)^{-1/2} \]

Finally, having modeled the dynamic correlation matrix \( R_t \), we now define the covariance matrix \( \Sigma_t \), recalling the decomposition introduced above:

\[ \Sigma_t = D_t R_t D_t \]

Then:

\[ D_t = \begin{bmatrix} \sigma_{t,1} & 0 & \cdots & 0 \\ 0 & \sigma_{t,2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_{t,N} \end{bmatrix}, \qquad R_t = \begin{bmatrix} 1 & \rho_{t,1,2} & \cdots & \rho_{t,1,N} \\ \rho_{t,2,1} & 1 & \cdots & \rho_{t,2,N} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{t,N,1} & \rho_{t,N,2} & \cdots & 1 \end{bmatrix} \]

Hence it follows that:

\[ \Sigma_t = \begin{bmatrix} \sigma_{t,1}^2 & \sigma_{t,1} \sigma_{t,2} \rho_{t,1,2} & \cdots & \sigma_{t,1} \sigma_{t,N} \rho_{t,1,N} \\ \sigma_{t,2} \sigma_{t,1} \rho_{t,2,1} & \sigma_{t,2}^2 & \cdots & \sigma_{t,2} \sigma_{t,N} \rho_{t,2,N} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{t,N} \sigma_{t,1} \rho_{t,N,1} & \sigma_{t,N} \sigma_{t,2} \rho_{t,N,2} & \cdots & \sigma_{t,N}^2 \end{bmatrix} \]

The final covariance matrix \( \Sigma_t \) incorporates both volatility effects (\( D_t \)) and correlation dynamics (\( R_t \)). The diagonal entries (\( \sigma_{t,i}^2 \)) consist of the time-varying variances of individual assets found in the first step, whereas the off-diagonal elements (\( \sigma_{t,i} \sigma_{t,j} \rho_{t,i,j} \)) are the dynamic covariances, which show how variables co-move over time.
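To make the correlation step concrete, here is a minimal numpy sketch of the \( Q_t \)/\( R_t \) recursion. It takes the standardized residuals and the parameters \( \alpha \), \( \beta \) as given (in practice they are estimated by maximum likelihood), and the simulated inputs are purely illustrative:

```python
import numpy as np

def dcc_correlations(z, alpha, beta):
    """DCC recursion: Q_t = (1-a-b) Q0 + a z_{t-1} z_{t-1}' + b Q_{t-1},
    then R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}."""
    T, N = z.shape
    Q0 = np.corrcoef(z, rowvar=False)   # unconditional correlation of residuals
    Q = Q0.copy()
    R = np.empty((T, N, N))
    R[0] = Q0
    for t in range(1, T):
        Q = (1 - alpha - beta) * Q0 + alpha * np.outer(z[t - 1], z[t - 1]) + beta * Q
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)       # rescale Q_t to unit diagonal
    return R

# illustrative use with simulated residuals and assumed parameters
rng = np.random.default_rng(1)
z = rng.standard_normal((500, 3))
R = dcc_correlations(z, alpha=0.03, beta=0.95)
print(R[-1])                            # latest dynamic correlation matrix
```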




Model Overview


Two main classes are needed, UGARCH and DCC-GARCH, both taken from the mvgarch library. We used them to fit univariate and dynamic conditional correlation GARCH models, respectively.

The first objective is to obtain three separate GARCH(1,1) models, so currency pairs’ returns are taken individually first to fit them into the model. Subsequently, by employing the DCC-GARCH class, we specify the model using the three fitted UGARCH models (which describe how the volatility of each individual asset evolves over time) and provide the full dataset of returns. In this way, the model can analyze how the residuals from the individual UGARCHs are correlated and describe their dynamic correlation. Finally, we fit the full DCC-GARCH model and estimate how the correlations between the exchange rates evolve over time.
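Below is a condensed sketch of this pipeline. It assumes the UGARCH and DCCGARCH class names and the spec/fit interface shown in the mvgarch documentation (exact names and arguments may vary across versions), plus a hypothetical fx_returns.csv file of daily returns:

```python
import pandas as pd
from mvgarch.ugarch import UGARCH
from mvgarch.mgarch import DCCGARCH

# hypothetical file of daily log returns, one column per currency pair
returns = pd.read_csv("fx_returns.csv", index_col=0, parse_dates=True)

# step 1: fit a separate GARCH(1,1) to each pair's returns
ugarchs = []
for pair in returns.columns:
    ug = UGARCH(order=(1, 1))
    ug.spec(returns=returns[pair])   # attach the single return series
    ug.fit()
    ugarchs.append(ug)

# step 2: feed the fitted univariate models and the full return set to the DCC layer
dcc = DCCGARCH()
dcc.spec(ugarch_objs=ugarchs, returns=returns)
dcc.fit()   # estimates the dynamic correlations between the residuals
```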


The following models compare two important risk and dependency measures in financial time series: historical (rolling) and conditional (model-based, e.g., DCC-GARCH) correlations and volatilities for major currency pairs. The analysis spans from 2015 to 2025 and focuses on EUR/USD, USD/JPY, and EUR/GBP returns. The algorithm calculates and visualizes correlations between asset returns:

  • Historical correlation: This is computed using a rolling window (e.g., 20 days), representing the simple trailing correlation between two assets.
  • Conditional correlation: This reflects time-varying correlations that adapt to market conditions.

For each currency pair, the two correlations are plotted against each other to track their evolution over time. The analysis then turns to volatility, quantifying the following:

  • Historical volatility: This is calculated as the rolling standard deviation of returns.
  • Conditional volatility: This captures time-varying risk.

Each asset's historical and conditional volatility series are plotted together.
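A sketch of how this comparison can be assembled with pandas is shown below; here returns is a DataFrame of daily returns and cond_vol is a placeholder for the conditional volatility extracted from the fitted DCC-GARCH model (both names are ours):

```python
import pandas as pd

WINDOW = 20  # trading days, matching the rolling window above

# returns: DataFrame of daily returns with columns like 'EURUSD', 'USDJPY'
hist_vol = returns.rolling(WINDOW).std()                               # historical volatility
hist_corr = returns["EURUSD"].rolling(WINDOW).corr(returns["USDJPY"])  # historical correlation

# cond_vol: conditional volatility series from the fitted DCC-GARCH model
ax = hist_vol["EURUSD"].plot(label="historical (20d rolling)")
cond_vol["EURUSD"].plot(ax=ax, label="conditional (DCC-GARCH)")
ax.legend()
```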



Graphical Results Interpretation


Correlation


Figure 1: Conditional vs Historical Correlation between: EUR/USD, USD/JPY, EUR/GBP daily returns.


Figure 1 shows that the DCC-MGARCH model performs well in capturing the direction of dynamic correlation between currency pairs, but it systematically underestimates the magnitude of correlation spikes observed in historical data. Indeed, the model identifies correctly whether correlations strengthen or weaken over time but falls short in fully reflecting the intensity of sudden shifts.


This gives rise to two pitfalls in the application of the model. First, the model leads to a consistent underestimation (overestimation) of the risk of an FX position, as it forecasts a correlation level consistently lower (higher) than realized. Second, this bias becomes more persistent during high correlation events. Indeed, the model fails in capturing correlation clustering – the empirical tendency for correlations to remain persistently high or low over extended periods. To enhance the responsiveness of the model, we might consider switching from a DCC to an Asymmetric DCC (ADCC) framework. A standard DCC treats positive and negative shocks symmetrically, while an ADCC model allows negative shocks to have a stronger effect on correlations than positive shocks. This captures empirical evidence observed in financial markets – negative returns (bad news) tend to increase correlations between assets more than positive returns (good news) of the same size.
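For reference, in its scalar form the ADCC recursion extends the \( Q_t \) equation of the previous section with an extra term for negative shocks (following Cappiello, Engle and Sheppard, 2006):

\[ Q_t = (1 - \alpha - \beta) Q_0 - \gamma \bar{N} + \alpha \, z_{t-1} z_{t-1}^T + \beta \, Q_{t-1} + \gamma \, n_{t-1} n_{t-1}^T \]

where \( n_t = \min(z_t, 0) \) collects only the negative standardized residuals and \( \bar{N} = E[n_t n_t^T] \); a positive \( \gamma \) means bad news raises correlations more than good news of the same size.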


Volatility

Figure 2: Conditional vs Historical Volatility of the currency pairs: EUR/USD, USD/JPY, EUR/GBP daily returns (2015 – 2025).


Figure 2 shows that the DCC-MGARCH model fits the observed volatility patterns of the EUR/USD, USD/JPY, and EUR/GBP currency pairs well overall. First, the model captures volatility clustering, whereby high-volatility events tend to be followed by persistently high volatility and, similarly, low-volatility events tend to be followed by persistently low volatility. Second, the model demonstrates a high degree of responsiveness and timeliness: conditional volatility estimates adjust rapidly in response to new market information, closely mirroring the timing of realized volatility spikes.


Representation

Here, we have a 3D visualization of standardized residuals for three major currency pairs (EUR/USD, USD/JPY, EUR/GBP), along with their marginal probability density functions (PDFs) estimated via Kernel Density Estimation (KDE).

First, the standardized residuals are extracted from the model fit and stacked for a joint representation of the three currency pairs. The code then uses a Gaussian kernel density estimate to approximate the joint density of the three residual series. The density values are used to color the scatter plot, highlighting regions of higher or lower joint probability.

Separate KDEs are then computed for each residual series to estimate their marginal distributions. These are plotted as colored lines on the corresponding axes of the 3D plot.

Clustering around the origin and well-behaved marginal KDEs suggest that the residuals may be approximately Gaussian and independent, as desired after proper model fitting. Additionally, points far from the center (with low-density coloring) may indicate outliers or periods of market stress.

Lastly, we analyzed whether the normal distribution is a plausible approximation for the residuals or whether an alternative distribution would be a better representation. Calculating the excess kurtosis, also reported in the legend of the graph, we find values that are not compatible with a normal distribution. The graph below shows how the plot looks when constructed with a Student's t distribution instead, which accounts for the fatter tails of the distribution. A real-life implementation of the strategy should account for these deviations.
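A sketch of this check with scipy is below; resid stands for the standardized residuals of one pair (here replaced by simulated fat-tailed data so the snippet runs on its own). Note that scipy's kurtosis uses the Fisher definition, so a normal distribution gives 0:

```python
import numpy as np
from scipy import stats

# resid: standardized residuals of one currency pair from the DCC fit.
# As a runnable stand-in, we simulate fat-tailed data here:
resid = np.random.default_rng(2).standard_t(df=5, size=2500)

excess_kurt = stats.kurtosis(resid)      # Fisher definition: 0 under normality
print(f"excess kurtosis: {excess_kurt:.2f}")

# fit a Student's t as an alternative; a low df estimate signals fat tails
df, loc, scale = stats.t.fit(resid)
print(f"fitted t degrees of freedom: {df:.1f}")
```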



Statistical Arbitrage


In the early 2000s, the first forms of statistical arbitrage started to be implemented. Over the following years, a consistent academic literature documented how hedge funds performed various types of statistical analysis to harvest market inefficiencies arising from imperfect information, overreaction, and computational underperformance; see, for example, Statistical Arbitrage Pairs Trading Strategies by C. Krauss (2016) and Statistical Arbitrage Pairs Trading with High-Frequency Data by J. Stubinger (2017). Below, we give an overview of the principle behind statistical arbitrage and reproduce a simulation of what funds trading these strategies were doing in the early 2000s.

As shown in the previous section, conditional volatilities often deviate from realized ones, and the idea is to profit from these discrepancies assuming there will be a convergence effect. In the chart, every point where the strategy predicts that volatility should be higher than what the market is showing is marked in green, and every point where it should be lower is marked in red.


We discuss here a statistical arbitrage strategy to capitalize on the convergence between conditional and realized volatility. The strategy involves setting up a long volatility position when realized volatility is significantly below conditional volatility, and a short volatility position in the opposite case. We measure the distance between conditional and realized volatility through z-scores, defined as:

\[ z = \frac{\sigma_{\text{realized}} - \sigma_{\text{conditional}}}{\text{std}(\sigma_{\text{realized}}- \sigma_{\text{conditional}})} \]


Entry points of the strategy are defined as follows:

  • Z-score < −2: open a long volatility position on the currency pair.
  • Z-score > 2: open a short volatility position on the currency pair.
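A minimal sketch of this signal generation in pandas follows; realized_vol and cond_vol would come from the rolling window and the DCC fit, and the simulated stand-ins (names are ours) only keep the sketch runnable:

```python
import numpy as np
import pandas as pd

# simulated stand-ins for realized and conditional volatility of one pair
idx = pd.date_range("2015-01-01", periods=500, freq="B")
rng = np.random.default_rng(3)
cond_vol = pd.Series(0.006 + 0.001 * rng.standard_normal(500), index=idx)
realized_vol = cond_vol + 0.0005 * rng.standard_normal(500)

spread = realized_vol - cond_vol
z = spread / spread.std()

# +1 = long volatility (long straddle), -1 = short volatility (short straddle)
signal = pd.Series(np.where(z < -2, 1, np.where(z > 2, -1, 0)), index=idx)
print(signal.value_counts())
```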

The most efficient way to trade volatility is through options. We will briefly introduce how volatility can be traded without being exposed to the market's directional movements. To measure how option prices react to various variables, we use the Greeks, i.e. the derivatives of the option price with respect to those variables. To derive each Greek, we start from the Black-Scholes price of a European call:

\[ V(S, t) = S N(d_1) - K e^{-r(T-t)} N(d_2) \]

Where:

\[ d_1 = \frac{\ln(S/K) + (r + \sigma^2/2)(T - t)}{\sigma \sqrt{T - t}} \quad d_2 = d_1 - \sigma \sqrt{T - t} \]

Our first objective with the strategy is to remain neutral to directional movements of the market price (in this case, the currency pair). To measure how an option price varies with the underlying price, we use Delta, computed as:

\[ \Delta = \frac{\partial V}{\partial S} \]

We find that:

\[ \Delta = N(d_1) \]

This is the delta of a call option; for a put option, it is \( N(d_1) - 1 \). Since for an at-the-money option \( N(d_1) \approx 0.5 \), buying both a put and a call gives a position that is approximately delta neutral.


This is not a static process. In fact, delta changes relative to the underlying price, and because this is a cumulative distribution function, it allows us to say that \( 0 < N(d_1) < 1 \), and that:

\[ \lim_{S \to \infty} N(d_1) = 1 \quad \text{and} \quad \lim_{S \to 0} N(d_1) = 0 \]

We also know that at-the-money (ATM), when the strike price matches the underlying price, we are near the center of the normal distribution, so the call delta is approximately 0.5 (and the put delta approximately −0.5).

Using the most basic volatility strategy, we enter a Long Straddle if we predict volatility is going to be higher than the market is currently pricing, and a Short Straddle if the model signals volatility is going to be lower than expected. A long straddle involves buying a Long Call and a Long Put ATM.


Because, as we said before, when both options are ATM their deltas approximately cancel out, initially we simply buy a Call and a Put.

Now, if the currency pair goes up, the Call option will be ITM (In The Money) and its delta will move closer to one, while the Put option will be OTM (Out of The Money) and its delta will move from −0.5 closer to 0. Therefore, our delta position will no longer be perfectly offset, and we will have to hedge it manually by trading the underlying. Imagining the call delta rose to 0.6 and the put delta to −0.4, the net delta would be +0.2, so we would have to sell 20% of a lot of the currency pair.


The reason why this strategy is long volatility, apart from the payoff profile shown above (we profit when large movements of the underlying price take place), is that we are long both options. The first-order volatility Greek, Vega, has a sign determined by the position taken on the derivative, whether long or short. In fact, we can prove that for both a call and a put, Vega is equal to:

\[ \text{Vega} = S \phi(d_1) \sqrt{T - t} = K e^{-r(T - t)} \phi(d_2) \sqrt{T - t} \]

Beyond the formula itself, this shows that for this Greek the two option positions do not offset each other but rather add up. On a Long Straddle, the resulting position will be Vega positive, and on a Short Straddle, Vega will be negative.


The hedging dynamic has to be adjusted to minimize the cost of hedging, which is driven by the number of transactions needed to stay delta neutral: the bigger the changes in delta, the higher our transaction costs. An estimate of these costs is given by Gamma, the second-order derivative of the option price with respect to the underlying, which reaches its peak when an option is ATM (At The Money). Since, like Vega, this Greek has the same value for calls and puts, both legs contribute to it, making the cost of hedging highest right when the ATM position is entered.
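A compact sketch of these Greeks under standard Black-Scholes assumptions follows (the function name and the parameter values are ours, chosen only for illustration); it also reproduces the point that an ATM straddle is roughly delta neutral while Vega and Gamma add up:

```python
import numpy as np
from scipy.stats import norm

def bs_greeks(S, K, r, sigma, tau):
    """Black-Scholes delta (call/put), vega and gamma for time to expiry tau."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    delta_call = norm.cdf(d1)
    delta_put = delta_call - 1.0               # put delta = N(d1) - 1
    vega = S * norm.pdf(d1) * np.sqrt(tau)     # same for calls and puts
    gamma = norm.pdf(d1) / (S * sigma * np.sqrt(tau))
    return delta_call, delta_put, vega, gamma

# ATM straddle on a currency pair: deltas roughly cancel, vega and gamma sum
dc, dp, vega, gamma = bs_greeks(S=1.10, K=1.10, r=0.02, sigma=0.08, tau=0.25)
print(f"straddle delta {dc + dp:+.2f}, vega {2 * vega:.3f}, gamma {2 * gamma:.2f}")
```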



Conclusion


We have discussed the mathematical framework behind the construction of the model and the practical applications of a DCC-MGARCH framework to model conditional volatilities and correlations of currency pairs. The DCC GARCH model provides a sound structure for modeling the joint behavior of several time series, capturing both the fluctuating volatilities of individual assets and their evolving correlations. To summarize, in the first step, the univariate GARCH model is employed to estimate the volatility of each asset based on its previous values. In the second step, a time-varying correlation matrix is built from the residual errors of these univariate models using a GARCH-type structure. It follows that the conditional covariance matrix incorporates the individual volatilities and dynamic correlations, thus guaranteeing a positive semi-definite and economically meaningful representation of market risk.


We have applied the model to the pairs: EUR/USD, GBP/USD, and USD/JPY. First, the results indicate a robust fit to historical volatility patterns, allowing us to introduce a statistical arbitrage strategy to capitalize on the convergence of conditional and realized volatility. The strategy involves buying an ATM straddle when realized volatility is two standard deviations below its conditional estimate and selling an ATM straddle in the opposite case. Within this context, we successfully harvested micro-inefficiencies as trading signals for L/S volatility strategies through options trading.

Second, the results show a weak fit to correlation dynamics. The model captures correctly the direction of dynamic correlation between currency pairs, but it systematically underestimates the magnitude of correlation spikes observed in historical data and fails to capture correlation clustering. This leads to a consistent underestimation (overestimation) of the risk of an FX position, as the model forecasts a correlation level consistently lower (higher) than realized – especially during high correlation events.

Third, the residuals of the model are not compatible with a normal distribution. Excess kurtosis is 1.79 for EUR/USD, 2.98 for USD/JPY, and 2.96 for EUR/GBP, suggesting that a Student's t distribution would be a closer fit to the leptokurtosis of the residuals.

The model could be adjusted to account for some discrepancies between predicted and realized data. First, we consider switching from a DCC to an Asymmetric DCC (ADCC) framework to better capture correlation dynamics. An ADCC model allows negative shocks to have a stronger effect on correlations than positive shocks, reflecting the “leverage effect” observed in financial markets. Second, further improvement would be achieved by using a Student's t distribution to account for heavier tails, given the kurtosis results on the distribution of residuals.




Written by Giulio Losano, Nem Giuseppe Marra, Tommaso Delfino, Gauri Gupta and Boaz Lister

Contact us at [email protected]
Made by Bocconi Students Capital Markets