Monday, 18 May 2015

Marginal and Component Value-at-Risk: A Python Example

Value-at-risk (VaR), despite its drawbacks, is a solid basis for understanding the risk characteristics of a portfolio. There are several approaches to calculating VaR (historical simulation, variance-covariance, Monte Carlo simulation). Marginal VaR is defined as the additional risk that a new position adds to the portfolio. The practical reading of marginal VaR, as nicely stated on Quant at risk http://www.quantatrisk.com/2015/01/18/applied-portfolio-value-at-risk-decomposition-1-marginal-and-component-var/, is: the higher the marginal VaR of an asset, the more the exposure to that asset should be reduced to lower the portfolio VaR. Component VaR shows the reduction in portfolio value-at-risk that results from removing a position. The sum of the component VaRs of the shares in the portfolio equals the diversified portfolio VaR, while the sum of the individual VaRs gives the undiversified portfolio VaR, which assumes perfect correlation between the assets in the portfolio.
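
In the variance-covariance setting used below, these quantities can be summarized as follows (in the notation of the code: alpha = norm.ppf(CI), V the portfolio value in USD, w_i the weight of share i, sigma_i its volatility, sigma_P the portfolio volatility and R_P the portfolio return):

Portfolio VaR = alpha * sigma_P * V
Individual VaR_i = alpha * sigma_i * w_i * V
Marginal VaR_i = alpha * cov(R_i, R_P) / sigma_P (per dollar of exposure to share i)
Component VaR_i = w_i * beta_i * Portfolio VaR, where beta_i = cov(R_i, R_P) / sigma_P^2

Summing the component VaRs over i gives exactly the (diversified) portfolio VaR, while summing the individual VaRs gives the undiversified VaR.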

It is also worth noting that, with some simple algebra, the equations can be rearranged, i.e. there are several equivalent ways to calculate the component VaR.
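
For example, since beta_i = rho_i,P * sigma_i / sigma_P (with rho_i,P the correlation of share i with the portfolio), the component VaR can equivalently be written as the individual VaR scaled by that correlation:

Component VaR_i = Individual VaR_i * rho_i,P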

We take the following steps in Python to come up with the marginal and component VaR of a three-asset portfolio:
(The code works for Python 2.7; for Python 3.4, change the print statements. Additionally, some formatting could be applied to a few of the lines - for instance, the returns are not presented as percentages.)

import numpy as np
import pandas as pd
import pandas.io.data as web
from scipy.stats import norm

Value=1e6 # $1,000,000
CI=0.99 # set the confidence interval

tickers =['AAPL', 'MSFT', 'YHOO']
numbers=len(tickers)

data=pd.DataFrame()
for share in tickers:
    data[share]=web.DataReader(share, data_source='yahoo', start='2011-01-01', end='2015-05-15')['Adj Close']
data.columns=tickers

ret=data/data.shift(1)-1 # calculate the simple returns
ret.mean()*252 #annualize the returns
covariances=ret.cov()*252 #gives the annualized covariance of returns
variances=np.diag(covariances) #extracts variances of the individual shares from covariance matrix
volatility=np.sqrt(variances) #gives standard deviation


weights=np.random.random(numbers)
weights/=np.sum(weights) # simulate random exposure weights that sum to 1; to plug in our own weights use: weights=np.array([xx,xx,xx])

Pf_ret=np.sum(ret.mean()*weights)*252 #Portfolio return

Pf_variance=np.dot(weights.T,np.dot(ret.cov()*252,weights)) #Portfolio variance
Pf_volatility=np.sqrt(Pf_variance) #Portfolio standard deviation

USDvariance=np.square(Value)*Pf_variance # portfolio variance in USD terms
USDvolatility=np.sqrt(USDvariance) # portfolio standard deviation in USD terms

covariance_asset_portfolio=np.dot(weights.T,covariances) # covariance of each share with the portfolio
covUSD=np.multiply(covariance_asset_portfolio,Value) # the same covariances scaled to USD exposure
beta=np.divide(covariance_asset_portfolio,Pf_variance) # beta of each share with respect to the portfolio

def VaR():
    # this code calculates Portfolio Value-at-risk (Pf_VaR) in USD-terms and Individual Value-at-risk (IndividualVaR) of shares in portfolio.  
    Pf_VaR=norm.ppf(CI)*USDvolatility
    IndividualVaR=np.multiply(volatility,Value*weights)*norm.ppf(CI)
    IndividualVaR = [ '$%.2f' % elem for elem in IndividualVaR ]
    print 'Portfolio VaR: ', '$%0.2f' %Pf_VaR
    print 'Individual VaR: ',[[tickers[i], IndividualVaR[i]] for i in range (min(len(tickers), len(IndividualVaR)))]

VaR() #call the function to get portfolio VaR and Individual VaR in USD-terms

def marginal_component_VaR():
     # this code calculates Marginal Value-at-risk in USD-terms and Component Value-at-risk of shares in portfolio. 
    marginalVaR=np.divide(covUSD,USDvolatility)*norm.ppf(CI)
    componentVaR=np.multiply(weights,beta)*USDvolatility*norm.ppf(CI)
    marginalVaR = [ '%.3f' % elem for elem in marginalVaR ]
    componentVaR=[ '$%.2f' % elem for elem in componentVaR ]
    print 'Marginal VaR:', [[tickers[i], marginalVaR[i]] for i in range (min(len(tickers), len(marginalVaR)))]
    print 'Component VaR: ', [[tickers[i], componentVaR[i]] for i in range (min(len(tickers), len(componentVaR)))]

marginal_component_VaR() # call the function
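
As a quick sanity check of the relationships above, here is a small optional sketch (the *_check names are new here; everything else reuses variables already defined in the script): the component VaRs should add up to the diversified portfolio VaR, while the individual VaRs add up to the larger, undiversified figure.

componentVaR_check=np.multiply(weights,beta)*USDvolatility*norm.ppf(CI) # numeric component VaR per share
individualVaR_check=np.multiply(volatility,Value*weights)*norm.ppf(CI) # numeric individual VaR per share
print 'Sum of component VaR: ', '$%0.2f' %np.sum(componentVaR_check) # equals the portfolio VaR
print 'Portfolio VaR: ', '$%0.2f' %(norm.ppf(CI)*USDvolatility)
print 'Sum of individual VaR: ', '$%0.2f' %np.sum(individualVaR_check) # the undiversified VaR, >= portfolio VaR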

13 comments:

  1. Hi, first of all, please excuse my poor English.
    Some questions:
    1 - Why are correlations between shares not included? Should they be included? (I don't know.)

    2 - Could you please explain this command line: variances=np.diag(covariances)? Why are the variances of the shares the diagonal of the covariance matrix?

    3 - If I plan to use this Python code to get the portfolio VaR for my company, do I need to change any steps, apart from those that deal with loading the data?

    thank you so much!

    1. Hi, sorry for my late reply! So:

      (1) Correlations and covariances are closely related concepts in probability theory: the correlation can be calculated from the covariances (correlation i,j = covariance i,j / (standard deviation i X standard deviation j)). I basically decided to use covariances (so as not to make the code too heavy), but correlations can also be added.

      (2) Regarding the command on variances: I used the diagonal to extract the variances, since in the covariance matrix the diagonal holds the covariance of each company with itself, which is exactly the variance of that company. The covariance matrix is also known as the variance-covariance matrix. (As a reminder, the standard deviation is the square root of the variance.)
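
      A minimal sketch of both points, assuming ret is the returns DataFrame from the post:

      stdevs=np.sqrt(np.diag(ret.cov())) # the variances sit on the diagonal of the covariance matrix
      correlations=ret.cov()/np.outer(stdevs,stdevs) # correlation i,j = covariance i,j / (stdev i * stdev j)
      # the same matrix can also be obtained directly with ret.corr()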

      (3) You can use the code with no changes (except of course the portfolio value and confidence interval).

      Finally, here https://www.bionicturtle.com/forum/threads/component-versus-incremental-value-at-risk-var-level-2.4961/ is an Excel file that may help a lot. Also note there are many ways to approach the problem (covariance and correlation can be substituted for each other).

      Hope this helps! Best regards!

    2. Hi, thanks for this... is this procedure what is known as parametric VaR?

      thanks much for the effort on this :-)

    3. Hi! Yes, this is based on parametric (variance-covariance) VaR.

    4. Thanks so much Elena. It was very helpful!!!

  2. Thanks for the reply, your work has helped me quite a bit :-)

    I have one other question, it is very common to apply a decay factor (for example, 97%) where the most recent data is given a higher weight than further out data. Can I ask, how would you implement this factor in your model? Any ideas?

    Thanks again for the great work!!!!
    John

    1. Hi John, thank you for the comment!
      On the decay factor - yes, I agree it is a good idea. I'll modify the code to take into account the decay.

      Best regards,
      Elena

  3. Then you're a lot smarter than I am... I tried and just could not figure out how to code it :-(

    thanks again

  4. Hi Elena,
    Not sure if you had a look at the decay factor, but any pointers you could give would be greatly appreciated. I am trying to reproduce the decay process and just not finding a pythonic way around it. I have been pulling my hair out on this...

    1. Hi John, I did not have enough time to focus on the issue, but here is something that may serve as a starting point: namely the RiskMetrics approach for variances. Variance = decay factor X prior-period variance + (1 - decay factor) X prior-period squared return.

      Here is a quick piece of code, where the first variance is the variance calculated from all returns, and each subsequent variance is calculated from the formula above:

      def riskmetrics_variance(decay):
          r=ret.dropna() # drop the first NaN row created by the shift in the returns calculation
          rm_var=[]
          for i in range(len(r)):
              if i==0:
                  rm_var.append(r.var()) # start from the full-sample variance
              else:
                  # decay * prior variance + (1 - decay) * prior squared return
                  rm_var.append(rm_var[-1]*decay+(1-decay)*r.iloc[i-1]**2)
          print rm_var


      Also, have you looked at pandas.ewma (http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.ewma.html)? There you can calculate the "center of mass" parameter from the decay factor, i.e. center of mass = decay/(1 - decay), since pandas uses alpha = 1/(1 + com). The formula should be something like: ewma = pd.ewma(price, com=a/(1.0-a), adjust=False), where a is the decay factor.
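
      For the variance of the returns, a rough sketch with that syntax (a again being the decay factor) could be:

      ewma_variance=pd.ewma(ret.dropna()**2, com=a/(1.0-a), adjust=False) # exponentially weighted variance of each share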

      Hope this helps a bit!

      Best,
      Elena

  5. Hi Elena,
    Great post, how do you adjust your code for monthly data rather than daily? I guess when you annualise the returns you effectively make the Portfolio VaR number an annual amount rather than daily?
