Wednesday, 3 June 2015

Moving Average Crossover in Python

Developing a successful algorithmic trading strategy is a challenging task, and not just because it is difficult to predict price movements from past behavior (something most of us have been taught, and probably learned, in day-to-day life). The underlying idea behind algo trading is to exploit market patterns. But market patterns change, so reacting quickly is crucial.
I have recently been working on some algo trading code in Python (Python 2.7) and ended up with a simple moving average crossover script (15-day and 50-day moving averages). The example uses Microsoft shares (ticker: MSFT). The moving average crossover signal is: (i) buy when the 15-day moving average crosses above the 50-day moving average and (ii) sell when the 15-day moving average crosses below the 50-day moving average.

Inspired by Aaron Ash’s post “Fun With Python” (http://aaronash.com.au/posts/fun_with_python/), here is what I get:

import pandas as pd
from pandas import *
import StringIO
import urllib
import datetime
import matplotlib.pyplot as plt

%matplotlib inline

def get_data(ticker):
    base_url = 'http://ichart.finance.yahoo.com/table.csv?'
    search_query = 's={}'.format(ticker)
    search_url = '{}{}'.format(base_url, search_query)
    read_url=urllib.urlopen(search_url).read()
    return read_url

# for Python 3.4: read_url=urllib.request.urlopen(search_url).read(); for Python 2.7: read_url=urllib.urlopen(search_url).read()

if __name__ == '__main__':
    ticker = 'MSFT' #select ticker

data=get_data(ticker)

lines = data.split('\n')   # the first element is the header; price rows come newest-first
lines.reverse()            # now the (empty) last line comes first, then oldest-to-newest rows, with the header at the end
lines[0] = lines[-1]       # put the header back at the top (it overwrites the blank line)
cleaned_data = "\n".join(lines[0:-1])   # and drop the header copy from the bottom
data = read_csv(StringIO.StringIO(cleaned_data))
data.tail() # check what the data looks like

# calculate the 15-day and 50-day simple moving average (MA)
data['MA_15'] = rolling_mean(data.Close, 15)
data['MA_50'] = rolling_mean(data.Close, 50)
previous_15 = data['MA_15'].shift(1)
previous_50 = data['MA_50'].shift(1)

crossing = (((data['MA_15'] <= data['MA_50']) & (previous_15 >= previous_50))
| ((data['MA_15'] >= data['MA_50']) & (previous_15 <= previous_50)))
crossing_dates = data.loc[crossing, 'Date']
print(crossing_dates)
# see where the crossing points are - these trigger the buy/sell actions

final=DataFrame(data, columns=['Date', 'Close', 'MA_15', 'MA_50'])
final['Date'] = pd.to_datetime(final['Date'])
for_chart=final[final['Date'] >= datetime.date(2013,1,1)]  #to see the results since Jan 1, 2013

# plot the results since Jan 1, 2013
ax = for_chart.plot('Date', 'Close')
for_chart.plot('Date', 'MA_15', ax=ax)   # plot all three series on the same axes
for_chart.plot('Date', 'MA_50', ax=ax)
plt.legend(['Close', 'MA_15', 'MA_50'], loc='upper left')

To see the buy or sell triggers at the crossover dates:

crossing_dates=pd.DataFrame(crossing_dates, columns=['Date'])
final=pd.DataFrame(data, columns=['Date', 'Close', 'MA_15', 'MA_50'])
result=crossing_dates.merge(final, on='Date')
result

result['Strategy'] = 0
condition = result['MA_15'] > result['MA_50']
result.loc[condition, 'Strategy'] = 'Buy'
result.loc[~condition, 'Strategy'] = 'Sell'
result
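As a side note, rolling_mean belongs to the old pandas API and has since been replaced by the rolling() method. A minimal sketch of the same crossover logic with the newer interface, assuming a DataFrame data with 'Date' and 'Close' columns (oldest row first) is already loaded:

import numpy as np

data['MA_15'] = data['Close'].rolling(15).mean()
data['MA_50'] = data['Close'].rolling(50).mean()
previous_15 = data['MA_15'].shift(1)
previous_50 = data['MA_50'].shift(1)

# a cross happens when the ordering of the two averages flips between two consecutive days
crossing = (((data['MA_15'] > data['MA_50']) & (previous_15 <= previous_50))
            | ((data['MA_15'] < data['MA_50']) & (previous_15 >= previous_50)))
data['Strategy'] = np.where(data['MA_15'] > data['MA_50'], 'Buy', 'Sell')
print(data.loc[crossing, ['Date', 'Close', 'MA_15', 'MA_50', 'Strategy']])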


Monday, 18 May 2015

Marginal and Component Value-at-Risk: A Python Example

Value-at-risk (VaR), despite its drawbacks, is a solid basis for understanding the risk characteristics of a portfolio. There are many approaches to calculating VaR (historical simulation, variance-covariance, Monte Carlo simulation). Marginal VaR is defined as the additional risk that a new position adds to the portfolio. The practical reading of marginal VaR, as nicely stated at Quant at Risk (http://www.quantatrisk.com/2015/01/18/applied-portfolio-value-at-risk-decomposition-1-marginal-and-component-var/), is: the higher the marginal VaR of an asset, the more its exposure should be reduced to lower the portfolio VaR. Component VaR shows the reduction in portfolio value-at-risk that results from removing a position. The component VaRs of the shares in the portfolio sum to the diversified portfolio VaR, while the individual VaRs sum to the undiversified portfolio VaR, which assumes perfect correlation between the assets in the portfolio.

It is also worth noting that the equations can be rearranged with simple algebra, i.e. there are several equivalent ways to calculate the component VaR.
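For reference, these are the standard decomposition identities that the code below implements (plain notation, with z the standard normal quantile of the chosen confidence level, sigma_p the portfolio volatility and beta_i = cov(r_i, r_p) / sigma_p^2):

Portfolio VaR = z * sigma_p * Portfolio value
Marginal VaR_i = z * cov(r_i, r_p) / sigma_p
Component VaR_i = w_i * beta_i * Portfolio VaR = Marginal VaR_i * (w_i * Portfolio value)

Since the weighted betas sum to one, the component VaRs add up to the portfolio VaR.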

We take the following steps in Python to come up with the marginal VaR of a 3-asset portfolio:
(The code below is written for Python 2.7; for Python 3.4 the print statements need to be changed to function calls. Additionally, some extra formatting could be applied to a few of the lines - for instance, returns are not presented as percentages.)

import numpy as np
import pandas as pd
import pandas.io.data as web
from scipy.stats import norm

Value=1e6 # $1,000,000
CI=0.99 # set the confidence interval

tickers =['AAPL', 'MSFT', 'YHOO']
numbers=len(tickers)

data=pd.DataFrame()
for share in tickers:
    data[share]=web.DataReader(share, data_source='yahoo', start='2011-01-01', end='2015-05-15')['Adj Close']
data.columns=tickers

ret=data/data.shift(1)-1 # calculate the simple returns
ret.mean()*252 #annualize the returns
covariances=ret.cov()*252 #gives the annualized covariance of returns
variances=np.diag(covariances) #extracts variances of the individual shares from covariance matrix
volatility=np.sqrt(variances) #gives standard deviation


weights=np.random.random(numbers)
weights/=np.sum(weights) # simulate random exposure weights that sum to 1; to plug in our own weights use: weights=np.array([xx,xx,xx])

Pf_ret=np.sum(ret.mean()*weights)*252 #Portfolio return

Pf_variance=np.dot(weights.T,np.dot(ret.cov()*252,weights)) #Portfolio variance
Pf_volatility=np.sqrt(Pf_variance) #Portfolio standard deviation

USDvariance=np.square(Value)*Pf_variance
USDvolatility=np.sqrt(USDvariance)

covariance_asset_portfolio=np.dot(weights.T,covariances)
covUSD=np.multiply(covariance_asset_portfolio,Value)
beta=np.divide(covariance_asset_portfolio,Pf_variance)

def VaR():
    # this code calculates Portfolio Value-at-risk (Pf_VaR) in USD-terms and Individual Value-at-risk (IndividualVaR) of shares in portfolio.  
    Pf_VaR=norm.ppf(CI)*USDvolatility
    IndividualVaR=np.multiply(volatility,Value*weights)*norm.ppf(CI)
    IndividualVaR = [ '$%.2f' % elem for elem in IndividualVaR ]
    print 'Portfolio VaR: ', '$%0.2f' %Pf_VaR
    print 'Individual VaR: ',[[tickers[i], IndividualVaR[i]] for i in range (min(len(tickers), len(IndividualVaR)))]

VaR() #call the function to get portfolio VaR and Individual VaR in USD-terms

def marginal_component_VaR():
     # this code calculates Marginal Value-at-risk in USD-terms and Component Value-at-risk of shares in portfolio. 
    marginalVaR=np.divide(covUSD,USDvolatility)*norm.ppf(CI)
    componentVaR=np.multiply(weights,beta)*USDvolatility*norm.ppf(CI)
    marginalVaR = [ '%.3f' % elem for elem in marginalVaR ]
    componentVaR=[ '$%.2f' % elem for elem in componentVaR ]
    print 'Marginal VaR:', [[tickers[i], marginalVaR[i]] for i in range (min(len(tickers), len(marginalVaR)))]
    print 'Component VaR: ', [[tickers[i], componentVaR[i]] for i in range (min(len(tickers), len(componentVaR)))]

marginal_component_VaR() # call the function
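As a quick sanity check (continuing with the variables defined above), the component VaRs should add up to the diversified portfolio VaR, while the individual VaRs add up to the higher, undiversified figure:

def check_VaR_decomposition():
    # rough check of the decomposition identities, reusing the variables defined above
    portfolioVaR = norm.ppf(CI) * USDvolatility
    componentVaR = np.multiply(weights, beta) * USDvolatility * norm.ppf(CI)
    individualVaR = np.multiply(volatility, Value * weights) * norm.ppf(CI)
    print 'Portfolio (diversified) VaR: ', '$%0.2f' % portfolioVaR
    print 'Sum of component VaR: ', '$%0.2f' % np.sum(componentVaR)
    print 'Sum of individual VaR: ', '$%0.2f' % np.sum(individualVaR)

check_VaR_decomposition()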

Friday, 8 May 2015

How to Get High Frequency Stock Prices with Python (2): Netfonds Example

Getting high-frequency and even real-time stock price data (for US and European stocks in particular) with Python can be very simple indeed! Just use the http://hopey.netfonds.no site. Here is a simple piece of code for generating the URL (depending on ticker and date), reading the data and saving it. Historical intraday data is available going back several days (about 15 days).

The URL format for the trade data is (the example uses OMV shares, traded on the Vienna Stock Exchange): http://hopey.netfonds.no/tradedump.php?date=20150507&paper=E-OMVV.BTSE&csv_format=txt, while the URL format for the order-book depth is http://hopey.netfonds.no/posdump.php?date=20150507&paper=E-OMVV.BTSE&csv_format=txt. However, it seems that depth data is not available for US shares.

Here is the code and enjoy the data:


import urllib.request
# use urllib.request for Python 3.4 and urllib for Python 2.7; adjust the read_url line in get_data(date,ticker,exchange) below accordingly

def get_data(date,ticker,exchange):
    base_url = 'http://hopey.netfonds.no/tradedump.php?'
    search_query = 'date={}&paper={}.{}&csv_format=txt'.format(date,ticker,exchange)
    search_url = '{}{}'.format(base_url, search_query)
    read_url=urllib.request.urlopen(search_url).read()
    return read_url

# for Python 3.4: read_url=urllib.request.urlopen(search_url).read(); for Python 2.7: read_url=urllib.urlopen(search_url).read()

if __name__ == '__main__':
    ticker = 'AAPL' # US stocks use the plain ticker, for instance 'AAPL' is Apple and 'MSFT' is Microsoft; European shares use the 'E-XXXX' format, for instance 'E-BAYND' - Bayer, 'E-TKAV' - Telekom Austria, 'E-LLOYL' - Lloyds Banking Group
    date=20150507 # date format is YYYYMMDD
    exchange='O' # 'O' - US shares, 'BTSE' - for European shares


data=get_data(date,ticker,exchange)

with open ("trades.txt", "wb") as f:
    f.write(data)
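To work with the saved dump, it can be read straight back into pandas. A minimal sketch, assuming the txt dump is tab-separated with a header row (the same get_data code with posdump.php instead of tradedump.php should return the depth data):

import pandas as pd

trades = pd.read_csv('trades.txt', sep='\t')  # assumption: tab-separated with a header row
print(trades.head())   # inspect the first ticks of the day
print(len(trades))     # number of recorded trades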

Tuesday, 5 May 2015

Generating Correlated Random Numbers: Cholesky Decomposition

When it comes to generating correlated random numbers, most people treat the exercise as a black box, so let me add some colour to it. There are several approaches to generating correlated random numbers, of which two are mainly used, namely Cholesky decomposition and spectral (eigenvalue) decomposition. These decompositions are also used in copula estimation.

Cholesky decomposition takes the correlation (or covariance) matrix and, combined with independently generated random numbers, correlates them. The decomposition comes in two versions, so to speak: lower and upper triangular. In Python it can be done via the NumPy and SciPy linear algebra (linalg) modules:

(1) np.linalg.cholesky(A) # using the NumPy linear algebra module, and

(2) scipy.linalg.cholesky(A, lower=True) # using the SciPy linear algebra module; lower=True returns the lower triangular factor, lower=False the upper triangular one


I prefer to use the lower triangular matrix. I also prefer working with arrays instead of matrices.

Put into practice in Python, it looks like this (we work with 3 variables):

import numpy as np
from scipy.linalg import cholesky
from scipy.stats import norm

A=np.array([[1,0.1558, 0.2227], [0.1558,1,0.5118], [0.2227,0.5118,1]]) # set the correlation matrix
B= norm.rvs(size=(3, 10000)) # 10,000 simulations to generate 3 random normally distributed variables

Cholesky = cholesky(A, lower=True) # Cholesky decomposition - lower triangular matrix
result= np.dot(Cholesky, B) # generates correlated variables


correlation=np.corrcoef(result) #check the correlation of the simulated variables

By the way, checking whether the initial matrix is positive definite is also quite easy in Python:

from scipy.linalg import eigh
eigh(A) # the first array is eigenvalues and the second is eigenvectors

In our example, the correlation matrix is positive definite, meaning that all its eigenvalues are positive.  The eigenvalues of the above correlation matrix are: 0.4832, 0.8903, 1.6265.



Cholesky decomposition can also be used in the opposite case - to uncorrelate variables that are correlated.  
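A minimal sketch of that reverse step, reusing the names from the example above: multiplying by the inverse of the lower triangular factor strips out the imposed correlation structure.

import numpy as np

uncorrelated = np.dot(np.linalg.inv(Cholesky), result)  # undo the correlation imposed above
print(np.corrcoef(uncorrelated))  # off-diagonal entries should be close to zero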


Monday, 27 April 2015

How to Get High Frequency Stock Prices with Python: Prague Stock Exchange Example

From time to time, high-frequency stock price data is needed even for the less developed CEE stock markets. I needed high-frequency prices for CEZ (a company listed on the Prague Stock Exchange), so I wrote a brief piece of Python code to extract the data (the final part of the code saves it):

import urllib

def create_url(ticker):
    base_url = 'http://www.pse.cz/XML/ProduktKontinualJS.aspx?'
    search_query = 'cnpa={}'.format(ticker)
    search_url = '{}{}'.format(base_url, search_query)
    return search_url

def clean_url(url):
    # keep only the data part of the page (before the charting JavaScript) and strip the JS wrapper characters below
    cleaned=urllib.urlopen(url).read().split("function createOnlineChart(divName)")[0]
    for character in ['k:', 'o:', 'd:new Date', '{', '}', '\r\nvar chartDataOL =', '[', ']', '(', ')', ';', ' ']:
        if character in cleaned:
            cleaned=cleaned.replace(character,'')
    return cleaned

if __name__ == '__main__':
    ticker='4169' # '4169' CEZ, '6407' Erste Group Bank, '4171' Komercni Banka, '4174' O2, '6816' NWR, '4254' Philip Morris CR

url=create_url(ticker)
cleanurl=clean_url(url)
with open ("trades.txt", "w") as f:
    f.write(cleanurl)

Wednesday, 15 April 2015

Copula Application in Euro Area Credit Default Swaps

Summary 
In this research we presented the copula as a measure of dependence that is particularly useful for identifying tail dependence between variables. We tested different copula functions on the credit default swap (CDS) changes of selected euro area countries and reached the conclusion that the Student’s t copula best describes the dependence structure of the variables. The results support the view that the Gaussian copula is not a suitable tool despite its widespread use. The Student’s t copula has tail dependence and hence is more useful than the Gaussian copula (which has no tail dependence) for simulating events like joint defaults and stock market crashes. Another interesting result of the research is that in some cases certain types of Archimedean copulas provide the second-best fit to the data (implying that asymmetric tail dependence is also a reasonable description). We use the best-fitting copula to model the market risk of CDS.

Thursday, 8 January 2015

Principal Components Analysis of the Romanian Equity Market

Principal Components Analysis (PCA) is a dimension-reduction technique that aims at identifying commonalities among the initial set of variables and scaling them down to a few meaningful components that explain a certain amount of the variance. In other words, the underlying idea behind PCA is to extract the common features of the shares’ returns: instead of looking at all shares individually, PCA concentrates on the commonalities and extracts several components able to explain a certain amount of the variability of the whole data set. What remains unexplained by these factors is the specific risk.

PCA extracts the factors but does not name them, i.e. it only does part of the job. Nevertheless, it is useful for identifying the specific and systematic risks of the listed shares.

There are several key choices that make PCA quite a sensitive technique (although, as a bottom line, this is true of any technique that tries to identify key meaningful factors):

(1) Identifying the number of components – several stopping rules apply, such as the scree plot (a graphical analysis: the point at which the line levels out), Kaiser’s stopping rule (only components with an eigenvalue above 1.0 should be retained), etc.
(2) Defining the options for the PCA – namely, analysis of the correlation or the covariance matrix, and the rotation method. Generally, the covariance matrix is preferred when the variables are on similar scales (for instance, logarithmic stock returns) or have already been transformed, while the correlation matrix is used when the variables are on different scales (the correlation matrix standardizes the data). As for the rotation method, it seems more widely accepted to use direct oblimin, which allows the components to be correlated (a non-orthogonal solution); the alternative is varimax rotation (an orthogonal solution – components are not correlated). A small illustrative sketch of the extraction step in Python follows this list.
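Just to illustrate the extraction step in Python (this is not the SPSS-style workflow with KMO, Bartlett's test and oblimin rotation used below, and the input file name is hypothetical):

import pandas as pd
from sklearn.decomposition import PCA

# hypothetical input: daily log-returns, one column per listed share
returns = pd.read_csv('ro_log_returns.csv', index_col=0)

pca = PCA()                           # extract as many components as there are shares
pca.fit(returns.values)

print(pca.explained_variance_ratio_)  # share of variance explained per component (scree-plot input)
print(pca.explained_variance_)        # eigenvalues of the covariance matrix
# (standardize the returns first if Kaiser's rule on the correlation matrix is intended)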

The PCA of 14 Romanian listed shares for the period Sept 3, 2012 – Dec 31, 2014 (excluding several blue chips that arrived on the market after Sept 2012 – Romgaz, Electrica, Nuclearelectrica) is based on 581 daily log-returns for each company. Before conducting the PCA, two preliminary tests were considered – the Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO), which has a value of 0.892 (a value of 0.6 is considered the minimum threshold; the closer to 1.0 the better), and Bartlett's Test of Sphericity.


The results:
(1) Percentage of the variance of the return series of the 14 Romanian shares explained by the first component – 30.3%, and by the second component – 7.8%; so cumulatively the first two components explain as much as 38.1% of the variance of the returns. Together with the third component, the share of variance explained is 44.2%. The first component can be interpreted as a “market-wide” component.

(2) Proportion of the variance of each of the 14 Romanian shares that can be explained by the two components (defined as the sum of the squared factor loadings, i.e. the communality) – high for SIF3, SIF1 and Transelectrica, but rather low for the pharma shares – Biofarm (BIO) and Antibiotice (ATB) – and for Fondul Proprietatea (FP).

(3) How can the companies be grouped with respect to how they respond to the two components? The answer to this question is provided by the “pattern matrix”. In the first group we have: Transelectrica, Bucharest Stock Exchange, BRD, Banca Transilvania, Transgaz and Petrom. Almost all the large caps fall into one group (intuitive, isn’t it? – but it also means that, if the first component is the general market, we cannot count on having safe havens or truly defensive stocks among them). The second group is also interesting – all the SIFs (negative loadings on the second component). The SIFs have low loadings on the first (market) component, so there is an opportunity for diversification between the two groups of stocks identified. The correlation between the two components is -0.519.


This publication is for information purposes only and should not be construed as a solicitation or an offer to buy or sell any securities.

Wednesday, 31 December 2014

Chebyshev’s Inequality and Tail Risk


“The sacred geometry of chance
The hidden law of a probable outcome”
Sting, “Shape of My Heart”

Tail risk is commonly defined as the probability of rare events (technically speaking, tail risk is the risk of an asset moving more than three standard deviations away from its average). From a graphical presentation of the returns of most financial assets it is easy to see that the tails of the distribution are fatter than those of the normal distribution. Earlier posts in this blog looked both at the daily returns of certain CEE indices and at the standardized returns (the number of standard deviations from the average return) in an effort to extract more value-added information from the data available.

The assumptions about the underlying distribution predetermine the results, and this is generally a problem for analysts (for instance, VaR under the normal distribution assumption has significant flaws). In some previous posts on this blog, two Extreme Value Theory approaches were applied to the Romanian and Bulgarian stock exchanges. For the sake of completeness, another measure should be presented as well – Chebyshev’s inequality. One advantage of Chebyshev’s inequality is that it is valid for any distribution. The drawback is that it is too general and can give too high a probability for extreme values. So, whereas the normal distribution underestimates the probabilities of extreme values, Chebyshev’s inequality overestimates them.


For a random variable X with mean μ and standard deviation σ, Chebyshev’s inequality states that:

P(|X − μ| ≥ tσ) ≤ 1/t^2, for any t > 0.

Stated in an equivalent form:

P(|X − μ| < tσ) ≥ 1 − 1/t^2, which is informative only when t is greater than 1.

Chebyshev’s inequality says that no more than 1/t^2 of the values can be more than t standard deviations away from the mean (stated as a maximum), or put in an alternative way, at least 1−1/t^2 of the values are within t standard deviations of the mean (stated as a minimum).


Chebyshev’s inequality results in a higher probability of extreme cases compared to the normal distribution. For instance, suppose we want to know the probability of observations lying 3 standard deviations from the mean (t = 3 in the formula). Chebyshev’s inequality states that at least 88.9% of values must lie within +/-3 standard deviations of the mean (or, equivalently, no more than 11.1% of the values can be more than 3 standard deviations from the mean). The corresponding figure for the normal distribution is 99.7% (this is also known as the 68–95–99.7 empirical rule, or the three-sigma rule, which in short says that extreme values are barely possible). The chart and table below present four cases for the probability of being more than t standard deviations from the mean: Chebyshev’s inequality, the one-tailed version of Chebyshev’s inequality (also known as Cantelli’s inequality), and the standard normal distribution probabilities – one-tailed and two-tailed. Cantelli’s inequality is helpful in identifying the worst-case confidence level for heavily skewed or leptokurtic distributions (Lleo and Ziemba, 2014).
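The probabilities behind such a comparison can be reproduced with a few lines of Python (a sketch of the numbers only, not of the chart):

from scipy.stats import norm

for t in [1.5, 2.0, 2.5, 3.0, 4.0]:
    chebyshev = min(1.0, 1.0 / t**2)        # two-tailed bound: P(|X - mu| >= t*sigma)
    cantelli = 1.0 / (1.0 + t**2)           # one-tailed bound: P(X - mu >= t*sigma)
    normal_one = 1.0 - norm.cdf(t)          # one-tailed standard normal probability
    normal_two = 2.0 * (1.0 - norm.cdf(t))  # two-tailed standard normal probability
    print(t, round(chebyshev, 4), round(cantelli, 4), round(normal_one, 4), round(normal_two, 4))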


Wednesday, 17 December 2014

The Power of Prior Probabilities: The Monty Hall Problem



I cannot find an example that describes the importance of prior probability more vividly than the famous Monty Hall problem (based on the TV game show “Let’s Make a Deal”)! There are many explanations of how the Monty Hall “switch your choice” strategy works. I remember explaining it to a couple of people and having to simulate the game and count the winning cases to convince them.

The reason the result of this game may sound counterintuitive is that the prior probabilities are neglected when new information is processed. And while the offered chance seems fair, in fact it is not. The second tricky issue in this problem is accounting for the host’s choice.

In a nutshell, the game is as follows: the player has a choice of three doors; behind one door is the prize and the remaining two doors are empty. The player chooses one door, with a 1/3 probability of winning. The host, who knows where the prize is, opens one of the two remaining doors (always an empty one) and gives the player the chance either to switch or to stay. It is well known that at this second choice the probability of winning by switching is not 1/2 but 2/3, because the prior probability still plays its part in the game. How come?
Let’s start with Bayes’ theorem:

P(A | B, C) = P(B | A, C) × P(A | C) / P(B | C),

which we read as the probability of event A happening, given that event B has happened, in the context of event C. For the Monty Hall problem the reading would be: “the probability of the prize being behind door A, given that the host opened door B, in the context that the player’s choice is door C”. Or, more concisely: the probability of the prize being behind the door that the player did not initially select.

Applying these fundamentals to the Monty Hall problem:

P(prize at A | host opens B, player picks C)
= P(host opens B | prize at A, player picks C) × P(prize at A | player picks C) / P(host opens B | player picks C)
= (1 × 1/3) / (1/2) = 2/3

The first part of the right side is equal to 1 since the host knows where the prize is and will not open the door with the prize.
The second part, the probability of the prize being behind door A given that the player chooses door C, is 1/3 (this is the a priori probability).
And the final part of the right side (the denominator in the equation) is 1/2 since there are two doors the host can open – given that one door has already been selected by the player.
 
Using similar statements for probabilities, it can easily be found that the probability of winning the prize in the context of keeping the initial selection unchanged is 1/3.
In short, the Monty Hall problem illustrates the importance of prior information, prior decisions and prior probabilities for your current decision-making.
I also wrote a simple R program for the game (it is very detailed and could be written much more concisely). The share of wins among the games in which the initially selected door is changed is about 67% (it varies from simulation to simulation).

Monty_hall<-function() {
  doors<-1:3
  first_choice<-sample(doors,1) ## randomly select one door, probability to win is 1/3
  prize<-sample(doors,1) ## randomly put the prize behind one door
  if (first_choice==prize) {
    door_for_open=sample(doors[-first_choice],1)
  } else {
    door_for_open=doors[c(-first_choice,- prize)]
  } ## host opens one door that is different than the door with prize and already selected door
  door_switch<-doors[c(-first_choice ,-door_for_open)]
  decision<-0:1 ## 0 is keep original choice; 1 is changing choice
  keep_or_switch<-sample(decision,1)
  if(keep_or_switch==0) {
    choice=first_choice
  } else {
    choice=door_switch
  } ## the player has a choice to select among the two doors remaining after the host opened one door
  if ((keep_or_switch==1) & (choice==prize)) {
    result=1 ## 1 means the player switched and won; 0 otherwise
  } else {
    result=0
  }
  return(result) ## return the outcome explicitly
}
 
This can be run – say, 10,000 times – for instance using the following code:
game<-replicate(10000, Monty_hall())
game_win<-game[game==1]  ## keep only the “winning” cases, i.e. games where the player switched and won
length(game_win)  ## roughly 2/3 of the ~5,000 games in which the player switched, i.e. a ~67% win rate when switching
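As a quick cross-check of the 2/3 result, here is the same experiment in Python with the player always switching (an illustrative sketch, not part of the R code above):

import random

def monty_hall_switch_win_rate(n=10000):
    wins = 0
    for _ in range(n):
        doors = [1, 2, 3]
        prize = random.choice(doors)
        first_choice = random.choice(doors)
        # the host opens an empty door that is neither the prize nor the player's pick
        opened = random.choice([d for d in doors if d != first_choice and d != prize])
        # the player always switches to the remaining closed door
        switched = [d for d in doors if d not in (first_choice, opened)][0]
        wins += (switched == prize)
    return wins / float(n)

print(monty_hall_switch_win_rate())  # ~0.67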

Saturday, 13 December 2014

What Do VIX and VIX Futures Say

Look at the levels of VIX and VIX futures on two dates – Dec 1, 2014 and Dec 11, 2014. The chart below shows that the VIX term structure changed its shape – from contango on Dec 1, 2014 (i.e. spot VIX below the VIX futures levels) to moderate backwardation relative to spot VIX for the near-term expirations, and flat for the longer-term expirations, on Dec 11, 2014. It is also interesting to note that the Dec 11, 2014 term structure oscillates around the long-term historical average of VIX (19.6 for the 2004-2014 period). So the expectations of 30-day volatility implied in the VIX futures on Dec 11, 2014 are for moderately lower volatility, while at the beginning of December the futures levels clearly indicated an increase in volatility. The changes in the near-term maturities, as well as in the VIX itself, were more palpable than in the longer-term maturing VIX futures (see the gap between Dec 1 and Dec 11, 2014 for the near-term versus the longer-term maturities). And while players generally pay a premium for future volatility insurance (as on Dec 1, 2014 in our case), this was not so on Dec 11, 2014.



So what? Is this a signal that difficulties for the markets will continue in the short run? Well, it depends.


It is assumed that VIX is mean-reverting, i.e. it tends to revert to its long-term average level. Is that so? One approach to checking this is to use rescaled range analysis (explained in a previous post) on the log-changes of VIX for the period Jan 2, 1990 - Dec 12, 2014. Applying the same methodology as used for the selected CEE stock indices, we get a value for the H-exponent (the slope coefficient of the linear regression) of 0.3679. An exponent below 0.5 indicates anti-persistence, so this result suggests that the VIX index does indeed exhibit a mean-reverting pattern.
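For readers who want to reproduce this kind of check, here is a minimal rescaled range (R/S) sketch in Python - not necessarily the exact methodology of the earlier post - assuming the VIX log-changes are available as a one-dimensional NumPy array vix_logchg:

import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256, 512)):
    # rough rescaled-range estimate of the Hurst exponent
    rs_values = []
    for n in window_sizes:
        rs_per_window = []
        for start in range(0, len(x) - n + 1, n):
            chunk = x[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative mean-adjusted deviations
            r = dev.max() - dev.min()               # range of the cumulative deviations
            s = chunk.std()                         # standard deviation of the chunk
            if s > 0:
                rs_per_window.append(r / s)
        rs_values.append(np.mean(rs_per_window))
    # the slope of log(R/S) against log(n) is the H-exponent
    slope, intercept = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return slope

# hypothetical input: vix_logchg = np.diff(np.log(vix_levels))
# print(hurst_rs(vix_logchg))   # a value below 0.5 points to mean reversion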
While the VIX / VIX futures term structure has shown a backwardation pattern in the past (2008, 2010, 2011) when markets were in trouble, spot VIX was then above its long-term average level. Currently, this is not the case. So, in my opinion, the recent VIX spike does not necessarily imply that the current hard times for the markets will continue (moreover, VIX futures expiring in Jul and Aug 2015 trade at higher levels than spot VIX).