One common technique in quantitative finance is ranking stocks in some way. The ranking can be based on whatever you come up with, but it is often a combination of fundamental factors and price-based signals.
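For example, a composite rank might combine the rank of a value metric with the rank of a recent-return metric. The following is a minimal, purely hypothetical sketch; the DataFrame factors and its columns value_metric and momentum_metric are assumptions for illustration, not data used later in this notebook.
# Hypothetical sketch: combine a value rank and a momentum rank into one composite score.
# `factors` is assumed to be a pandas DataFrame indexed by equity with the two factor columns.
value_rank = factors['value_metric'].rank()
momentum_rank = factors['momentum_metric'].rank()
composite_rank = (value_rank + momentum_rank).rank()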
These ranking systems can be used to construct long-short equity strategies. The Long-Short Equity Lecture is recommended reading before this one.
In order to develop a good ranking system, we need to first understand how to evaluate ranking systems. We will show a demo here.
This notebook does analysis over thousands of equities and hundreds of timepoints. The resulting memory usage can crash the research server if you are running other notebooks. Please shut down other notebooks in the main research menu before running this notebook. You can tell if other notebooks are running by checking the color of the notebook symbol. Green indicates running, grey indicates not.
In [1]:
import numpy as np
import statsmodels.api as sm
import scipy.stats as stats
import scipy
from statsmodels import regression
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
In [2]:
start_date = '2012-01-01'
end_date = '2015-01-01'
# Get market cap and PE ratio for all assets in the universe
fundamentals = init_fundamentals()
# WARNING: The following line will take a while to run, as it is fetching a large amount of data.
pe_data = get_fundamentals(query(fundamentals.valuation_ratios.pe_ratio), end_date, range_specifier='36m')
pe_data = pe_data['pe_ratio'].T.dropna()
cap_data = get_fundamentals(query(fundamentals.valuation.market_cap), end_date, range_specifier='36m')
cap_data = cap_data['market_cap'].T.dropna()
Let's take a look at the data to get a quick sense of what we have.
In [3]:
pe_data.head(5)
Out[3]:
The next step is to figure out which equities we have data for. Data sources are never perfect, and stocks go in and out of existence due to mergers, acquisitions, and bankruptcies. We'll make a list of the stocks common to both of our factor data sets and then filter both down to just those stocks.
We'll also get the daily pricing of all stocks over the same time period. Finally, we'll filter down one more time to just look at stocks which have data from all three sources.
In [4]:
# There will be some equities for which we had no data, so look at the set for which we have data
common_equities = cap_data.index.intersection(pe_data.index)
# Get the prices on only the common equities
# WARNING: The following line will take a while to run, as it is fetching a large amount of data.
prices = get_pricing(common_equities, start_date=start_date, end_date=end_date, frequency='daily')
prices = prices['price']
# Drop any that have no price data
prices = prices.T.dropna().T
common_equities = prices.T.index
# Filter the fundamental data down to only the equities that have price data
cap_data_filtered = cap_data.T[common_equities]
pe_data_filtered = pe_data.T[common_equities]
Now we want to compute the forward 30-day returns for each month. We do this by dividing the price on the last day of the month by the price on the first day of the month. Pandas has a nice function, groupby, which can accomplish this for us fairly elegantly.
In [5]:
monthly_prices = prices.groupby([prices.index.year, prices.index.month])
month_forward_returns = monthly_prices.last() / monthly_prices.first() - 1
Let's take a look to see what we have.
In [6]:
month_forward_returns.T.head(5)
Out[6]:
Let's take a look at the market cap data.
In [7]:
cap_data_filtered.head(5)
Out[7]:
Because we're dealing with ranking systems, at several points we're going to want to rank our data. Let's check how our data looks when ranked to get a sense for this.
In [8]:
cap_data_filtered.rank().head(5)
Out[8]:
Now that we have the data, let's do something with it. Our first analysis will be to measure the monthly Spearman rank correlation coefficient between Market Cap and month-forward returns. In other words, how predictive of 30-day returns is ranking your universe by market cap?
In [9]:
scores = np.zeros(36)
pvalues = np.zeros(36)
for i in range(36):
    score, pvalue = stats.spearmanr(cap_data_filtered.iloc[i], month_forward_returns.iloc[i])
    pvalues[i] = pvalue
    scores[i] = score
plt.bar(range(1,37),scores)
plt.hlines(np.mean(scores), 1, 37, colors='r', linestyles='dashed')
plt.xlabel('Month')
plt.xlim((1, 37))
plt.legend(['Mean Correlation over All Months', 'Monthly Rank Correlation'])
plt.ylabel('Rank correlation between Market Cap and 30-day forward returns');
We can see that the average correlation is positive, but varies a lot from month to month.
Let's look at the same analysis, but with PE Ratio.
In [10]:
scores = np.zeros(36)
pvalues = np.zeros(36)
for i in range(36):
    score, pvalue = stats.spearmanr(pe_data_filtered.iloc[i], month_forward_returns.iloc[i])
    pvalues[i] = pvalue
    scores[i] = score
plt.bar(range(1,37),scores)
plt.hlines(np.mean(scores), 1, 37, colors='r', linestyles='dashed')
plt.xlabel('Month')
plt.xlim((1, 37))
plt.legend(['Mean Correlation over All Months', 'Monthly Rank Correlation'])
plt.ylabel('Rank correlation between PE Ratio and 30-day forward returns');
The correlation of PE Ratio and 30-day returns seems to be near 0 on average. It's important to note that this analysis is monthly and covers 2012 through 2015. Different factors are predictive on different timeframes and frequencies, so the fact that PE Ratio doesn't appear predictive here doesn't necessarily rule it out as a useful factor. Beyond its usefulness in predicting returns, it can be used for risk exposure analysis as discussed in the Factor Risk Exposure Lecture.
The next step is to compute the returns of baskets taken from our ranking. If we rank all equities and then split them into $n$ groups, what would the mean return of each group be? We can answer this question in the following way. The first step is to create a function that gives us the mean return in each basket for a given month and ranking factor.
In [11]:
def compute_basket_returns(factor_data, forward_returns, number_of_baskets, month):
    data = pd.DataFrame(factor_data.iloc[month-1]).join(forward_returns.iloc[month-1])
    data.columns = ['Factor Value', 'Month Forward Returns']
    # Sort the equities by their factor values
    data.sort_values('Factor Value', inplace=True)
    # How many equities per basket
    equities_per_basket = int(np.floor(len(data.index) / number_of_baskets))
    basket_returns = np.zeros(number_of_baskets)
    # Compute the returns of each basket
    for i in range(number_of_baskets):
        start = i * equities_per_basket
        if i == number_of_baskets - 1:
            # Put any leftover equities in the last basket when the count doesn't divide evenly
            end = len(data.index)
        else:
            end = (i + 1) * equities_per_basket
        # Mean forward return of the equities in this basket
        basket_returns[i] = data.iloc[start:end]['Month Forward Returns'].mean()
    return basket_returns
The first thing we'll do with this function is compute the basket returns for each month and then average them. This should give us a sense of the relationship over a long timeframe. We can see that there appears to be a spread induced by ranking on Market Cap; specifically, smaller-cap stocks tend to have higher returns. This is a well-known phenomenon in quantitative finance.
In [12]:
number_of_baskets = 10
mean_basket_returns = np.zeros(number_of_baskets)
for m in range(1, 37):
    basket_returns = compute_basket_returns(cap_data_filtered, month_forward_returns, number_of_baskets, m)
    mean_basket_returns += basket_returns
mean_basket_returns /= 36
# Plot the returns of each basket
plt.bar(range(number_of_baskets), mean_basket_returns)
plt.ylabel('Returns')
plt.xlabel('Basket')
plt.legend(['Returns of Each Basket']);
Of course, that's just the average relationship. To get a sense of how consistent this is, and whether or not we would want to trade on it, we should look at it over time. Here we'll look at the monthly spreads for the first year. We can see a lot of variation, and further analysis should be done to determine whether Market Cap is tradeable.
In [13]:
f, axarr = plt.subplots(3, 4)
for month in range(1, 13):
    basket_returns = compute_basket_returns(cap_data_filtered, month_forward_returns, 10, month)
    r = (month - 1) // 4
    c = (month - 1) % 4
    axarr[r, c].bar(range(number_of_baskets), basket_returns)
    axarr[r, c].xaxis.set_visible(False)  # Hide the axis labels so the plots aren't super messy
    axarr[r, c].set_title('Month ' + str(month))
We'll repeat the same analysis for PE Ratio.
In [14]:
number_of_baskets = 10
mean_basket_returns = np.zeros(number_of_baskets)
for m in range(1, 37):
    basket_returns = compute_basket_returns(pe_data_filtered, month_forward_returns, number_of_baskets, m)
    mean_basket_returns += basket_returns
mean_basket_returns /= 36
# Plot the returns of each basket
plt.bar(range(number_of_baskets), mean_basket_returns)
plt.ylabel('Returns')
plt.xlabel('Basket')
plt.legend(['Returns of Each Basket']);
In [19]:
f, axarr = plt.subplots(3, 4)
for month in range(1, 13):
    basket_returns = compute_basket_returns(pe_data_filtered, month_forward_returns, 10, month)
    r = (month - 1) // 4
    c = (month - 1) % 4
    axarr[r, c].bar(range(10), basket_returns)
    axarr[r, c].xaxis.set_visible(False)  # Hide the axis labels so the plots aren't super messy
    axarr[r, c].set_title('Month ' + str(month))
Oftentimes a new factor will be discovered that seems to induce spread, but it turns out to be just a new, and potentially more complicated, way to compute a well-known factor. Consider, for instance, the case in which you have poured tons of resources into developing a new factor and it looks great. How do you know it's not just another factor in disguise?
To check for this, there are many analyses that can be done.
One of the most intuitive ways is to check what the correlation of the factors is over time. We'll plot that here.
In [16]:
scores = np.zeros(36)
pvalues = np.zeros(36)
for i in range(36):
    score, pvalue = stats.spearmanr(cap_data_filtered.iloc[i], pe_data_filtered.iloc[i])
    pvalues[i] = pvalue
    scores[i] = score
plt.bar(range(1,37),scores)
plt.hlines(np.mean(scores), 1, 37, colors='r', linestyles='dashed')
plt.xlabel('Month')
plt.xlim((1, 37))
plt.legend(['Mean Correlation over All Months', 'Monthly Rank Correlation'])
plt.ylabel('Rank correlation between Market Cap and PE Ratio');
Let's also look at the p-values, since the correlations may not be that meaningful by themselves.
In [17]:
scores = np.zeros(36)
pvalues = np.zeros(36)
for i in range(36):
    score, pvalue = stats.spearmanr(cap_data_filtered.iloc[i], pe_data_filtered.iloc[i])
    pvalues[i] = pvalue
    scores[i] = score
plt.bar(range(1,37), pvalues)
plt.xlabel('Month')
plt.xlim((1, 37))
plt.legend(['Monthly Rank Correlation p-value'])
plt.ylabel('p-value of rank correlation between Market Cap and PE Ratio');
There is interesting behavior, and further analysis would be needed to determine whether a relationship exists. Another way to compare the two factors is to look at returns as a function of both rankings at once. Below we rank each factor and the forward returns for a single month, sum the return ranks in a grid indexed by the two factor ranks, and plot the result as a heatmap.
In [18]:
pe_dataframe = pd.DataFrame(pe_data_filtered.iloc[0])
pe_dataframe.columns = ['F1']
cap_dataframe = pd.DataFrame(cap_data_filtered.iloc[0])
cap_dataframe.columns = ['F2']
returns_dataframe = pd.DataFrame(month_forward_returns.iloc[0])
returns_dataframe.columns = ['Returns']
data = pe_dataframe.join(cap_dataframe).join(returns_dataframe)
data = data.rank(method='first')
heat = np.zeros((len(data), len(data)))
for e in data.index:
    F1 = int(data.loc[e]['F1'])
    F2 = int(data.loc[e]['F2'])
    R = data.loc[e]['Returns']
    heat[F1-1, F2-1] += R
heat = scipy.signal.decimate(heat, 40)
heat = scipy.signal.decimate(heat.T, 40).T
p = sns.heatmap(heat, xticklabels=[], yticklabels=[])
# p.xaxis.set_ticks([])
# p.yaxis.set_ticks([])
p.xaxis.set_label_text('F2 Rank')
p.yaxis.set_label_text('F1 Rank')
p.set_title('Sum Rank of Returns vs Factor Ranking');
The ranking system is the secret sauce of many strategies. Choosing a good ranking system, or factor, is not easy and is the subject of much research. We'll discuss a few starting points here.
Choose one that is commonly discussed and see if you can modify it slightly to regain an edge. Oftentimes factors that are public will have no signal left, as they have been completely arbitraged out of the market. However, sometimes they lead you in the right direction of where to go.
Any model that predicts future returns can be a factor. The predicted future return becomes the factor value and can be used to rank your universe. You can take any complicated pricing model and transform it into a ranking.
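As a sketch of that idea, suppose some model produces a Series of predicted returns indexed by equity; predicted_returns below is an assumption, not something computed in this notebook. Ranking those predictions yields a factor that can feed the same basket analysis used above.
# Hypothetical sketch: any return-forecasting model can be turned into a ranking factor.
# `predicted_returns` is an assumed pandas Series of model forecasts indexed by equity.
model_factor = predicted_returns.rank()
# Higher rank corresponds to higher predicted return; the ranks are the factor values.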
Price-based factors take information about the historical price of each equity and use it to generate the factor value. Examples include 30-day momentum or volatility measures.
It's important to note that some factors bet that prices, once moving in a direction, will continue to do so. Some factors bet the opposite. Both are valid models on different time horizons and assets, and it's important to investigate whether the underlying behavior is momentum or reversion based.
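As an illustrative sketch, both kinds of price-based factors can be computed from a daily price DataFrame like the prices variable loaded above (dates as the index, equities as the columns); the window lengths here are arbitrary choices for illustration, not recommendations.
# Hypothetical sketch of price-based factors computed from daily prices.
# 30-day momentum: trailing 30-day percent change (a continuation-style factor).
momentum_30d = prices.pct_change(30).iloc[-1]
# 30-day volatility: standard deviation of daily returns over the trailing 30 days.
volatility_30d = prices.pct_change().rolling(window=30).std().iloc[-1]
# Short-term reversal: negative of the trailing 5-day return (a reversion-style factor).
reversal_5d = -prices.pct_change(5).iloc[-1]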
Fundamental factors use combinations of fundamental values, as we discussed today. Fundamental values contain information that is tied to real-world facts about a company, so in many ways they can be more robust than prices.
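For instance, a simple sketch using the two fundamental factors already loaded in this notebook could equal-weight their cross-sectional ranks each month; whether each factor should be ranked ascending or descending is a modeling choice left out here.
# Hypothetical sketch: equal-weight combination of the two fundamental ranks used above.
# Rank across equities (axis=1) within each month, then average the two rank DataFrames.
cap_rank = cap_data_filtered.rank(axis=1)
pe_rank = pe_data_filtered.rank(axis=1)
combined_fundamental_rank = ((cap_rank + pe_rank) / 2).rank(axis=1)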
Ultimately, developing predictive factors is an arms race in which you are trying to stay one step ahead. Factors get arbitraged out of markets and have a lifespan, so it's important that you are constantly doing work to determine how much decay your factors are experiencing, and what new factors might be used to take their place.