DEFICITS FOR LIVING - © Federico Carlini, May 4th 2016

ABSTRACT
Most people are happy to hear about a federal surplus and scared to hear about a federal deficit, just as most people tend to prefer deflation to inflation. In this paper, I’ll show that federal deficits are not only necessary, but that everything around us comes from them. I will then build a trading model in Python based on these findings.

INTRODUCTION AND FRAMEWORK
We have been told all of our lives that our taxes are necessary to finance public spending. While this is true at the local and state level, and in countries tied into monetary unions such as the EU, it is completely wrong in countries with monetary sovereignty over their own fiat currency, like the US, Japan, the UK, etc.
To make this clear, let’s start from year 0 in a hypothetical economy that has been based on barter with no currency. There is a public sector (Central Bank, Treasury etc.) and a private sector. If it’s true that our taxes finance public spending, how can the loop start from us? Would everyone just go to their garage to print fresh money so that the government can finance public spending next year?
By logic the loop must start from the public sector, which is the issuer of the currency, and the private sector is the user and cannot create it. This means that we can pay taxes only after we have received money from an entity that can create it, since we can’t.
Since the public and private sector balances must net to zero, in order for the private sector to run a surplus (money in our pockets), the public sector has to run a deficit. This is the principle of sectoral balances: my deficit is someone else’s surplus, just as my expense is someone else’s revenue, and so on.
Therefore, going back to our framework, we need the public sector to create, spend and leave money in the private sector, so that the population can consume, invest, save and pay taxes (which are needed for other reasons). If we ran a balanced budget, we would have a public sector that spends 100 and taxes 100. Nothing would be left to the private sector. How can this be acceptable? If we did this from year zero, we would have net financial assets of 0 at the end of the first year: whatever had been spent would have been taxed back.
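The arithmetic above can be sketched in a few lines of Python. This is an illustrative toy ledger, not part of the original analysis: the private sector's net financial assets end up equal to the cumulative public deficit.

```python
def run_years(spending, taxes):
    """Toy sectoral-balances ledger: each year the public sector
    spends into the private sector and taxes some of it back.
    Private net financial assets = cumulative public deficit."""
    private_nfa = 0
    public_balance = 0
    for g, t in zip(spending, taxes):
        private_nfa += g - t       # what the private sector keeps
        public_balance += t - g    # mirror image: public surplus(+)/deficit(-)
    return private_nfa, public_balance

# Balanced budget: spend 100, tax 100 -> nothing left in private pockets
print(run_years([100], [100]))                     # (0, 0)

# Deficit of 20 per year for 3 years -> private sector holds 60
print(run_years([100, 100, 100], [80, 80, 80]))    # (60, -60)
```

The two balances always sum to zero, which is the sectoral-balances identity in miniature.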
I won’t go into the accounting details, since explaining the accounting of public spending, taxing and issuing bonds would take several pages and involves private banks, reserve accounts, Tax and Loan accounts, the central bank itself and the Treasury. I’m just going to point out that federal taxes do end up in the Treasury General Account at the Fed, and the Treasury can spend by draining funds from this account, which must keep a positive balance: no overdraft is allowed. This seems consistent with the idea that our taxes are somehow “recycled” to finance public spending, but it doesn’t change the fact that the federal government does not need to tax or borrow to finance public spending. This is because the federal government has the power to create currency out of thin air. In fact, we all know that when the Fed credited private banks’ reserve accounts with hundreds of billions of dollars in 2008, the government didn’t tax anyone; it just credited their accounts at the Fed, and any limit on this would be self-imposed. Also, the funds the private sector uses to pay taxes and purchase Treasury bonds must have been created at some point through deficit spending by the Fed.
Therefore, funds always come from the Fed first (through spending or lending), not from our taxes. It is important to note that banks' reserve accounts (used to pay federal taxes) and the Treasury General Account (where taxes end up) are on the same side of the Fed's balance sheet. When a tax payment is made, the Fed marks one account down and the other up. When federal spending is approved, the Fed marks the TGA down and the bank's reserve account up. Those dollars, which electronically bounce between these accounts, must have come from somewhere, and it can't be us. The Fed is basically in charge of this giant spreadsheet called the banking system. The required positive balance in the TGA is a self-imposed constraint, not a technical limit: had that law not been introduced, the Fed would be able to clear any check and allow an overdraft. In addition, the federal government does not need to issue bonds to finance its spending. It needs them for other reasons, such as regulating excess reserves and interest rates, and providing income to the private sector through interest payments.
When a bank buys a bond, its reserve (checking) account at the Fed goes down and its bond (savings) account goes up. When the principal is paid back at maturity, the Fed debits the bond (savings) account and credits the reserve (checking) account. As a consequence, the public debt, defined as the outstanding debt held by both the private and public sector, could actually be brought down to 0 through this simple swap.
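The swap just described can be made concrete with a toy two-account ledger. This is purely illustrative (real settlement involves many more entries), but it shows why "paying off the debt" leaves total balances unchanged:

```python
# Toy Fed ledger for one bank: a reserve (checking) account and a
# bond (savings) account. Buying a bond and redeeming it at maturity
# are just transfers between the two accounts.
accounts = {"reserves": 1000, "bonds": 0}

def buy_bond(acc, amount):
    acc["reserves"] -= amount   # checking account marked down
    acc["bonds"] += amount      # savings account marked up

def redeem_bond(acc, amount):
    acc["bonds"] -= amount      # principal paid back at maturity
    acc["reserves"] += amount

buy_bond(accounts, 400)
print(accounts)                 # {'reserves': 600, 'bonds': 400}
redeem_bond(accounts, 400)
print(accounts)                 # {'reserves': 1000, 'bonds': 0}
# the total never changes: retiring the debt is just an account swap
```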
This also explains why QE was never going to work. For people who understand the above principles of macroeconomics, it was clear from the beginning that QE was nothing more than a swap: debiting bond (savings) accounts and crediting reserve (checking) accounts, leaving the private sector with an enormous amount of excess reserves that earn no income and never reached the working economy. In 2015 the Fed earned about 100 billion dollars in interest payments on the bonds on its balance sheet; interest that would otherwise have been earned by the private sector. QE also lowered rates on the remaining bonds. This is why QE never worked. This is why the dollar appreciated. This is why QE actually reverses the Treasury’s issuance of bonds. This is why QE is actually a tax.
Public debt in the USA is now more than 19 trillion dollars. Most people consider it absurd and dangerous but, again, they make the mistake of treating the government like a household that is revenue-constrained and limited in its spending power. They don’t realize what we would have with no public deficit. The same people who in 2008 were screaming when they saw public debt at $9 trillion are screaming now that we are at $19 trillion, yet inflation is so low that regulators are trying everything they can to push it up; apparently it is not that simple to create monetary inflation. We can see a similar situation in Japan, which has a public debt of 230% of GDP while inflation has been low for decades, mostly around 0%.
The level of public debt and public deficits should not be judged by their nominal value, but regulated by looking at the size and conditions of the economy. To be more precise, public deficits (which, added up, give the public debt) should be raised in order to reach full employment and price stability, whatever their amount turns out to be. When too much money has been poured into the economy, inflation is rising too fast and the currency is losing too much value, taxes should be raised to cool the private sector and stabilize the currency. They shouldn’t be raised just because the public debt has too many digits.
This is possible because the Federal Reserve is capable of generating an unlimited amount of dollars and can be constrained only by political decisions. There is no technical default risk. The public sector is the only agent that can generate a net increase of financial assets in an economy, if we exclude imports and exports, which would make our framework more complicated but wouldn’t change the bottom line. Since trade on Earth has to net to zero and we don’t export to Mars, governments remain the only entities that can increase overall net financial assets, and should do so to provide wealth to their citizens.
As a consequence, when deficits are too low for a given economy, the public sector pushes the private sector into a recession. Normally the private sector tries to maintain its standard of living by increasing private debt, which at some point collapses. The greed of investors trying to ride these waves, the pro-cyclical behavior of the private sector and the domino effect then amplify the outcome.
Private banks can create money too (loans create deposits), and most of the money around us has been created this way, but for each dollar created there is an attached liability of one dollar, so the net is zero. Private banks' balance sheets expand when a loan or mortgage is made, and shrink when payments are made. In fact, the more we pay back our loans and mortgages, the more money is actually being destroyed. Therefore, there is no net increase in financial assets within the private sector.
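As a hedged sketch of the "loans create deposits" mechanism (again a toy ledger, not part of the original analysis): a bank loan expands both sides of the balance sheet at once, and repayment shrinks both, so private bank money never adds net financial assets.

```python
# Toy bank balance sheet: a loan creates a matching deposit (the
# balance sheet expands); repayment extinguishes both (it contracts).
bank = {"loans": 0, "deposits": 0}

def make_loan(b, amount):
    b["loans"] += amount       # asset for the bank
    b["deposits"] += amount    # liability for the bank, money for the borrower

def repay(b, amount):
    b["loans"] -= amount
    b["deposits"] -= amount    # money is destroyed on repayment

make_loan(bank, 250)
net = bank["deposits"] - bank["loans"]   # 0: every dollar has a matching liability
repay(bank, 250)                         # balance sheet back to zero
```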
Given this framework, in this paper I’m going to test whether including the federal deficit/surplus in a financial indicator can help forecast the stock market.

DATA
For the purpose of this research, I’m going to download monthly data for the S&P 500 from Yahoo Finance and the monthly Federal Surplus/Deficit from the FRED database. The S&P 500 data starts in January 1950, but the monthly Federal Surplus or Deficit data starts in October 1980, so I’m going to start my trading model (and the buy-and-hold benchmark) in January 1981, leaving a few months behind me to start computing averages. The last observations are 3/1/2016 for the Federal Surplus/Deficit and 5/2/2016 for the S&P 500.

In [12]:
from __future__ import print_function   # must be the first statement in the cell
import datetime as dt
from math import log

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pandas.io.data as web    # pandas 0.x API; later versions moved this to pandas-datareader
import statsmodels.api as sm

%matplotlib inline

start = dt.datetime(1980, 10, 1)  # start date
end = dt.datetime(2016, 3, 1) # end date
codes = ['MTSDS133FMS']    # Federal Deficit/Surplus
fred  = web.DataReader(codes, 'fred', start, end)
fred.plot()

url = "http://ichart.finance.yahoo.com/table.csv?s=^GSPC&d=5&e=1&f=2016&g=m&a=9&b=1&c=1980&ignore=.csv"  #S&P500
data = pd.read_csv(url)
dataframe = data.set_index('Date')
dataframe = dataframe.iloc[::-1]    # Yahoo returns newest first, so reverse to chronological order
dataframe[[5]].plot()               # column 5 is 'Adj Close'


Out[12]:
<matplotlib.axes.AxesSubplot at 0x7f41eef457b8>

I’m going to fix the labels on both graphs and also the time axis on the S&P graph:


In [15]:
DATES = [dt.datetime.strptime(date, '%Y-%m-%d').date() for date in dataframe.index]

fred.plot()
plt.legend( ('Federal Surplus or Deficit',), loc=0 ,shadow=True)
plt.axhline(0, color='black')
plt.show()

fig, ax = plt.subplots()
plt.plot(DATES, dataframe[[5]], label='S&P500')
legend = ax.legend(loc='upper center', shadow=True)
plt.grid()
plt.show()


Now I want to assemble a single dataset, combining the two time series. I will lag the deficit/surplus values by 2 months, since the deficit/surplus figure is published with a delay relative to the S&P 500. For example, the S&P value for 12/1 is available on 12/1, but the deficit/surplus figure dated 12/1 only comes out around 1/15, so the soonest I can use it in the FCI is 2 months later. Looking at the Federal Budget statement for 12/1 on the Treasury website, I see that the figure dated 12/1 actually covers all of December’s activity, which is why it is available in mid-January; the effective lag is therefore only one month. That might even be a good thing, given the domino effect of excessive/insufficient spending on the overall economy.
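The cell below uses the pandas 0.x API from 2016. As a hedged sketch, with a current pandas release the same two-month lag can be expressed directly with `Series.shift` (shown here on a small synthetic series standing in for the FRED download, not the real data):

```python
import pandas as pd

# synthetic monthly deficit series standing in for the FRED download
idx = pd.date_range("1980-10-01", periods=6, freq="MS")
deficit = pd.Series([-10, -20, -30, -40, -50, -60], index=idx)

# lag by 2 months: the value published for month t is used at t+2
lagged = deficit.shift(2)
print(lagged.tolist())    # [nan, nan, -10.0, -20.0, -30.0, -40.0]
```

Joining `lagged` onto the S&P frame with a shared DatetimeIndex would then replace the concat-and-reindex trick used in the original cell.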

In [16]:
#Here I'm adding the deficit column, lagged by 2 months.

df2 = pd.concat([fred, fred[:2]])      # pad with 2 extra rows so lengths match after the shift
df2 = df2[[0]].shift(+2)               # lag the deficit/surplus series by 2 months
df2 = df2.set_index(dataframe.index)   # align on the S&P dates

dataframe['Def'] = df2['MTSDS133FMS']
dataframe = dataframe.fillna(value=0)
dataframe.tail()


Out[16]:
Open High Low Close Volume Adj Close Def
Date
2016-01-04 2038.199951 2038.199951 1812.290039 1940.239990 5153017800 1940.239990 -64552
2016-02-01 1936.939941 1962.959961 1810.099976 1932.229980 4881887000 1932.229980 -14444
2016-03-01 1937.089966 2072.209961 1937.089966 2059.739990 4379759000 2059.739990 55163
2016-04-01 2056.620117 2111.050049 2033.800049 2065.300049 4087129000 2065.300049 -192610
2016-05-02 2067.169922 2083.419922 2066.110107 2081.429932 7682220000 2081.429932 -108043

5 rows × 7 columns

CREATION OF A FINANCIAL CONDITIONS INDEX

Given the volatility and seasonality of the monthly federal deficit/surplus, I’m going to take a 24-month moving average, which stabilizes the curve. To be consistent, I’m going to take the same moving average of the S&P 500 too. For a more accurate analysis, and considering the volatility of the stock market, I’m going to compute a Z-score for the S&P 500 based on the 24-month moving average, dividing the deviation from that average by its standard deviation over the same window. I’m going to treat the budget deficit/surplus as mean reverting, so for it I’m only going to take the 2-year moving average. For now I’m going to use its nominal value, but this could probably be improved by using a percent change on averages or an inflation-adjusted value, since that would be relative to the period and therefore more objective.

To create my FCI, I'm going to multiply the Z-score of the S&P 500 by -1, since I consider a federal surplus (a positive value) to have a negative effect on the stock market; this makes the S&P 500 Z-score positive in those conditions too, consistent with my framework. I could instead have multiplied the deficit/surplus by -1 and inverted my trading rule, obtaining the same results; multiplying the S&P by -1 just gives a nicer graph. The next step is taking the mean of these two factors, after dividing the moving average of the deficit/surplus by 10,000 to bring its values onto the same scale as my S&P 500 Z-scores.
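The cell below uses `pd.rolling_mean`/`pd.rolling_std`, which were removed in later pandas versions. As a hedged sketch (on a synthetic series, not the real data), the same rolling z-score can be written with the current `.rolling()` API:

```python
import numpy as np
import pandas as pd

prices = pd.Series(np.arange(1, 31, dtype=float))   # stand-in for 'Adj Close'

rmean = prices.rolling(24, min_periods=1).mean()    # 24-month rolling mean
rstd = prices.rolling(24, min_periods=1).std()      # rolling st.dev. (NaN on the first point)

# negated z-score, as in the text: a surplus is bearish, so the sign is flipped
zsp = -(prices - rmean) / rstd
```

For a steadily rising series the price sits above its rolling mean, so the negated z-score stays negative, matching the "FCI negative when the market is doing well" convention.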


In [17]:
darr = np.array(dataframe['Def'])
dmean = pd.rolling_mean(darr, 24, min_periods=0)   #24-month rolling mean of the deficit/surplus

myInt = 10000                 #dividing the deficit/surplus by 10,000 to rescale it
REDDE = [x / myInt for x in dmean]

fig, ax = plt.subplots()
plt.plot(DATES, REDDE, label='Deficit/Surplus factor')
plt.axhline(0, color='black')
legend = ax.legend(loc='lower center', shadow=True)
plt.show()

arr = np.array(dataframe['Adj Close'])
std_err = pd.rolling_std(arr, 24, min_periods=1)   #rolling standard deviation of S&P500, for the Z-score
rmean = pd.rolling_mean(arr, 24, min_periods=1)    #rolling mean of S&P500, for the Z-score

ZSP = (-(dataframe['Adj Close']-rmean))/std_err    #computing the Z-score for S&P
fig, ax = plt.subplots()
plt.plot(DATES, ZSP, label='S&P factor (Z-Score)')
plt.axhline(0, color='black')
legend = ax.legend(loc='upper center', shadow=True)
plt.show()

FCItest = (ZSP+REDDE)/2     #computing my Financial Conditions Indicator, dividing by 2 to get the average
fig, ax = plt.subplots()
plt.plot(DATES, FCItest, label='FCI test INDICATOR')
plt.axhline(0, color='black')
legend = ax.legend(loc='lower center', shadow=True)
plt.show()

fig, ax= plt.subplots(figsize=(18,10))     #plotting S&P and FCI together
plt.plot(DATES, dataframe['Adj Close'], label='S&P')
plt.plot(DATES, FCItest*500, label='FCI test indicator')
legend = ax.legend(loc='lower center', shadow=True)
plt.grid()
plt.axhline(0, color='black')
plt.show()


FIRST RESULTS

The FCI I created gives very interesting insights into the stock market. The FCI values become positive during recessions and negative when the economy is doing well, so it tells us when a recession is approaching and when it’s over. Someone might notice that my FCI doesn’t change sign during the early-1990s recession, which lasted 8 months just like the early-2000s recession that followed the dot-com bubble. However, just as in 1987, the indicator gets very close to the zero line, so we know the market is in a correction area. This supports the idea that the smaller the deficit, the higher the probability of a bear market and/or a recession. When there is a surplus, the consequences are pretty clear: the huge surplus in the early 2000s was arguably the main reason why a great economy ended.

TRADING MODEL BASED ON THE FCI

Given the accuracy of my FCI, the next step is obviously to see whether this indicator can drive a trading model that takes advantage of both bull and bear markets. I’m going to create an algorithm that goes long on the S&P 500 when the indicator is negative and shorts it when the indicator is positive. Since the FCI crosses the x-axis only 10 times, I can already tell I will incur transaction costs only 10 times over 35 years. Given how rare this is, and the liquidity of instruments based on the S&P 500 across all financial providers, I’m not going to deduct any transaction costs. Also, since I will be comparing the return of my strategy to the return of the S&P 500, whose data does not account for the risk-free rate, for now I’m not going to account for it either, in order to provide a like-for-like comparison. I'm going to keep the negative factor on the S&P 500, and I'm going to assign a positive weight of roughly 2/3 to my deficit/surplus factor, to bring its values closer to the S&P 500 values.


In [18]:
SP = dataframe['Adj Close'].tolist()

r = 0.629        #weight of roughly 2/3 assigned to my deficit indicator
PREDDE = [x * r for x in REDDE]
fig, ax = plt.subplots()
plt.plot(DATES, PREDDE, label='Deficit/Surplus factor')
plt.axhline(0, color='black')
legend = ax.legend(loc='lower center', shadow=True)
plt.show()

FCI = (ZSP+PREDDE)/2     #computing my final Financial Conditions Indicator, dividing by 2 to get the average
fig, ax = plt.subplots()
plt.plot(DATES, FCI, label='FCI INDICATOR')
plt.axhline(0, color='black')
legend = ax.legend(loc='lower center', shadow=True)
plt.show()



In [19]:
STRAT = [1,1,1,1]

for i in range(4, len(FCI)):
    x = STRAT[i-1]
    # Go long when last month's FCI was negative (and in the first month,
    # so the model starts in January 1981); go short when it was positive.
    # The four original branches collapse to this rule, since the position
    # taken is the same whether the S&P went up or down that month.
    if i == 4 or FCI[i-1] < 0:
        j = x*(SP[i]/SP[i-1])           # long: capture the S&P return
    else:
        j = x*(1-(SP[i]/SP[i-1]-1))     # short: capture the inverse return
    STRAT.append(j)

cumret = [1,1,1,1]
for i in range(4, len(SP)):
    cumret.append(SP[i]/SP[3])    # buy-and-hold cumulative return since January 1981

r = 5
FCIX5 = [x * r for x in FCI]   #scaled by 5 just to make the graph clearer; what matters is the sign

fig, ax= plt.subplots(figsize=(18,10))
plt.grid()
plt.plot(DATES, FCIX5, label='FCI INDICATOR')
plt.plot(DATES, STRAT, label='Model (cumulative return)')
plt.plot(DATES, cumret, label='S&P (cumulative return)')
legend = ax.legend(loc='upper center', shadow=True)
plt.axhline(0, color='black')
plt.show()

logSP = [log(y,10) for y in cumret]
logSTRAT = [log(y,10) for y in STRAT]

fig, ax= plt.subplots(figsize=(18,10))
plt.plot(DATES, logSP, label='Log S&P (cumulative return)')
plt.plot(DATES, logSTRAT, label='Log Model (cumulative return)')
legend = ax.legend(loc='upper center', shadow=True)
plt.grid()
plt.axhline(0, color='black')
plt.show()


RESULTS OF THE TRADING MODEL

This combination gives an astonishing annual return of 14.20% over 35 years. The return of the S&P 500 over the same period has been only 8.93%. One dollar invested in the S&P 500 in January 1981 would be worth 16 dollars now; with my strategy, the same dollar over the same period would be worth 105 dollars. The FCI curve now crosses the x-axis only 6 times in 35 years, reducing transaction costs that were already close to zero. To show a clearer comparison, I’m going to take logs of both strategies. We can see that my curve is not only always above the S&P 500, but that it does a great job of taking advantage of both bull and bear markets. We are short in the early ’80s, during the recession that started in 1981. We are short again in 2001, during the recession caused by the dot-com bubble and the 9/11 attacks. In 2008 we go short again, during the Great Recession caused by the subprime mortgage crisis. From the log of my model we also see how precisely it switches to short in September 2000, while in 2008 there is a small delay, since there were fewer months with a budget surplus which, as I explained, takes money away from the economy. We are also using a moving average, so there is some delay in responding to a decline in the S&P 500. Below are some statistics and a more complete graph of my model for further consideration.


In [20]:
RETSP=[0,0,0,0]
RETFCI=[0,0,0,0]
DIFF=[]
for i in range(4, len(FCI)):
    RETSP.append(cumret[i]/cumret[i-1]-1)    # monthly S&P return
    RETFCI.append(STRAT[i]/STRAT[i-1]-1)     # monthly model return

fig, ax= plt.subplots(figsize=(18,7))
plt.axhline(0, color='black')
plt.plot(DATES, RETSP, label='S&P return')
plt.plot(DATES, RETFCI, label='FCI return')
legend = ax.legend(loc='upper center', shadow=True)
plt.show()

for i in range(0, len(FCI)):
    h = RETFCI[i]-RETSP[i]
    DIFF.append(h)

fig, ax= plt.subplots(figsize=(18,7))
plt.plot(DATES, DIFF, label='(FCI-S&P return)')
legend = ax.legend(loc='upper center', shadow=True)
plt.show()

C=[] #this is ugly but I like to keep lists ready, just in case..
S=[]
D=[]
T=[]
MEANSP=[]
MEANFCI=[]
STDEVSP=[]
STDEVFCI=[]
SHARPESP=[]
SHARPEFCI=[]

#computing the running mean and st.dev. from the first observation,
#to get a cumulative (annualized) Sharpe Ratio for the S&P500 and for my model
for i in range(0, len(FCI)):
    MEANSP.append(np.average(RETSP[:i+1]))
    STDEVSP.append(np.std(RETSP[:i+1]))
    MEANFCI.append(np.average(RETFCI[:i+1]))
    STDEVFCI.append(np.std(RETFCI[:i+1]))

for i in range(0, len(FCI)):
    SHARPESP.append(MEANSP[i]*12/(STDEVSP[i]*np.sqrt(12)))     #annualized, zero risk-free rate
    SHARPEFCI.append(MEANFCI[i]*12/(STDEVFCI[i]*np.sqrt(12)))

fig, ax= plt.subplots(figsize=(18,7))
plt.axhline(0, color='black')
plt.axhline(1, color='red')
plt.axhline(0.5, color='red')
plt.plot(DATES, SHARPESP, label='S&P Sharpe Ratio (0 risk-free rate)')
plt.plot(DATES, SHARPEFCI, label='FCI Sharpe Ratio (0 risk-free rate)')
legend = ax.legend(loc='upper center', shadow=True)
plt.show()                           #plotting both cumulative Sharpe Ratios (since 1981) 

fig, ax= plt.subplots(figsize=(18,10))
plt.plot(DATES, PREDDE, label='Deficit/Surplus factor')
plt.plot(DATES, FCIX5, label='FCI INDICATOR')
plt.plot(DATES, cumret, label='S&P (cumulative return)')
plt.axhline(0, color='black')
plt.axhline(-5, color='red')
legend = ax.legend(loc='upper center', shadow=True)
plt.grid()
plt.show()   #plotting Deficit/Surplus and cumulative return for S&P


Looking at the graph above, we notice something even more interesting: the FCI doesn’t just tell us when recessions start and end by changing sign, it also warns when a bear market is about to happen, and that’s when the FCI rises above -5. Over the last 35 years, every time the FCI crossed -5, a bear market, recession or flat market followed. The precision in 1990 and 2007 is remarkable. In 2001, the recession starts right when my indicator changes sign, after being above -5 for months, so it was forecast very well. The delay is mostly due to the incredible growth of private debt that sustained the economy for a while; this growth, once again, was mostly due to a smaller-than-needed federal deficit. When we fall back below -5, it’s a sign that the market is recovering and it is safe to go long on the S&P 500. It’s also interesting to note that every time the FCI curve crosses the deficit/surplus factor curve, we have a recession and/or a bear market. Last but not least, in February of this year we crossed -5 again. This is further evidence of the precision of this model, and it comes from the most recent data. On February 1st, after plugging in the S&P closing price of 1939.38, we crossed -5, and the next day the market plummeted again. In the following ten days it went down 6.66%, and this was the first time we had crossed the -5 line since 2009. A very telling coincidence.

In [21]:
#some more statistics

covariance = np.cov(RETSP,RETFCI)[0][1]  
variance = np.var(RETSP)
beta = covariance / variance
print("Beta is %.2f" %beta)   #Beta

annret = 12*MEANFCI[-1]*100
annstdev = np.sqrt(12)*STDEVFCI[-1]*100
rf = 4
sr = (annret-rf)/annstdev   #Sharpe Ratio FCI

print("The average Annual Return of my strategy is %.2f%%" %annret)
print("The average Standard Deviation of my strategy is %.2f%%" %annstdev)
print("Therefore, with a risk-free rate of 4%%, the Sharpe Ratio is %.2f" %sr)

annretsp = 12*MEANSP[-1]*100
annstdevsp = np.sqrt(12)*STDEVSP[-1]*100
rf = 4
srsp = (annretsp-rf)/annstdevsp     #Sharpe Ratio S&P500

print("The average Annual Return of the S&P 500 is %.2f%%" %annretsp)
print("The average Standard Deviation of the S&P 500 is %.2f%%" %annstdevsp)
print("Therefore, with a risk-free rate of 4%%, the Sharpe Ratio of the S&P 500 is %.2f" %srsp)


Beta is 0.49
The average Annual Return of my strategy is 14.20%
The average Standard Deviation of my strategy is 14.54%
Therefore, with a risk-free rate of 4%, the Sharpe Ratio is 0.70
The average Annual Return of the S&P 500 is 8.93%
The average Standard Deviation of the S&P 500 is 14.88%
Therefore, with a risk-free rate of 4%, the Sharpe Ratio of the S&P 500 is 0.33
The Sharpe Ratio (with a zero risk-free rate) of my trading model is basically 1, which is a great result given that it is based on the S&P 500, whose Sharpe Ratio is commonly put at 0.39 (before subtracting the risk-free rate it is 0.6, based on my monthly data). If we apply a risk-free rate of 4% to both strategies over the last 35 years, the Sharpe Ratio is 0.70 for my model and 0.33 for the S&P 500: a 118.75% increase, obtained with a safe, simple and cost-effective strategy. These are cumulative Sharpe Ratios, computed since 1981; it wouldn’t make sense to compare only the last 3-5 years, since over that stretch both strategies have simply been long the S&P 500. The max drawdown of my strategy is about 30%, caused by the 1987 crash: still a very good result compared to the 53% of the S&P 500. Also, using the one-dollar-invested-in-1981 analogy again, with my trading model we never went below 1 dollar, while in the S&P 500 that dollar reached a minimum value of $0.83 in July 1982. The Beta is 0.49, which makes sense, since this model goes long when the S&P goes up and short when it goes down. Where do we stand now? This past February we crossed the -5 line again for two months, but now we are at -8 thanks to the latest stock market rally; still, the deficit is small and 2016 Q1 revenues are not sending good signals. The stock market fell a couple of times in the last 10 months and seems to be recovering, again. However, there is a combination of factors likely to push the market into a correction and the economy into a recession soon: the deficit is not big enough, the collapse of oil capital expenditures and the related credit stress, a strong dollar, a weaker welfare system, stock prices over-inflated partly by large buy-back programs, excessive inventories, weak new orders and shipments, weakening industrial production, weak global growth, and so on.
Even though basically all the time series I follow are going in the wrong direction, the stock market has risen in the last 2 months. Below I add a few more plots: a violin-plot comparison between the S&P monthly returns and my strategy’s monthly returns, first over the whole sample (about 35 years) and then during the 2008 recession (January 2007 to December 2009), plus, out of curiosity, the ARIMA forecast for 2016 with a 95% confidence interval.
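The drawdown figures quoted above (about 30% for the model vs. 53% for the S&P 500) can be checked with a short helper. This is an illustrative sketch, not code from the original notebook:

```python
def max_drawdown(cum_returns):
    """Largest peak-to-trough decline of a cumulative-return series,
    returned as a negative fraction (e.g. -0.30 for a 30% drawdown)."""
    peak = cum_returns[0]
    worst = 0.0
    for v in cum_returns:
        peak = max(peak, v)                # running peak so far
        worst = min(worst, v / peak - 1)   # drawdown relative to that peak
    return worst

# e.g. a series that rises to 2.0 then falls to 1.0 has a 50% drawdown
print(max_drawdown([1.0, 1.5, 2.0, 1.2, 1.0, 1.8]))   # -0.5
```

Applied to the `STRAT` and `cumret` lists from the cells above, this would reproduce the two figures.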

In [22]:
import seaborn as sns
rettt = np.array(RETSP)
fccc = np.array(RETFCI)
VIO=pd.DataFrame(rettt)
VIO["fccc"] = fccc
VIO.columns = ['S&P', 'FCI']
VIO.index=dataframe.index
sns.violinplot(data=VIO)  #over 35 years


Out[22]:
<matplotlib.axes.AxesSubplot at 0x7f41eed53be0>

In [23]:
VIOO = VIO.iloc[316:340,0:2]   #during the 2008 recession
sns.violinplot(data=VIOO)


Out[23]:
<matplotlib.axes.AxesSubplot at 0x7f41eed22d68>

In [259]:
dta=pd.Series(logSP)     #use "arr" instead of "logSP" for the S&P data (before taking logs)
dta.index = pd.DatetimeIndex(start='1980-10-01', end='2016-05-01', freq='MS')
res = sm.tsa.ARIMA(dta, (1, 1, 1)).fit()     #ARIMA(1,1,1) in this case
fig, ax = plt.subplots(figsize=(20,10))
ax = dta.ix['2007-01-01':].plot(ax=ax)
fig = res.plot_predict(399, 435, dynamic=False, ax=ax, plot_insample=False)    #plotting forecast from Jan 2014
plt.show()


CONCLUSION
This paper shows how useful the federal deficit and surplus are in forecasting the direction of the market over the next few months. The reason is that budget surpluses are appreciated by people and politicians, but they are lethal for the economy unless we are in a situation of full employment and/or high demand-side inflation. The private sector is pro-cyclical, and therefore the public sector needs to be counter-cyclical if we want to avoid long and unnecessary recessions. From this research, and based on this framework, there are endless interesting analyses that could be performed on other countries and economies, such as Europe and Asia. In addition, this model does an excellent job with only two factors, but it can obviously be expanded with other time series that help in forecasting recessions and expansions, such as industrial production, private debt, delinquency rates, heavy-truck sales, sales-to-inventories ratios, housing starts, etc. This paper does not aim to diminish the role of the dot-com bubble in 2001 or the housing bubble in 2008; rather, it shows how the federal deficit and surplus affect the private sector and can burst these bubbles when the deficit is not large enough to sustain the economy. What it may suggest is that increased spending or tax cuts (both leading to a bigger deficit) would have helped the economy and softened the recessions, which always cost jobs, houses, pension funds and lives. For example, if in 2008 households had had more disposable income, they would have been able to keep up more of their mortgage payments while the government was regulating and helping the banking/housing systems. In the end, most people speak of the legacy of a huge debt for the next generations, while it is really a legacy of incredible wealth that makes the USA the richest nation in the world. Public debt is a reflection of how rich we are, not how indebted we are. Public debt is what we own, not what we owe.
Public debt is the debt of the public sector, which has unlimited resources, not the debt of the public. To conclude, federal taxes do not finance public spending; to use an analogy, they should be considered an AC unit that we regulate based on the temperature of the market, and the indicator I created seems to be a very accurate thermometer.