# Introduction

I am very excited to finally share some of my research exploring Meucci’s (Meucci (2005)) portfolio optimization methods, and how the resulting portfolios compare to those built directly from historical data. For those unfamiliar with Attilio Meucci, he runs the Advanced Risk and Portfolio Management Bootcamp in New York City every summer. The bootcamp attracts academics and industry professionals, and over 6 intense days covers topics and techniques in risk management and portfolio management in depth. I was fortunate to attend during the summer of 2014. In this post, I explore portfolio optimization of the 9 Sector SPDR ETFs using Meucci’s methods, specifically the framework outlined in “The Checklist” (Meucci (2011)), and verify whether his methods add value over the use of historical data. Lastly, I use Markowitz’s (Markowitz (1952)) quadratic utility framework for portfolio optimization, comparing results from Minimum Variance and Mean Variance optimizations.

# Background

Meucci’s core methods differ from standard practice in that he prefers the use of Profit-and-Loss [P&L] over returns, and Monte Carlo simulation over the direct use of historical data. Details of his framework can be found in “The Checklist” (Meucci (2011)); a brief summary follows:

1. For each instrument, one needs to identify the “invariants”: the drivers of the changes in price that are independently and identically distributed [IID]. For equities, these are usually log-returns.
2. Once the invariants are identified (e.g. log-returns for equities), this yields a matrix of invariants. To reduce estimation error, dimension reduction is performed by computing the eigenvalues and eigenvectors of the invariant covariance matrix. A subset of eigenvalues/eigenvectors is then identified as the “signal”, and the remainder is classified as “noise”.
3. The “signal” eigenvalues and eigenvectors are converted to their Principal Components [PCs] (i.e. the product of the invariants with the signal eigenvectors). The distribution of each PC is estimated, as well as the dependence between them (the copula). The noise components are assumed to have mean 0, variance equal to the corresponding noise eigenvalue, and to be independent/uncorrelated with each other.
4. Next, a Monte Carlo simulation is conducted by drawing N trials, over P periods, from the meta-distribution estimated in the previous step. The idea is that the optimization should occur at some future investment horizon, so prices should be projected to where they will likely be.
5. Finally, these returns need to be converted to P&Ls: the N simulated invariant paths at the investment horizon are applied to the latest security prices to obtain a distribution of P&Ls.
6. Lastly, the portfolio optimization is conducted on the P&Ls, instead of percent returns. Thus, the solution yields holdings (i.e. number of shares of each security) instead of the traditional asset weights.
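
On simulated data, steps 1–3 above can be sketched in a few lines of base R. This is a simplified illustration, not the author's implementation: the prices are hypothetical, the 3-signal/6-noise split is assumed, and the distribution and copula fitting are omitted.

```r
# Sketch of steps 1-3 of "The Checklist" pipeline (hypothetical data;
# distribution fitting and copula estimation are omitted for brevity).
set.seed(42)

# Step 1: invariants -- weekly log-returns for 9 hypothetical ETFs
rets   <- matrix(rnorm(9 * 261, 0.001, 0.02), ncol = 9)
prices <- apply(1 + rets, 2, cumprod)
invariants <- apply(prices, 2, function(p) diff(log(p)))   # 260 x 9

# Step 2: eigen-decomposition of the invariant covariance matrix
eig     <- eigen(cov(invariants))   # eigenvalues in decreasing order
nSignal <- 3                        # assumed: 3 signal PCs, 6 noise PCs

# Step 3: signal principal components = invariants %*% signal eigenvectors
signalPC <- invariants %*% eig$vectors[, 1:nSignal]

# Noise PCs are treated as independent, mean 0, with variance equal to
# the corresponding noise eigenvalues
noiseSd <- sqrt(eig$values[(nSignal + 1):9])
```

From here, distributions would be fit to the columns of `signalPC` and their copula estimated, per steps 3–4 above.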

In order to simplify this process, a number of assumptions were made for this study.

# Assumptions

• Only equities are considered, whereas Meucci’s framework includes Fixed Income and Derivatives
• All signal-PCs are assumed to have a skew-student-t distribution
• All noise-PCs are assumed to be either normally distributed or Student-t distributed, with the choice based on the Anderson-Darling goodness-of-fit test
• Given that there are 9 Sector SPDR ETFs, it is assumed that there are always 3 signal PCs and 6 noise PCs
• The dependence between the 3 signal PCs is assumed to have a Vine Copula (Kramer and Schepsmeier (2011)) structure
• The 6 noise PCs are assumed to be uncorrelated with each other, as well as with the 3 signal PCs
• Transaction costs are ignored for rebalancing
• The optimization is long-only, fully-invested with a max-weight constraint on the 2 highest volatility ETFs
• For each period, the historical volatility is computed, and for the 2 highest volatility ETFs, their weights are constrained by the inverse-volatility weight calculated for that period
• All other “non-high-volatility” ETFs can be fully-invested (100% of wealth)
• For the Auto-Regressive Moving-Average [ARMA] trials, an ARMA model is fit if the Ljung-Box test indicates autocorrelation at the 2.5% significance level
• Weekly data is used to calculate the invariants and eigenvalues/eigenvectors, and ultimately to solve for holdings
• The rationale is that this eliminates some of the noise from the daily prices, yet provides enough data to draw conclusions
• Performance is evaluated using daily data
• Covariance matrices are estimated using “robust” methodologies
• robust::covRob() was used to estimate covariance matrices
• 3 Windows are selected to evaluate historical horizons
• 18 months is assumed to be a short-horizon window
• 36 months is assumed to be a medium-horizon window
• 60 months is assumed to be a long-horizon window
• Portfolios are rebalanced monthly, thus the Meucci investment horizon is 4-weeks ahead
• The invariants are projected 4 periods ahead, and then re-priced and optimized at this 4-period-ahead horizon
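
As a concrete illustration of the max-weight constraint described above, here is a base-R sketch on hypothetical weekly returns. The data and ETF labels are made up; only the construction of the constraint is shown: the two highest-volatility assets are capped at their inverse-volatility weight for the period, while all others may hold up to 100% of wealth.

```r
# Sketch of the max-weight constraint construction (hypothetical returns).
set.seed(1)
rets <- matrix(rnorm(52 * 9, 0, seq(0.01, 0.03, length.out = 9)),
               ncol = 9, byrow = TRUE)   # column j has constant volatility
colnames(rets) <- paste0("ETF", 1:9)

vols      <- apply(rets, 2, sd)
invVolWts <- (1 / vols) / sum(1 / vols)          # inverse-volatility weights
highVol   <- order(vols, decreasing = TRUE)[1:2] # 2 highest-volatility ETFs

maxWts          <- rep(1, 9)              # default: fully investable (100%)
maxWts[highVol] <- invVolWts[highVol]     # cap the 2 riskiest ETFs
```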

It should also be noted that I have made many assumptions above. Anyone who read my first post on the Low-Volatility Trading Strategy may wonder why I did not take the hypothesis-driven approach outlined by Peterson (2014). Work on this post started in Q1 2015, months after I attended Meucci’s bootcamp, and much of the work, assumptions, and results were obtained throughout 2015. I took Brian Peterson’s CFRM561 course in the spring/summer of 2016, after much of the work was completed. Converting all of these assumptions into testable hypotheses, while arguably possible, would significantly delay the release of this information; I believe I have valuable conclusions to share based on the work already completed, and thus decided to write this post. Ideally, though, all the assumptions I have made above would be tested using the methods outlined in Peterson (2014), to ensure that the distributional and dependence assumptions are sound, and to reduce the risk of overfitting.

# Result Replication

To replicate the results, two of my packages, propfolio and tsconv, need to be installed. The following lines of code install them, provided that the devtools package is already installed. Warning: propfolio has many dependencies, so installing it will install many additional packages; please review the package description prior to installing. Code to replicate the results can be found in the appendix.

```r
devtools::install_github("erolbicero/tsconv")
devtools::install_github("erolbicero/propfolio")
```


# ARMA Approach

Throughout 2015, I noticed during the analysis that some of the sectors exhibited autocorrelation (e.g. XLU at times appears to have significant autocorrelation). During the “quest for invariance” step (Meucci (2011)), there should be no autocorrelation left once the invariant has been identified. Observing autocorrelation implied that there was information in the return series that was not being captured. In an effort to obtain uncorrelated residuals (invariants), I tested for statistically significant autocorrelation using the Ljung-Box test at the 2.5% significance level; if the null hypothesis was rejected, I used forecast::auto.arima() to fit an ARMA model. This additional step is conducted only for the 18 month window. It is an extension of the Monte Carlo approach, and technically more accurate, as it ensures that the invariants are IID. I did not do this for the 36 month and 60 month windows, since I tested this previously in 2015 and found that forecast::auto.arima() did not improve performance; if anything, it negatively impacted performance.
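
A minimal base-R sketch of this step, on a simulated AR(1) series. Note the study uses forecast::auto.arima(); here stats::arima() with a fixed order stands in so the example is self-contained.

```r
# Base-R sketch of the ARMA step (the study uses forecast::auto.arima();
# stats::arima() with a fixed order stands in here). Simulated AR(1) data.
set.seed(7)
x <- arima.sim(model = list(ar = 0.4), n = 78)  # ~18 months of weekly returns

lb <- Box.test(x, lag = 10, type = "Ljung-Box")
if (lb$p.value < 0.025) {
  fit        <- arima(x, order = c(1, 0, 1))    # stand-in for auto.arima()
  invariants <- residuals(fit)                  # IID residuals = invariants
} else {
  invariants <- x                               # series already ~IID
}

# Residuals should show no remaining autocorrelation
Box.test(invariants, lag = 10, type = "Ljung-Box")$p.value
```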

# Optimization Criteria, Objectives and Choice of Benchmark

## Objective Function Criteria

The objective in this study is to maximize risk-adjusted returns, or put another way, to obtain the highest level of return for the lowest amount of risk. Thus, the Sharpe Ratio will predominantly be used to assess strategy performance, though it will be evaluated in the context of the annualized return and annualized volatility. For example, if strategy 1 has a Sharpe Ratio of 2.0 but an annualized return of only 3%, while strategy 2 has a Sharpe Ratio of 1.5 and an annualized return of 9%, then strategy 2 may be preferred over strategy 1.
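
For reference, a minimal sketch of how the annualized figures behind the Sharpe Ratio are computed from daily returns (hypothetical data, Rf = 0, 252 trading days assumed):

```r
# Annualized return, volatility, and Sharpe Ratio from hypothetical
# daily returns (risk-free rate assumed to be 0).
set.seed(2)
dailyRet <- rnorm(252 * 5, 0.0004, 0.012)   # 5 years of daily returns

annRet <- mean(dailyRet) * 252          # annualized (arithmetic) return
annVol <- sd(dailyRet) * sqrt(252)      # annualized volatility
sharpe <- annRet / annVol               # annualized Sharpe Ratio (Rf = 0)
```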

The objective can be illustrated with the chart below. Once the portfolio or strategy performance is obtained, and the annualized risk and return are calculated, they can be plotted on a risk-return scatter plot to visually compare strategies. The goal is to select the strategy closest to the top-left: the lowest risk and highest return. In practice, there are typically no points in this space, and points tend towards the right (i.e. higher risk). If two points are on the same horizontal line (same return), the strategy on the left is better (the “Good” arrow), since it has the same return with lower risk. If two points are on the same vertical line (same risk), the strategy on top is better (the “Better” arrow), since it has more return for the same level of risk. Any strategy to the top-left of another point is better still (the “Great” arrow), since it has both higher return and lower risk. Strategies that fall on a positive-slope line relative to one another must be evaluated case-by-case: if they have the same Sharpe Ratio they are roughly “equivalent”, but if one point is significantly higher and only slightly to the right, the effect is the same as in the “Better” case, where a significant amount of return is gained for a small increase in risk.

## Choice of Benchmark

The benchmark for this study is the equal-weight portfolio of all 9 Sector SPDRs, rebalanced monthly. It is tempting to select the S&P 500 as the benchmark, since it is easier to outperform; the equal-weight portfolio, however, not only outperforms the cap-weighted S&P 500 index, it makes no model assumptions, and is thus a fair benchmark to be evaluated against.

This outperformance of the equal-weight portfolio over the cap-weighted portfolio is clear in the charts and table below, where the equal-weight benchmark exhibits higher annualized returns, lower annualized risk, and better tail characteristics (lower VaR and ETL). The optimizations in this study may outperform the cap-weighted index, but if they cannot outperform the equal-weight portfolio, then they are not useful, further justifying the use of the equal-weight portfolio as the benchmark.
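
Because the benchmark is rebalanced back to equal weight every period, its return in each period is simply the cross-sectional mean of the asset returns. A minimal sketch on hypothetical monthly data:

```r
# Minimal sketch of the benchmark: an equal-weight portfolio of 9 assets,
# rebalanced monthly (hypothetical monthly returns).
set.seed(3)
monthlyRet <- matrix(rnorm(120 * 9, 0.006, 0.04), ncol = 9)  # 10 years

# Rebalancing to equal weight each month makes the portfolio return
# the simple cross-sectional mean of the asset returns
benchRet <- rowMeans(monthlyRet)
```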

#### (June 2000 – December 2016)

|  | SPY | Benchmark |
| --- | ---: | ---: |
| Annualized Return | 4.65% | 6.94% |
| Annualized Standard Deviation | 19.62% | 18.58% |
| Semi-Deviation | 14.04% | 13.44% |
| Annualized Sharpe Ratio (Rf=0%) | 0.24 | 0.37 |
| Adjusted Sharpe Ratio (Rf=0%) | 0.32 | 0.42 |
| Average Drawdown | 2.3% | 2.21% |
| Average Recovery | 20.62 | 13.17 |
| VaR 95 | -1.94% | -1.79% |
| VaR 99 | -3.46% | -3.26% |
| ETL 95 | -2.93% | -2.81% |
| ETL 99 | -4.85% | -4.66% |
| Worst Loss | -9.84% | -9.58% |
| Skewness | 0.19 | -0.09 |
| Excess Kurtosis | 11.62 | 8.59 |
| Information Ratio | -0.49 | NaN |

# Overall Structure and Process

First, the Minimum Variance Optimization will be evaluated for each time window, where Meucci’s Monte Carlo approach will be compared to the historical method. The stronger strategy, i.e. Historical vs. Monte Carlo, will be identified for each window length.

Next, the same comparison (Historical vs. Monte Carlo) will be evaluated for the Optimal Sharpe criterion, within each window length.

Lastly, a summary of outperforming strategies relative to the benchmark will be identified, and then it can be assessed whether or not the Monte Carlo [MC] approach adds value over the benchmark and the historical method.

# Historical vs. Monte Carlo Performance for Minimum Variance Optimization

## Performance Comparison

### 60 Month Window

In this case, the performance of the MC and historical approaches is virtually indistinguishable. Both strategies exhibit comparable risk and return characteristics, and drawdowns are lower for both vs. the benchmark. On the risk-return scatter, both strategies fall in the “Good” scenario, exhibiting lower risk vs. the benchmark. This is confirmed in the t-tests and F-tests below: the MC and historical methods are statistically indistinguishable, whereas the historical method has statistically significantly lower risk vs. the benchmark.

I do not interpret this to conclude that the MC approach adds no value here. One possible interpretation is that the MC approach confirms that a “global minimum variance” portfolio has been identified, insofar as both strategies select the same sector ETFs with comparable weights across time. However, there is not enough information here to conclude whether or not this is true, or if the MC approach adds no value over the historical method.

Given the choice between the two methods, I would likely use the historical approach, since it is simpler and indistinguishable from the MC approach.
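
The paired t-tests (mean difference) and F-tests (ratio of variances) reported below each table can be reproduced with base R. A minimal sketch on hypothetical daily return series:

```r
# Sketch of the significance tests used throughout: a paired t-test on
# mean daily returns and an F-test on return variances (hypothetical data).
set.seed(4)
stratRet <- rnorm(1000, 0.0003, 0.008)   # strategy daily returns
benchRet <- rnorm(1000, 0.0003, 0.012)   # benchmark daily returns

tTest <- t.test(stratRet, benchRet, paired = TRUE)  # mean difference
fTest <- var.test(stratRet, benchRet)               # ratio of variances

round(c(t = tTest$p.value, F = fTest$p.value), 2)
```

Here the variances differ by construction, so the F-test p-value is near zero while the t-test p-value is large, mirroring the pattern seen in the results below.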

### (60 Month Window)

#### (December 2003 – December 2016)

|  | Minimum Variance 60mo Hist | Minimum Variance 60mo MC | Benchmark |
| --- | ---: | ---: | ---: |
| Annualized Return | 9.18% | 9.15% | 8.93% |
| Annualized Standard Deviation | 13.23% | 13.2% | 18.34% |
| Semi-Deviation | 9.6% | 9.6% | 13.37% |
| Annualized Sharpe Ratio (Rf=0%) | 0.69 | 0.69 | 0.49 |
| Adjusted Sharpe Ratio (Rf=0%) | 0.57 | 0.57 | 0.47 |
| Average Drawdown | 1.47% | 1.44% | 1.91% |
| Average Recovery | 9.5 | 9.38 | 9.47 |
| VaR 95 | -1.23% | -1.25% | -1.71% |
| VaR 99 | -2.39% | -2.4% | -3.55% |
| ETL 95 | -1.95% | -1.96% | -2.84% |
| ETL 99 | -3.32% | -3.3% | -4.91% |
| Worst Loss | -6.43% | -6.61% | -9.58% |
| Skewness | -0.11 | -0.12 | -0.16 |
| Excess Kurtosis | 9.01 | 9.19 | 10.92 |
| Information Ratio | 0.03 | 0.02 | NaN |

```
## [1] "Paired t-test (mean difference) for 'Minimum Variance 60mo Hist' vs. 'Minimum Variance 60mo MC'"
## [1] "p-value = 0.94"
## [1] "F-test (ratio of variances) for 'Minimum Variance 60mo Hist' vs. 'Minimum Variance 60mo MC'"
## [1] "p-value = 0.92"
## [1] "Paired t-test (mean difference) for 'Minimum Variance 60mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0.82"
## [1] "F-test (ratio of variances) for 'Minimum Variance 60mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0"
```

### 36 Month Window

Here we have a very similar result to the 60 month window, except the mean return appears worse than the benchmark’s, especially in the risk-return scatter plot below. However, the Sharpe Ratios of both the MC and historical approaches are comparable to the benchmark’s, and the risk of both strategies is statistically significantly lower (confirmed by the F-test below), so if the strategies were levered up to the volatility of the benchmark, comparable returns would be generated. Furthermore, the mean difference between the strategies’ return distributions and the benchmark’s is not statistically significant (confirmed by the t-test below). My interpretation is therefore similar to the 60 month window: the performance of MC and historical vs. the benchmark is comparable, with a statistically significant reduction in portfolio risk.

Again, given the simplicity of the historical approach, I would select the historical method over the MC approach.
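
The levering argument can be made concrete with the 36-month historical figures from the table below (borrowing costs ignored). Leverage scales return and volatility by the same factor, so the Sharpe Ratio is unchanged while the levered return becomes comparable to the benchmark’s:

```r
# Sketch of the levering argument, using the 36-month historical figures
# from the table in this section (borrowing costs ignored).
stratRet <- 0.067                 # historical minimum variance return
stratVol <- 0.1397                # historical minimum variance volatility
benchVol <- 0.1865                # benchmark volatility

leverage   <- benchVol / stratVol  # borrow to match benchmark risk
leveredRet <- stratRet * leverage  # return scales with leverage
leveredVol <- stratVol * leverage  # equals benchVol by construction
```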

### (36 Month Window)

#### (December 2001 – December 2016)

|  | Minimum Variance 36mo Hist | Minimum Variance 36mo MC | Benchmark |
| --- | ---: | ---: | ---: |
| Annualized Return | 6.7% | 6.48% | 7.94% |
| Annualized Standard Deviation | 13.97% | 14.07% | 18.65% |
| Semi-Deviation | 10.1% | 10.2% | 13.49% |
| Annualized Sharpe Ratio (Rf=0%) | 0.48 | 0.46 | 0.43 |
| Adjusted Sharpe Ratio (Rf=0%) | 0.48 | 0.46 | 0.45 |
| Average Drawdown | 1.84% | 1.86% | 2.11% |
| Average Recovery | 15 | 15.64 | 11.18 |
| VaR 95 | -1.38% | -1.34% | -1.8% |
| VaR 99 | -2.53% | -2.5% | -3.31% |
| ETL 95 | -2.07% | -2.09% | -2.84% |
| ETL 99 | -3.34% | -3.43% | -4.75% |
| Worst Loss | -6.69% | -7.43% | -9.58% |
| Skewness | -0.04 | -0.08 | -0.08 |
| Excess Kurtosis | 7.83 | 9.37 | 9.16 |
| Information Ratio | -0.14 | -0.16 | NaN |

```
## [1] "Paired t-test (mean difference) for 'Minimum Variance 36mo Hist' vs. 'Minimum Variance 36mo MC'"
## [1] "p-value = 0.68"
## [1] "F-test (ratio of variances) for 'Minimum Variance 36mo Hist' vs. 'Minimum Variance 36mo MC'"
## [1] "p-value = 0.67"
## [1] "Paired t-test (mean difference) for 'Minimum Variance 36mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0.41"
## [1] "F-test (ratio of variances) for 'Minimum Variance 36mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0"
```

### 18 Month Window

In this instance, the results appear indistinguishable from the benchmark, for both the historical and MC approaches. However, upon further inspection of the risk characteristics, both approaches deliver the same return for statistically significantly less risk vs. the benchmark (confirmed by the F-tests below). The risk-return scatter plot shows the “Good” scenario, where all 3 strategies (historical, MC and ARMA) exhibit statistically significantly lower risk vs. the benchmark.

Again, since the historical approach is comparable to both the MC and ARMA approaches, given a choice I would select the historical approach due to its simplicity.

### (18 Month Window)

#### (June 2000 – December 2016)

|  | Minimum Variance 18mo Hist | Minimum Variance 18mo MC | Minimum Variance 18mo ARMA | Benchmark |
| --- | ---: | ---: | ---: | ---: |
| Annualized Return | 7.11% | 6.86% | 7.25% | 6.94% |
| Annualized Standard Deviation | 13.94% | 13.87% | 13.99% | 18.58% |
| Semi-Deviation | 10.1% | 10.04% | 10.09% | 13.44% |
| Annualized Sharpe Ratio (Rf=0%) | 0.51 | 0.49 | 0.52 | 0.37 |
| Adjusted Sharpe Ratio (Rf=0%) | 0.51 | 0.5 | 0.52 | 0.42 |
| Average Drawdown | 1.82% | 1.92% | 1.8% | 2.21% |
| Average Recovery | 13.99 | 15.87 | 13.46 | 13.17 |
| VaR 95 | -1.37% | -1.38% | -1.35% | -1.79% |
| VaR 99 | -2.53% | -2.48% | -2.5% | -3.26% |
| ETL 95 | -2.05% | -2.04% | -2.05% | -2.81% |
| ETL 99 | -3.3% | -3.28% | -3.33% | -4.66% |
| Worst Loss | -6.69% | -6.42% | -6.59% | -9.58% |
| Skewness | -0.13 | -0.12 | -0.1 | -0.09 |
| Excess Kurtosis | 5.84 | 5.87 | 5.92 | 8.59 |
| Information Ratio | 0.02 | -0.01 | 0.03 | NaN |

```
## [1] "Paired t-test (mean difference) for 'Minimum Variance 18mo Hist' vs. 'Minimum Variance 18mo ARMA'"
## [1] "p-value = 0.85"
## [1] "F-test (ratio of variances) for 'Minimum Variance 18mo Hist' vs. 'Minimum Variance 18mo ARMA'"
## [1] "p-value = 0.8"
## [1] "Paired t-test (mean difference) for 'Minimum Variance 18mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0.79"
## [1] "F-test (ratio of variances) for 'Minimum Variance 18mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0"
## [1] "Paired t-test (mean difference) for 'Minimum Variance 18mo ARMA' vs. 'Benchmark'"
## [1] "p-value = 0.86"
## [1] "F-test (ratio of variances) for 'Minimum Variance 18mo ARMA' vs. 'Benchmark'"
## [1] "p-value = 0"
```

## Section Results

Across all window lengths, both the historical and MC approaches exhibited statistically significantly lower risk vs. the benchmark; between the two, however, the strategies were virtually indistinguishable.

My interpretation is that, since minimum variance optimization effectively ignores any return inputs, the MC (and ARMA) approaches are either confirming that the historical method finds the global minimum variance portfolio, or adding no value. Return information is needed to verify whether including it as an input to the optimization distinguishes the performance of the MC and historical approaches. This is investigated in the next section.
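
The point that minimum variance ignores return inputs can be seen from the unconstrained closed-form solution w = Σ⁻¹1 / (1ᵀΣ⁻¹1), which depends only on the covariance matrix. Note the study’s actual optimization is long-only and constrained; this sketch on hypothetical data is only illustrative.

```r
# Unconstrained minimum variance weights depend only on the covariance
# matrix, not on expected returns (hypothetical 9-asset data).
set.seed(5)
rets  <- matrix(rnorm(260 * 9, 0, 0.02), ncol = 9)
Sigma <- cov(rets)

ones <- rep(1, 9)
w    <- solve(Sigma, ones) / sum(solve(Sigma, ones))  # min-variance weights
```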

# Historical vs. Monte Carlo Performance for Optimal Sharpe Ratio Optimization

## Performance Comparison

### 60 Month Window

In this particular case, the MC approach performs poorly compared to both the benchmark and the historical method. The historical method maintains the same level of return with statistically significantly lower risk, whereas the MC approach performs very poorly; the p-value of the t-test comparing the two return distributions is fairly low at 0.13, suggesting there is a chance the returns are statistically significantly different.

It appears as though the MC approach at this window length is unable to capture the correct return characteristics, which yields the extremely poor performance vs. both the benchmark and historical method.

Thus, the historical approach is still preferred at the 60 month window length.

### (60 Month Window)

#### (December 2003 – December 2016)

|  | Optimal Sharpe 60mo Hist | Optimal Sharpe 60mo MC | Benchmark |
| --- | ---: | ---: | ---: |
| Annualized Return | 8.92% | 5.75% | 8.93% |
| Annualized Standard Deviation | 16.09% | 16.45% | 18.34% |
| Semi-Deviation | 11.65% | 11.77% | 13.37% |
| Annualized Sharpe Ratio (Rf=0%) | 0.55 | 0.35 | 0.49 |
| Adjusted Sharpe Ratio (Rf=0%) | 0.51 | 0.38 | 0.47 |
| Average Drawdown | 1.91% | 2.08% | 1.91% |
| Average Recovery | 13.4 | 19.55 | 9.47 |
| VaR 95 | -1.55% | -1.5% | -1.71% |
| VaR 99 | -2.95% | -2.85% | -3.55% |
| ETL 95 | -2.41% | -2.42% | -2.84% |
| ETL 99 | -3.96% | -4.24% | -4.91% |
| Worst Loss | -7.79% | -9.45% | -9.58% |
| Skewness | 0.14 | 0.39 | -0.16 |
| Excess Kurtosis | 11.78 | 17.46 | 10.92 |
| Information Ratio | 0 | -0.31 | NaN |

```
## [1] "Paired t-test (mean difference) for 'Optimal Sharpe 60mo Hist' vs. 'Optimal Sharpe 60mo MC'"
## [1] "p-value = 0.13"
## [1] "F-test (ratio of variances) for 'Optimal Sharpe 60mo Hist' vs. 'Optimal Sharpe 60mo MC'"
## [1] "p-value = 0.2"
## [1] "Paired t-test (mean difference) for 'Optimal Sharpe 60mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0.87"
## [1] "F-test (ratio of variances) for 'Optimal Sharpe 60mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0"
```

### 36 Month Window

In this scenario, the results look similar to the minimum variance optimization case, insofar as the returns are lower (though not statistically significantly different) and the risk is statistically significantly lower for both the MC and historical approaches vs. the benchmark. In the minimum variance case, however, the Sharpe Ratios of both strategies were comparable to the benchmark’s, whereas here they are much lower (0.45 vs. 0.38 and 0.37 for the benchmark, historical, and MC, respectively). Thus, theoretically levering the strategies to the volatility of the benchmark would yield portfolios with comparable returns but higher risk.

Thus, both the MC and historical approaches are poor performers vs. the benchmark in this scenario, and the benchmark should be used over any approach.

What is interesting in this case is that the MC approach exhibits statistically significantly lower risk vs. the historical approach. This implies that the MC approach is capturing the risk and return characteristics better than the historical approach at this window length, a positive characteristic that could be used in future studies to improve performance over the benchmark.

### (36 Month Window)

#### (December 2001 – December 2016)

|  | Optimal Sharpe 36mo Hist | Optimal Sharpe 36mo MC | Benchmark |
| --- | ---: | ---: | ---: |
| Annualized Return | 5.33% | 5.13% | 7.94% |
| Annualized Standard Deviation | 16.35% | 15.81% | 18.65% |
| Semi-Deviation | 11.79% | 11.24% | 13.49% |
| Annualized Sharpe Ratio (Rf=0%) | 0.33 | 0.32 | 0.43 |
| Adjusted Sharpe Ratio (Rf=0%) | 0.38 | 0.37 | 0.45 |
| Average Drawdown | 2.41% | 2.21% | 2.11% |
| Average Recovery | 17.17 | 18.28 | 11.18 |
| VaR 95 | -1.62% | -1.48% | -1.8% |
| VaR 99 | -2.97% | -2.7% | -3.31% |
| ETL 95 | -2.43% | -2.3% | -2.84% |
| ETL 99 | -3.87% | -3.76% | -4.75% |
| Worst Loss | -7.14% | -6.94% | -9.58% |
| Skewness | 0.08 | 0.42 | -0.08 |
| Excess Kurtosis | 7.73 | 13.06 | 9.16 |
| Information Ratio | -0.29 | -0.29 | NaN |

```
## [1] "Paired t-test (mean difference) for 'Optimal Sharpe 36mo Hist' vs. 'Optimal Sharpe 36mo MC'"
## [1] "p-value = 0.89"
## [1] "F-test (ratio of variances) for 'Optimal Sharpe 36mo Hist' vs. 'Optimal Sharpe 36mo MC'"
## [1] "p-value = 0.04"
## [1] "Paired t-test (mean difference) for 'Optimal Sharpe 36mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0.21"
## [1] "F-test (ratio of variances) for 'Optimal Sharpe 36mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0"
```

### 18 Month Window

Here, the MC approach begins to exhibit strong performance over the historical approach, especially when the ARMA method is used. This is clear from the cumulative return chart, where ARMA and MC are the long-run strong performers, as well as the drawdown chart, where the benchmark appears to have the worst drawdowns, and the historical approach appears to have a sluggish post-financial-crisis recovery vs. the MC and ARMA strategies. Upon examining the risk-return scatter, the “Great” scenario can be observed, where the ARMA strategy is to the top-left of the benchmark.

Here, the historical, MC and ARMA approaches all have statistically significantly lower risk vs. the benchmark, while all of the return distributions are statistically indistinguishable. What is interesting about the ARMA strategy is that its average drawdown recovery is shorter than that of the historical approach. Lastly, the worst loss (i.e. VaR 100) is smallest for the MC and ARMA approaches.

My own concern with the ARMA approach is that I did not test the quality of the signal process, nor did I conduct any tests for overfitting, so I am cautious with these results. However, it is the more correct approach for Meucci’s methods, insofar as it ensures that the invariants are IID.

Therefore, it appears that at lower window lengths, the power of Meucci’s MC approach is quantifiable, even if only the MC approach is used (i.e. not the ARMA approach), insofar as it yields lower risk with a higher Sharpe Ratio vs. both the benchmark and historical method. This is supported when evaluating the 36 month window, where the MC approach exhibited better risk characteristics vs. the historical approach.

### (18 Month Window)

#### (June 2000 – December 2016)

|  | Optimal Sharpe 18mo Hist | Optimal Sharpe 18mo MC | Optimal Sharpe 18mo ARMA | Benchmark |
| --- | ---: | ---: | ---: | ---: |
| Annualized Return | 7.34% | 7.4% | 8.81% | 6.94% |
| Annualized Standard Deviation | 16.18% | 15.13% | 15.56% | 18.58% |
| Semi-Deviation | 11.76% | 11.07% | 11.34% | 13.44% |
| Annualized Sharpe Ratio (Rf=0%) | 0.45 | 0.49 | 0.57 | 0.37 |
| Adjusted Sharpe Ratio (Rf=0%) | 0.46 | 0.51 | 0.54 | 0.42 |
| Average Drawdown | 2.22% | 2.11% | 2.05% | 2.21% |
| Average Recovery | 17.59 | 15.26 | 13.24 | 13.17 |
| VaR 95 | -1.57% | -1.57% | -1.54% | -1.79% |
| VaR 99 | -2.88% | -2.72% | -2.83% | -3.26% |
| ETL 95 | -2.4% | -2.26% | -2.31% | -2.81% |
| ETL 99 | -3.98% | -3.47% | -3.76% | -4.66% |
| Worst Loss | -9.37% | -6.04% | -8.53% | -9.58% |
| Skewness | -0.08 | -0.25 | -0.28 | -0.09 |
| Excess Kurtosis | 9.67 | 3.64 | 5.97 | 8.59 |
| Information Ratio | 0.04 | 0.04 | 0.16 | NaN |

```
## [1] "Paired t-test (mean difference) for 'Optimal Sharpe 18mo Hist' vs. 'Optimal Sharpe 18mo ARMA'"
## [1] "p-value = 0.59"
## [1] "F-test (ratio of variances) for 'Optimal Sharpe 18mo Hist' vs. 'Optimal Sharpe 18mo ARMA'"
## [1] "p-value = 0.01"
## [1] "Paired t-test (mean difference) for 'Optimal Sharpe 18mo Hist' vs. 'Optimal Sharpe 18mo MC'"
## [1] "p-value = 0.96"
## [1] "F-test (ratio of variances) for 'Optimal Sharpe 18mo Hist' vs. 'Optimal Sharpe 18mo MC'"
## [1] "p-value = 0"
## [1] "Paired t-test (mean difference) for 'Optimal Sharpe 18mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0.98"
## [1] "F-test (ratio of variances) for 'Optimal Sharpe 18mo Hist' vs. 'Benchmark'"
## [1] "p-value = 0"
## [1] "Paired t-test (mean difference) for 'Optimal Sharpe 18mo ARMA' vs. 'Benchmark'"
## [1] "p-value = 0.67"
## [1] "F-test (ratio of variances) for 'Optimal Sharpe 18mo ARMA' vs. 'Benchmark'"
## [1] "p-value = 0"
```

## Section Results

It appears as though, at shorter window lengths, the power of the MC approach begins to surface, insofar as the risk of the strategies is statistically significantly lower. To assess whether the MC approach is truly optimal, a comparison of the strongest strategies across window lengths and optimization methods is required. This is the goal of the next section.

# Overall Comparison and Identifying the Optimal Strategy

Upon comparing the results above, the clear performers are:

• Historical Minimum Variance (60 Months)
• Monte Carlo Optimal Sharpe (18 Months)
• ARMA Optimal Sharpe (18 Months)

This is summarized in the table below.

|  | 18 Months | 36 Months | 60 Months |
| --- | :---: | :---: | :---: |
| Minimum Variance | H | H | H |
| Optimal Sharpe | A | M | H |
| Overall Performer | OS | MV | MV |

Where:

• H: Historical
• M: Monte Carlo
• A: ARMA
• OS: Optimal Sharpe
• MV: Minimum Variance

The 36 month window is excluded since neither optimization method could outperform the benchmark.

### 18 Month vs. 60 Month Windows

In order to identify the true top performer, the results for the 18 month window are re-run over the same evaluation period as the 60 month window. Upon inspecting the cumulative returns, it is clear that both the 18 month MC and ARMA approaches exhibit strong returns. The t-test below provides weak evidence (p = 0.12) that the 18 month ARMA Optimal Sharpe returns differ from those of the 60 month Historical Minimum Variance strategy. Both 18 month strategies (ARMA and MC) exhibit statistically significantly different risk vs. the 60 month Minimum Variance strategy (slightly higher volatility, per the table below, but with higher Sharpe Ratios). Thus, based on these results, and the constraints within this study, Meucci’s methods appear to add significant value over historical data when used with relatively shorter time windows (i.e. < 60 months).

### (18 Month vs. 60 Month Windows)

#### (December 2003 – December 2016)

|  | Minimum Variance 60mo Hist | Optimal Sharpe 18mo MC | Optimal Sharpe 18mo ARMA | Benchmark |
| --- | ---: | ---: | ---: | ---: |
| Annualized Return | 9.18% | 10.52% | 12.13% | 8.93% |
| Annualized Standard Deviation | 13.23% | 14.46% | 14.53% | 18.34% |
| Semi-Deviation | 9.6% | 10.74% | 10.76% | 13.37% |
| Annualized Sharpe Ratio (Rf=0%) | 0.69 | 0.73 | 0.83 | 0.49 |
| Adjusted Sharpe Ratio (Rf=0%) | 0.57 | 0.64 | 0.67 | 0.47 |
| Average Drawdown | 1.47% | 1.82% | 1.73% | 1.91% |
| Average Recovery | 9.5 | 10.64 | 9.2 | 9.47 |
| VaR 95 | -1.23% | -1.51% | -1.48% | -1.71% |
| VaR 99 | -2.39% | -2.7% | -2.7% | -3.55% |
| ETL 95 | -1.95% | -2.22% | -2.21% | -2.84% |
| ETL 99 | -3.32% | -3.5% | -3.6% | -4.91% |
| Worst Loss | -6.43% | -6.04% | -6.84% | -9.58% |
| Skewness | -0.11 | -0.42 | -0.4 | -0.16 |
| Excess Kurtosis | 9.01 | 4.45 | 5.29 | 10.92 |
| Information Ratio | 0.03 | 0.16 | 0.31 | NaN |

```
## [1] "Paired t-test (mean difference) for 'Minimum Variance 60mo Hist' vs. 'Optimal Sharpe 18mo ARMA'"
## [1] "p-value = 0.12"
## [1] "F-test (ratio of variances) for 'Minimum Variance 60mo Hist' vs. 'Optimal Sharpe 18mo ARMA'"
## [1] "p-value = 0"
## [1] "Paired t-test (mean difference) for 'Minimum Variance 60mo Hist' vs. 'Optimal Sharpe 18mo MC'"
## [1] "p-value = 0.46"
## [1] "F-test (ratio of variances) for 'Minimum Variance 60mo Hist' vs. 'Optimal Sharpe 18mo MC'"
## [1] "p-value = 0"
```

# Final Thoughts and Overall Conclusion

There is one additional consideration that needs attention. I assumed no transaction costs, and given the shorter window of the 18 month strategies, as well as the optimization method (Optimal Sharpe vs. Minimum Variance), when the weights are plotted for each strategy it is clear that the 18 month ARMA and MC strategies are not really portfolio management strategies; with their aggressively changing portfolio weights, they exhibit the characteristics of tactical trading strategies.
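
One hypothetical way to quantify “aggressively changing weights” is turnover: the sum of absolute weight changes at each rebalance, which is also what drives transaction costs. A sketch comparing a stable weight path with a shifting, tactical one:

```r
# Sketch of measuring weight instability via turnover (hypothetical
# 12-month weight paths for 9 assets).
set.seed(6)
stableW   <- matrix(rep(1/9, 9 * 12), ncol = 9)                # fixed weights
tacticalW <- t(replicate(12, { x <- runif(9); x / sum(x) }))   # shifting weights

# Total absolute weight change across all rebalances
turnover <- function(W) sum(abs(diff(W)))

c(stable = turnover(stableW), tactical = turnover(tacticalW))
```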

Thus, if the objective is for stable portfolio weights, then a long horizon window and minimum variance optimization appears to be a better choice under this particular study, due to the relatively stable weights. However, if there is appetite for complexity, and individual transaction costs are low, then there is potential to add more value using Attilio Meucci’s methods.

It may be possible to stabilize the weights under the 18 month scenarios, however it is out of scope for this study, and may be something valuable to investigate at a later date.

# Opportunities for Future Research

Based on the work in this study, I think there is a lot of opportunity to further explore the power of Meucci’s methods, particularly at shorter horizon windows:

• Start hypothesis testing from first-principles to reduce the number of assumptions (e.g. signal-PC distribution)
• Use Cross Validation to evaluate distribution and copula fitting
• Verify predictive ability of the ARMA process
• Add concentration constraints to Optimal Sharpe optimization to stabilize asset weights, especially in the 18-month window
• Identify different regimes (e.g. High vs. Low volatility) and apply different models/hypotheses/assumptions to each regime
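For the last bullet, a simple starting point is to classify periods by rolling volatility; a rough sketch, where the 26-week window and the median cutoff are arbitrary illustrative choices of mine:

```r
library(zoo)

set.seed(3)
ret <- rnorm(1000, sd = 0.02)  # hypothetical weekly returns

# 26-week rolling volatility, annualized for weekly data
rollVol <- rollapply(ret, width = 26, FUN = sd, fill = NA, align = "right") * sqrt(52)

# label each week relative to the median volatility level
regime <- ifelse(rollVol > median(rollVol, na.rm = TRUE), "High", "Low")

# separate models/hypotheses could then be fit to ret[which(regime == "High")], etc.
table(regime)
```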

# Appendix: Code to Replicate Results

Historical strategy code:

library(propfolio)
library(robust)
library(PerformanceAnalytics)
library(foreach)
library(doParallel)

nYears <- c(5,3,1.5) #set this to the 1st index for 60 months, 2nd index for 36 months or 3rd for 18 months
nYears <- nYears[1]
freq <- "weekly"
freqScale <- switch(freq, daily = 252, weekly = 52, monthly = 12, quarterly = 4, yearly = 1, stop("undefined"))

#########
#Start from scratch
symb <- c("XLE"
,"XLU"
,"XLK"
,"XLB"
,"XLP"
,"XLY"
,"XLI"
,"XLV"
,"XLF"
)

optTestPriceXtsMatrixALLweekly <- importEquityPricesToXtsMatrix(symbolVector = symb
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = freq
, keep = FALSE
)

optTestPriceXtsMatrixALLdaily <- importEquityPricesToXtsMatrix(symbolVector = symb
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = "daily"
, keep = FALSE
)

SPY <- importEquityPricesToXtsMatrix(symbolVector = "SPY"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = freq
, keep = FALSE
)

SPYd <- importEquityPricesToXtsMatrix(symbolVector = "SPY"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = "daily"
, keep = FALSE
)

RSP <- importEquityPricesToXtsMatrix(symbolVector = "RSP"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = freq
, keep = FALSE
)

RSPd <- importEquityPricesToXtsMatrix(symbolVector = "RSP"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = "daily"
, keep = FALSE
)

cl <- makeForkCluster(detectCores())

registerDoParallel(cl)

loopStart <- Sys.time()

rollingOptimizationResults<-

foreach(dateIndex = toRowList(createRollingTimeIndices(index(optTestPriceXtsMatrixALLweekly), estimationLength = 52*nYears, timeUnit = "weeks", investmentHorizonLength = 4))
, .errorhandling = "pass"
) %dopar%
{
#allocate date indices
OOSIndex <- as.vector(dateIndex[2])
pseudoISPeriodPlot <- as.vector(dateIndex[3])
dateIndex <- as.vector(dateIndex[1])

#filter out time series object
optTestPriceXtsMatrix <- optTestPriceXtsMatrixALLweekly[dateIndex]

#extract "horizon" prices
optTestLatestPrices <- as.vector(last(optTestPriceXtsMatrix))

#compute linear returns
optTestLRetXtsMatrix <- retMatrixL(optTestPriceXtsMatrix)

#convert historical prices to change in prices (using horizon prices)
optTestLRetXtsMatrixPnL <- coredata(do.call(merge,lapply(1:length(optTestLatestPrices)
,function(x){optTestLRetXtsMatrix[,x] * optTestLatestPrices[x]}
)))

#compute moments
covRobResults <- robust::covRob(data = optTestLRetXtsMatrixPnL)
optTestRobMean <- as.vector(covRobResults$center)
optTestRobCovPnL <- covRobResults$cov

#compute historical inverse volatility weights (for constraints)
sdWeightVec <- 1/apply(optTestLRetXtsMatrix,2,sd)/sum(1/apply(optTestLRetXtsMatrix,2,sd))

#extract 7 lowest volatility names (highest 2 will be constrained by inverse volatility weight)
constraintNames <- names(which(rank(sdWeightVec)>=7))

sdWeightVecConstraint <- sdWeightVec[constraintNames]

#compute efficient frontier
LongOnly <- TRUE
optTestOptimizedPortfolio <-
optimizeMeanVariancePortfolio( meanVector = optTestRobMean
, covarianceMatrix = optTestRobCovPnL
, horizonPriceVector = optTestLatestPrices
, horizonPnLMatrix = optTestLRetXtsMatrixPnL
, wealthBalance = 10000
, maxWeightConstraint = list(    names = constraintNames #safe weights
, constraintWeight = sdWeightVec #all weights
, maxWeight = ifelse(LongOnly,1,100)
)
, colNames = names(optTestPriceXtsMatrix)
, longOnly = LongOnly
)

return(
list(
estimationPeriod = dateIndex
, OOSPeriod = OOSIndex
, pseudoISPeriodPlot = pseudoISPeriodPlot
, optimalResult = optTestOptimizedPortfolio$optimalResult
, minVarResult = optTestOptimizedPortfolio$minVarResult
, expectedReturn = optTestOptimizedPortfolio$expectedReturn
, standardDeviation = optTestOptimizedPortfolio$standardDeviation
)

)

}

stopCluster(cl=cl)
closeAllConnections()
gc()
cl <- NULL

loopEnd <- Sys.time()
loopEnd - loopStart
# Time difference of 52.48464 mins on 8 cores

#create final result list object
rollingOptimizationResults <- list( portfolioNames = names(optTestPriceXtsMatrixALLweekly)
, portfolioFreq = "weeks"
, portfolioStats = rollingOptimizationResults
)


Monte Carlo Simulation strategy code:

library(propfolio)
library(robust)
library(PerformanceAnalytics)
library(foreach)
library(doParallel)

nYears <- c(5,3,1.5) #set this to the 1st index for 60 months, 2nd index for 36 months or 3rd for 18 months
nYears <- nYears[1]
freq <- "weekly"
freqScale <- switch(freq, daily = 252, weekly = 52, monthly = 12, quarterly = 4, yearly = 1, stop("undefined"))

#########
#Start from scratch
symb <- c("XLE"
,"XLU"
,"XLK"
,"XLB"
,"XLP"
,"XLY"
,"XLI"
,"XLV"
,"XLF"
)

optTestPriceXtsMatrixALLweekly <- importEquityPricesToXtsMatrix(symbolVector = symb
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = freq
, keep = FALSE
)

optTestPriceXtsMatrixALLdaily <- importEquityPricesToXtsMatrix(symbolVector = symb
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = "daily"
, keep = FALSE
)

SPY <- importEquityPricesToXtsMatrix(symbolVector = "SPY"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = freq
, keep = FALSE
)

SPYd <- importEquityPricesToXtsMatrix(symbolVector = "SPY"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = "daily"
, keep = FALSE
)

RSP <- importEquityPricesToXtsMatrix(symbolVector = "RSP"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = freq
, keep = FALSE
)

RSPd <- importEquityPricesToXtsMatrix(symbolVector = "RSP"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = "daily"
, keep = FALSE
)

cl <- makeForkCluster(detectCores())
loopStart <- Sys.time()

rollingOptimizationResults<-

parLapply( cl = cl
,  toRowList(createRollingTimeIndices(index(optTestPriceXtsMatrixALLweekly), estimationLength = 52*nYears, timeUnit = "weeks", investmentHorizonLength = 4))
,function(dateIndex)
{

#allocate date indices
OOSIndex <- as.vector(dateIndex[2])
pseudoISPeriodPlot <- as.vector(dateIndex[3])
dateIndex <- as.vector(dateIndex[1])

#filter out time series object
optTestPriceXtsMatrix <- optTestPriceXtsMatrixALLweekly[dateIndex]

#calculate log returns
optTestCRetXtsMatrix <- retMatrixC(optTestPriceXtsMatrix)

#eigen/dimension reduction
optTestRobCov <- robust::covRob(coredata(optTestCRetXtsMatrix))$cov
optTesteig <- eigenDimensionReduction(optTestRobCov)

#Identify dominant eigenvectors
optTestSigNoise <- dominantEigenvectors(optTesteig)

#fitting signal and noise marginals
optTestSigMarginalFits <- fitMarginalSignal(optTestCRetXtsMatrix, optTestSigNoise$signal$eigenValues, optTestSigNoise$signal$eigenVectors)
optTestNoiseMarginalFits <- fitMarginalResidual(optTestCRetXtsMatrix, optTestSigNoise$noise$eigenValues, optTestSigNoise$noise$eigenVectors)

#fit copula
optTestCopulaFit <- fitCopula( coredata(tsEigenProduct(optTestCRetXtsMatrix, optTestSigNoise$signal$eigenVectors)))

#simulate data to horizon
optTestSimulatedInvariantsToHorizon <- simulateDataToHorizon( numSimulations = 52*200
, numPeriodsForward = 4 #22
, vineCopulaFit = optTestCopulaFit
, signalMarginalFit = optTestSigMarginalFits$fits
, residualMarginalFit = optTestNoiseMarginalFits$fits
, eigenVectors = optTesteig$eigenVectors #all eigenvectors
)
#dim(optTestSimulatedInvariantsToHorizon)

#pricing
optTestHorizonPriceVector <- as.vector(coredata(optTestPriceXtsMatrix[nrow(optTestPriceXtsMatrix),]))

optTestHorizonPrices <- horizonPrices(     simulatedHorizonInvariants = optTestSimulatedInvariantsToHorizon
, horizonPriceVector = optTestHorizonPriceVector
)

#extract horizon profit-and-loss
optTestHorizonPnLs <- horizonPnLs( simulatedHorizonPrices = optTestHorizonPrices
, horizonPriceVector = optTestHorizonPriceVector
)

#compute moments
covRobResults <- robust::covRob(data = optTestHorizonPnLs)
optTestHorizonRobMean <- as.vector(covRobResults$center)
optTestHorizonRobCovPnL <- covRobResults$cov

#compute historical inverse volatility weights (for constraints)
sdWeightVec <- 1/apply(optTestCRetXtsMatrix,2,sd)/sum(1/apply(optTestCRetXtsMatrix,2,sd))

#extract 7 lowest volatility names (highest 2 will be constrained by inverse volatility weight)
constraintNames <- names(which(rank(sdWeightVec)>=7))

sdWeightVecConstraint <- sdWeightVec[constraintNames]

#compute efficient frontier
LongOnly <- TRUE
optTestOptimizedPortfolio <-
optimizeMeanVariancePortfolio( meanVector = optTestHorizonRobMean
, covarianceMatrix = optTestHorizonRobCovPnL
, horizonPriceVector = optTestHorizonPriceVector
, horizonPnLMatrix = optTestHorizonPnLs
, wealthBalance = 10000
, maxWeightConstraint = list(    names = constraintNames #safe weights
, constraintWeight = sdWeightVec #all weights
, maxWeight = ifelse(LongOnly,1,100)
)
, colNames = names(optTestPriceXtsMatrix)
, longOnly = LongOnly
)

return(
list(
estimationPeriod = dateIndex
, OOSPeriod = OOSIndex
, pseudoISPeriodPlot = pseudoISPeriodPlot
, optimalResult = optTestOptimizedPortfolio$optimalResult
, minVarResult = optTestOptimizedPortfolio$minVarResult
, expectedReturn = optTestOptimizedPortfolio$expectedReturn
, standardDeviation = optTestOptimizedPortfolio$standardDeviation
, fitData = list(signalMarginals = optTestSigMarginalFits$evaluation
, noiseMarginals = optTestNoiseMarginalFits$evaluation
, copula = -999
)

)

)

})

stopCluster(cl=cl)
closeAllConnections()
gc()
cl <- NULL

loopEnd <- Sys.time()
loopEnd - loopStart
# Time difference of 11.50882 mins  #aws 64 cores

#create final result list object
rollingOptimizationResults <- list( portfolioNames = names(optTestPriceXtsMatrixALLweekly)
, portfolioFreq = "weeks"
, portfolioStats = rollingOptimizationResults
)


Monte Carlo with ARMA strategy code:

library(propfolio)
library(robust)
library(PerformanceAnalytics)
library(foreach)
library(doParallel)

nYears <- c(5,3,1.5) #set this to the 1st index for 60 months, 2nd index for 36 months or 3rd for 18 months
nYears <- nYears[1]
freq <- "weekly"
freqScale <- switch(freq, daily = 252, weekly = 52, monthly = 12, quarterly = 4, yearly = 1, stop("undefined"))

#########
#Start from scratch
symb <- c("XLE"
,"XLU"
,"XLK"
,"XLB"
,"XLP"
,"XLY"
,"XLI"
,"XLV"
,"XLF"
)

optTestPriceXtsMatrixALLweekly <- importEquityPricesToXtsMatrix(symbolVector = symb
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = freq
, keep = FALSE
)

optTestPriceXtsMatrixALLdaily <- importEquityPricesToXtsMatrix(symbolVector = symb
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = "daily"
, keep = FALSE
)

SPY <- importEquityPricesToXtsMatrix(symbolVector = "SPY"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = freq
, keep = FALSE
)

SPYd <- importEquityPricesToXtsMatrix(symbolVector = "SPY"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = "daily"
, keep = FALSE
)

RSP <- importEquityPricesToXtsMatrix(symbolVector = "RSP"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = freq
, keep = FALSE
)

RSPd <- importEquityPricesToXtsMatrix(symbolVector = "RSP"
, startDate = "1998-01-01"
, endDate = "2016-12-30"
, freq = "daily"
, keep = FALSE
)

cl <- makeForkCluster(detectCores())

registerDoParallel(cl)

loopStart <- Sys.time()

rollingOptimizationResults<-

foreach(dateIndex = toRowList(createRollingTimeIndices(index(optTestPriceXtsMatrixALLweekly), estimationLength = 52*nYears, timeUnit = "weeks", investmentHorizonLength = 4))
, .errorhandling = "pass"
) %dopar%
{

#allocate date indices
OOSIndex <- as.vector(dateIndex[2])
pseudoISPeriodPlot <- as.vector(dateIndex[3])
dateIndex <- as.vector(dateIndex[1])

#filter out time series object
optTestPriceXtsMatrix <- optTestPriceXtsMatrixALLweekly[dateIndex]

#calculate log returns
optTestCRetXtsMatrix <- retMatrixC(optTestPriceXtsMatrix)

#verify autocorrelation
corrThreshold <- 0.025
#stats::Box.test(x = , type = "Ljung-Box")
#check if correlation is above threshold, for lag = 1
corrVector<-
sapply(optTestCRetXtsMatrix
, function(x){
LBtest <- stats::Box.test(x = x, type = "Ljung-Box")
LBpvalue <- LBtest$p.value
return(LBpvalue)
}
)

corrIDX <- which(corrVector <= corrThreshold)

#if autocorrelation, fit ARMA model
arimaModels <- lapply(corrIDX
, function(x){
vectorToFit <- optTestCRetXtsMatrix[,x]
arimaFit <- forecast::auto.arima(x = vectorToFit)
return(arimaFit)
}
)

residualData <- do.call(cbind,lapply(arimaModels
, function(x){
x$residuals
}
))

combinedInvariants <- NULL

for(idx in 1:ncol(optTestCRetXtsMatrix)){
if(idx %in% corrIDX){
combinedInvariants <- cbind(combinedInvariants
, if(length(corrIDX)==1){residualData} else {residualData[,which(idx == corrIDX)]}
)
} else {
combinedInvariants <- cbind(combinedInvariants, coredata(optTestCRetXtsMatrix[,idx]))
}

}

colnames(combinedInvariants) <- symb

#eigen/dimension reduction
optTestRobCov <- robust::covRob(coredata(combinedInvariants))$cov
optTesteig <- eigenDimensionReduction(optTestRobCov)

#Identify dominant eigenvectors
optTestSigNoise <- dominantEigenvectors(optTesteig)

#fitting signal and noise marginals
optTestSigMarginalFits <- fitMarginalSignal(optTestCRetXtsMatrix, optTestSigNoise$signal$eigenValues, optTestSigNoise$signal$eigenVectors)
optTestNoiseMarginalFits <- fitMarginalResidual(optTestCRetXtsMatrix, optTestSigNoise$noise$eigenValues, optTestSigNoise$noise$eigenVectors)

#fit copula
optTestCopulaFit <- fitCopula( coredata(tsEigenProduct(optTestCRetXtsMatrix, optTestSigNoise$signal$eigenVectors)))

#simulate data to horizon
optTestSimulatedInvariantsToHorizon <- simulateDataToHorizon3( numSimulations = 52*200
, numPeriodsForward = 4 #22
, vineCopulaFit = optTestCopulaFit
, signalMarginalFit = optTestSigMarginalFits$fits
, residualMarginalFit = optTestNoiseMarginalFits$fits
, eigenVectors = optTesteig$eigenVectors #all eigenvectors
, corrIDX = corrIDX
, arimaModel = arimaModels
)

#pricing
optTestHorizonPriceVector <- as.vector(coredata(optTestPriceXtsMatrix[nrow(optTestPriceXtsMatrix),]))

optTestHorizonPrices <- horizonPrices(     simulatedHorizonInvariants = optTestSimulatedInvariantsToHorizon
, horizonPriceVector = optTestHorizonPriceVector
)

#extract horizon profit-and-loss
optTestHorizonPnLs <- horizonPnLs( simulatedHorizonPrices = optTestHorizonPrices
, horizonPriceVector = optTestHorizonPriceVector
)

#compute moments
covRobResults <- robust::covRob(data = optTestHorizonPnLs)
optTestHorizonRobMean <- as.vector(covRobResults$center)
optTestHorizonRobCovPnL <- covRobResults$cov

#compute historical inverse volatility weights (for constraints)
sdWeightVec <- 1/apply(optTestCRetXtsMatrix,2,sd)/sum(1/apply(optTestCRetXtsMatrix,2,sd))

#extract 7 lowest volatility names (highest 2 will be constrained by inverse volatility weight)
constraintNames <- names(which(rank(sdWeightVec)>=7))

sdWeightVecConstraint <- sdWeightVec[constraintNames]

#compute efficient frontier
LongOnly <- TRUE
optTestOptimizedPortfolio <-
optimizeMeanVariancePortfolio( meanVector = optTestHorizonRobMean
, covarianceMatrix = optTestHorizonRobCovPnL
, horizonPriceVector = optTestHorizonPriceVector
, horizonPnLMatrix = optTestHorizonPnLs
, wealthBalance = 10000
, maxWeightConstraint = list(    names = constraintNames #safe weights
, constraintWeight = sdWeightVec #all weights
, maxWeight = ifelse(LongOnly,1,100)
)
, colNames = names(optTestPriceXtsMatrix)
, longOnly = LongOnly
)

return(
list(
estimationPeriod = dateIndex
, OOSPeriod = OOSIndex
, pseudoISPeriodPlot = pseudoISPeriodPlot
, optimalResult = optTestOptimizedPortfolio$optimalResult
, minVarResult = optTestOptimizedPortfolio$minVarResult
, expectedReturn = optTestOptimizedPortfolio$expectedReturn
, standardDeviation = optTestOptimizedPortfolio$standardDeviation
, fitData = list(signalMarginals = optTestSigMarginalFits$evaluation
, noiseMarginals = optTestNoiseMarginalFits$evaluation
, copula = -999
)

)

)

}

stopCluster(cl=cl)
closeAllConnections()
gc()
cl <- NULL

loopEnd <- Sys.time()
loopEnd - loopStart

# Time difference of 1.595575 hours
# Time difference of 13.83757 mins #ON AWS 64 cores

rollingOptimizationResults <- list( portfolioNames = names(optTestPriceXtsMatrixALLweekly)
, portfolioFreq = "weeks"
, portfolioStats = rollingOptimizationResults
)


Code to calculate results after running one of the 3 scenarios above:

#Optimal Sharpe
realizedOPPortfolioMatrix<-
Return.portfolio(R = retMatrixL(optTestPriceXtsMatrixALLdaily)
, weights = createOptimalStatisticTimeSeries(optimizationResultsObject =  rollingOptimizationResults
, statisticType =  "opWeights"
, datePeriodType = "test"
, indexTimeSeriesFreq = "monthly")
)
colnames(realizedOPPortfolioMatrix) <- "Optimal Sharpe"
realizedOPPortfolioMatrix <- merge(realizedOPPortfolioMatrix, retVectorL(RSPd), retMatrixL(optTestPriceXtsMatrixALLdaily))
colnames(realizedOPPortfolioMatrix)[2] <- "Benchmark"

#Minimum Variance
realizedMVPortfolioMatrix<-
Return.portfolio(R = retMatrixL(optTestPriceXtsMatrixALLdaily)
, weights = createOptimalStatisticTimeSeries(optimizationResultsObject =  rollingOptimizationResults
, statisticType =  "mvWeights"
, datePeriodType = "test"
, indexTimeSeriesFreq = "monthly")
)
colnames(realizedMVPortfolioMatrix) <- "Minimum Variance"
realizedMVPortfolioMatrix <- merge(realizedMVPortfolioMatrix, retVectorL(RSPd), retMatrixL(optTestPriceXtsMatrixALLdaily))
colnames(realizedMVPortfolioMatrix)[2] <- "Benchmark"


# References

Kramer, Nicole, and Ulf Schepsmeier. 2011. “Introduction to Vine Copulas.” Technische Universitat Munchen. https://www.statistics.ma.tum.de/fileadmin/w00bdb/www/veranstaltungen/Vines.pdf.

Markowitz, Harry. 1952. “Portfolio Selection.” Journal of Finance 7 (1). Wiley for the American Finance Association: 77–91. http://www.jstor.org/stable/2975974.

Meucci, Attilio. 2005. “Risk and Asset Allocation.” Springer Finance Textbooks. https://www.arpm.co/book/.

———. 2011. “‘The Prayer’ Ten-Step Checklist for Advanced Risk and Portfolio Management,” Feb. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1753788.

Peterson, Brian. 2014. “Developing & Backtesting Systematic Trading Strategies.” https://r-forge.r-project.org/scm/viewvc.php/*checkout*/pkg/quantstrat/sandbox/backtest_musings/strat_dev_process.pdf?root=blotter.

# Introduction

In this post, I provide R code that enables the replication of the Center for Research in Security Prices (CRSP) Volatility Deciles using Yahoo! Finance data. This post is related to my last blog post in that it will generate the CRSP low volatility decile portfolio, thereby facilitating the replication of the associated EMA trading strategy.

There are a few caveats to this replication:

• There will be survivorship bias since Yahoo! Finance does not contain de-listed stocks
• This code takes a long time to run (over 2 hours on my machine, excluding downloading the symbols), and so it should not all be executed at once
• The parallel processing libraries in this code run in linux
• For Windows, “library(snow)” should be used instead of “library(parallel)” and “library(doSNOW)” should be used instead of “library(doParallel)”
• A SOCK cluster will likely need to be used instead of a FORK cluster
• Consequently, objects, functions and libraries will have to explicitly be exported to the cores, thereby consuming more RAM
• There are over 5,000 securities to download from Yahoo! Finance, and any one of them can fail randomly, thereby yielding different results each time the code is executed
• It is up to the individual user to decide if they want to download the failed symbols
• The quality control [QC] for this code is not exhaustive, i.e. my goal was to quickly get data for a time-sensitive paper-replication project, thus there is potential for more QC, which I will indicate in the post

This code is related to a paper replication project for a course I took over the summer of 2016 (Advanced Trading System Design). I will be sharing the project in a subsequent post.

To run the following script, one of my packages, titled tsconv (which stands for “Time Series Convenience”) needs to be installed. The following line of code will install the package, provided that the devtools library is already installed.

devtools::install_github("erolbicero/tsconv")

In order to determine the universe of stocks to include, a list of security tickers that trade on the NYSE and NASDAQ is required. This was located by reading the relevant post by Louis Marascio, and as per the discussion, the list of stock tickers that trade on the NYSE and AMEX was then obtained from http://www.nasdaqtrader.com/trader.aspx?id=symboldirdefs on July 8th, 2016.
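The manual Excel filtering described in the next paragraph can also be scripted. A hedged sketch on an in-memory sample of the pipe-delimited directory file; the column names (“ACT Symbol”, “Exchange”, “ETF”) and the sample rows are my assumptions for illustration:

```r
# illustrative sample of the pipe-delimited symbol directory; the column names
# ("ACT Symbol", "Exchange", "ETF") are assumptions based on the description below
raw <- "ACT Symbol|Security Name|Exchange|ETF
AA|Alcoa Inc.|N|N
SPY|SPDR S&P 500 ETF|P|Y
BAC$A|Bank of America Pfd|N|N
CBOE|CBOE Holdings|Z|N"

symDir <- read.delim(textConnection(raw), sep = "|",
                     stringsAsFactors = FALSE, check.names = FALSE)

# drop ETFs, tickers containing "$" (mostly preferred shares) and exchange code "Z" (BATS)
keep <- symDir[["ETF"]] != "Y" &
  !grepl("$", symDir[["ACT Symbol"]], fixed = TRUE) &
  symDir[["Exchange"]] != "Z"

cleanSymbols <- symDir[["ACT Symbol"]][keep]
# write.table(cleanSymbols, "StockList2016-07-08.csv", sep = ",",
#             row.names = FALSE, col.names = FALSE)
```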

The file format is a pipe-delimited text file. It contains preferred shares and ETFs, which should be excluded from the replication. Therefore, the file was imported into Microsoft Excel, and then filtered for tickers that were not ETFs, and where the “ACT Symbol” excluded the “$” character, as these were predominantly preferred shares. Lastly, the exchange needed to be part of the NYSE or NASDAQ, and the only exchange code listed on NASDAQ which is not a part of these exchanges is “Z”, the label for BATS Global Markets; therefore any stock with an exchange code “Z” was excluded. After these exclusions, 6,157 ticker symbols remained in the file. The symbols from this cleaned list were exported to a Comma-Separated-Value [CSV] file. For convenience, I am providing a clean CSV file at this link containing all the tickers I used from the July 8th, 2016 extract.

# Load the Tickers and Download the Data

Next, the tickers are loaded into the global environment, and then downloaded one by one from Yahoo! Finance to avoid getting blocked by the website. It may be possible to download more symbols at a time, thereby speeding up the process; however, I did not test that, to ensure I was not cut off. The prices are stored in a separate environment titled “stockPriceEnv”. I have added code to save and load the downloaded tickers, since the download is time-consuming, taking almost 2 hours on my machine.

library(quantmod)
library(xts)
library(PerformanceAnalytics)
library(tsconv)
library(stringr)
library(foreach)
library(doParallel)
library(timeDate)

stockSymbols <- read.csv(file = paste0(getwd(),"/StockList2016-07-08.csv"), stringsAsFactors = FALSE, header = FALSE)
stockSymbols <- as.vector(stockSymbols[,1])

stockPriceEnv <- new.env()
successSymbols <- NULL
failSymbols <- NULL

#Use tryCatch since some symbols will cause the function to fail
#Also run painfully slowly so you don't get cut off from Yahoo!
startTime <- Sys.time()
for(stock in stockSymbols)
{
tryCatch(expr = {
getSymbols.yahoo(Symbols = stock
, env = stockPriceEnv
, from = "1963-07-01" # July 1st, 1963
, to = "2016-07-11" # July 11th, 2016
)
successSymbols <- c(successSymbols, stock)
print(paste(stock, "Succeeds"))
}
, error = function(err){
print(paste(stock, "Fails"))
failSymbols <<- c(failSymbols, stock)
}
)
}
endTime <- Sys.time()
endTime - startTime
# Time difference of 1.93681 hours

#list all stocks
ls(envir = stockPriceEnv)

#save the downloaded tickers
saveRDS(stockPriceEnv, file = "stockPriceEnv-2016-12-01.RData")

################################################################
# Use this commented code to load stock symbols
#stockPriceEnv <- readRDS("stockPriceEnv-2016-12-01.RData")
#failSymbols <- stockSymbols[!(stockSymbols %in% ls(envir = stockPriceEnv))]
################################################################

The following symbols failed during the download. This is important, since the failed symbols appear to change each time this code is run, which ultimately yields slightly different volatility decile performance. This means that to replicate the numbers in this post, the exact same symbols will need to fail/be present.
#### Failed Download Symbols

## [1] "AA" "AGM.A" "AKO.A" "AKO.B" "APHB" "ARP" "ASB.W"
## [8] "AST.W" "BAC.A" "BAC.B" "BF.A" "BF.B" "BGX" "BIO.B"
## [15] "BIOA.W" "BNK" "BRK.A" "BRK.B" "BTX.W" "BWL.A" "C.A"
## [22] "CBO" "CJES" "CMA.W" "COF.W" "CRD.A" "CRD.B" "CVM.W"
## [29] "DEG" "DWRE" "DXI" "DYN.W" "FCE.A" "FCE.B" "FPP.W"
## [36] "FUR" "GEF.B" "GM.B" "GSI" "GTN.A" "HBM.W" "HEI.A"
## [43] "HIVE" "HLS.W" "HTS" "HVT.A" "IBO" "IHS" "JW.A"
## [50] "JW.B" "KEG" "KKD" "KODK.A" "KODK.W" "LEN.B" "LNC.W"
## [57] "MKC.V" "MOG.A" "MOG.B" "MSK" "MTB.W" "NPD" "ONE"
## [64] "PBR.A" "PVCT.W" "QIHU" "RDS.A" "RDS.B" "SBNB" "SCQ"
## [71] "STI.A" "STI.B" "STZ.B" "TAL" "TCB.W" "TI.A" "TUMI"
## [78] "VALE.P" "VLY.W" "WSO.B" "ABEOW" "ADXSW" "AGFSW" "ANDAR"
## [85] "ANDAU" "ANDAW" "ARWAR" "ARWAU" "ARWAW" "AUMAW" "BBCN"
## [92] "BHACR" "BHACU" "BHACW" "BIND" "BLVDW" "BNTCW" "BVXVW"
## [99] "CADT" "CADTW" "CATYW" "CERCW" "CERCZ" "CERE" "CHEKW"
## [106] "CLACW" "CLRBZ" "CNYD" "COWNL" "COYNW" "CPXX" "CYHHZ"
## [113] "CYRXW" "DRIOW" "DRWIW" "EACQU" "EACQW" "EAGLW" "ECACR"
## [120] "ELECU" "ELECW" "EPRS" "EYEGW" "FNFG" "FNTC" "FNTCU"
## [127] "FNTCW" "GFNSL" "GGAC" "GGACR" "GIGA" "GPACU" "GPACW"
## [134] "GPIAW" "GRSHW" "HCACU" "HCACW" "HCAPL" "HDRAR" "HDRAU"
## [141] "HDRAW" "HMPR" "HNSN" "HRMNW" "IMOS" "IRCP" "JASNW"
## [148] "JSYNR" "JSYNW" "KLREW" "KTOVW" "KUTV" "LBTYA" "LDRH"
## [155] "LINDW" "LMFAW" "LPSB" "MDVXW" "MFLX" "MIIIU" "MRKT"
## [162] "NUROW" "NXEOW" "NXTDW" "OACQR" "OACQU" "OACQW" "OCLSW"
## [169] "ONSIW" "ONSIZ" "OPGNW" "OXBRW" "PAACR" "PAACW" "PACEW"
## [176] "PAVMU" "PBIP" "QDEL" "QPACU" "QPACW" "RNVAW" "SHLDW"
## [183] "SNFCA" "SNHNL" "SNI" "SNTA" "SOHOM" "SQI" "SRAQW"
## [190] "SRTSU" "TACOW" "TALL" "TFSC" "TFSCR" "TFSCU" "TFSCW"
## [197] "TRTLW" "VKTXW" "WIBC" "WVVIP" "WYIGW" "ZAZZT" "ZBZZT"
## [204] "ZCZZT" "ZIONZ" "ZJZZT" "ZNWAA" "ZVZZC" "ZVZZT" "ZWZZT"
## [211] "ZXYZ.A" "ZXZZT"

# Quality Control

Here, there are two known issues I have come across while working on this code. One is that there is a symbol with the name “TRUE”, which, not surprisingly, causes issues. The fix is to rename it to “TRUE.”. The second issue is that the ticker OHGI appears to have “garbage” data, so it is excluded completely from the replication.

#### OHGI Adjusted Closing Prices

It is possible that there are more anomalies within the data; however, from my perspective, there was a time-sensitive deliverable, and given that the numbers were sensible, insofar as they were comparable to results generated by the actual CRSP volatility decile indices, they were used for the replication project. Any additional investigations and QC should be performed at this stage.

#*********IMPORTANT
#clean-up known error-causing stocks
stockPriceEnv$TRUE. <- stockPriceEnv$`TRUE`
rm(`TRUE`, envir = stockPriceEnv)
rm(OHGI, envir = stockPriceEnv)
#This is the time for quality control, since it gets more challenging to fix any issues later on
#**********

This next block of code uses roughly 17 GB of RAM on my machine using a fork cluster with 8 cores; therefore, please ensure that there is adequate memory to run the code. The code checks that each stock has at least 80% of trading days populated within a year (as per CRSP’s requirements), and then calculates the annualized volatility for that stock for each year. Fortunately, the code only takes about 1.5 minutes to run using parallel processing.
# Adjusted prices
# stock needs valid returns for eighty percent of trading days
# CRSP uses linear returns
#*****Warning, lots of RAM required

annualizationScaling <- 252
percentOfYear <- 0.8

startTime <- Sys.time()
cl <- makeForkCluster(nnodes = detectCores())
registerDoParallel(cl = cl)

stockVolatility <-
foreach(stockName = ls(envir = stockPriceEnv), .errorhandling = "pass") %dopar%
{
stock <- get(x = stockName, envir = stockPriceEnv)
typeIndex <- grep("Adjusted",colnames(stock))
adjustedPrices <- stock[,typeIndex]

#Compute linear returns
stockLinearReturns <- retVectorL(adjustedPrices)

#extract time index and slice into years
yearsAvail <- unique(str_sub(string = index(stockLinearReturns), end=4))

#check that stock has 80% of trading days
validYearTable <- lapply(yearsAvail
, function(year){
numDaysInYear <- nrow(stockLinearReturns[year])
#need >= 80%
validYearFlag <- ifelse(numDaysInYear >= floor(annualizationScaling*percentOfYear),1,0)
return(c(as.numeric(year),validYearFlag))
}
)
validYearTable <- do.call(rbind, validYearTable)

#Take valid years
validYears <- as.character(validYearTable[validYearTable[,2]==1,1])

volatilityAnnual <- lapply(validYears
, function(year){
timeSeries <- stockLinearReturns[year]
volatilityAnn <- sd(coredata(timeSeries))*sqrt(annualizationScaling)
volYear <- as.yearmon(paste0(year,"-","12"))
volatilityAnn <- xts(x = volatilityAnn, order.by = volYear)
return(volatilityAnn)
}
)
volatilityAnnual <- do.call(rbind, volatilityAnnual)
return(volatilityAnnual)
}

stopCluster(cl = cl)
cl <- NULL
gc()

endTime <- Sys.time()
endTime - startTime
#Time difference of 1.408692 mins
#On 8 cores

#name the list
names(stockVolatility) <- ls(envir = stockPriceEnv)

# http://www.crsp.com/products/documentation/stock-file-indexes-0

# Volatility Decile Replication

Next, the data is checked for continuity, i.e. that there are no missing years.
#Grab min and max year from each stock
stockVolatilityYears <- lapply(stockVolatility
,function(stockVolVector){
years <- index(stockVolVector)
years <- str_sub(string = years, start = 5)
yearRange <- c(min(years)
,max(years)
)
return(yearRange)
}
)

#merge min and max into a table
stockVolatilityYears <- do.call(rbind, stockVolatilityYears)

The plot below indicates a break before 1973, indicating that there are gaps in the data prior to that year.

## [1] "1964" "1970" "1973" "1974" "1975" "1976" "1977" "1978" "1979" "1980"
## [11] "1981" "1982" "1983" "1984" "1985" "1986" "1987" "1988" "1989" "1990"
## [21] "1991" "1992" "1993" "1994" "1995" "1996" "1997" "1998" "1999" "2000"
## [31] "2001" "2002" "2003" "2004" "2005" "2006" "2007" "2008" "2009" "2010"
## [41] "2011" "2012" "2013" "2014" "2015"

Thus, the data is filtered to ensure that the time series begins in 1973.

stockVolatilityYearsSeq <- sort(unique(stockVolatilityYears[,1]))
#exclude first two entries, we want a continuous time series
stockVolatilityYearsSeq <- stockVolatilityYearsSeq[3:length(stockVolatilityYearsSeq)]

Next, stocks are assigned to deciles. In order to accomplish this, the code below extracts all volatilities for the stocks for each year…

#now build a list of volatility vectors
#loop through the years
cl <- makeForkCluster(nnodes = detectCores())
startTime <- Sys.time()
stockVolatilityTimeSeriesList <-
parLapply(cl
, stockVolatilityYearsSeq
,function(year){
#here we'll loop through the stocks, and pull out all the years
volList <- lapply(stockVolatility
, function(stockVolVector){
return(stockVolVector[year])
}
)
volVector <- do.call(rbind, volList)
return(volVector)
}
)
stopCluster(cl = cl)
cl <- NULL
gc()

#set names
names(stockVolatilityTimeSeriesList) <- stockVolatilityYearsSeq

endTime <- Sys.time()
endTime - startTime
# Time difference of 53.64748 sec
# On 8 cores

…and then assigns the volatilities to deciles for each year.
decilesByYear <- lapply(stockVolatilityTimeSeriesList
, function(stockVolatilityTimeSeries){
quantile(x = stockVolatilityTimeSeries
, probs = seq(0,1,0.1)
, na.rm = TRUE
)
}
)
decilesByYear <- do.call(rbind,decilesByYear)

years <- rownames(decilesByYear)
deciles <- colnames(decilesByYear)

By summarizing the results, outliers can clearly be observed in the more recent years. This indicates that the high volatility stocks can very likely also benefit from additional QC; however, to keep this post brief, no additional QC will be conducted.

#### Volatility Decile by Year (1995 and onward)

0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
1995 0.04 0.16 0.19 0.22 0.27 0.32 0.38 0.46 0.57 0.75 5.28
1996 0.10 0.16 0.20 0.24 0.28 0.33 0.40 0.50 0.63 0.79 31.40
1997 0.00 0.18 0.24 0.27 0.31 0.37 0.43 0.51 0.62 0.80 6.27
1998 0.00 0.22 0.30 0.35 0.40 0.45 0.53 0.61 0.73 0.95 23.28
1999 0.00 0.21 0.28 0.34 0.39 0.44 0.52 0.61 0.74 1.01 14.21
2000 0.08 0.23 0.33 0.40 0.46 0.53 0.63 0.76 0.97 1.29 92.44
2001 0.07 0.22 0.29 0.35 0.41 0.48 0.57 0.69 0.85 1.11 4.98
2002 0.06 0.21 0.28 0.34 0.38 0.45 0.52 0.62 0.79 1.02 8.00
2003 0.06 0.16 0.21 0.26 0.30 0.34 0.40 0.49 0.62 0.85 12.00
2004 0.07 0.15 0.20 0.23 0.27 0.31 0.36 0.44 0.54 0.71 36.57
2005 0.04 0.14 0.19 0.22 0.26 0.30 0.34 0.39 0.47 0.63 39.75
2006 0.03 0.13 0.18 0.22 0.26 0.30 0.35 0.41 0.47 0.61 4.79
2007 0.06 0.18 0.24 0.28 0.31 0.35 0.39 0.44 0.51 0.65 11434.96
2008 0.07 0.41 0.50 0.57 0.63 0.70 0.77 0.85 0.96 1.15 83.85
2009 0.05 0.29 0.38 0.45 0.52 0.59 0.67 0.77 0.91 1.18 103.81
2010 0.00 0.19 0.24 0.28 0.33 0.37 0.42 0.48 0.56 0.70 13.77
2011 0.04 0.21 0.27 0.33 0.37 0.42 0.47 0.53 0.62 0.76 188.01
2012 0.04 0.14 0.19 0.23 0.28 0.32 0.37 0.43 0.53 0.69 501.00
2013 0.01 0.14 0.18 0.21 0.24 0.28 0.32 0.38 0.48 0.66 38.60
2014 0.01 0.12 0.17 0.21 0.25 0.29 0.34 0.41 0.51 0.68 424.01
2015 0.03 0.15 0.21 0.24 0.28 0.33 0.39 0.47 0.58 0.75 1493.12

Now that volatility has been assigned to deciles for each
year, stocks can be assigned deciles based on their volatilities for each year. This takes roughly 37 minutes on my machine using all 8 cores. #now assign the deciles to each of the stocks by year cl <- makeForkCluster(nnodes = detectCores()) startTime <- Sys.time() registerDoParallel(cl) stockDeciles <- foreach(stockVolVector = stockVolatility, .errorhandling = "pass") %dopar% { if(prod(is.na(stockVolVector))){return(NA)} else { stockDecileVector <- stockVolVector stockDecileVector <- xts(x=rep(NA,nrow(stockDecileVector)),order.by = index(stockDecileVector)) for(dec in deciles){ for(yr in years){ stockDecileVector[yr] <- ifelse(stockVolVector[yr] <= decilesByYear[yr,dec] & is.na(stockDecileVector[yr]),dec,stockDecileVector[yr]) } } return(stockDecileVector) } } stopCluster(cl = cl) cl <- NULL gc() #assign names names(stockDeciles) <- names(stockVolatility) endTime <- Sys.time() endTime - startTime # Time difference of 37.39486 mins # on 8 cores  So far, we have lost 212 stocks from a failure to download, and after running the code block above, we have also lost an additional 392 symbols due to NA or NULL values, which ultimately results in a loss of 9.81% of symbols. Based on the results in the previous block of code, decile portfolios can be constructed for each year. The following code takes roughly 24 minutes to run on my machine (using 8 cores). 
```r
#assign stocks to decile portfolios
cl <- makeForkCluster(nnodes = detectCores())
startTime <- Sys.time()
decilePortfolioWeights <- parLapply(cl
  , deciles[-1]
  , function(dec){
      matrixWeights <- lapply(years, function(yr){
        rowVectorWeights <- lapply(names(stockDeciles), function(stock){
          assignedStockDecile <- stockDeciles[[stock]][yr]
          #There will be an error if the stock previously didn't meet a
          #condition to get assigned a volatility (too few obs)
          if(length(assignedStockDecile) == 0 |
             class(stockDeciles[[stock]])[1] == "simpleError" |
             prod(is.na(stockDeciles[[stock]]))){
            dummyResult <- xts(x = 0, order.by = as.yearmon(paste0(yr, "-12")))
            colnames(dummyResult) <- stock
            return(dummyResult)
          } else {
            assignedWeight <- ifelse(assignedStockDecile == dec, 1, 0)
          }
          colnames(assignedWeight) <- stock
          return(assignedWeight)
        })
        rowVectorWeights <- do.call(cbind, rowVectorWeights)
        return(rowVectorWeights)
      })
      matrixWeights <- do.call(rbind, matrixWeights)
      return(matrixWeights)
    }
  )
stopCluster(cl = cl)
cl <- NULL
gc()

#assign names
names(decilePortfolioWeights) <- deciles[-1]
endTime <- Sys.time()
endTime - startTime
# Time difference of 24.0372 mins
# on 8 cores
```

The code above generates a matrix of 1/0 flags, where the rows are the years and the columns are the stocks. Two things now need to happen to proceed. First, the dates need to be converted to end-of-year dates, since CRSP assigns the portfolio weights in the subsequent year (later, PerformanceAnalytics::Return.portfolio() will then assign the weights to the subsequent year). Second, the 1/0 flags need to be converted to weights, so that they sum to 1 for each year (each row).
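As a minimal base-R illustration of the second step, the flag-to-weight conversion amounts to dividing each row of the 1/0 flag matrix by its row sum. The actual implementation below operates on xts objects; this sketch uses a plain, made-up matrix:

```r
# Hypothetical 2-year x 4-stock flag matrix; rows are years, columns are stocks
flagMatrix <- rbind("1973" = c(1, 0, 1, 0),
                    "1974" = c(0, 1, 1, 1))

# Divide each row by its row sum so that each year's weights sum to 1
weightMatrix <- sweep(flagMatrix, MARGIN = 1, STATS = rowSums(flagMatrix), FUN = "/")

rowSums(weightMatrix)  # each year (row) now sums to 1
```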
```r
#convert to date at end of year
decilePortfolioWeights <- lapply(decilePortfolioWeights
  , function(decilePortfolio){
      decilePortfolioIDX <- index(decilePortfolio)
      decilePortfolioIDX <- as.Date(timeLastDayInMonth(as.Date(decilePortfolioIDX)))
      decilePortfolio <- xts(x = coredata(decilePortfolio)
                             , order.by = decilePortfolioIDX
                             )
      return(decilePortfolio)
    }
  )

#Now convert them from flags to weights
decilePortfolioWeights <- lapply(decilePortfolioWeights
  , function(decilePortfolio){
      decilePortfolioIDX <- index(decilePortfolio)
      decilePortfolioMatrix <- coredata(decilePortfolio)
      sumVector <- apply(decilePortfolioMatrix, 1, sum)
      scaledDecilePortfolioMatrix <- lapply(1:length(sumVector)
        , function(rowNum){
            scaledVector <- decilePortfolioMatrix[rowNum, ] / sumVector[rowNum]
            return(scaledVector)
          }
        )
      scaledDecilePortfolioMatrix <- do.call(rbind, scaledDecilePortfolioMatrix)
      decilePortfolio <- xts(x = coredata(scaledDecilePortfolioMatrix)
                             , order.by = decilePortfolioIDX
                             )
      return(decilePortfolio)
    }
  )
```

Some checks are performed to ensure that the weights sum to 1 across all years.
#### Verification of Weight-Sums Across Deciles and Years

```r
#check, should see 1's across the board
xts( do.call(cbind, lapply(decilePortfolioWeights
                           , function(x){apply(coredata(x), 1, sum)}
                           ))
     , index(decilePortfolioWeights[[1]])
     )
```

```
##            10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
## 1973-12-31   1   1   1   1   1   1   1   1   1    1
## 1974-12-31   1   1   1   1   1   1   1   1   1    1
## 1975-12-31   1   1   1   1   1   1   1   1   1    1
## 1976-12-31   1   1   1   1   1   1   1   1   1    1
## 1977-12-31   1   1   1   1   1   1   1   1   1    1
## 1978-12-31   1   1   1   1   1   1   1   1   1    1
## 1979-12-31   1   1   1   1   1   1   1   1   1    1
## 1980-12-31   1   1   1   1   1   1   1   1   1    1
## 1981-12-31   1   1   1   1   1   1   1   1   1    1
## 1982-12-31   1   1   1   1   1   1   1   1   1    1
## 1983-12-31   1   1   1   1   1   1   1   1   1    1
## 1984-12-31   1   1   1   1   1   1   1   1   1    1
## 1985-12-31   1   1   1   1   1   1   1   1   1    1
## 1986-12-31   1   1   1   1   1   1   1   1   1    1
## 1987-12-31   1   1   1   1   1   1   1   1   1    1
## 1988-12-31   1   1   1   1   1   1   1   1   1    1
## 1989-12-31   1   1   1   1   1   1   1   1   1    1
## 1990-12-31   1   1   1   1   1   1   1   1   1    1
## 1991-12-31   1   1   1   1   1   1   1   1   1    1
## 1992-12-31   1   1   1   1   1   1   1   1   1    1
## 1993-12-31   1   1   1   1   1   1   1   1   1    1
## 1994-12-31   1   1   1   1   1   1   1   1   1    1
## 1995-12-31   1   1   1   1   1   1   1   1   1    1
## 1996-12-31   1   1   1   1   1   1   1   1   1    1
## 1997-12-31   1   1   1   1   1   1   1   1   1    1
## 1998-12-31   1   1   1   1   1   1   1   1   1    1
## 1999-12-31   1   1   1   1   1   1   1   1   1    1
## 2000-12-31   1   1   1   1   1   1   1   1   1    1
## 2001-12-31   1   1   1   1   1   1   1   1   1    1
## 2002-12-31   1   1   1   1   1   1   1   1   1    1
## 2003-12-31   1   1   1   1   1   1   1   1   1    1
## 2004-12-31   1   1   1   1   1   1   1   1   1    1
## 2005-12-31   1   1   1   1   1   1   1   1   1    1
## 2006-12-31   1   1   1   1   1   1   1   1   1    1
## 2007-12-31   1   1   1   1   1   1   1   1   1    1
## 2008-12-31   1   1   1   1   1   1   1   1   1    1
## 2009-12-31   1   1   1   1   1   1   1   1   1    1
## 2010-12-31   1   1   1   1   1   1   1   1   1    1
## 2011-12-31   1   1   1   1   1   1   1   1   1    1
## 2012-12-31   1   1   1   1   1   1   1   1   1    1
## 2013-12-31   1   1   1   1   1   1   1   1   1    1
## 2014-12-31   1   1   1   1   1   1   1   1   1    1
## 2015-12-31   1   1   1   1   1   1   1   1   1    1
```

Here is another check, where the number of stocks in each decile by year is computed. What we should observe, and ultimately do see, is an approximately equal distribution of stocks across all deciles within a year.
Or in other words, the number of stocks across each row (year) should be nearly equal.

#### Count of Stocks Across Deciles and Years

```r
#Number of stocks in each portfolio by year
do.call(cbind, lapply(decilePortfolioWeights
                      , function(x){
                          vector <- apply(coredata(x), 1, function(y){length(which(y > 0))})
                          names(vector) <- years
                          return(vector)
                        }
                      ))
```

```
##      10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
## 1973   4   5   5   5   5   4   5   5   5    5
## 1974   7   8   7   8   7   8   7   8   7    8
## 1975   7   8   7   8   8   7   8   7   8    8
## 1976   7   8   8   7   8   8   7   8   8    8
## 1977   8   9   9   8   9   9   8   9   9    9
## 1978   8   9   9   9   9   9   9   9   9    9
## 1979   9   9   9   9   9   9   9   9   9    9
## 1980  10  10  10  10  10  10  10  10  10   10
## 1981  18  19  19  19  19  19  19  19  19   19
## 1982  22  22  22  22  23  22  22  22  22   23
## 1983  22  23  23  23  23  22  23  23  23   23
## 1984  26  27  27  26  27  27  26  27  27   27
## 1985  35  35  35  35  36  35  35  35  35   36
## 1986  38  40  40  39  40  40  39  40  40   40
## 1987  44  44  44  44  45  44  44  44  44   45
## 1988  62  62  62  62  62  62  62  62  62   62
## 1989  64  65  65  65  65  65  65  65  65   65
## 1990  72  72  72  73  72  72  73  72  72   73
## 1991  95  95  95  95  95  95  95  95  95   95
## 1992 115 115 115 116 115 115 116 115 115  116
## 1993 129 129 130 129 130 129 129 130 129  130
## 1994 146 146 146 147 146 146 147 146 146  147
## 1995 161 162 162 161 162 162 161 162 162  162
## 1996 182 182 182 182 183 182 182 182 182  183
## 1997 201 201 201 201 202 201 201 201 201  202
## 1998 215 216 216 216 216 215 216 216 216  216
## 1999 231 232 231 232 232 231 232 231 232  232
## 2000 250 251 250 251 251 250 251 250 251  251
## 2001 264 265 264 265 264 265 264 265 264  265
## 2002 272 272 273 272 273 272 272 273 272  273
## 2003 284 285 285 285 285 284 285 285 285  285
## 2004 301 301 302 301 302 301 301 302 301  302
## 2005 320 320 320 321 320 320 321 320 320  321
## 2006 339 339 339 339 340 339 339 339 339  340
## 2007 358 358 358 358 358 358 358 358 358  359
## 2008 383 383 383 383 384 383 383 383 383  384
## 2009 395 396 395 396 396 395 396 395 396  396
## 2010 406 406 407 406 407 406 406 407 406  407
## 2011 429 430 429 430 430 429 430 429 430  430
## 2012 451 452 451 452 452 451 452 451 452  452
## 2013 477 477 477 478 477 477 478 477 477  478
## 2014 512 513 513 512 513 513 512 513 513  513
## 2015 554 555 555 555 555 555 555 555 555  555
```

We are almost ready to compute the volatility decile portfolios. The column names across all volatility deciles are verified to ensure that they are equal across all years. This will be used to match weights to the stock returns when computing a portfolio.

```r
#Make sure they match decilePortfolioWeights names
#Should return empty values (since it is looking for where it is NOT TRUE)
which( !apply( do.call(rbind, lapply(decilePortfolioWeights
                                     , function(x){colnames(x)}
                                     ))
               , 2
               , function(y){isTRUE(all.equal(max(y), min(y)))
                   #This function taken from here:
                   #http://stackoverflow.com/questions/4752275/test-for-equality-among-all-elements-of-a-single-vector
                 }
               ))
```

```
## integer(0)
```

It returns an empty result, indicating that any one of the list items' names can be used as a reference.

```r
#So we can take the first since we know they're all equal
stockNames <- colnames(decilePortfolioWeights[[1]])
```

Next, the "stockNames" variable contents are matched to the stock price names so that matrix multiplication can be performed later (i.e. the order of the weights matches the order of the stock returns).

```r
stockEnvIndex <- do.call(c, lapply(stockNames
                                   , function(stock){
                                       which(stock == names(as.list(stockPriceEnv)))
                                     })
                         )
```

Now, a very large matrix of prices, in the same order as the stock weights, will be created. Ideally, the xts::merge() function would be used on the full list at once; however, given the large volume of data, my machine cannot handle it. The workaround is to loop through all the stocks and join them together, very slowly. The code below runs in sequence (one core) and takes just over 30 minutes on my machine.
```r
#CRSP uses adjusted prices (for dividends)
#http://www.crsp.com/files/data_descriptions_guide_0.pdf
#p.102
adjustedColumn <- grep(".Adjusted", colnames(as.list(stockPriceEnv)[[stockEnvIndex[1]]]))

#First one...
stockPriceMatrix <- as.list(stockPriceEnv)[[stockEnvIndex[1]]][, adjustedColumn]
colnames(stockPriceMatrix) <- gsub(".Adjusted", "", colnames(stockPriceMatrix))
##head(stockPriceMatrix)

startTime <- Sys.time()
#the rest
for(stockIDX in 2:length(stockEnvIndex)){
  stockVector <- as.list(stockPriceEnv)[[stockEnvIndex[stockIDX]]][, adjustedColumn]
  colnames(stockVector) <- gsub(".Adjusted", "", colnames(stockVector))
  #head(stockVector)
  stockPriceMatrix <- merge(stockPriceMatrix, stockVector)
}
rm(stockVector)
endTime <- Sys.time()
endTime - startTime
# Time difference of 30.50445 mins
# on 1 core
```

So we have a matrix of prices, in the same order as the weights that were calculated. The code below verifies that the order is the same across all decile portfolios.

```r
#check that names and order of names match
do.call(c, lapply(decilePortfolioWeights
                  , function(col){
                      identical(colnames(stockPriceMatrix), colnames(col))
                    })
        )
```

```
##  10%  20%  30%  40%  50%  60%  70%  80%  90% 100% 
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
```

Returns then need to be calculated from the matrix of prices.

```r
#Now we can calculate linear returns
stockLRetMatrix <- retMatrixL(stockPriceMatrix)
```

Finally, the decile portfolio returns can be calculated, using the stock returns and weights that were calculated previously.

```r
decilePortfolioMatrix <- do.call(merge, lapply(decilePortfolioWeights
                                               , function(portfolioWeights){
                                                   Return.rebalancing(R = stockLRetMatrix
                                                                      , weights = portfolioWeights
                                                                      )
                                                 }
                                               ))
names(decilePortfolioMatrix) <- names(decilePortfolioWeights)
```

Below, the cumulative returns are plotted on a log scale, which facilitates a visual comparison of the decile performance. We can see that the higher volatility deciles exhibit higher cumulative returns, as expected.
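The retMatrixL() helper used above is not shown in this post. As a hedged sketch, and assuming it computes simple (linear) returns column-wise, it would amount to $R_t = P_t / P_{t-1} - 1$:

```r
# Assumed behaviour of a retMatrixL-style helper: simple (linear) returns
# from a price matrix, R_t = P_t / P_{t-1} - 1, applied column-wise
linearReturns <- function(priceMatrix){
  n <- nrow(priceMatrix)
  priceMatrix[-1, , drop = FALSE] / priceMatrix[-n, , drop = FALSE] - 1
}

# Two hypothetical stocks over three days
prices <- cbind(A = c(100, 110, 99), B = c(50, 50, 55))
linearReturns(prices)
```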
#### Cumulative Returns of Volatility Decile Portfolios (Log Scale)

Numeric results can be generated using the following line of code.

```r
computeTimeSeriesStatistics(linearReturnMatrix = decilePortfolioMatrix)
```

The Sharpe Ratio declines as volatility increases, which is consistent with the low-volatility anomaly. The peculiar result, however, is that the Sharpe Ratio then increases for the highest volatility portfolios. One reason for this could be the embedded survivorship bias, since the riskiest stocks in the high volatility portfolios would be excluded, thereby increasing risk-adjusted returns. However, comparing this Sharpe-Ratio trend against the actual CRSP results in the table below, the actual deciles exhibit the same initial decline and subsequent rise in Sharpe Ratio for the higher volatility deciles, which likewise conflicts with the low-volatility anomaly. Overall, the replicated volatility decile returns, risk, and tail risk increase as volatility increases, which is consistent with expectations and with the actual CRSP volatility decile performance.
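For reference, the annualized Sharpe Ratios (Rf = 0%) reported in the tables below are typically computed from periodic returns as the annualized mean over the annualized standard deviation. A minimal sketch, assuming 252 trading days per year (my assumption, not stated in the post):

```r
# Annualized Sharpe Ratio with Rf = 0, from daily returns:
# (mean * periods) / (sd * sqrt(periods)), i.e. (mean / sd) * sqrt(periods)
sharpeAnnualized <- function(r, periodsPerYear = 252){
  (mean(r) * periodsPerYear) / (sd(r) * sqrt(periodsPerYear))
}

set.seed(1)
dailyRet <- rnorm(1000, mean = 0.0004, sd = 0.01)  # made-up daily returns
sharpeAnnualized(dailyRet)
```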
#### Replicated Volatility Decile Results

|  | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 100% |
|---|------|------|------|------|------|------|------|------|------|------|
| Annualized Return | 9.77 | 11.92 | 11.85 | 12.60 | 13.51 | 14.48 | 15.22 | 16.69 | 18.85 | 33.46 |
| Annualized Standard Deviation | 8.62 | 11.48 | 14.13 | 14.93 | 15.80 | 16.80 | 17.83 | 19.08 | 20.15 | 25.30 |
| Semi-Deviation | 6.28 | 8.43 | 10.20 | 10.84 | 11.52 | 12.18 | 12.93 | 13.91 | 14.79 | 17.65 |
| Annualized Sharpe Ratio (Rf=0%) | 1.13 | 1.04 | 0.84 | 0.84 | 0.86 | 0.86 | 0.85 | 0.87 | 0.94 | 1.32 |
| Adjusted Sharpe ratio (Risk free = 0) | -3.10 | -0.30 | 0.22 | 0.35 | 0.43 | 0.51 | 0.53 | 0.57 | 0.57 | 0.33 |
| Average Drawdown | 1.13 | 1.40 | 1.78 | 2.04 | 2.18 | 2.56 | 2.84 | 3.01 | 3.33 | 4.05 |
| Average Recovery | 11.73 | 10.07 | 10.50 | 10.75 | 10.63 | 11.08 | 12.39 | 12.60 | 13.50 | 13.81 |
| VaR 95 | -0.68 | -1.01 | -1.27 | -1.39 | -1.43 | -1.57 | -1.66 | -1.83 | -1.97 | -2.29 |
| VaR 99 | -1.51 | -2.05 | -2.39 | -2.47 | -2.70 | -2.86 | -2.97 | -3.14 | -3.47 | -4.08 |
| ETL 95 | -1.26 | -1.71 | -2.05 | -2.16 | -2.31 | -2.45 | -2.60 | -2.78 | -2.98 | -3.56 |
| ETL 99 | -2.44 | -3.07 | -3.68 | -3.80 | -4.07 | -4.22 | -4.41 | -4.67 | -4.98 | -6.03 |
| Worst Loss | -11.13 | -13.77 | -15.23 | -15.98 | -15.02 | -15.04 | -15.54 | -15.44 | -16.08 | -12.95 |
| Skewness | -0.02 | -0.77 | -0.28 | -0.61 | -0.58 | -0.44 | -0.42 | -0.46 | -0.54 | 0.45 |
| Excess Kurtosis | 71.10 | 25.61 | 22.85 | 16.06 | 13.08 | 10.96 | 10.31 | 8.76 | 8.26 | 12.49 |

#### Actual Volatility Decile Results

Table extracted from: Han, Yufeng, Ke Yang, and Guofu Zhou. 2013. "A New Anomaly: The Cross-Sectional Profitability of Technical Analysis." Journal of Financial and Quantitative Analysis 48 (5): 1433-61. Cambridge University Press. http://dx.doi.org/10.1017/S0022109013000586.

# Conclusion

This post presents R code which uses free Yahoo! Finance data to replicate the performance of the CRSP volatility deciles. Though the replicated portfolios contain survivorship bias, the overall risk and return characteristics match those exhibited by the actual CRSP volatility decile portfolios.

© 2016 Erol Biceroglu

# An EMA Trading Strategy for a Low Volatility Portfolio

# Introduction

The process I'm going to follow is based on content from the University of Washington's CFRM561 course, Advanced Trading System Design.
"Hypothesis driven development" is the core principle of this course: each step in the development process involves hypothesizing testable ideas, and verifying those ideas before proceeding to the next stage. The stages involve identifying one or more market indicators, testing that the indicators actually measure the intended market phenomena, hypothesizing entry and exit signals based on the market indicator(s), confirming that the signals have predictive power, and then setting entry and exit rules based on the signals.

I am personally a fan of low volatility investing, insofar as it provides enhanced risk-adjusted returns vs. traditional cap-weighted market indices. Thus, I have reconstructed the CRSP lowest volatility decile portfolio using data from Yahoo! Finance, and will use this index as the baseline for evaluating the trading strategy. The goal here is to verify whether an Exponentially-Weighted Moving Average [EMA] trading strategy can further enhance risk-adjusted returns by either increasing returns without increasing risk, decreasing risk without impacting returns, or both.

This post is a significantly condensed version of the full report. The full-version report, which is the actual assignment I submitted, can be found here. It contains all the details for each step, the results of the hypothesis tests, including the signal strength, as well as the interim parameter optimization steps.

### Indicator

The first question related to the indicator that we need to ask ourselves is, "what do we think we're measuring?" It can be argued that the EMA is measuring the "true de-noised price level", where the thinking is that the moving average "averages" the noise away, thereby revealing the true level. It can also be argued that, assuming the EMA is measuring the "true" level, it is also measuring the price trend. The rationale here is that, if there is a true level revealed, then the trend can be inferred from the time series of "true levels".
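The EMA recursion underlying this indicator can be sketched in a few lines. Note that TTR::EMA (which the quantstrat code later in this post calls) seeds the recursion with an n-period simple moving average; this simplified sketch just seeds with the first observation:

```r
# EMA recursion: EMA_t = alpha * P_t + (1 - alpha) * EMA_{t-1},
# with the common smoothing convention alpha = 2 / (n + 1)
emaSketch <- function(x, n){
  alpha <- 2 / (n + 1)
  out <- numeric(length(x))
  out[1] <- x[1]                      # simplified seed (TTR::EMA uses an SMA seed)
  for(t in 2:length(x)){
    out[t] <- alpha * x[t] + (1 - alpha) * out[t - 1]
  }
  out
}

level <- c(100, 102, 101, 105, 104)   # made-up index levels
emaSketch(level, n = 3)
```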
In order to test that the EMA is measuring the true level, the correlation between the low-vol index level and the EMA indicator is tested for strength and statistical significance. To test the trend, the EMA is compared against a measure of "true" slope by testing for high and statistically significant correlation and cointegration.

### Signal

Prior to defining any signal, note that the main prediction that might be possible with an EMA is stating whether or not the future index level will be greater than or less than the current index level, rather than generating a point forecast. For this, confusion matrices are used to measure whether the future-index-level forecasts are correctly classified as higher or lower than current index levels. The statistical significance of the confusion matrices is also verified to ensure that there is some information content.

In terms of the actual signal, the thought behind it is that if the current index level exceeds the EMA level, this will increase the subsequent EMA value (which was verified as a measure of "true" price level), which will ultimately increase the EMA trend. This would then be interpreted as a potential signal that the index level will rise. The converse also stands: if the current index level is less than the EMA level, this will decrease the subsequent EMA value (which is a measure of "true" price level), thereby indicating that the index level will potentially decrease.

### Rule

The rules are simple: if it is likely that the price will increase based on the relationship between indicator and index (i.e. the index level exceeds the EMA level), then a long position should be initiated. Conversely, if it is likely that the price will decrease based on the same relationship, then any long position(s) should be closed.

# Benchmarks and Objectives of Strategy

## Objectives

The main goal is to verify whether the risk-adjusted returns of the low volatility index can be enhanced.
Thus, maximizing a measure of risk-adjusted performance is the objective of this strategy, since leverage can always be applied to increase absolute returns:

$$\max(\text{return}) \quad \text{s.t.} \quad \min(\text{risk})$$

Given that the objective is to maximize risk-adjusted returns, the following statistics can be used to quantify them:

- Sharpe Ratio [SR]
- Profit-to-Max Drawdown [PMD]

During the trading simulations, PMD will be used to assess risk-adjusted returns, as it is more sensitive to the tails than the SR. However, in order to evaluate the viability of the trading strategy, the SR will be used.

## Benchmarks

Results of the EMA trading strategy will be compared against a 100% long version of the low volatility index. This portfolio is the "buy-and-hold" [BH] version of the index, whereas the trading strategy will be referred to as the Moving-Average Portfolio [MAPortfolio]. The BH portfolio is the "no effort" portfolio against which to measure whether the trading signals are adding value. Success is not defined by absolute returns; it may be possible to observe lower absolute returns, yet higher risk-adjusted returns.

# Results

## Signal Strength Results

The Heidke Skill Score [HSS] is used to quantify signal strength, as it is more unforgiving than the "Probability of Detection". The HSS for the up signal indicates that the skill level is relatively low for lag lengths of 15 days or greater. Moreover, the skill level is highest for 1 day forward with a 5-day lag, and drops off relatively rapidly for a 5-day lag when the number of forward days exceeds 3. The HSSs are much higher for the down signal, and appear to persist for 1 to 5 forward days and lag lengths up to 60 days, assuming that less than 1% skill is not enough to make useful predictions. To clarify, the 60-day persistence isn't visible on the contour plot; however, numerically the HSSs are more persistent for the down signal vs. the up signal.
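Both the confusion matrices from the Signal section and the HSS can be sketched together. For a 2x2 contingency table with a = hits, b = false alarms, c = misses, and d = correct rejections, the standard forecast-verification definition of the HSS is $\mathrm{HSS} = \frac{2(ad - bc)}{(a+c)(c+d) + (a+b)(b+d)}$, where 0 means no skill relative to chance and 1 is a perfect forecast. The data below is made up for illustration:

```r
# Directional forecasts vs. realized direction (made-up illustrative data)
predicted <- c("up", "up", "down", "down", "up", "down")
realized  <- c("up", "down", "down", "up", "up", "down")
cm <- table(Predicted = predicted, Realized = realized)

# Heidke Skill Score for a 2x2 table:
# a = hits, b = false alarms, c = misses, d = correct rejections
heidke <- function(a, b, c, d){
  2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
}

heidke(a = cm["up", "up"], b = cm["up", "down"],
       c = cm["down", "up"], d = cm["down", "down"])
```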
So far, this implies that the down signal contains more predictive power than the up signal. All skill scores determined to be "strong" are also statistically significant, for both up and down signals.

## Trading Strategy Results

After testing various lag lengths exhibiting high HSSs for both up and down signals, as well as performing a parameter optimization, the conclusion detailed in the full report was that it is likely more reliable to set the lag length based only on the HSS strength rather than on the backtest results, due to poor out-of-sample [OOS] performance. The OOS results, as well as the code, for the high-HSS lag lengths (5 days for the up signal and 5 days for the down signal) are displayed below.

```r
decilePortfolioMatrixLevel <- do.call(merge, lapply(decilePortfolioMatrix, retToLevel))

filterRange <- "1995/"
MAPortfolioALL <- decilePortfolioMatrixLevel[filterRange]

startDate <- as.Date(index(MAPortfolioALL)[1], format = "%Y-%m-%d")
endDate <- as.Date(index(last(MAPortfolioALL)), format = "%Y-%m-%d")

currency("USD")
stock(primary_id = c("MAPortfolio"), currency = "USD")

#name
maStrategy <- "MAStrategy"

#Date, one day before prices
strategyDate <- min(index(MAPortfolioALL)) - 1

#*******************************
#Used to reset for testing
#rm.strat(maStrategy)
#rm(mktdata)
#if (!exists('.blotter')) .blotter <- new.env()
#if (!exists('.strategy')) .strategy <- new.env()
#*******************************

NumSh <- 1
NumDeciles <- ncol(MAPortfolioALL)
maxDecile <- 1
MAPortfolio <- MAPortfolioALL[, maxDecile]
colnames(MAPortfolio) <- "MAPortfolio.Close"
maLag <- 5
maLagDown <- 5

#init portfolio and account
initPortf(name = maStrategy
          , symbols = list("MAPortfolio") #as defined in FinancialInstrument
          , initDate = strategyDate
          )
initAcct(name = maStrategy
         , portfolios = maStrategy
         , initDate = strategyDate
         , initEq = 1
         )

#order book, and strategy
initOrders(portfolio = maStrategy
           , initDate = strategyDate
           )

#position limits
addPosLimit(maStrategy, symbol = "MAPortfolio", strategyDate,
            maxpos = NumSh, longlevels = NumSh, minpos = 0)

strategy(maStrategy, store = TRUE)

#add indicators
add.indicator(strategy = maStrategy
              , name = "EMA"
              , arguments = list(x = quote(mktdata$MAPortfolio.Close), n = quote(maLag))
              , label = "MAPortfolioma"
              )

add.indicator(strategy = maStrategy
              , name = "EMA"
              , arguments = list(x = quote(mktdata$MAPortfolio.Close), n = quote(maLagDown))
              , label = "MAPortfoliomaDown"
              )

#add signals

#MAPortfolio is greater than N-Lag Moving Average
add.signal(strategy = maStrategy
           , name = "sigComparison"
           , arguments = list(columns = c("MAPortfolio.Close", "MAPortfolioma")
                              , relationship = "gt"
                              )
           , label = "MAPortfolio.gt.ma"
           )

#MAPortfolio is less-than-equal-to N-Lag Moving Average
add.signal(strategy = maStrategy
           , name = "sigComparison"
           , arguments = list(columns = c("MAPortfolio.Close", "MAPortfoliomaDown")
                              , relationship = "lte"
                              )
           , label = "MAPortfolio.lte.ma"
           )

#Entry and exit rules

#buy MAPortfolio when above N-Lag Moving Average
add.rule(strategy = maStrategy
         , name = "ruleSignal"
         , arguments = list(sigcol = "MAPortfolio.gt.ma"
                            , sigval = TRUE
                            , orderqty = NumSh
                            , ordertype = "market"
                            , orderside = NULL
                            , osFUN = "osMaxPos"
                            , symbol = "MAPortfolio"
                            )
         , type = "enter"
         )

#sell MAPortfolio when at or below N-Lag Moving Average
add.rule(strategy = maStrategy
         , name = "ruleSignal"
         , arguments = list(sigcol = "MAPortfolio.lte.ma"
                            , sigval = TRUE
                            , orderqty = "all"
                            , ordertype = "market"
                            , orderside = NULL
                            , osFUN = "osMaxPos"
                            , symbol = "MAPortfolio"
                            )
         , type = "exit"
         )

applyStrategy(strategy = maStrategy
              , portfolios = maStrategy
              )

updatePortf(maStrategy)
updateAcct(maStrategy)
updateEndEq(maStrategy)
```
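As a sanity check on the quantstrat machinery above, the same long/flat rule can be expressed as a simple vectorized backtest: hold the index on days where the prior close was above its EMA, and sit flat otherwise. This sketch uses made-up prices, the simplified first-observation EMA seed (rather than TTR's SMA seed), and lags the position by one day to avoid look-ahead:

```r
# Simplified EMA (seeded with the first observation) expressed as a fold
ema5 <- function(x, n = 5){
  alpha <- 2 / (n + 1)
  unlist(Reduce(function(prev, p) alpha * p + (1 - alpha) * prev
                , x, accumulate = TRUE))
}

close <- c(100, 101, 103, 102, 99, 98, 100, 103)   # made-up index levels
ret <- diff(close) / head(close, -1)               # daily linear returns
signal <- close > ema5(close)                      # TRUE when above the EMA
position <- head(as.numeric(signal), -1)           # lag one day: no look-ahead
stratRet <- ret * position                         # strategy returns: long or flat
```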


|  | BH | MAPortfolio |
|---|------|------|
| Annualized Return | 8.66 | 10.18 |
| Annualized Standard Deviation | 7.62 | 4.19 |
| Semi-Deviation | 5.61 | 3.04 |
| Annualized Sharpe Ratio (Rf=0%) | 1.14 | 2.43 |
| Adjusted Sharpe ratio (Risk free = 0) | -7.62 | -18.36 |
| Average Drawdown | 0.84 | 0.49 |
| Average Recovery | 9.33 | 7.05 |
| VaR 95 | NA | -0.25 |
| VaR 99 | -16.93 | -2.99 |
| ETL 95 | -25.02 | -0.25 |
| ETL 99 | -16.93 | -2.99 |
| Worst Loss | -6.64 | -4.17 |
| Skewness | 1.88 | -0.84 |
| Excess Kurtosis | 152.83 | 37.56 |

```
## [1] "t-test p-value:  0.52"
## [1] "F-test p-value:  0"
```

One key observation is that the performance during the 2008 financial crisis was relatively spectacular, with a drawdown of only approximately 10%, vs. 40% for the BH index. There was some underperformance at the start of the OOS period, up until the dot-com crash in late 1999 and early 2000, during which drawdowns for the MA portfolio were minimal, allowing accumulated returns to be preserved.
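The drawdown figures quoted above (and the denominator of the PMD objective) come from the running peak of cumulative wealth. A minimal sketch, with made-up returns:

```r
# Drawdown at time t: 1 - W_t / max(W_1..W_t), where W is cumulative wealth;
# the maximum drawdown is the worst such value over the sample
maxDrawdown <- function(r){
  wealth <- cumprod(1 + r)
  max(1 - wealth / cummax(wealth))
}

maxDrawdown(c(0.10, -0.20, 0.05, -0.10, 0.30))
```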

Mean returns are not statistically significantly different; however, variance is statistically significantly lower, resulting in the superior Sharpe Ratio of 2.43, vs. 1.14 for the BH Index. Unfortunately, the trading strategy does not exhibit positive skew, whereas, surprisingly, the BH Index does. Excess kurtosis, though high for both portfolios, is significantly higher for the BH index. It appears as though the trading strategy is significantly reducing the impact of negative market events, allowing the preservation of accumulated returns, with the drawback of potentially missing out on some upside. Given that the objective is to enhance risk-adjusted returns, the biased parameter combination appears to successfully meet this criterion. One additional factor is transaction costs, which were not considered in this study. These would need to be factored into the backtests to verify that superior risk-adjusted returns are maintained.
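A first-pass way to fold transaction costs into such a backtest is to deduct a proportional cost on every day the position changes. The sketch below uses made-up returns and an assumed cost of 5 basis points per switch; neither figure comes from the post:

```r
# Deduct an assumed proportional cost on each position change
dailyRet <- c(0.001, -0.002, 0.003, 0.001)   # made-up strategy-period returns
position <- c(1, 1, 0, 1)                    # in/out of the index per the rule
turnover <- abs(diff(c(0, position)))        # 1 whenever the position flips
costPerSwitch <- 5 / 10000                   # assumed 5 bps per switch
netRet <- dailyRet * position - turnover * costPerSwitch
```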