Last weekend I ran some numbers on the grid I downloaded a month ago, to see which predictors in that snapshot actually predicted the returns realized over the following 30 days.

Looking at the results, it’s easy to see why it’s so difficult to pick winners (here or anywhere).

The data are very messy, so the results are not very stable, whether I look at bivariate correlations or at regression analyses. To reduce some of the noise, I restricted most of my analyses to strategies that ranked at least 30 (out of 100) on the C2 Score a month ago (316 strategies). Still, outliers may be driving some of the results.
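For anyone who wants to try the same kind of pass on their own grid download, here is a minimal sketch of the filter-then-correlate step. The column names and all of the numbers are made up for illustration; the real grid export uses its own labels.

```python
import pandas as pd
from scipy.stats import pearsonr

# Toy stand-in for the downloaded grid snapshot. Columns and values
# are hypothetical, purely to show the shape of the analysis.
grid = pd.DataFrame({
    "c2_score":          [45, 12, 67, 30, 80, 25, 55, 90, 33, 71],
    "subscription_cost": [99, 250, 49, 150, 75, 300, 120, 60, 200, 89],
    "return_30d":        [0.02, -0.05, 0.04, -0.01, 0.03,
                          -0.04, 0.00, 0.05, -0.02, 0.01],
})

# Restrict to strategies with a C2 Score of at least 30
subset = grid[grid["c2_score"] >= 30]

# Bivariate Pearson correlation between one predictor (subscription
# cost a month ago) and the realized 30-day return
r, p = pearsonr(subset["subscription_cost"], subset["return_30d"])
print(f"n = {len(subset)}, r = {r:.3f}, p = {p:.3f}")
```

The same loop over every predictor column, sorted by |r|, reproduces the kind of ranked list reported below.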

Among strategies with at least a 30 C2 Score, the best predictor of performance over the last 30 days was subscription cost (Pearson correl. r=-.169). **The lower the subscription cost (a month ago), the better the performance over the last 30 days.**

The other significant predictors were:

**• Log of (Annual Returns w Trading Costs +1): -.164; Higher annual returns as of a month ago did worse going forward 30 days**

**• Sharpe Ratio: -.140; Higher ratios did worse**

**• Annual Returns w/out fees: -.119; Higher returns did worse**

**• Last 60-day return: .118; Higher recent returns did BETTER going forward**

**• Correl. to SP500: -.117; Lower correlations did BETTER**

I can understand why higher 60-day returns would be a better positive predictor than annualized returns, but why would 60-day returns predict better than 90-day returns? [Probably just noise.]

Among the most interesting insignificant results were:

**• Longer strategy age had a positive coefficient on 30-day returns going forward (.042);**

**• Higher drawdowns had a tiny negative coefficient (-.009);**

**• Higher winning percentage had a tiny negative coefficient (-.009).**

In the regression models, which were confounded by high multicollinearity, **longer strategy age (and its log) usually had a significant positive effect on returns**, controlling for other variables.
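Multicollinearity like this (e.g., strategy age and its log moving almost in lockstep) can be flagged with variance inflation factors before trusting any coefficient. A rough sketch with synthetic data, where the age and log(age) columns are collinear by construction; all numbers here are simulated, not from the grid:

```python
import numpy as np

# Simulated predictor matrix (columns: age, log(age), annual return,
# Sharpe ratio). Age and log(age) are near-duplicates by construction,
# mimicking the collinearity problem in the regressions.
rng = np.random.default_rng(0)
age = rng.uniform(1, 60, size=200)            # strategy age in months
X = np.column_stack([
    age,
    np.log(age),
    rng.normal(0.2, 0.1, size=200),           # annual return (simulated)
    rng.normal(1.0, 0.5, size=200),           # Sharpe ratio (simulated)
])

def vif(X):
    """Variance inflation factor per column: 1 / (1 - R^2), where R^2
    comes from regressing that column on the others plus an intercept."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

print(vif(X))  # the age and log(age) columns should show inflated VIFs
```

A common rule of thumb treats VIFs above 5 or 10 as a sign that the individual coefficients (and their significance tests) are unreliable.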

When I have more data to analyze in a month or two, I’ll do a more extensive analysis.

I think the results reflect the market's stall over the last month, and 30 days is **MUCH** too short a window for judging criteria for winning strategies. **I suspect that if I had data to run the analysis back to November, the results would be very different—and many (or most) of the correlations would be reversed.**

As they say here and elsewhere, past performance is no guarantee of future results.

BTW, Matt and Collective2 should be commended for giving us so much data with which to make better informed choices. The more time I spend here, the better I like this site.