How Long Does It Take High-Flying Systems To Crash?

I think the random sampling gives a view of the possibilities, because the conditions that trigger the system to make certain trades could occur in a different order and at a different frequency in the future. So while the system would make the same trades for the same data, the order of those trades could be completely different and possibly lead to a higher max drawdown.
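The reordering idea can be sketched with a quick simulation: take a system's per-trade returns, shuffle their order many times, and see how far the max drawdown can stretch. The trade list below is a made-up placeholder, not any real system's results; only the shuffle-and-measure logic matters.

```python
import random

def max_drawdown(trade_returns):
    """Worst peak-to-trough decline of a compounded equity curve."""
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in trade_returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        worst = max(worst, 1.0 - equity / peak)
    return worst

# Hypothetical per-trade returns: mostly small wins, occasional losses.
trades = [0.02, 0.03, -0.05, 0.01, 0.04, -0.06, 0.02, 0.03, -0.04, 0.05]

random.seed(1)
shuffled_dds = []
for _ in range(10_000):
    sample = trades[:]
    random.shuffle(sample)  # same trades, different order
    shuffled_dds.append(max_drawdown(sample))

print(f"historical-order max DD: {max_drawdown(trades):.1%}")
print(f"worst shuffled max DD:   {max(shuffled_dds):.1%}")
```

The worst shuffled ordering (losses clustered together) typically shows a noticeably deeper drawdown than the historical ordering, which is exactly the point being made above.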

Still, for now, the return/risk ratio is very good.

Thanks,

However, the monte carlo method still doesn’t take into account particularities of a system. For example, a system may be ‘trained’ to give more weight to forecasted volatile days vs. forecasted less volatile days. A simulation would not be able to take this level of detail into account when it runs its millions of trades. The simulation would assign trades at random where they would not apply in real life.

Such a simulation should also then, in theory, be able to tell you a system’s best future performance, etc. etc. (Who needs backtests…just run a monte carlo on it. :smile: )

These simulations, imo, are not accurate for every type of system. And I’m not sure how accurate they are, period, as I’ve never seen a study done on them.

Given enough simulations (aka: curve fitting/optimizing), and tweaking, you can find/generate most any statistic you’re looking for.

Regardless, one ought not to rely on them anyway, imo. Protective measures would still be used in any case. So I don’t see much use in them…but that’s just me.

If trades have a correlation between them then Monte Carlo won’t be as accurate, but if trades are uncorrelated then simulations will be quite accurate presuming you have an accurate distribution. The distribution already contains the effects of system volatility scaling (for example) as well as other “particularities” of the system. Simulations on the distribution will contain these particularities implicitly because they are represented in the real results that generated the distribution. This is similar to how all market news is reflected in the price action of the market and a system can work simply by watching price rather than having to follow and understand news.

And you are correct, these simulations can tell you the likely range of system performance, likely drawdown ranges, expected “lull” periods without new highs, etc. I even prefer using MC simulated numbers over backtest numbers because they are based on live results and don’t suffer from the potential overfitting problems of backtested numbers.

Backtests give you results for specific historical market periods–they are useful in many ways but have significant problems (e.g. overfitting, limited data points). MC simulations give you statistical ranges for what to expect independent of specific past markets (presuming the system holds to the distribution), can generate effectively unlimited data points, don’t suffer from the “optimism” of overfit backtests, and are useful as an additional tool despite their limitations.

You generally want to see agreement between MC simulations and your backtests–if you don’t, something is off (e.g. your backtests might be overly curve fitted). The backtests of my system Drunk Uncle and the MC simulations on my system’s live data from C2 generally match for both max DD and return, and that gives me confidence in my estimates for these numbers.
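A minimal sketch of the kind of simulation being described: bootstrap-resample the live per-trade distribution (with replacement) many times, then report percentile ranges for max drawdown and return. The trade list and the 100-trade horizon below are invented placeholders, not Drunk Uncle's actual numbers; the resample-and-take-percentiles logic is the point.

```python
import random
import statistics

def max_drawdown(trade_returns):
    """Worst peak-to-trough decline of a compounded equity curve."""
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in trade_returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        worst = max(worst, 1.0 - equity / peak)
    return worst

# Hypothetical live per-trade returns standing in for a real system's results.
live_trades = [0.02, 0.03, -0.05, 0.01, 0.04, -0.06, 0.02, 0.03, -0.04, 0.05]

random.seed(42)
sim_dds, sim_returns = [], []
for _ in range(10_000):
    # One synthetic future: 100 trades drawn with replacement from the live set.
    path = random.choices(live_trades, k=100)
    sim_dds.append(max_drawdown(path))
    equity = 1.0
    for r in path:
        equity *= 1.0 + r
    sim_returns.append(equity - 1.0)

sim_dds.sort()
print(f"median simulated max DD: {statistics.median(sim_dds):.1%}")
print(f"95th-percentile max DD:  {sim_dds[int(0.95 * len(sim_dds))]:.1%}")
print(f"median simulated return: {statistics.median(sim_returns):.1%}")
```

Because each synthetic path reuses only the system's own live trade returns, any "particularities" baked into those results (volatility scaling, skew, fat tails) are carried into the simulation implicitly, which is the argument made above.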

Your final comment about curve fitting/optimization on MC simulations makes no sense. There is nothing that gets curve fit, optimized, or “tweaked” in an MC simulation. MC simulations are objective–I can run them on any system on C2 and get objective, repeatable answers (with limitations, of course). It’s backtesting that has problems with curve fitting/optimization.


Perhaps this can explain it better than I can:

Excerpts from the underlying article:

The problem is that the typical assumption set used in Monte Carlo simulation assumes normal distributions and correlation coefficients of zero, neither of which are typical in the world of financial markets.

The problem is the confusion of risk with uncertainty. Risk assumes knowledge of the distribution of future outcomes (i.e., the input to the Monte Carlo simulation). Uncertainty or ambiguity describes a world (our world) in which the shape and location of the distribution is open to question. Contrary to academic orthodoxy, the distribution of U.S. stock market returns is far from normal.

The probability results from Monte Carlo simulation may look impressive to a client. However, if that number is derived from assumptions that are not realistic, there is no value to the number. It does provide a good excuse: “Well, the Monte Carlo model did tell us there was a 15 percent chance of this happening.”

In the end, Monte Carlo simulation seems to clash with the continuing development of a holistic approach that allows changes in the client’s investment allocation over time, with the corresponding changes of anticipated rate of return over the same time periods. Modern software can and should incorporate greater flexibility within the financial planning model, making random rate-of-return simulations even less relevant.

While the profession quietly questions Monte Carlo simulation, the benefits are being loudly proclaimed by the software industry as the hottest new innovation in financial planning in decades. Marketing hype touts this new information as one more item of value in a client’s financial plan, while opponents say that when we extend client expectations out 20 or 30 years, identifying factors such as lifestyle expenditures, tax rates, inflation and investment preference based on risk may be more important. Chaos theory teaches us how small errors in the early years of a financial plan can make dramatic consequences when compounded over a long period of time, so let’s not make our real problems any bigger than they are. Some “what if” scenarios are necessary, but let’s do it right and do it often. The real answer is to make the plan as representative as possible under the circumstances and to update it regularly. The benefit that Monte Carlo simulation promises to provide might be better achieved by using common sense in the financial planning process.

Real-world results are worse than Monte Carlo projections because of emotional trading during drawdowns.


A couple replies…

Most importantly I’m well aware of the limitations of MC but I find it useful despite these limitations. Having a supporting (or contrasting) estimate on return/DD as an additional indicator to backtesting or C2 stats is a lot better than not having it. And I observe the value of MC as my MC estimates usually correlate quite well with C2 stats on mature systems that have a lot of data. Accuracy drops with younger systems as there is not as much live data to build a representative distribution, however it’s still better than “flying blind.”

As for the excerpt you posted, notice I’m not using MC to create a general financial plan, I’m evaluating trading system performance (again, getting that 2nd opinion over backtest data or C2 stats alone). I’m also not presuming the market is normally distributed. It’s not the market returns that are being simulated, it’s trading system returns. Trading system returns might not be normally distributed either (most are skewed or otherwise NOT normal), but I’m not presuming normal distribution, I’m using a distribution created from actual live trading results. Of course a system could be changed by its developer or otherwise not have returns that make a representative distribution (especially true for young systems without much data), but again I find it more useful than not having it. There also doesn’t need to be a presumption about the correlation of trading system returns as it’s trivial to actually check their auto-correlation with a simple spreadsheet call.
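The auto-correlation check mentioned above is indeed trivial; in a spreadsheet it would be something like `CORREL` on the trade series against itself shifted by one row. A minimal equivalent in code, using made-up trade returns for illustration:

```python
import statistics

def lag1_autocorrelation(returns):
    """Correlation of each trade's return with the next one.
    Values near zero suggest trade-order independence, which is the
    assumption plain Monte Carlo resampling makes implicitly."""
    mean = statistics.fmean(returns)
    numerator = sum((a - mean) * (b - mean) for a, b in zip(returns, returns[1:]))
    denominator = sum((r - mean) ** 2 for r in returns)
    return numerator / denominator

# Made-up per-trade returns for illustration only.
trades = [0.02, -0.01, 0.03, -0.02, 0.01, 0.04, -0.03, 0.02, 0.01, -0.02]
print(f"lag-1 autocorrelation: {lag1_autocorrelation(trades):+.3f}")
```

If this comes back strongly positive or negative, wins and losses cluster, and a simple resampling MC will understate or overstate drawdown risk accordingly.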

I get that you’re not convinced of the value of MC simulation, but is it just because it reports something you don’t agree with? What is your estimation of the DD for your system over the longer term, based on backtests or whatever you’re using? I have my estimation and I have my basis for that estimation (and it will improve as your system generates more data). I trust my live-data-based estimations more than developer backtests–I know how easily backtests are optimistic and over-fit.


I agree that emotions may adversely affect certain discretionary traders with regard to their trading.

However, those using a ‘system’ are likely less affected.

And those using an algo are even less affected.

Seasoned traders should have realized that drawdowns happen. No one trader or system will be correct every day, of course.

There is no way to verify a backtest; yet everyone wants to see one. There is also no way to verify an MC simulation. With both, you really just have to ‘wait and see.’

Whether one has backtests, MC simulations, or 20 years of data, the bottom line is that a smart trader will use–or should use–their own methods of protection. A trader doesn’t have to follow every signal from an unknown, but profitable-so-far, system. (I could speak more on that.)

I’m happy if someone can rely upon what they see so far, combined with their personal proper money management, and trade using a so-far well-performing system.

And I’m also happy if someone wants to watch me perform well for 20 years (to get more data to analyze) before even considering following my system.

:smile:

I appreciate your confidence and I don’t like being “that guy,” but there is no way that system will stay within a 28% DD. That is a near mathematical certainty. The instruments you are trading are extremely leveraged, and your trades have variation even as they make money–just from the normal volatility of the trades you are making, you will eventually double that DD, even presuming your profitability remains the same. These are mathematical realities. You have built a great set of loaded dice and you’re rolling mostly big wins with them, but there are still mathematically predictable drawdowns you will eventually suffer simply due to chance. These are what MC simulations express.
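The "loaded dice" point can be illustrated with a toy simulation. The win rate, trade sizes, and horizon below are invented for the sketch (they are not the system under discussion): even a trade distribution with a clearly positive edge produces deep drawdowns over enough trades, purely by chance.

```python
import random

random.seed(7)

def max_drawdown_of_path(n_trades):
    """Simulate one path of a hypothetical positive-edge system:
    70% chance of +4%, 30% chance of -6% per trade (expected value
    +1% per trade), and return its worst peak-to-trough drawdown."""
    equity, peak, worst = 1.0, 1.0, 0.0
    for _ in range(n_trades):
        r = 0.04 if random.random() < 0.70 else -0.06
        equity *= 1.0 + r
        peak = max(peak, equity)
        worst = max(worst, 1.0 - equity / peak)
    return worst

# Many long-horizon paths: how often does chance alone breach a 28% DD?
paths = [max_drawdown_of_path(2_000) for _ in range(1_000)]
exceed = sum(dd > 0.28 for dd in paths) / len(paths)
print(f"share of paths whose max DD exceeds 28%: {exceed:.0%}")
```

Every one of these paths has the same favorable dice; the drawdowns come from sequencing alone, which is why longer horizons make a fixed DD cap harder and harder to hold.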

The market will be a better teacher than I however. Perhaps after you hit a 40% or 50% DD you’ll be more open to furthering discussion. I’d wish you luck, but I think this goes beyond luck. If you roll a pair of dice enough times you’ll eventually hit snake eyes.


Ok, we’re back to where we started. You say you know what my drawdown will be going forward based on your random simulations or “math.”

I say you don’t know my system.

I’ve already indicated I now use stop-losses and profit targets where I didn’t in the past. I also don’t use 100% allocation like I did in the past. You seem to admit I can forecast with ‘loaded dice’-like accuracy. Yet none of the above has swayed your guess.

Nevertheless, you insist my DD will be closer to 50% than 28%. I say it will be closer to 28% than 50%. We’ll just have to wait and see and agree to disagree, no?


I’ll preface this question by deferring to the knowledge & expertise of DavidStephens & MachineLearningTradr, and to say I do not mean this to be antagonistic…

Why have you changed your methods to include these two parameters?

Good question.

Well, “changed” methods is a strong phrase. I wouldn’t call it that; but I know what you mean.

I had a few followers prior to taking subscribers here on C2. Since day one, one particular follower suggested stops would help performance. I indicated maybe, and that I’d get around to looking into it. I wanted algo-controlled or algo-derived stops, so it would require some coding and time to implement. So the priority was improving the algo first; stops, etc. later.

So, with the algo improved, and other tests and experiments out of the way, I put the stops in. I put the targets in since I was under the hood anyway; it wouldn’t be that much more effort to add them as well. Also, I was curious to see how both would affect performance. The algo could choose not to use them if it determines they don’t help performance. But in fact, they do, and it does.

I am currently still doing experiments with a dynamic position sizing feature. It should be ready tonight or tomorrow.

Admittedly, having one or two subscribers, and just being responsible for the well-being of others’ money, has torqued my brain more towards the importance of risk countermeasures than it was when I only traded my own money.


Just as a note, changes to a system will usually modify the system’s return distribution and invalidate data from previous MC simulations, since they were based on a now non-representative distribution.

I’m not a fan of changing a live system, but that’s another topic.

You shouldn’t change a system because of subscribers…it implies that the system wasn’t working in the first place, or that you are not sure it will work well.

Right. That’s why I thought it was strange that you stood by your simulations even after I first mentioned that I added stop losses, etc.

I understand some feel that way. But I’ve gotten this far by constantly improving my system. If a change doesn’t improve it, it’s reverted.

Or, as in this case, the change is merely an ability for the algo to change something on its own. It won’t make the change unless it determines an improvement will take place.

I don’t believe I said that.

What if the subscriber had meaningful recommendations for improvement of the system?

Edit: And/or, had suggestions that may make the system more appealing to other potential subscribers?


You take notes and feedback, yes, but you don’t change your system because of a subscriber; it doesn’t make sense. What if you had 100 subscribers? Or maybe the subscriber should start a system himself.

IMO the problem with changes to a system is the changes can take a functioning system and introduce subtle curve-fitting issues. To make a change (even a minor change) there is usually optimization done against historical data and that’s a difficult process to get right. Just getting a system to function correctly going forward a single time is incredibly challenging–each further change is a risk to the forward functioning of the system. There are exceptions but generally my view is that after a change to a system the forward functioning of the system needs to be proven again with months of live results. A good way to do it is to group several changes and then start a new “version 2” of the system with the changes. Users can decide to switch to the new system or stick with the old one until the new one proves itself out.

When you look at how few systems last more than a few years it makes you wonder. I have no doubt “changes” are at least partially responsible for quite a number of systems that stopped working.


I don’t think a general statement like that can responsibly be made without evidence that it applies so broadly.

Even worse than MC simulations, it tars every developer who ever improves a system they built from nothing with a broad, unsubstantiated opinion.

I respect that this may have been your result; but I don’t think you have grounds to make such a general statement. It certainly hasn’t been my result.

But once again, like before…we’ll see.