The subject matter may be a little dry and esoteric for some, but if you’re of a mathematical bent, you’ll be interested in the work of Daniil, one of C2’s quant analysts. He just published a blog post here:
Hi,
Yes, there is a simple formula derived from this tool, and it is currently being tested in real time with a smart portfolio.
A few words on how the formula is derived:
Even without considering boundary values for the attributes, and stepping through only integer weights from 0 to 5 for each of the 12 attributes, we get 6^12 = 2,176,782,336 combinations. You’ll agree that is a lot, even for a machine search.
So there is a lot of art in developing the formula at this stage.
I’ve tried quite a few complex variants of the formula (many attributes) and a number of simple ones (few attributes).
Of the variants I have tried (a drop in the ocean of all possible combinations), the simple formulas have shown the most stable results across different portfolio sizes. What I mean is that sometimes the formula for the top 5 portfolio shows much worse results for top 4 or top 6 portfolios. That is an indicator of overfitting.
There is no such problem with the simple formulas.
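As an aside, here is a minimal Python sketch of the scale of that search and of the top-4/top-5/top-6 stability check described above. The attribute count comes from the post; `backtest_score` and `backtest_top_n` are hypothetical placeholders, not C2’s actual code.

```python
NUM_ATTRIBUTES = 12        # twelve scored strategy attributes, as described above
WEIGHT_VALUES = range(6)   # integer weights 0 through 5

# Size of an exhaustive sweep over the weights: 6 ** 12 combinations.
total_combinations = len(WEIGHT_VALUES) ** NUM_ATTRIBUTES
print(f"{total_combinations:,}")  # 2,176,782,336

# A full enumeration would look like this -- shown only to illustrate the scale,
# not something anyone would actually run to completion:
# for weights in itertools.product(WEIGHT_VALUES, repeat=NUM_ATTRIBUTES):
#     backtest_score(weights)   # hypothetical backtest of one weighting


def looks_overfit(backtest_top_n, weights, sizes=(4, 5, 6)):
    """Rough check for the symptom described above: a weighting whose results
    collapse when the portfolio size moves from top 5 to top 4 or top 6.
    `backtest_top_n(weights, n)` is a hypothetical callable returning, say,
    the annualized return of the top-n portfolio under that weighting."""
    results = [backtest_top_n(weights, n) for n in sizes]
    return max(results) > 0 and min(results) < 0.5 * max(results)
```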
Thank you for the info, Daniil (and Matthew). I truly admire “quants” like yourself, who can find and exploit tons of market inefficiencies that the average trader cannot possibly see.
Keep up the good work.
While it’s quite possible we made errors in implementation, the design of the methodology was meant to be very thoughtful about hindsight bias.
A few specifics:
When we say, “Let’s take the top 5 strategies in January 2022 and look at their subsequent 3 month performance” we actually go back to the strategy rankings as of January 2022 – without any particular knowledge about how the strategies scored after that date.
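To make that concrete, here is a minimal sketch of a point-in-time evaluation of that kind. The data layout (a rankings snapshot table and a monthly-returns table) is hypothetical, not C2’s actual schema; the point is only that strategy selection uses nothing dated after the snapshot.

```python
import pandas as pd

def forward_performance(rankings: pd.DataFrame,
                        monthly_returns: pd.DataFrame,
                        as_of: str, top_n: int = 5,
                        horizon_months: int = 3) -> float:
    """Pick the top-N strategies using only the rankings snapshot dated `as_of`,
    then measure their average compounded return over the next `horizon_months`.

    rankings        -- columns: [date, strategy_id, score] (one snapshot per date)
    monthly_returns -- indexed by month-end date, one column per strategy_id
    """
    snapshot = rankings[rankings["date"] == as_of]
    top = snapshot.nlargest(top_n, "score")["strategy_id"].tolist()

    # Only returns strictly after the snapshot date count as "subsequent" performance.
    future = monthly_returns.loc[monthly_returns.index > pd.Timestamp(as_of), top]
    window = future.iloc[:horizon_months]

    # Compound each selected strategy's monthly returns, then average across strategies.
    compounded = (1.0 + window).prod() - 1.0
    return compounded.mean()

# Example: forward_performance(rankings, monthly_returns, "2022-01-31")
# scores the January 2022 top 5 using only information available on that date.
```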
We also designed the methodology to be aware of survivorship bias – a very insidious but surprisingly common flaw in many people’s backtesting. A common example of this is someone who wants to analyze a database of “the entire set of stock symbols” as of June 2022. Perhaps he says, “Gee, I wonder what would have happened if I went back in time and used Method X to select from this database of all known stock symbols.”
The inexperienced analyst tells himself: “It’s cool – there’s no bias, because, after all, my database contains the entire universe of today’s stock symbols. I’m not cherry-picking only the best stocks.”
The flaw here is not obvious. Do you see what it is? It’s that lots of shitty companies were delisted and disappeared between January 2022 and June 2022… so that “database of all known stock symbols” you “choose from” with complete dispassion has already been filtered to get rid of the crap.
So, yeah, we designed our methods to avoid this. We never “delist” strategies from our database. Crappy strategies that “disappear” are still in the database, etc.
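For illustration, a sketch of what “not delisting” means when constructing the universe for a given date. The `listings` table and its column names are hypothetical.

```python
import pandas as pd

def universe_as_of(listings: pd.DataFrame, as_of: str) -> pd.Series:
    """Every symbol/strategy that existed on `as_of`, including ones that were
    later delisted or abandoned.

    listings -- hypothetical columns: [symbol, listed_on, delisted_on],
                with delisted_on set to NaT for anything still active today.
    """
    cutoff = pd.Timestamp(as_of)
    alive = (listings["listed_on"] <= cutoff) & (
        listings["delisted_on"].isna() | (listings["delisted_on"] > cutoff)
    )
    return listings.loc[alive, "symbol"]

# The biased version -- what the inexperienced analyst effectively does --
# selects only from today's survivors:
# biased_universe = listings.loc[listings["delisted_on"].isna(), "symbol"]
```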
Again, I’m not guaranteeing Daniil and the team here didn’t make a bonehead mistake – these kinds of mistakes are very common when doing this sort of analysis – but I am saying that we’re not complete noobs, and yeah, we understand how easy it is to over-fit data with the benefit of hindsight. We try to avoid those pitfalls.
I want to believe these results. They just seem too good to be true. I would love to see the results of a real account dedicated to just following this screening system. Then it would be great to see the results for the real account compared to a backtest over the same time period.
There’s definitely no holy grail. So I do think there must be something wrong with these results. I only wanted to point out that whatever it is, it’s not a “let’s just blindly over-optimize backtest data” error baked into the design (at least as far as I can determine), but is more likely an implementation issue, perhaps with the way the historical data is generated.
And actually, thinking about it more, the problem here is probably a version of the “wastepaper basket” problem. When you have a nice UI where you can pull sliders left and right, and see results if you do this, or do that, then it’s easy to find some set of parameters that produce nice results. (That’s the analogy to a “wastepaper basket” filled with 100 crumpled papers with ideas that didn’t work; then finally idea #101 just happens to be great! You publish idea #101 but never talk about ideas 1-100.)
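A toy illustration of that effect: try enough worthless ideas on pure noise and the best one will still look like a winner, even though none has any edge.

```python
import numpy as np

rng = np.random.default_rng(0)
noise_returns = rng.normal(0.0, 0.01, size=1000)  # 1000 days of pure noise: no edge exists

best = -np.inf
for _ in range(100):                               # 100 crumpled papers in the basket
    # Each "idea" is just a random long/flat signal -- it cannot have real skill.
    signal = rng.integers(0, 2, size=1000)
    best = max(best, float(np.sum(signal * noise_returns)))

print(f"best of 100 worthless ideas: {best:.1%}")  # the survivor still looks impressive
```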
The way around this is to use out-of-sample testing: i.e. decide on a model that probably will work, run it on days 1-1000 of backtest data, see that it actually did work okay… and then run it on days 1001-1500 to confirm.
But the more times you try to find a winning strategy in days 1-1000, and the more degrees of freedom you have to play with (i.e. how many little sliders and stats you can play with), the less likely it is that the out-of-sample performance will correspond to the in-sample results.
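Sketched in code, that discipline looks roughly like the following. Here `run_backtest` and the candidate parameter sets are hypothetical placeholders; the caveat in the previous paragraph is why the held-out window should be touched exactly once.

```python
def in_and_out_of_sample(returns, candidate_param_sets, run_backtest):
    """Choose parameters on days 1-1000 only, then report how that single
    choice performs on days 1001-1500.

    `run_backtest(data, params)` is a hypothetical function returning a score
    (e.g. risk-adjusted return) for one parameter set on one slice of data."""
    in_sample, out_of_sample = returns[:1000], returns[1000:1500]

    # All of the slider-pulling happens here, on the in-sample window only.
    best_params = max(candidate_param_sets,
                      key=lambda p: run_backtest(in_sample, p))

    # One shot at the held-out window. If you iterate on this step after
    # peeking at the result, it quietly becomes in-sample data too -- and the
    # more candidates you tried above, the more the out-of-sample number
    # should be expected to disappoint.
    return best_params, run_backtest(out_of_sample, best_params)
```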