C2 Score

I just have a question based on my observation: why do ALL my systems have the SAME C2 Score when the performance, length, and consistency are different for each one?



Does this mean that if I have a really good system and a crappy system, the really good one gets penalized because of the crappy one? I am hoping that is not the case… Please explain.

The C2 Score is a measure of a vendor, not of an individual system.



Nobody (except C2 Staff) knows how it’s calculated, though…

What makes you think C2 Staff knows how it’s calculated …?

Then this C2 Score is a completely misleading metric… A system should be measured on its own merit rather than on its vendor’s…



Say, for example, a vendor offers several systems, but his intention is not for subscribers to subscribe to ALL of his systems (which is what collectively produces his C2 Score), but rather to individual systems, each of which should be measured on its own performance.



Seems like if they are rating a VENDOR rather than a SYSTEM, then each vendor should create NEW ACCOUNTS as alias vendors and just post one system per account… That would be the only way to get a SYSTEM’s actual C2 Score…



IMHO, rating the vendor is the wrong way to present these scores.



Any comments from the C2 staff?

This question comes up often. The thinking behind assigning a vendor score (rather than a system score) is as follows.



Imagine an unscrupulous system developer. He starts two systems on C2: Twiddle-Dee and Twiddle-Dum.



Whenever Twiddle-Dee buys, Twiddle-Dum sells. And vice versa.



At the end of a year, he looks at which system happened to perform well. (It is possible that neither did. But it is also possible that one performed quite well while the other did not.)



This is the reason (among others) that we assign VENDOR scores as opposed to system scores.
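
To make the arithmetic of the trick concrete, here is a minimal Python sketch. The trade counts and sizes are made-up assumptions, not anything C2 models; and because this frictionless version ignores commissions, one of the two mirrored systems always ends up positive (with real trading costs, both could lose, as noted above):

    # Minimal sketch of the Twiddle-Dee / Twiddle-Dum trick.
    # Trade counts and sizes are illustrative assumptions only.
    import random

    random.seed(42)

    def mirrored_year(num_trades=100):
        """Simulate a year of coin-flip trades; Twiddle-Dum always takes
        the opposite side of Twiddle-Dee, so its P&L is the mirror image."""
        dee = sum(random.gauss(0, 1) for _ in range(num_trades))  # pure noise
        return dee, -dee  # Dum's result is exactly the negative of Dee's

    dee, dum = mirrored_year()
    # One of the two shows a profit purely by construction, no skill needed:
    best = "Twiddle-Dee" if dee > dum else "Twiddle-Dum"
    print(f"Dee: {dee:+.1f}, Dum: {dum:+.1f} -> advertise {best}")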



But the reality is that the C2 Score will be undergoing a major overhaul in the near future, to make it more interesting and (hopefully) predictive. We still haven’t decided whether to make the score system-based or vendor-based.



Matthew

Matthew:

With all due respect, that example of T-Dee and T-Dumb does not make sense… If subscribers want to subscribe to one of the two systems, then they should be able to measure (à la the C2 Score) the merit of each system by itself… By putting the C2 Score on the vendor, you are saying, "Subscribers, IF you subscribe to ALL the models from this vendor, then the C2 Score is…"



I beg to differ… Should I then be creating multiple accounts with one system per account? Looks like that is the only way around it… And what happens if I "KILL" some systems in my account? Does the C2 Score use the performance of the KILLED systems as well?



Please help me understand this.



Thanx

Matthew,

I think your T-Dee and T-Dumb concept is an unprovable myth. Set up two accounts and, over time, show us that it is not a myth. Make the monthly fee so high that no one will subscribe, and note in the description that it is a proof of concept, to protect investors.

Still waiting to hear back from Matthew or any other C2 Staff… Looks like the vendors do agree about the reputation of the C2 Score.

My only comment is that of course system vendors agree that they would rather have a score that does not follow them from system to system.



That in itself is not a good thing or a bad thing, nor will it ultimately decide the question conclusively, one way or the other.



But I don’t think it’s very surprising that, from a purely vendor-specific point of view, system-only scores are much preferred to vendor scores.

I should also add that I opened another thread in the C2 Software Developer’s Forum where people can brainstorm with me and each other about the ideal way to re-jigger the C2 Score.



Let’s try to re-direct responses there, if appropriate:



http://www.collective2.com/cgi-perl/board.mpl?want=listmsgs&boardid=20960084&threadhilite=9647

The C2 Score should actually be two scores:



Vendor score

System score



The system score should be based on these factors (a rough sketch of how they might be combined follows the list):



Smoothness of equity curve

Risk-adjusted return

Total return

Random probability of achieving those returns

Statistical validity of those returns

Both long term and short term performance

Max open trade risk

System Age

# trades

Best-case but realistic commissions and fees (not the "typical" ones).
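
As promised above, here is a rough Python sketch of how those factors might be blended into one number. Every field name, weight, cap, and normalization here is a hypothetical illustration; nothing reflects how C2 actually computes anything:

    # Hypothetical blend of the proposed factors into a 0-100 system score.
    # Weights and normalizations are made-up illustrations.
    from dataclasses import dataclass

    @dataclass
    class SystemStats:
        equity_curve_r2: float   # smoothness proxy: R^2 of a linear fit, 0..1
        sharpe: float            # risk-adjusted return
        total_return: float      # fractional return, e.g. 0.35 for +35%
        max_open_risk: float     # worst open-trade drawdown, fraction of equity
        age_days: int            # system age
        num_trades: int          # number of closed trades

    def clamp(x, lo=0.0, hi=1.0):
        return max(lo, min(x, hi))

    def system_score(s: SystemStats) -> float:
        score = 0.0
        score += 25 * s.equity_curve_r2              # smoother equity curve
        score += 25 * clamp(s.sharpe / 3.0)          # capped risk-adjusted return
        score += 20 * clamp(s.total_return)          # capped total return
        score -= 20 * clamp(s.max_open_risk)         # open-trade risk is a penalty
        score += 10 * clamp(s.age_days / 365.0)      # reward track-record length
        score += 10 * clamp(s.num_trades / 100.0)    # reward sample size
        return max(score, 0.0)

    # A hypothetical year-old system with 120 trades and a +35% return:
    print(system_score(SystemStats(0.9, 1.8, 0.35, 0.15, 400, 120)))  # 61.5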



The score should be self-adjusting. Systems and vendors that fly high and crash should be heavily penalized. However, everyone should have the ability to improve their score if they do well.



Some things to watch for (a sketch of these checks follows the list):

Systems that achieve all their winnings from only one or a few trades. This is a negative.

Systems that have high max open trade drawdown risk. This is a negative.

Systems that trade infrequently. This is a negative in many ways, although positive for fees and commissions.
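
For what it’s worth, here is a rough Python sketch of those three red-flag checks. The thresholds are made-up examples, not anything C2 uses; flagged systems could then have the self-adjusting penalty described above applied to their score:

    # Hypothetical red-flag checks for the watch-list above.
    # All thresholds are made-up examples.

    def red_flags(trade_pnls, max_open_risk, days_active):
        flags = []
        total = sum(trade_pnls)
        wins = sorted((p for p in trade_pnls if p > 0), reverse=True)
        # Nearly all profit from one or two trades -> likely luck, not skill.
        if total > 0 and sum(wins[:2]) > 0.8 * total:
            flags.append("profits concentrated in 1-2 trades")
        # Large worst-case open-trade drawdown (fraction of equity).
        if max_open_risk > 0.3:
            flags.append("high max open-trade drawdown risk")
        # Fewer than about one trade a month -> too small a sample to judge.
        if days_active > 0 and len(trade_pnls) / days_active < 1 / 30:
            flags.append("trades too infrequently for statistical validity")
        return flags

    # A hypothetical system that trips all three checks:
    print(red_flags([50, 40, 2, -3, 1, -2], max_open_risk=0.35, days_active=365))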