The Grid

Why does the Grid show only 789 systems when the front page of C2 says it has audited over 10,000 systems?

Why would a system like Diamond NQ 100AT, which has been underwater since inception, have a score of 1000? Likewise, TMD: Google, which is underwater, has a C2 score of 999.

Most systems are dead or very inactive. The 10,000-systems figure is misleading. The number of active systems here is at best in the few hundreds.

I don’t understand why Matthew would bother to advertise that C2 has over 10,000 systems when less than 800 are active.

Never worked in sales then Fred? :slight_smile:

If I did, and wanted repeat customers, then I would work hard to establish my credibility rather than make a sale regardless of the consequences. But, hey, I’m different!

I still am left wondering about the value of C2 scores given that Diamond NQ100 AT has a score of 1000 and an annualized return of -23.6%.

Well, after comparing my C2 score with some on here, I’m still puzzled that it is so low, given that all but one of my systems are in profit!

Put the word “Diamond” in your system’s name;>) The system I mentioned in my first post still has a score of 1000 and an annualized return of around -20%!


I believe the C2 Score is the Vendor score, not the individual system score. So if the vendor has 5 "good" systems and 1 "bad" system, the C2 score (vendor score) could be high and show under each of the respective systems.

Thanks guys.

But we really should not have to scratch around in the dark searching for the answer. Somebody from C2 should explain these oddities; after all, this service ISN’T FREE (in fact it’s relatively expensive). Vendors should know precisely what makes up the Vendor score. At the moment it seems rather arbitrary, with very little logic underpinning it.

Sorry, Notch. I thought we had a page that explained the C2 Score, but I think it was hidden in the transition to the new site. I’ll work on making it more obvious.

C2 Score is a vendor score – not a system score. The reason for this is to prevent gaming of the C2 Score by unscrupulous vendors who create 100 systems, all doing very different things, and then marketing the hell out of the one randomly good system that happens to garner a good score. By making the C2 Score vendor wide, this hole is mostly eliminated.


I can empathize with Notch’s confusion.

Many traders use the Grid for finding or identifying systems they are interested in. Most fields on the Grid describe the individual system that is identified on that line. About the only field that doesn’t describe the individual system is the C2 score. However, the verbiage (“C2 score”) makes it sound like the C2 score of the individual system.

Perhaps, as Matthew wrote above, the field should be named “Vendor Score”.

What about the "Popularity" field? Is that based on the vendor rather than the system as well?

No, it is based on the system. The only vendor-wide stat is the C2 Score.

Yes that’s a good suggestion: rename “C2 score” to “C2 Vendor Score”. There is a constant flow of confusion about the C2 score that would go away with a simple re-naming.

Also, I understand that it may be undesirable to publish the whole formula for the C2 score (in case it gives people a way of manipulating it) but there should be some way for vendors with low scores to receive some kind of feedback so they can improve their service.

For instance, my score is pretty low. I have no idea why; I don’t see that I am doing anything wrong. If I knew why then maybe I could do better.

If you did provide feedback, I’m sure you’d receive many complaints from people who don’t agree with the reasoning, and dealing with those would, I suspect, add appreciably to your workload. On the other hand, it might help develop the C2 score that much quicker.

It appears there is strong support for re-naming the C2 score to either Vendor Score or C2 Vendor Score. The sooner the change is made the better in my opinion.


Perhaps this is "Much Ado About Nothing".

Personally, I totally ignore any of the scores posted by C2 - there is so much controversy surrounding them that they have been, and remain, meaningless.

Before anyone gets worked up about them, answer this: how many Customers out there using C2 rely on these numbers, and if so, to what extent?

What does Customer research from this site tell us about their relevancy? Only Matthew knows - the rest of us would be merely speculating.

My advice - ignore them and do your homework as if they did not exist.



I’m not an expert on the C2 score and I don’t know the formula. However, to answer your question I can offer an opinion. I believe the length of time your systems have been in existence is an important criterion in the C2 score. All things being equal, your C2 score will improve as your systems produce a longer trade history.

The C2 score is meaningless. It might have significance if it was only applied to one system but not to all systems.

Most traders will have many failed system attempts before creating a winner. So do we want subscribers scared off by pointing to the results of failed systems that are no longer relevant to the trader at this point in time?

Another point: a system creator has 10 attempts before hitting on something that looks good but isn’t. 10 other traders each have 1 system attempt, and out of those 10 attempts one looks good but isn’t. There is no difference; no one is being protected here. This is wrong thinking.

Either a system is good standing on its own or it isn’t. All the rest of the BS in the world changes nothing about a good system or a bad one. It’s all in the system’s numbers, the most important one being time.

I disagree very strongly. You are saying that your track record as a whole is meaningless and that the only thing that matters is the track record of one particular system.

The reason the C2 Score is based on all systems can best be summed up by an old con game.

Before the Kentucky Derby, you call eight people on the telephone. You tell them you have a fabulous system for picking horses – and you will prove it to them. “I’ll give you my pick for free,” you say. “Go bet $1000 on my pick. And if you win, which I am sure you will, then give me half the profits and subscribe to my $1000-per-year horse-picking service.”

You proceed to tell each person you call that a different horse will win the race.

The next day, you ignore the angry phone calls from seven of the suckers. But you call the eighth sucker (the lucky one). “See?” you say. “I told you I could pick who would win.”

And that is the reason why your C2 Score depends on all your systems, and not just the eighth and lucky one.
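The con game above is a textbook case of survivorship bias, and the arithmetic behind it is easy to check with a short simulation. The sketch below is purely illustrative (the field size, number of marks, and the `run_con` helper are all invented for this example; it has nothing to do with C2’s actual scoring formula): if the tout tells each of eight marks a different horse, exactly one mark always sees a “correct” prediction, no matter how the race turns out.

```python
import random

random.seed(42)

FIELD_SIZE = 8  # horses in the race
NUM_MARKS = 8   # people called, each told a different horse will win

def run_con(num_marks=NUM_MARKS, field_size=FIELD_SIZE):
    """Tell each mark a different horse; return how many marks
    ended up seeing a 'correct' prediction after the race."""
    winner = random.randrange(field_size)        # outcome is pure chance
    picks = list(range(field_size))[:num_marks]  # one distinct pick per mark
    return sum(1 for pick in picks if pick == winner)

# Over many simulated races, the tout is always "right" for exactly
# one mark, despite having zero skill. Judging the tout only by that
# one lucky call is the same mistake as scoring a vendor only by
# their one lucky system.
print(all(run_con() == 1 for _ in range(1000)))  # → True
```

The same logic is why averaging over all of a vendor’s systems, rather than the single best one, removes the lucky-survivor effect.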

This is all still missing the point…

What data do Customers on C2 use? Please answer that question first. Do they use the Vendor score, no matter what you call it or how it is defined? If they don’t, who cares about anything else?

I’m a Customer and I ignore it completely. Why?

Because it has no integrity, it is inconsistent, it is often erroneous and it constantly breeds confusion. Therefore, one cannot use it with confidence. That’s all we need to know. QED.

Would you trust a fuel gauge that consistently reports the wrong amount of gas in the tank? No. You can’t trust it. Would you trust a temperature gauge that gives inconsistent readings (or even inconsistent interpretations of its meaning)? I think not. Even if it is empirically correct, if it breeds confusion then it is suspect. That is enough to reject it.

Hence, the Vendor Score cannot be trusted. Think of ALL the complaints that it has spawned and continues to spawn on this site. Think about it!

Until it is reliable, consistent and doesn’t breed confusion, we can find much better information upon which to make our investment decisions.

It is a good idea but, unfortunately, it cannot be trusted to yield valuable information.

My 2 cents.