
No. of Recommendations: 1
Say the 20-pick guy, after a year, is up 15 points on each of his correct picks and down 15 points on each incorrect one. With his 75% accuracy (15 right, 5 wrong), his score is 150 ((15*15) - (5*15)).
The 200-pick guy is also up 15 on each correct pick and down 15 on each incorrect one. With his 60% accuracy (120 right, 80 wrong), his score is 600 ((120*15) - (80*15)).
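The arithmetic behind those two scores can be sketched as a quick check (the `score` helper below is just an illustration of the formula in the post, not part of the game itself):

```python
def score(picks, accuracy, gain=15, loss=15):
    """Net score: (correct picks * points gained) - (incorrect picks * points lost)."""
    wins = round(picks * accuracy)
    losses = picks - wins
    return wins * gain - losses * loss

print(score(20, 0.75))   # 20-pick guy: 150
print(score(200, 0.60))  # 200-pick guy: 600
```

So the less accurate player ends up with four times the score purely by making more picks.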

I don't think normalizing is the way to go, so we need accuracy. Just fix the exploits.

Your example is the only case in which accuracy is a useful metric - when all winners and losers are exactly the same size. The problem is that players are encouraged to let their losers grow much bigger than their winners, at which point accuracy no longer means anything. The simple solution is to weight picks by their size in the accuracy calculation (essentially remapping to a set where all the picks are equally sized). That's exactly what my "point efficiency" formula does.
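One plausible reading of that size-weighting (my assumption of the "point efficiency" idea, not its exact formula) is to weight each pick by the absolute size of its point move:

```python
def plain_accuracy(picks):
    """Fraction of picks that gained points, ignoring their size."""
    return sum(p > 0 for p in picks) / len(picks)

def weighted_accuracy(picks):
    """Accuracy with each pick weighted by its absolute point move -
    as if every pick were remapped to equal size."""
    total = sum(abs(p) for p in picks)
    return sum(p for p in picks if p > 0) / total

picks = [1, 1, 1, -10]  # three small winners, one big loser
print(plain_accuracy(picks))     # 0.75
print(weighted_accuracy(picks))  # 3/13, about 0.23
```

Plain accuracy says this player is right 75% of the time; weighting by size shows only about 23% of his point movement went the right way.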

Suppose I told you I had an absolutely fool-proof strategy that allows me to win money in 72% of my casino visits. Impressed? Okay, how about one that wins over 92% of the time? How about winning 97% of the time? Suppose some professional poker player makes far more money per year than I do at gambling (even when I'm using my 97% method), but he only wins about 40% of the time. Well, I'm still much more "accurate," right? See why this metric is completely useless if we don't account for the size of the wins/losses?
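The casino point comes down to expected value. With made-up numbers for illustration: a grinder who books a tiny win 97% of the time but takes a rare huge loss still loses money on average, while a pro who wins only 40% of sessions comes out well ahead:

```python
def expected_value(win_prob, win_amt, loss_amt):
    """Average result per session: P(win)*win - P(loss)*loss."""
    return win_prob * win_amt - (1 - win_prob) * loss_amt

# "97% accurate" grinder: $1 wins, rare $50 blowups (hypothetical numbers)
ev_grinder = expected_value(0.97, 1, 50)
# Poker pro: wins only 40% of sessions, but wins $300 vs $100 losses
ev_pro = expected_value(0.40, 300, 100)

print(ev_grinder)  # -0.53 per session, despite 97% "accuracy"
print(ev_pro)      # +60.0 per session, despite 40% "accuracy"
```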

Dave