It seems to me to be a tautology that someone who has 10 picks with an accuracy rating of 100% should be considered MORE accurate than someone who has only 7 picks with the same rating. And yet when I sort for accuracy, Fool4Fishing, with 10 picks and 100%, shows up in 101st place, while danodell43, with 7 picks, is considered the #1 for accuracy, also at 100%.

This seems a simple adjustment. I would suggest two -- they may have the salutary effect of encouraging more selections:

1. A tie in accuracy goes to the CAPS player with MORE picks.
2. A tie in accuracy goes to the CAPS player whose average active pick's holding period is LONGER.

Bill Mann
> This seems a simple adjustment. I would suggest two -- they may have the salutary effect of encouraging more selections: 1. A tie in accuracy goes to the CAPS player with MORE picks. 2. A tie in accuracy goes to the CAPS player whose average active pick's holding period is LONGER.

I really like this idea!

First, it will help 'normalize' the distributions of accuracy rankings quite a bit (which has been a bit of a problem).

Additionally, it will help solve the "there's a huge tie for #457 in accuracy, and one is either #457 or #1,032 in accuracy with nobody in between them" problem.

Regards,
Russell
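The tie-break rule proposed above can be sketched as a simple two-key sort. This is a minimal illustration with made-up player records (not the real CAPS data model or rankings beyond the two names mentioned in the thread): sort descending by accuracy first, then by number of picks.

```python
# Hypothetical player records; only accuracy and pick count matter here.
players = [
    {"name": "danodell43",   "accuracy": 1.00, "picks": 7},
    {"name": "Fool4Fishing", "accuracy": 1.00, "picks": 10},
    {"name": "someFool",     "accuracy": 0.75, "picks": 12},
]

# Primary key: accuracy; secondary key: number of picks. Both descending,
# so a tie in accuracy goes to the player with MORE picks.
ranked = sorted(players, key=lambda p: (p["accuracy"], p["picks"]), reverse=True)
print([p["name"] for p in ranked])
# → ['Fool4Fishing', 'danodell43', 'someFool']
```

The second suggested tie-break (longer average holding period) would simply extend the key tuple with a third element.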
Bill,

**WARNING -- horribly wonkish**

I pestered a (formerly) statistics-oriented friend with this.

But first, I think most would argue that 10 for 10 IS more accurate than 7 for 7. This is because the probability of going 7 for 7 by chance is 0.781% (1/128), while going 10 for 10 is 0.098% (1/1024).

But is being 9 for 12 (75%) more accurate than being 6 for 8 (75%)? In my opinion it is, because again the probabilities are 5.37% (220/4096) and 10.94% (28/256) respectively. [5.37% is the probability of being exactly 9 for 12, not 9 for 12 or better.]

The adjustment is to turn the accuracy into a score, which can be done by dividing the accuracy by the probability of that accuracy.

So for example: 7 for 7 is 100% with a probability of 0.781%.
Score = 1.0/0.00781 = 128

The score for 10 for 10 is 100% (1) divided by 0.000976 (1/1024).
Score = 1024

OK, so this skews really high at the ends -- no surprise there. How does it work in the middle?

6.86 = score for 6 for 8 [0.75/(28/256)]
13.96 = score for 9 for 12 [0.75/(220/4096)], which is better than 6 for 8.

It works for in-between sized portfolios. The respective accuracy scores for 7 for 10 and 8 for 10 are:

5.97 = [0.7/(120/1024)] -- worse than 6 for 8
18.2 = [0.8/(45/1024)] -- better than 9 for 12

But it is worth noting that 11 for 12 would be better than 7 for 7, with scores of 313 and 128 respectively. Similarly, being 10 for 12 (score = 51.7) would be better than 7 for 8 (score = 28), despite being a somewhat lower % accuracy.

Thoughts (LOL)?

Zz
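The score described above (accuracy divided by the coin-flip probability of that exact record) can be sketched in a few lines. This is just an illustration of the arithmetic, not anything CAPS actually computes; the binomial coefficients use C(12,9) = 220.

```python
from math import comb

def accuracy_score(correct, total):
    """Accuracy divided by the probability of hitting exactly `correct`
    out of `total` picks by pure coin-flipping (p = 0.5 per pick)."""
    accuracy = correct / total
    prob = comb(total, correct) / 2 ** total  # P(exactly k of n) under p = 0.5
    return accuracy / prob

print(round(accuracy_score(7, 7)))      # → 128
print(round(accuracy_score(10, 10)))    # → 1024
print(round(accuracy_score(6, 8), 2))   # → 6.86
print(round(accuracy_score(9, 12), 2))  # → 13.96
```

Note that `prob` shrinks exponentially with `total`, which is exactly why the score explodes for large portfolios, as the follow-up post observes.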
My efforts at looking at this with larger number sets, say portfolios of 50+ picks, give all kinds of huge numbers and odd answers. Not sure I quite know what I'm doing, but it looks like the method falls apart for high numbers. Shame, really.

Zz
Zz,

A quick and dirty way to use your idea is to multiply the accuracy measure by (1 - 0.5^n), where n is the total number of picks used to calculate the accuracy ratio. This will always order people with the same accuracy ratio from most picks to least picks, but it prevents people with lower ratios from being considered more accurate than people with higher ratios unless they have a lot more picks. Examples:

7/7 correct gives an accuracy of 1.0*(1-0.5^7) = 0.9922
9/9 correct gives an accuracy of 1.0*(1-0.5^9) = 0.9980
6/8 correct gives an accuracy of 0.75*(1-0.5^8) = 0.7471
12/16 correct gives an accuracy of 0.75*(1-0.5^16) = 0.7500

7/7 correct = 1.0*(1-0.5^7) = 0.9922
24/25 correct = (24/25)*(1-0.5^25) = 0.9600
149/150 correct = (149/150)*(1-0.5^150) = 0.9933

As you can see, the factor (1-0.5^n) heads towards 1 very quickly with increasing n, and nothing blows up on you this way.

Cheers,
LoneIguana
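The adjustment above is a one-liner to implement. A minimal sketch, reproducing the worked examples from the post:

```python
def adjusted_accuracy(correct, total):
    """Scale the raw accuracy ratio by (1 - 0.5**n), so ties in raw
    accuracy are broken in favor of more picks, and the factor tends
    to 1 quickly instead of blowing up for large portfolios."""
    return (correct / total) * (1 - 0.5 ** total)

for correct, total in [(7, 7), (9, 9), (6, 8), (12, 16), (24, 25), (149, 150)]:
    print(f"{correct}/{total}: {adjusted_accuracy(correct, total):.4f}")
# → 7/7: 0.9922, 9/9: 0.9980, 6/8: 0.7471,
#   12/16: 0.7500, 24/25: 0.9600, 149/150: 0.9933
```

Because Python floats are IEEE doubles, `0.5 ** total` underflows to 0 somewhere past n ≈ 1074, at which point the factor is exactly 1 -- still well-behaved, unlike the probability-division score.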