Monday, September 8, 2008

Rantin' Bout Ratings

In my time in the sport of disc golf, I've covered many things from a journalism standpoint, seen many things as both a player and a fan, and heard every complaint and compliment one could hear as a tournament director. No matter the topic or the source, when it comes down to a player's ability or the quality of a round, one common theme comes into play. Ratings.

There is no doubt Chuck Kennedy has made one of the biggest impacts the sport of disc golf will ever see. As the creator of the now-standardized rating system the PDGA uses, Kennedy was able to seemingly fix one of the main problems within disc golf that ball golf never had an issue with: standardization of par.

The idea behind ratings is very simple. In disc golf, some courses are very easy, with scores in the upper 30s to mid 40s. Some courses have players routinely shooting sub-54. On others, only a handful of players have ever broken 60. Old-school short and simple courses have now meshed with the new-school thinking of longer, tougher courses, and this has created an inconsistency in par. While disc golf is a spin-off of golf (whether we like it or not), the two are polar opposites when it comes to par. Pretty much every ball golf course you will play has a par of 70, 71, or 72. In disc golf, par can be and has been anywhere from 54 to 72 and everything in between.

This is where one of the biggest differences between the sports comes in, and one of the reasons ratings were created. If I told you I usually shoot around 95 when I hit the white ball at Wil Mar Country Club in Raleigh, NC, you would have a pretty good idea of my talents as a golfer without any knowledge of the course, its length, or its difficulty. If I told you I average 50 at Cedar Hills Disc Golf Course in Raleigh, NC, you have no idea what that means unless you know me as a player or know the course.

The ratings system, in theory, provides a way for a round shot at one course to be compared to any round anyone has shot on any other course, and thus gives us a standardization of par within the game. Instead of only saying I shoot 50 at Cedar Hills, I typically add that a 50 would be rated about 990, and then everything makes sense.

From this point, you are probably thinking I am a big fan of the ratings and their effect. However, if you have ever read a single post of mine on the PDGA Message Board, you will soon find that nothing could be further from the truth.

Ratings have grown to the point where people are tanking rounds or quitting tournaments to preserve their rating. There was always a rumor floating around that to be considered for Team Innova, you had to be 1000 rated. There is now a popular website, http://www.1000rated.com/, that tracks the hot rounds of the week throughout the country, up-and-coming 1000-rated players, and those who have fallen from this mark. Tournaments are starting to turn two 27-hole rounds into three 18-hole rounds just so the ratings won't be off. Many players comment on how many 1000-rated rounds they have shot and feel they get screwed if a round rating comes back 999. The PDGA Message Board has its own section about ratings. And finally, perhaps the largest proof of the explosion of ratings is that there are now rules governing where a player is allowed to compete in PDGA-sanctioned tournaments based on their rating.

Based on the previous paragraph, you would think there would not be many problems with the system because, as you can see and whether you like it or not, ratings are pretty important. However, there are very obvious problems within the ratings system that it seems everyone but Chuck Kennedy recognizes.

In my opinion, the biggest problem with the ratings is how they are calculated. Even though disc golf is an individual sport, ratings are calculated based on what everyone in the field shoots. If the field plays badly, ratings are high. If the field plays well, ratings are low. This leads to rounds being rated too high or too low depending on the situation.

I've never understood why this is the case. If I shoot a course record and come in to find that eight players in the same round beat my score and the course record, does that mean I played poorly? No. All it means is that eight people played better than me. Why should I be penalized for this? If I shoot horribly and come in with a five-stroke lead, does that mean I played well? No. All it means is everyone played worse.

Perhaps my favorite example of this came at Glenburnie Park in New Bern, NC. This past year on the short tees, I put together my best personal round, a 46. Mike Hofmann, now rated 1005, matched this round. However, North Carolina's newest 1000-rated player, Jeb Bryant, smoked the course and fired a course-record 41.

Since I had shot a 49 at a previous tournament on this course in similar conditions and received a round rating of 1008, I was anxious to see a round rating of possibly 1040, which would have been my third highest ever. However, since Jeb and others played really well that round, my 46 was rated LOWER than my 49. Exact same course. Same conditions. Three strokes better. Two points lower.
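To be clear, I don't have Chuck's exact math in front of me, and every number below is made up for illustration, but a quick sketch shows how this kind of thing happens. If you rate each score against a scratch baseline inferred from the field's own player ratings and scores, the same player's better score can come out rated worse whenever the field gets hot:

```python
# Toy model of a field-relative rating system -- NOT the actual PDGA
# formula, just a sketch of the general idea: each round's rating
# depends on what the rest of the field shot that day.

def round_ratings(player_ratings, scores, points_per_stroke=10):
    """Rate each score against a scratch baseline derived from the field.

    The scratch score (an SSA-like baseline) is the score a 1000-rated
    player 'should' shoot, inferred from the field's average rating
    and average score.
    """
    avg_rating = sum(player_ratings) / len(player_ratings)
    avg_score = sum(scores) / len(scores)
    # Estimate what a 1000-rated player would shoot against this field.
    scratch = avg_score - (1000 - avg_rating) / points_per_stroke
    return [round(1000 - (s - scratch) * points_per_stroke) for s in scores]

# Same hypothetical five-player field, same course, two events. The
# first player shoots a 49 against a mediocre field, then a better 46
# against a hot field.
ratings = [990, 1005, 1000, 960, 950]
weak_field = round_ratings(ratings, [49, 49, 52, 55, 56])
hot_field = round_ratings(ratings, [46, 46, 41, 50, 51])
print(weak_field[0], hot_field[0])  # → 1013 989
```

In this toy model, the 49 against a mediocre field rates 1013 while the three-strokes-better 46 against a hot field rates 989. The scale is exaggerated, but that's the Glenburnie situation in miniature: the baseline moves with the field, not with the course.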

Here is another level of stupidity about the ratings. Let's say you have 20 players in a field. The highest-rated player is rated 1000, you have one player at 990, one at 980, one at 970, and so on all the way down to 810. According to the system and the way it was designed, one player in that round should shoot a 1000-rated round, one should shoot 990, one 980, and so on. How in the world can anyone predict what anyone will shoot before they even play the round? Isn't it possible that three people shoot over 1000? Isn't it possible that no one does?

It gets worse. We now categorize different groups of ratings. Chuck has posted on the PDGA Message Board many times that a certain hot round is now the nth-ranked rating for a certain SSA group. This information is categorized because the higher a course's SSA is, the fewer points per stroke are used to calculate the rating. This leads to some very high round ratings on courses with an SSA of around 50 to 54 and some very low round ratings on courses with an SSA over 60.
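I can't quote the real per-stroke values, so the taper below is a made-up assumption, but the shape of the problem is easy to sketch: if points per stroke shrink as SSA grows, the exact same margin under the scratch score earns a noticeably different rating on an easy course than on a hard one.

```python
# Sketch of how SSA-dependent scaling skews round ratings. The linear
# taper below is a hypothetical stand-in -- the real PDGA per-stroke
# values differ, but they do shrink as SSA rises.

def points_per_stroke(ssa):
    # Hypothetical: ~11 points per stroke near SSA 48, tapering off
    # for harder courses, with a floor so it never goes absurdly low.
    return max(6.0, 11.0 - 0.15 * (ssa - 48))

def rate(score, ssa):
    """Rate a score relative to the course's scratch score (SSA)."""
    return round(1000 + (ssa - score) * points_per_stroke(ssa))

# Three strokes under the scratch score on an easy course vs. a hard one:
print(rate(47, 50))  # easy course, SSA 50 → 1032
print(rate(59, 62))  # hard course, SSA 62 → 1027
```

Under these made-up numbers, three under SSA is a 1032 on the easy course but only a 1027 on the hard one, which is why hot rounds end up ranked within SSA groups instead of against each other.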

No one outside our sport has a clue what a rating is or how it is calculated. All they understand on the surface is what ball golf has taught them: how many strokes under or over par a player is. Imagine explaining all the ratings, how they are calculated, and why they are needed, and them finally understanding it. Then they see a 1020-rated round next to a 1010-rated round, and you have to explain why the 1010-rated round was actually better because the SSA was really high that round and the field played really well. I'm confused, and I understand the whole process.

In the end, I know ratings kind of don't matter; the best score wins regardless of whether those rounds were all over 1050 or all under 950. However, if we are going to base who gets sponsored, who gets recognition, and, by all means, who can play in what division on our ratings, shouldn't they be a little more accurate and not based on such a silly system?