Ratings vs Rankings: Why One Has More Value

How do ratings work? What is the difference between ratings and rankings? These are questions I often come across when I post information on teams to watch and place them in numerical order.


Rankings, whether published by me or by the AP voters, are subjective: voters use whatever information they have to form an opinion on how teams should be ranked.


On my end, I like to put together a panel of football guys: coaches, football fans, and media members who all pay attention to the full landscape of football across the classes we cover.


The AP state rankings are compiled by a panel of media members, news reporters from across the state, who each submit a list of ranked teams. Because these voters don't pay a whole lot of attention to areas outside their own coverage area, yet submit ballots for all 8 classes, you can see a different number of voters each week. Votes are due by Monday afternoon, so if a reporter misses the deadline, the polls can tip to favor teams in other regions. Most reporters show bias toward their own area when ranking teams, especially in classifications they don't know much about, such as a Chicagoland reporter voting in Class 1A when they don't have many 1A teams to watch or cover.


However you decipher the state rankings, it is still fun to see where teams stack up, and typically by mid to late season the voters really start to bring the true top teams to the top. There are times when a 9-0 or 8-1 team from a relatively weak conference gets high votes based on record alone, and that is where a ratings system can cut through some of the biased rankings.


The ratings are done through the Freeman Rating system, created by Ned Freeman about 25 years ago. Over the years the formulas behind the system have been tweaked to the point that they calculate how good a team is remarkably well, and the prediction cycle has proven to be around 85% accurate. The outlet that publishes these ratings is CalPreps, which derives its information from MaxPreps. MaxPreps has been collecting information from individual high schools and high school associations for years, and it also uses a version of the Freeman Ratings to determine its annual National Champions. CalPreps has been collecting information since 2001, first just in California, and beginning in 2003 nationally.


The pre-season ratings are based on how teams finished last year and what they have returning for this season. Some teams may go up in the ratings to begin the season because of who they have coming back, while others may drop if they lost more than the normal number of seniors. Other teams may simply stay rated where they finished at the end of last season.


While teams are given a rating number to start the season, the results-based portion of every rating actually begins at 0. This leaves all bias out of the equation and rates teams purely on cold hard results. As the season progresses, the ratings adjust based on wins, losses, and margin of victory, and the formula also accounts for good wins, bad wins, good losses, and bad losses. The preseason component is pulled out around mid-season, so a team that was rated high to begin the season but has performed badly will see a significant drop once its rating rests entirely on results.


How are the good wins, bad wins, good losses, and bad losses calculated? There is a cap on how much a single game's margin can count, which keeps teams from needing to run up the score on much weaker opponents to gain rating points. In the Freeman Rating scenario, if Team A has a 15 rating and Team C has a -5 rating, the formula expects Team A to win that game by about 20 points. If they fail to beat Team C by that margin, Team A will lose rating points while Team C gains them for performing better than expected. This means the ratings stay fluid every week until all play stops. Likewise, if Team A is rated at 15 and Team E is rated at -20, Team A should win by 35 points, but the most a margin can count in one game is 30 points. Since the formula already expects Team A to win by 30-plus, winning by 40 would not affect either team's rating. This cap levels the field so teams gain nothing by running up the score purely to drive their rating.
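The exact Freeman formula is not public, but the mechanics described above can be sketched in a few lines. The cap of 30 comes from this article; the step-size constant `k` and the function names are assumptions made purely for illustration:

```python
# Sketch of a margin-capped rating update. The real Freeman formula is not
# public; the 30-point cap comes from the article, the k factor is assumed.
CAP = 30  # maximum margin that can move ratings

def clamp(margin, cap=CAP):
    """Limit a margin to +/- cap so blowouts stop counting."""
    return max(-cap, min(cap, margin))

def update_ratings(rating_a, rating_b, actual_margin, k=0.5):
    """Adjust both teams based on how the actual (capped) margin compared
    to the expected (capped) margin. k is a hypothetical step size."""
    expected = clamp(rating_a - rating_b)   # e.g. 15 - (-5) = 20
    performed = clamp(actual_margin)
    delta = k * (performed - expected)
    return rating_a + delta, rating_b - delta

# Team A (15) vs Team C (-5): expected to win by 20 but only wins by 10,
# so A loses rating points and C gains them.
a, c = update_ratings(15, -5, 10)   # -> a = 10.0, c = 0.0

# Team A (15) vs Team E (-20): the expected margin of 35 is capped at 30,
# and the actual 40-point win is also capped at 30, so neither rating moves.
a2, e = update_ratings(15, -20, 40)  # -> a2 = 15.0, e = -20.0
```

Note how the cap makes the second game a wash: once both the expected and actual margins hit 30, there is nothing left to gain by piling on points.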


So, how does a 1-2 team rate higher than a team that is 2-1? Let's take a look at the following scenario provided by CalPreps.com.


Team A is 3-0 with wins against C, D, and E

Team B is 2-1 with wins against F and F and a loss against E

Team C is 2-1 with wins against D and E and a loss against A

Team D is 1-2 with a win against F and losses against A and C

Team E is 1-2 with a win against B and losses against A and C

Team F is 0-3 with losses against B, B, and D


Take a minute and review this data. Team A is clearly the strongest team at this point in the season and Team F is the worst. A closer look would likely convince you that, of the 2-1 teams, C should be rated above B. Both of B's wins came against the 0-3 team, while C's wins were against stronger competition. Likewise, B's loss was to a weaker opponent than C's loss, as C lost to the top team (A). The 1-2 teams have identical losses, but E's win came against a stronger opponent (B, at 2-1), while D's only win was over the 0-3 team. So E should be rated slightly higher than D.


So, in the rating system the following teams would be rated as such:


Team A is 3-0 with a rating of 22.2 - wins against C, D, and E

Team C is 2-1 with a rating of 13.2 - wins against D and E with a loss to A

Team E is 1-2 with a rating of 3.2 - win against B and losses to A and C

Team D is 1-2 with a rating of -0.9 - win against F and losses to A and C

Team B is 2-1 with a rating of -8.7 - wins against F and F and a loss to E

Team F is 0-3 with a rating of -22.1 - losses against B, B, and D


As you can see, the ratings system sees through the weakness or strength of a team's schedule and bases each rating on results against the teams actually played. It also uses the opponents' wins and losses against their own opponents to help adjust the rating.


For example: if Team A beats Team D by 40 points, Team C by 10 points, and Team E by 20 points, while Team B beats Team C by 30 points and Team D by 20 points, the system will adjust the ratings based on those results, since they suggest Team A and Team B may be close to the same caliber of strength.
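This opponent-of-opponent adjustment is easiest to see in code. The sketch below is not the actual Freeman/CalPreps formula, just a minimal strength-of-schedule solver in the same spirit: each team's rating is its average margin plus the average rating of its opponents, repeated until the numbers settle. Assuming, purely for illustration, that every game in the six-team scenario above was decided by 10 points, it reproduces the exact ordering shown earlier (A, C, E, D, B, F):

```python
# Minimal iterative rating: rating = avg margin + avg opponent rating.
# NOT the real Freeman formula, just an illustration of how a solver can
# see through schedule strength. Every margin is assumed to be 10 points.
games = [  # (winner, loser) pairs from the CalPreps scenario above
    ("A", "C"), ("A", "D"), ("A", "E"),
    ("B", "F"), ("B", "F"), ("E", "B"),
    ("C", "D"), ("C", "E"), ("D", "F"),
]
MARGIN = 10  # assumed uniform margin of victory

teams = sorted({t for g in games for t in g})
ratings = {t: 0.0 for t in teams}  # everyone starts at 0, as in the article

for _ in range(1000):  # repeat until the ratings settle
    new = {}
    for t in teams:
        diffs, opps = [], []
        for w, l in games:
            if t == w:
                diffs.append(MARGIN)
                opps.append(l)
            elif t == l:
                diffs.append(-MARGIN)
                opps.append(w)
        new[t] = sum(diffs) / len(diffs) + sum(ratings[o] for o in opps) / len(opps)
    ratings = new

order = sorted(teams, key=ratings.get, reverse=True)
print(order)  # ['A', 'C', 'E', 'D', 'B', 'F'] -- same order as the article
```

Even though B is 2-1 and D is 1-2, the solver drops B below D because B's record was built entirely on the winless team, which is exactly the point the scenario makes.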


No matter how you shake it, state rankings and rating systems are both a lot of fun to look at and analyze. The one thing that always matters is that neither is 100% perfect, and every team still needs to show up and play. The cool thing about the ratings system is that its accuracy over a season provides a more reliable barometer of which teams are the strongest, especially when you consider how the state of Illinois seeds its playoffs in a north and south split.