I suppose the point of a thread on a "perfect" ranking is to suss out what attributes such a ranking would have, not just the particulars of its algorithm. To that end, four ideal attributes are proposed in the OP:
(A) simple and non-proprietary,
(B) similar to human polls,
(C) iterative, not a simple formula, and
(D) not the final say on postseason placement
(12-24-2019 11:25 AM)orangefan Wrote: The biggest difference of opinion tends to be whether all wins should be given equal weight regardless of the margin of victory, or whether the margin of victory should be considered. Beyond that, different modelers have developed different algorithms to address each of these factors. For instance, weight has to be given to the quality of wins by a team that was defeated to determine the value of the win, but how much weight? There are many additional nuanced decisions built into every model, each of which will affect the output.
Here is a description of the ranking system I've been using recently (shared mostly because it is the offseason, I have a couple days off, and I'm bored). It fulfills Attribute A, simple and non-proprietary:
#1) A straight SRS, with all FCS teams assigned a value of -25 (i.e., 25-point underdogs to an average FBS team).
#2) A SOR (Strength of Record) formula, fed each team's SRS from #1, sorts teams into their final ranking. #1 and #2 fulfill Attribute C.
#3) Until CCGs are played, preseason rankings have some weight (which diminishes to 0 by December), so that rankings in September and October don't look wacky.
This part is effective, but complicated, and over a decade old from a previous system; it may not satisfy Attribute C, but its weight is 0 by the end of the season, so I'm not interested in altering it if it's only for my private consumption.
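To make step #1 concrete, here is a minimal sketch of a straight SRS, assuming game data as (team, opponent, margin) rows. The iteration (rating = average margin + average opponent rating) and the -25 pin for FCS opponents follow the description above; the function and variable names are my own, not from any particular implementation:

```python
FCS_RATING = -25.0  # all FCS opponents pinned at -25, per step #1

def srs(games, iterations=100):
    """Compute SRS ratings by fixed-point iteration.

    games: list of (team, opponent, margin) tuples, one row per
    team-game (so each FBS-vs-FBS game appears twice, once from each
    side). Any opponent name not among the FBS teams is treated as
    FCS and held at FCS_RATING.
    """
    teams = {t for t, _, _ in games}
    ratings = {t: 0.0 for t in teams}
    for _ in range(iterations):
        new = {}
        for t in teams:
            rows = [(opp, m) for team, opp, m in games if team == t]
            avg_margin = sum(m for _, m in rows) / len(rows)
            avg_opp = sum(ratings.get(opp, FCS_RATING) for opp, _ in rows) / len(rows)
            new[t] = avg_margin + avg_opp  # margin adjusted for schedule strength
        ratings = new
    return ratings
```

Anchoring FCS teams at a fixed -25 also conveniently gives the ratings an absolute scale, so the iteration settles instead of drifting.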
Many SOR rankings (#2, above) I've seen have an arbitrary quality to them. ESPN, for example, ranks teams by how difficult it would be for "an average Top 25 team" to achieve the same or better record. This is of no help outside the Top 40, and oftentimes we're more interested in how the Top 5 stack up against one another for playoff purposes. Instead I worked backwards: rather than ranking by the likelihood of repeating the record, teams are ranked by the strength needed to repeat the record. It smooths nicely over all 130 teams.
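The exact SOR formula isn't spelled out above, so here is one plausible reading of "strength needed to repeat the record," sketched under my own assumptions: bisect for the SRS-scale rating at which a team's expected win total against its actual schedule equals its real win count. The logistic win-probability curve and the 13-point scale are rough football rules of thumb I've supplied, not details from the post:

```python
import math

def win_prob(spread, scale=13.0):
    # Assumed logistic mapping from an SRS point spread to win probability;
    # scale ~13 points is a rule-of-thumb choice, not from the post.
    return 1.0 / (1.0 + math.exp(-spread / scale))

def strength_needed(opp_ratings, wins, lo=-50.0, hi=50.0, tol=1e-6):
    """Bisect for the rating x where expected wins vs. this schedule
    equals the actual win count. Higher x -> more expected wins, so
    the search is monotone."""
    def expected_wins(x):
        return sum(win_prob(x - r) for r in opp_ratings)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if expected_wins(mid) < wins:
            lo = mid  # too weak to repeat the record; search higher
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Because the answer is a rating on the same scale for every team, it orders all 130 teams on one smooth axis instead of going flat outside the Top 40 (undefeated teams are the one edge case, since no finite rating quite reaches a perfect expected record).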
By using only SOR for the final rankings, the rankings reflect "deservedness" rather than "power". Last year, Alabama was #4 in SRS, ahead of playoff-bound Oklahoma; they were a powerful team and certainly played LSU closer than the Sooners did. However, the human polls (and the committee) are more retrodictive, and by going 12-1 and winning their conference against decent competition, Oklahoma "deserved" the 4th playoff spot more. This fulfills Attribute B.
Still, the first SRS step is necessary to correctly grade SOR. A win over 11-2 Alabama (#4 in SRS) is much more impressive than a win over 12-2 Boise (#40 in SRS). Because SRS is MOV-heavy, running up the score ends up rewarding the teams on your schedule more than rewarding you (though there is no negative effect). This preserves the principle enshrined in the BCS era that MOV should not be used, so that teams hoping to maximize their computer ranking would not run up the score on their opponents. While the intent of the BCS forefathers was noble (good sportsmanship and all), MOV is immensely helpful in calibrating a final ranking. The BCS ended up leaning harder on the human polls after the 2001 and 2003 debacles to offset this limitation; the "eye-test," we called it. An Attribute E? Or still B?
Finally, no ranking should be the end-all for postseason play. "Tie-breakers," so to speak, should be used. While 2018 Georgia may have had a more impressive 11-2 than Oklahoma's and Ohio State's 12-1s, the number of losses breaks the tie and Georgia does not go to the playoff. Similarly, in 2017, Ohio State's superior 11-2 performance loses out to Alabama's 1-loss campaign. This fulfills Attribute D.