(09-29-2013 09:05 PM)Sultan of Euphonistan Wrote: They said the same thing during the BCS era but removed most of it because weighting SOS so highly did not give the end results that they wanted. I will believe high SOS being a real component when I see it.
That's not exactly true, at least in terms of why they changed it. What happened was that the original BCS formula, which weighted the human polls and the computer polls equally in addition to having a separate SOS component, essentially made SOS the most important part of the equation (I've put a rough sketch of the original math right after the quote). From Wikipedia:
Wikipedia Wrote:The BCS formula calculated the top 25 teams in poll format. After combining a number of factors, a final point total was created, and the 25 teams with the lowest scores were ranked in order, lowest (best) score first. The factors were:
*Poll average: Both the AP and ESPN-USA Today coaches polls were averaged to make a number which is the poll average.
*Computer average: An average of the rankings of a team in three different computer polls was gathered (Jeff Sagarin/USA Today, Anderson-Hester/Seattle Times, and New York Times), with a 50% adjusted maximum deviation factor. (For instance, if the computers had ranked a team third, fifth, and twelfth, the poll which ranked the team twelfth would be adjusted to rank the team sixth.)
*Strength of Schedule: This was the team's NCAA rank in strength of schedule divided by 25. A team's strength of schedule was calculated by win/loss record of opponents (66.6%) and cumulative win/loss record of team's opponents' opponents (33.3%). The team who played the toughest schedule was given .04 points, second toughest .08 points, and so on.
Margin of victory was a key component of the computer rankings used to determine the BCS standings.
*Losses: One point was added for every loss the team suffered during the season. All games were counted, including Kickoff Classics and conference title games.
Before the 1999–2000 season, five more computer rankings were added to the system: Richard Billingsley, Richard Dunkel, Kenneth Massey, Herman Matthews/Scripps Howard, and David Rothman. The lowest ranking was dropped and the remainder averaged.
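To make that concrete, here's a rough Python sketch of how those pieces combined in the original formula. All the inputs (the ranks, the one loss) are made up, and the deviation adjustment is just my reading of the 3/5/12 -> 3/5/6 example in the quote, so take it as a sketch rather than the official calculation:

[code]
# Rough sketch of the original 1998 BCS score (lower total = better).
# All inputs here are made-up examples.

def adjusted_computer_average(ranks):
    # My reading of the "50% adjusted maximum deviation factor" from the
    # quoted example (ranks 3, 5, 12 -> 3, 5, 6): the most deviant
    # (worst) computer rank is cut in half before averaging.
    worst = max(ranks)
    adjusted = [r / 2 if r == worst else r for r in ranks]
    return sum(adjusted) / len(adjusted)

def sos_points(sos_rank):
    # NCAA strength-of-schedule rank divided by 25: toughest schedule
    # -> 1/25 = 0.04 points, second toughest -> 0.08, and so on.
    return sos_rank / 25

def schedule_strength(opp_win_pct, opp_opp_win_pct):
    # The SOS rank itself came from a 2/3 - 1/3 blend of opponents'
    # and opponents' opponents' win/loss records.
    return (2 / 3) * opp_win_pct + (1 / 3) * opp_opp_win_pct

def bcs_score(ap_rank, coaches_rank, computer_ranks, sos_rank, losses):
    poll_average = (ap_rank + coaches_rank) / 2
    return (poll_average
            + adjusted_computer_average(computer_ranks)
            + sos_points(sos_rank)
            + losses)

# A team ranked 2nd in both polls, 3rd/5th/12th by the computers,
# with the 10th-toughest schedule and one loss:
print(bcs_score(2, 2, [3, 5, 12], 10, 1))  # 2 + 4.67 + 0.4 + 1 = 8.07
[/code]

You can see the double counting right in the structure: every computer rank already bakes in schedule strength, and then sos_points adds it again on top.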
The article then goes on to explain how the formula evolved. Anyway, there were a lot of problems with the initial formula, but the biggest one is that every poll, human or computer, already has an SOS component built into it, and then a separate SOS component was added on top. What ended up happening was that SOS became more important than anything else, and you had teams with bad records near the top of the rankings purely due to SOS.
Now I will give you that the BCS formula was designed to counteract claims that the media polls were biased, and it is true that every time the BCS number 1 and 2 did not match the media poll 1 and 2, the formula was changed. That is a valid criticism, and something I do mention on occasion. But the changes weren't without merit, as they simply had to fix the flaws. First the SOS was too important. Then they added multiple computer rankings and weighted them evenly with the media polls, which made SOS and margin of victory too important, causing teams to really run up the score. Then they tried to balance it out and made the computer component and SOS probably a bit too weak and the media component too strong, hence the run of SEC teams in the BCS title game. And while further tweaks could have been made to improve it, I do think that of the various incarnations of the formula, the last one is probably the best we have had (well, at least before the AP poll was switched out for the Harris).
I'd probably have preferred to see a basketball-style RPI component* added (see below), which would actually favor smaller schools and help scale down the media polls' impact (the new formula would be 30% Coaches, 30% Harris, 30% computer, 10% RPI; there's a quick sketch of that blend just below), but I think this is still better than the previous versions that made SOS the most important factor regardless of results, or the ones that encouraged beating the living piss out of teams.
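For what it's worth, that proposed blend is trivial to write down. Everything here is hypothetical, since this formula never actually existed:

[code]
# Hypothetical blend from the paragraph above -- never an actual BCS formula.
# Inputs are each team's share of a perfect score (0 to 1) in the two polls
# and the computer average, plus a 0-1 RPI value.
def proposed_score(coaches, harris, computers, rpi):
    return 0.30 * coaches + 0.30 * harris + 0.30 * computers + 0.10 * rpi

print(proposed_score(0.95, 0.93, 0.90, 0.62))  # -> 0.896
[/code]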
* RPI in basketball
The current and commonly used formula for determining the RPI of a college basketball team at any given time is as follows.
RPI = (WP * 0.25) + (OWP * 0.50) + (OOWP * 0.25)
where WP is Winning Percentage, OWP is Opponents' Winning Percentage and OOWP is Opponents' Opponents' Winning Percentage.
The WP is calculated by taking a team's wins divided by the number of games it has played (i.e. wins plus losses).
For Division 1 NCAA Men's basketball, the WP factor of the RPI was updated in 2004 to account for differences in home, away, and neutral games. A home win now counts as 0.6 win, while a road win counts as 1.4 wins. Inversely, a home loss equals 1.4 losses, while a road loss counts as 0.6 loss. A neutral game counts as 1 win or 1 loss. This change was based on statistical data that consistently showed home teams in Division I basketball winning about two-thirds of the time.[2] Note that this location adjustment applies only to the WP factor and not the OWP and OOWP factors. Only games against Division 1 teams are included for all RPI factors. As an example, if a team loses to Syracuse at home, beats them away, and then loses to Cincinnati away, their record would be 1-2. Considering the weighted aspect of the WP, their winning percentage is 1.4 / (1.4 + 1.4 + 0.6) = 0.4117
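Here's a quick check of that weighted WP arithmetic in Python, using just the three games from the example above:

[code]
# Weighted winning percentage for Division I RPI (post-2004 rules).
# games: list of (result, site); result is 'W'/'L', site is 'H'/'A'/'N'.
WIN_WEIGHT = {'H': 0.6, 'A': 1.4, 'N': 1.0}
LOSS_WEIGHT = {'H': 1.4, 'A': 0.6, 'N': 1.0}

def weighted_wp(games):
    wins = sum(WIN_WEIGHT[site] for result, site in games if result == 'W')
    losses = sum(LOSS_WEIGHT[site] for result, site in games if result == 'L')
    return wins / (wins + losses)

# Home loss to Syracuse, road win over Syracuse, road loss at Cincinnati:
print(weighted_wp([('L', 'H'), ('W', 'A'), ('L', 'A')]))
# 1.4 / (1.4 + 1.4 + 0.6) = 0.4118
[/code]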
The OWP is calculated by taking the average of the WP's for each of the team's opponents with the requirement that all games against the team in question are removed from the calculation. Continuing from the example above, assume Syracuse has played one other game and lost, while Cincinnati has played two other teams and won. The team in question has played Syracuse twice and therefore must be counted twice. Thus the OWP of the team is (0/1 + 0/1 + 2/2) / 3 (number of opponents - Syracuse, Syracuse, Cincinnati). OWP = 0.3333
The OOWP is calculated by taking the average of each opponent's OWP. Note that the team in question factors into its own OOWP; in fact, the most frequently recurring opponent of your opponents is the team in question.
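Putting all three factors together, here's a small end-to-end sketch. The quoted example doesn't give enough games for OOWP to be computable, so the extra games for Duke, Xavier, and Dayton are invented; everything beyond the Syracuse/Cincinnati games is hypothetical:

[code]
from collections import defaultdict

# (team, opponent, result for team, site for team)
GAMES = [
    ('Us', 'Syracuse', 'L', 'H'),
    ('Us', 'Syracuse', 'W', 'A'),
    ('Us', 'Cincinnati', 'L', 'A'),
    ('Syracuse', 'Duke', 'L', 'N'),      # Syracuse's one other game, a loss
    ('Cincinnati', 'Xavier', 'W', 'N'),  # Cincinnati's two other games, wins
    ('Cincinnati', 'Dayton', 'W', 'N'),
    ('Duke', 'Xavier', 'W', 'N'),        # invented so OOWP is well-defined
    ('Dayton', 'Xavier', 'W', 'N'),      # invented so OOWP is well-defined
]

# Build each team's schedule, recording both sides of every game.
sched = defaultdict(list)  # team -> list of (opponent, result, site)
for a, b, res, site in GAMES:
    sched[a].append((b, res, site))
    sched[b].append((a, {'W': 'L', 'L': 'W'}[res],
                        {'H': 'A', 'A': 'H', 'N': 'N'}[site]))

WIN_WEIGHT = {'H': 0.6, 'A': 1.4, 'N': 1.0}
LOSS_WEIGHT = {'H': 1.4, 'A': 0.6, 'N': 1.0}

def weighted_wp(team):
    # Location-weighted WP; the weighting applies only to this factor.
    wins = sum(WIN_WEIGHT[s] for _, r, s in sched[team] if r == 'W')
    losses = sum(LOSS_WEIGHT[s] for _, r, s in sched[team] if r == 'L')
    return wins / (wins + losses)

def plain_wp(team, exclude=None):
    # Unweighted WP, optionally ignoring all games against `exclude`.
    results = [r for opp, r, _ in sched[team] if opp != exclude]
    return sum(1 for r in results if r == 'W') / len(results)

def owp(team):
    # Average opponents' WP with their games against `team` removed;
    # a twice-played opponent counts twice.
    vals = [plain_wp(opp, exclude=team) for opp, _, _ in sched[team]]
    return sum(vals) / len(vals)

def oowp(team):
    vals = [owp(opp) for opp, _, _ in sched[team]]
    return sum(vals) / len(vals)

def rpi(team):
    return 0.25 * weighted_wp(team) + 0.50 * owp(team) + 0.25 * oowp(team)

print(round(weighted_wp('Us'), 4))  # 0.4118, as in the example
print(round(owp('Us'), 4))          # 0.3333, as in the example
print(round(oowp('Us'), 4))         # 0.3889 (depends on the invented games)
print(round(rpi('Us'), 4))          # 0.3668
[/code]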