The NCAA selection criteria have been altered repeatedly over the years. Some changes have been mega-obvious (the institution and removal of the “Colorado College rule” giving autobids to both regular-season and tournament conference champions); some have been well-publicized though less dramatic (the .500 rule for NCAA tournament eligibility); and some have flown largely under the radar.

Changes falling into the last category usually have to do with the mechanics of the PairWise Rankings. The overall theme of the PWR (compare all eligible teams to one another, one pair at a time, and add up the results) has stayed the same for years, but the components that go into each PWR comparison have been altered several times.

Those changes include switching the PWR to compare the top 25 teams in the Ratings Percentage Index instead of all teams at or above .5000 in the RPI; removing the “Last 16” games (previously “Last 20”) comparison criterion; and adding the 10-game minimum before the Teams Under Consideration criterion can kick in.

But the least-understood changes have been in how the RPI itself is calculated. Since the RPI is both a PWR comparison criterion and the tiebreaker for comparisons between teams, it is easily the most important PWR element; put USCHO.com’s PWR and RPI pages side by side and you’ll see how closely the two rankings line up.
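Putting those pieces together, here’s a minimal sketch of the overall PWR mechanic in Python. It’s illustrative only: compare() is a hypothetical stand-in for the NCAA’s full set of comparison criteria, which this post doesn’t reproduce; all it’s assumed to do is return whichever of two teams wins their comparison.

    from itertools import combinations

    def pairwise_rankings(teams, compare):
        """Compare every pair of teams once, tally the comparison
        wins, and rank teams by their totals."""
        wins = {t: 0 for t in teams}
        for a, b in combinations(teams, 2):   # each pair, one at a time
            wins[compare(a, b)] += 1
        return sorted(teams, key=lambda t: wins[t], reverse=True)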

Three different formulas for computing RPI have been used over the years. Recall that RPI is just a weighted average of a team’s winning percentage (WP), its opponents’ collective winning percentage (OWP) and its opponents’ opponents’ winning percentage (OOWP). That has always been the same, but the weights assigned to WP, OWP and OOWP have been changed.

In a previous blog post I looked at some weirdness resulting from the current RPI formula, which is 25% WP, 21% OWP and 54% OOWP. In the past, the PWR used a 35%-50%-15% weighting and later a 25%-50%-25% weighting before the current 25%-21%-54% system was adopted.
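To make the arithmetic concrete, here’s a minimal sketch in Python (the function name and calling convention are mine for illustration, not anything from the NCAA’s actual software):

    def rpi(wp, owp, oowp, weights):
        """RPI is just a weighted average of WP, OWP and OOWP."""
        w_wp, w_owp, w_oowp = weights
        return w_wp * wp + w_owp * owp + w_oowp * oowp

    OLD     = (0.35, 0.50, 0.15)   # original weighting
    INTERIM = (0.25, 0.50, 0.25)   # middle-era weighting
    CURRENT = (0.25, 0.21, 0.54)   # weighting in use today

Same function in every era; only the weights have changed.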

So what happens to the PWR if we go back to the old RPI weights? The listings below count all games through Friday, Feb. 6. Of course, the numbers will move around as time passes, but the principles should stay the same.

(Note: all rankings below are the PWR, not the RPI alone, since the PWR is what NCAA selection is about. Also, I have not re-inserted other rules from days gone by; when I talk about the PWR with the 35%-50%-15% RPI weighting, for example, I didn’t put the “Last 20” rule back in. The purpose here is just to look at the effect of the RPI rules on the PWR, not to examine every change made to the PWR over the past decade. Finally, I have broken ties in the PWR in the usual “Bracketology” way.)

35%-50%-15%

1 Boston University

2 Notre Dame

3 Vermont

4 Cornell

5 Michigan

6 Northeastern

7 Denver

8 Princeton

9 Miami

10 Minnesota

11 New Hampshire

12 Yale

13 Boston College

14 North Dakota

15 Minnesota Duluth

16 St. Lawrence

17 Ohio State

18 Air Force

19 Wisconsin

20 St. Cloud State

21 Alaska

22 Dartmouth

23 Colorado College

24 Minnesota State

25 Nebraska-Omaha

25%-50%-25%

1 Boston University

2 Vermont

3 Notre Dame

4 Northeastern

5 Cornell

6 Denver

7 Michigan

8 Miami

9 New Hampshire

10 Princeton

11 Minnesota

12 Boston College

13 North Dakota

14 Yale

15 Wisconsin

16 St. Lawrence

17 Minnesota Duluth

18 St. Cloud State

19 Ohio State

20 Air Force

21 Alaska

22 Minnesota State

23 Dartmouth

24 Colorado College

25 Nebraska-Omaha

25%-21%-54%

1 Boston University

2 Notre Dame

3 Vermont

4 Cornell

5 Northeastern

6 Michigan

7 Miami

8 Denver

9 Princeton

10 Minnesota

11 New Hampshire

12 Yale

13 North Dakota

14 Minnesota Duluth

15 Boston College

16 Ohio State

17 Wisconsin

18 St. Cloud State

19 St. Lawrence

20 Colorado College

21 Dartmouth

22 Air Force

23 Alaska

24 Minnesota State

25 Nebraska-Omaha

What you notice first about these listings is how little real difference has been made by the changes in the RPI. The list of the top 25 teams — the ones the PWR uses to make comparisons — is the same in all cases, and most teams are within a couple of places of having the same ranking no matter which version is used.

But it’s the differences, even small ones, that are interesting. Here’s where we see that the two changes have tended to cancel each other out. For instance, New Hampshire is 11th in the 35%-50%-15% version, ninth in the 25%-50%-25% version and 11th again in the current 25%-21%-54% version. Minnesota Duluth is 15th, 17th and then 14th. Ohio State is 17th, 19th and then 16th. Yale goes from No. 12 to No. 14 and back to No. 12. Notre Dame and Vermont switch places between No. 2 and No. 3, then switch back again.

Why? In switching from 35%-50%-15% to 25%-50%-25%, winning percentage was obviously made less important. What’s less obvious is that WP effectively became more important again in the change from 25%-50%-25% to 25%-21%-54%. The reason lies in how much each component varies: WP differs a lot from team to team, OWP varies far less because it’s a compilation of many other teams’ WPs, and OOWP hardly differs at all because it involves hundreds and hundreds of games (it’s your opponents’ opponents’ winning percentage, after all).

So when OOWP became a bigger factor, the new weighting enlarged an element that does very little to separate most teams’ RPIs. As a side effect, WP became more important again, since more than half of the RPI weighting is now taken up by something that barely moves from team to team. Seen that way, the down-and-back-up patterns mentioned above make sense.
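A back-of-the-envelope sketch makes the point. The spread numbers below are invented for illustration (not measured from real standings), but they follow the pattern just described: WP varies a lot between teams, OWP much less, OOWP hardly at all. Multiplying each component’s spread by its weight gives a rough measure of how much of the RPI’s team-to-team movement each component drives:

    # Hypothetical standard deviations for the three RPI components.
    spreads = {"WP": 0.150, "OWP": 0.040, "OOWP": 0.010}

    weightings = {"35-50-15": (0.35, 0.50, 0.15),
                  "25-50-25": (0.25, 0.50, 0.25),
                  "25-21-54": (0.25, 0.21, 0.54)}

    for name, weights in weightings.items():
        contrib = {c: w * spreads[c]
                   for c, w in zip(("WP", "OWP", "OOWP"), weights)}
        total = sum(contrib.values())
        print(name, {c: f"{v / total:.0%}" for c, v in contrib.items()})

With these made-up numbers, WP’s share of the movement runs from roughly 71% to 63% and back up to about 73% across the three weightings, the same down-and-back-up bounce the rankings show.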

Now for the exceptions to the pattern, of which St. Lawrence is probably the best example. The Saints go from No. 16 to No. 16 again to No. 19 in the current version of the PWR. Why the drop with no rebound, as opposed to what happened with most teams? Well, SLU plays in ECAC Hockey, which collectively has a modest nonconference won-loss record, so most of its teams have middling OWPs. Cornell, for instance, has an OWP of 0.5030, and Princeton’s is 0.4816; those are the two highest-ranked ECAC Hockey teams in the PWR.

Not St. Lawrence. SLU’s OWP is 0.5465, seventh-best nationally, thanks to a nonconference schedule that includes a total of six games against Michigan, Boston University, Vermont and New Hampshire, all of which were NCAA tournament-bound as of this writing. But the current RPI weighting gives OWP only 21% of the total rather than the old 50%, which means those games don’t help SLU nearly as much as they once would have.
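Rough arithmetic with the OWP figures quoted above shows the size of the effect. Treating Cornell’s 0.5030 as a reference point for a good ECAC OWP (a simplification for illustration, not anything from the selection criteria), SLU’s OWP edge is worth less than half of what it used to be:

    slu_owp, ref_owp = 0.5465, 0.5030   # SLU vs. Cornell, from above
    edge = slu_owp - ref_owp            # 0.0435

    print(f"old 50% OWP weight:     +{0.50 * edge:.3f} RPI")  # about +0.022
    print(f"current 21% OWP weight: +{0.21 * edge:.3f} RPI")  # about +0.009

Less than half the old boost, which is why SLU drops under the current weighting without the rebound most teams show.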

This is the primary effect of the change in weighting, even though it doesn’t show up very often: teams with challenging nonconference schedules relative to their league schedules don’t see the benefit they once would have. Whether that’s a good or bad thing I leave to you to decide.
