This year we’re going to try a Pythagorean approach to strength rating and use it to project 2017. The projection involves three steps:
- Pythagorean strength accounts for close game luck
- These strengths are used to isolate the draw effect in each team’s 2016 performance, to obtain a “fair draw strength” from their record
- These “fair draw strengths” are applied to the 2017 fixture to come up with expected wins in 2017
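The Pythagorean strength in that first step is the familiar points-ratio formula. A minimal sketch in Python, with illustrative point totals and a placeholder exponent rather than the value we actually use:

```python
def pythag_win_rate(points_for, points_against, k):
    """Expected win rate implied by points scored and conceded."""
    return points_for**k / (points_for**k + points_against**k)

# Illustrative only: a team that outscored opponents 520-430 over 15 games.
k = 2.0  # placeholder exponent; the "k" we actually use is discussed later
rate = pythag_win_rate(520, 430, k)
expected_wins = rate * 15  # roughly 8.9 "deserved" wins from 15 games
```

A team that breaks even on points lands at exactly 0.500, and the exponent controls how sharply points dominance converts into expected wins.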
We do not account for personnel changes, for travel loads, or for intangible changes to preparation or coaching. Those bits of knowledge and context are of course vital, and should be applied to the projections to make educated guesses about where the raw numbers are likely to be wrong. All models are wrong, some are useful, and we think this approach adds some important context to the expectations of each team.
How we did last year
Below is a table showing our 2016 projections. We show both the simple “win loss” strength of schedule approach, and one where we accounted for close game luck. We’ve also gone back and retroactively applied the Pythagorean expectation method, and observed that it gets a bit closer to the pin than the simpler methods we used last year, on average missing by 1.6 wins versus the 1.9 and 1.7 wins of the previous methods.
Note that in all three measures we assumed the Jaguares were a 9-win team and the Sunwolves and Kings 2-win teams (in line with the bookies’ markets), so the projections could build in an assumed strength for those debutant sides.
All in all, the models did okay. The Lions emerged as a bolter last year – our methods which accounted for 2015 luck (they won a lot of close games) all underrated them as a result, because they sustained results rather than regressing to an assumed mean. The Jaguares were a bad miss, but this was largely a punditry and betting markets failure.
The Blues also improved markedly on our expectations. Having won three games in 2015, they were projected to win between three and five. Instead they won eight and had a draw. This sort of change illustrates the limitations of projecting based solely on previous data – teams can and do improve or decline markedly for reasons not observable in fixture effects or scoring outputs.
Pythagorean expectations at the top end tended to be less optimistic than pure win-loss measures, and this turned out to be more accurate since no team won more than 11 games. The Hurricanes and Stormers were projected to barely lose a game on pure strength of schedule measures, but accounting for luck and score outputs placed them more accurately.
The “k” value HPN has used for Super Rugby is the same as the NRL one outlined by Tony Corke here. As the number of teams, rule differences and competition make-up of Super Rugby drastically shift from year to year, it is hard to find a stable sample to analyse to find a more accurate “k” value. HPN opted for the NRL “k” after doing some initial testing, and noting that the rules, average scoring and win distributions of the two codes are similar.
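That sort of initial testing could look something like the sketch below: a grid search over candidate exponents, keeping the one that minimises mean absolute error between Pythagorean and actual win rates. The season data here is synthetic, and this is our reconstruction of the idea, not HPN’s actual procedure:

```python
def pythag(pf, pa, k):
    """Pythagorean expected win rate from points for/against."""
    return pf**k / (pf**k + pa**k)

def best_k(seasons, candidates):
    """seasons: list of (points_for, points_against, actual_win_rate) tuples.
    Returns the candidate exponent with the lowest mean absolute error."""
    def mae(k):
        return sum(abs(pythag(pf, pa, k) - w) for pf, pa, w in seasons) / len(seasons)
    return min(candidates, key=mae)

# Synthetic sanity check: records generated with k = 2 should pick 2 back out.
fake = [(600, 400, pythag(600, 400, 2)), (450, 550, pythag(450, 550, 2))]
```

Here `best_k(fake, [1.5, 2.0, 2.5])` recovers 2.0; with real, noisy season records the minimum is shallower, which is part of why an unstable competition makes the exponent hard to pin down.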
There was also consistent over-estimation at the bottom end, with nearly every team in the bottom half of the standings expected by all our projections to win more than they actually did. This is kind of a feature of Pythagorean expectations – they tend to have trouble with teams who could quite reasonably be expected to barely win a single game, and right now, Super Rugby has several teams like that.
Below is a table showing the two-stage process of obtaining a strength figure for each team in the Super Rugby competition. First, we apply the Pythagorean expectation calculations to work out how many games each team “should” have won in the 2016 season (based on their scores achieved and conceded). Second, because Super Rugby has a very uneven fixture (more on that in a moment), we take the derived strengths of every team’s opponent set and use that to adjust each team’s rating up or down to obtain a “fair draw strength”. Those figures are below.
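One plausible reading of that second stage, with made-up numbers – the simple additive opponent-strength correction is our assumption about the exact adjustment, not a statement of HPN’s formula:

```python
def fair_draw_strength(raw_rating, opponent_ratings):
    """Credit (or debit) a team by how far its opponent set sat above
    (or below) the neutral 0.500 slate a fair draw would imply."""
    opp_avg = sum(opponent_ratings) / len(opponent_ratings)
    return raw_rating + (opp_avg - 0.5)

# Toy example: a 0.55-rated team that faced a tougher-than-neutral slate.
raw = 0.55
opponents = [0.60, 0.55, 0.50, 0.45, 0.55]  # averages 0.53
fair = fair_draw_strength(raw, opponents)   # 0.55 + 0.03 = 0.58
```

The intuition: a team that posted a 0.55 rating against a 0.53 opponent set earned its results the hard way, so its fair-draw figure rises; a team that fattened up on a soft slate gets marked down.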
The “fair draw strength” is an indicator of how each team would be expected to perform playing entirely even opponent sets. We can see, for example, that the Stormers and Blues were actually very similar, and both look like middling sides if tested in the vacuum of a fair draw. In reality, they were separated by two wins in actual results, they maybe should have been three wins apart given average luck, and of course the Stormers won their conference and hosted a quarter final while the Blues missed finals altogether.
Note that this calculation assumes a fair draw is 0.500 for all teams, meaning it doesn’t take into account the fact that teams don’t play themselves. Stronger teams inherently face a weaker draw in an even fixture because they don’t play themselves. The Force and Chiefs both facing a 0.522 opponent set in 2016 therefore means that the Chiefs faced a fixture more skewed to the difficult side.
The Super Rugby format is inherently uneven due to the existence of double-up opponents and teams not playing each other. Anything other than a single round-robin with all teams playing each other once is going to result in skewed fixture difficulties.
Some of the blame for this falls on the conference system. As we can see below, the gulf between the collective strength of Australian and New Zealand teams is the driving factor in the fixture unevenness.
However, we can’t entirely blame the 18-team structure and the conferences, as these inequalities existed before the expansion due to New Zealand and Australian teams preferring to face their compatriots more often. New Zealand teams are basically handicapping themselves for the sake of commercial and travel considerations. This will persist as long as the strength imbalance remains and the league persists with double-up national derby games.
The chart above shows the gulf between the two countries. Only the Blues present as weaker than average, and only barely, while only the Brumbies and Waratahs shape as viable finalists in Australia. It’s entirely possible that if the Blues were based in Tasmania, they’d be a shot to win the Australian conference. Note that the incomplete double round-robin in each country means the Brumbies and Waratahs don’t play each other twice. That’s a big fixture advantage. Meanwhile the Rebels get the bad luck of playing both those teams twice – the hardest possible Australian fixture set.
The African pools have managed to be nearly exactly even in strength, but on a rotating basis, one of them gets a huge free ride by virtue of avoiding all New Zealand opponents. Last year the beneficiary was the Stormers, who won the conference with the soft Australia-focused draw. At least, they were the “beneficiary” until finals, when they were immediately minced by 39 points by the Chiefs, the first Kiwi team they faced all year. This year, the Lions, Sharks, Jaguares, and Kings are the teams set to have the fixture edge by avoiding any trips to New Zealand.
Spare a thought also for the Sunwolves, who in addition to facing an insane travel load (even their “home” stretches involve regular flights to Singapore, the equivalent of flying from Auckland to Perth), also face all the Kiwi teams this year.
With the method and the inequalities noted, we can now take our platonic ideal “fair draw” strengths, apply them to the imperfect kludge that is the Super Rugby fixture, and do some projections:
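The piece doesn’t spell out how a strength figure becomes a per-game win chance, so the sketch below uses the common log5 rule as a stand-in – an assumed conversion, not necessarily HPN’s, with toy fixture values:

```python
def log5(a, b):
    """Probability a team of strength a beats a team of strength b (log5 rule)."""
    return a * (1 - b) / (a * (1 - b) + b * (1 - a))

def project_wins(strength, opponent_strengths):
    """Expected wins: sum of per-game win probabilities over the fixture."""
    return sum(log5(strength, s) for s in opponent_strengths)

# Toy uneven fixtures for the same 0.55-strength team: one slate with a
# double-up against a weak opponent, one with a double-up against a strong one.
soft_slate = [0.40, 0.40, 0.50, 0.60]
hard_slate = [0.40, 0.50, 0.60, 0.60]
```

`project_wins(0.55, soft_slate)` comes out higher than `project_wins(0.55, hard_slate)`, which is exactly the double-up-opponent skew the fixture discussion above describes.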
By a very small margin, our projections are putting in three New Zealand wildcards, as occurred last year, with all Kiwi teams except the Blues expected to shade both the Brumbies and Waratahs. The Highlanders came out of last year looking, on a Pythagorean view, like the strongest team (due to their lower points conceded), and they project as New Zealand champions here by a fraction of an expected win. As with last year, though, the competition among the Kiwi contenders for the home quarter final will likely come down to a few bonus points.
Australian rugby is in a bit of a state, and the competition for a finals spot is presumably between the Brumbies and Waratahs. Whoever finishes second out of those teams is going to be sweating on the results in New Zealand and how they impact the spread of the three Australasian wildcard spots.
Note that the Pythagorean expectation only knows about 2016 results and isn’t aware the Brumbies are expected to decline greatly on the back of personnel losses. It also isn’t aware the Rebels, the next most likely threat, were just crushed at home by the Blues. If a challenge were to come, look towards the Reds and Force, who both underperformed last year based on their “true” strength.
As noted above, the Stormers came out of this conference on top last year, and would be near certainties to do so again. This conference plays the New Zealand opponent set this year, meaning last year’s results with Australian opponents were probably better than the teams can expect this year.
All of Africa 1’s expected wins are thus reduced from expectations under a fair draw, except for the Stormers who, due to not playing themselves, actually have a pretty close to 0.500 fixture set. The Bulls are presumably the threat, but the Cheetahs were the biggest “underachievers” of Super Rugby 2016, and probably should have got 2-3 more wins than they did. The Sunwolves will just be looking for a bit of continuity and improvement.
The Lions shape as massive beneficiaries of their opponent set. After admirably navigating the harder draw last year, and defying our expectations that a good run of close 2015 results meant they’d slip back in 2016, they now look set to lead the competition in wins this year.
On last year’s results and with Australian opponents, the Sharks should be favourites for the African wildcard spot.
The Jaguares are intriguing. Our biggest miss last year was overrating the Jaguares based on betting markets and the international credentials of their players. So naturally we’re projecting them to rise this year. Their easier opponent set adds half a win to expectations, but also, despite winning only four games on debut, they scored and defended like a team who should have won six.