HPN Finals Preview – Can The Giants Clip The Crows’ Wings?

A lot of ink has been spilt about how GWS saw their season nearly wiped out by a massive injury toll, but this match is more likely to be shaped by a singular Adelaide injury.

Due to injury and suspension absences, it has been difficult to get a grip on just how good GWS actually are this year, with their mid-season run looking more like that of a team who missed the finals than one sneaking into fourth. The HPN Team Ratings have them as 9th best for midfield movement, 8th best at converting opportunities into points up forward and 4th best at defending when the ball gets down back.

GWSvAdel.JPG

They are almost certainly better than this.

How much better? We don’t know yet, but if they revert to last year’s form, where they had the 2nd best Offensive and Defensive ratings and were 6th in the middle, they would be in with a fair shot at winning the whole damn thing this year.

Adelaide by contrast took a solid 2016 performance and improved significantly this year, finishing with the best Offensive rating and the second best Midfield rating while sitting “only” seventh down back. On paper, Adelaide should both get the ball inside their forward 50 more often AND score more effectively when they do so. When adjusted for the expected opposition defence this week (GWS have been relatively stable in defence this year), Adelaide would be expected to score an extra point every ten inside-50s for each side – which might cause a blowout if GWS can’t batten down the hatches or win the fight in the middle.

We’ve taken an experimental step in forecasting the finals using the HPN Team Ratings, and these are predicting a win for Adelaide by about 15 points. Using this system we expect Adelaide to get around six extra inside-50s and to convert them on the scoreboard at the better rate they’ve maintained all year.
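To give a sense of how ratings like these can be turned into a margin, here is a minimal sketch. The forecasting step is experimental and the method isn’t spelled out, so the league-average constants and the way offence is blended with the opposing defence below are our illustrative assumptions, not the actual model.

```python
# Illustrative only: turn percentage-of-average team ratings into a rough margin.
# The constants and ratings below are made-up examples, not real GWS/Adelaide figures.

LEAGUE_AVG_I50 = 52           # assumed league-average inside-50s per team per game
LEAGUE_AVG_PTS_PER_I50 = 1.8  # assumed league-average points per inside-50

def forecast_margin(a, b):
    """a and b are dicts of ratings as multiples of league average, e.g. {"mid": 1.05, ...}."""
    # Split a notional pool of entries in proportion to the two midfield ratings.
    total_entries = 2 * LEAGUE_AVG_I50
    i50_a = total_entries * a["mid"] / (a["mid"] + b["mid"])
    i50_b = total_entries - i50_a
    # Scoring per entry: own offence rating discounted by the opposing defence
    # (a defence rating above 1.0 is treated here as conceding proportionally less).
    pts_a = i50_a * LEAGUE_AVG_PTS_PER_I50 * a["off"] / b["def"]
    pts_b = i50_b * LEAGUE_AVG_PTS_PER_I50 * b["off"] / a["def"]
    return pts_a - pts_b

# A slightly stronger midfield and forward line adds up quickly:
print(round(forecast_margin({"mid": 1.08, "off": 1.12, "def": 0.97},
                            {"mid": 1.02, "off": 1.00, "def": 1.03})))
```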

Team Selection

Crows PAVPG.JPG
Adelaide Crows Player Approximate Value (PAV) per game

 

The big opportunity for GWS is the absence of Rory Sloane, one of the league’s elite midfielders. According to PAV, our new player value system, Sloane had the third highest MidPAV per game of any player in the league this year – a massive hole to cover.

This statistical view is well supported by subjective perceptions. Whenever Sloane was tagged out of a game or otherwise ineffective, the Crows’ gameplan appeared to fall in a heap. Greenwood has been named to ostensibly replace Sloane, but effectively the entire midfield group will be asked to pick up the slack. Sloane is by far the most valuable player to the Crows. The Crouches are approaching his midfield output, but the MidPAV per game difference between Sloane and a decent soldier like Richie Douglas is about 40% – enough of a window to give GWS a shot at winning that battle. On a total PAV per game approach, the Crows have effectively selected their strongest possible team minus Sloane and Otten from their top 22 across the season. Structurally, however, Otten is the lowest rated of the taller Adelaide defenders, and Knight (who is effectively replacing Otten) has a higher DefPAV rating per game.

The one outlier from this bunch is Jake Kelly, a player considered by PAV to be the second least valuable to play at least 20 games this year. Kelly undoubtedly fills a critical role for the Crows, able to switch between smaller and taller defenders and cover ground, but he struggles to hit the stat sheet with any impact, unlike some others who fill that switch-defender role at other clubs. Adelaide haven’t found an upgrade for Kelly this year, but we suspect they would like to do so.

GWS PAVPG.JPG
GWS Giants Player Approximate Value (PAV) per game

 

GWS are also picking a near full-strength side aside from some calls on the fringes, based on 2017 performances according to PAV. Interestingly, Josh Kelly has already become the Giants’ most valuable player in PAV terms.

18 of the selected GWS 22 fell within their top 22 according to PAV per game, with all of their top 15 selected. Other than the loss of Devon Smith, the absences are all likely explained by structural factors.

Dawson Simpson rated 16th for GWS per game this year, on account of his “75% of Shane Mumford” routine, with Devon Smith (17th), Johnson (21st) and Taranto (22nd) the others from the top 22 to miss. Adam Tomlinson sits 23rd, but he plays a crucial structural role for the Giants as a tall and mobile defender able to slide to almost any mid-to-tall forward – expect to see him for spells on Tom Lynch this week. Johnson has had a well-documented difficult end to the season and his absence is understandable.

PAV-based selection (probably a while off being a thing anyone does) would have opted for Tim Taranto (22nd) over either Himmelberg (27th) or de Boer (32nd) to replace one of Smith or Johnson, but it is worth noting that both de Boer and Himmelberg are probably more versatile than Taranto, whose output this season has been forward-oriented. With the six most forward-productive Giants already selected, that versatility may have swayed the selection table.

Overall, in spite of selecting more players from outside their top 22 in PAV per game terms, it looks as though the Giants are running closer to their preferred strongest side, with only some marginal calls at the fringes. The reason for this is simply that the Crows face a big question mark over how they will perform without Sloane, who is by a wide margin their most valuable player.


The 2017 PAV All Australian Team

Selection Rules

Rather than picking the top 40 players from 2017 and selecting a team from there, we have decided to go down a slightly different path. Out of interest, here are the top 40 players according to Player Approximate Value (PAV):

AA40 2017.JPG

As you can see, there just aren’t enough defenders available to fill a team in this manner, and the number of small forwards is also a little lacking.

Similar to the AFL Coaches Association All Australian Team of 2016, we have implemented several selection rules to guide us. Firstly, we wanted to pick as versatile a team as possible, with a hybrid attack, leaning to the shorter side.

We have instituted a cutoff of 15 PAVs to be eligible for the side. That covers the top 96 players this year, with Dan Hannebery falling just on the wrong side.

We decided that the back six should be made up of two to three tall defenders and three to four smaller defenders. In practice we will identify these by their DefPAV; however, overall PAV will come into consideration for the smaller options.

The forward line is selected with a slightly different mix – we wanted two or three tall forwards followed by a bunch of small/mid-sized options. We didn’t predetermine the small/mid mix because we have seen a number of different, versatile structures with small forwards come to the fore this year. The KPFs are chosen by OffPAVs, and the smaller options taken as a hybrid of OffPAV and total PAVs. More than anything, we have tried to pick a bunch of players that can rotate through the forward line and create mismatches, and can spell the first-choice midfield if required.

In the middle we don’t have pure wings, but the team shouldn’t lack pace/creativity on the outside. We think it contains a multitude of options through the middle, including from the forward line and from the bench.

The bench is filled by the next best available. We also tried to ensure that there is a second or pinch hit ruck option available to give the number one ruck a chop out.

The team

AAPAV 2017

The PAV AA side shares a lot of players with the true All Australian side, with 16 common members and six changes. Of those changes, several of the official All Australians would have made our team under a different structure or different selection rules.

2017 AA Team.JPG

Jeremy McGovern was a consideration, and he made several early drafts of the team, but he didn’t have a high enough pure DefPAV score, as he spent some time up forward in 2017. If a third tall defender were required, Daniel Talia had the third highest pure DefPAV rating, but only had 14 PAVs overall. Elliot Yeo was also a little unlucky, as he had a high number of total PAVs but lacked the gaudy defensive totals of those who made the final cut. In the end, the call was a direct decision between the stellar Hibberd and Yeo, and we opted for the more specialist defender, even if he had slightly lower total PAVs.

Looking to the midfield, Zach Merrett and Josh Kelly narrowly missed selection for this side, and both were in the top 20 players overall according to PAV. If more specialised outside mids were required, both of them would be the choices ahead of a couple of midfielders we’ve named. However, it should be noted that neither is “truly outside” – if you were looking for that, guys like Tom Scully would come into consideration.

Joel Selwood was a little further back in 33rd; however, on a per-game basis he would have made the side. Matt Crouch ended up just one spot behind Selwood in 34th for the year. All of Kelly, Merrett, Selwood and Crouch had great years, but were just edged out by others.

Also in that unlucky mix are Taylor Adams, Nat Fyfe and Brad Ebert – one could argue a case for their inclusion, but we ended up sticking with the raw data. None are bad choices, and all are arguably worthy. Sydney’s Josh Kennedy would also have been close to selection had he played one or two more games. Clayton Oliver, considered by many to be unlucky not to make the real All-Australian team, was hurt by the influence of the Melbourne co-captains, Jones and Viney. Jones in particular was likely in line for a spot in the PAV All Australian side (and perhaps the real one) until injuries got the better of him.

Josh J Kennedy had the third highest OffPAV score (hampered by missing games), but the early decision to focus on a multi-dimensional forward line pushed him out. In a real game we imagine a rotation of Ryder and Kreuzer to play as the third tall forward, with Bontempelli, Martin, Parker and Dangerfield also able to fill a marking forward role depending on rotations. However, an alternate structure could be to move Martin to the centre, Wines to the bench, and Shiel out of the team.

We’ve picked Martin as a HFF because we wanted to fit an extra elite midfielder in the team, and both Dangerfield and Martin would also have qualified as small forwards. We took this liberty and ran with it, but Martin would be expected to run through the middle for most of the game.

Paddy Ryder easily makes the bench for the side, and forms a very athletic and versatile ruck duo with Kreuzer. Although Cotchin is rated higher than Wines overall, we opted for Wines in the middle in order to add grunt at the opening bounce (Wines also shades Cotchin for MidPAV). Bontempelli was a little down on last year but still had a year most would envy, and Shiel provides both grunt on the inside and class on the outside in the last spot on the pine.

Whilst we can’t say that this hypothetical side would beat the real hypothetical side (especially with 16 of them wearing two jumpers), we feel that they would give them a good run for their money.

McGrath may not have been the most valuable Rising Star

Young players have it pretty rough in footy. Learning a new level of the game in a newly professional environment, many straight out of high school, it’s little wonder that even the best first-year kids don’t instantly end up in the upper echelons of the competition.

This makes evaluating young players very hard – we look for signs of future performance rather than just their present contributions – and the Rising Star award seems to do likewise. Voting for the Award is done on a 5-4-3-2-1 basis by a panel of experts and we have no clear idea why they vote the way they do, but we assume it’s a combination of both present output and intangible perceptions of potential, plus the bloke from South Australia voting for his former team’s nominee.

Andrew McGrath has today been awarded the prize, with 51 votes out of a possible 55 (nine of eleven judges gave him the maximum). The full leaderboard was as follows:

  1. Andrew McGrath – 51
  2. Ryan Burton – 41
  3. Sam Powell-Pepper – 35
  4. Charlie Curnow – 27
  5. Eric Hipwood – 10
  6. Sam Petrevski-Seton – 3
  7. Lewis Melican – 1
  8. Tom Phillips – 1

This post makes use of the Player Approximate Value, or PAV, method of player valuation which we unveiled yesterday. Below is a chart of the PAVs we have derived for each player nominated for the Rising Star this season, as well as some of the most notable non-nominees.

PAV RS.PNG

(We are still working on a “PAV per game” calculation that allows comparisons across seasons of different lengths due to finals, but here the simple calculation is valid because nobody has played finals in 2017 yet.)

Applying PAV to this year’s Rising Star candidates suggested that Sam Powell-Pepper was the most valuable to his side this year, followed closely by Ryan Burton. The winner, Andrew McGrath from the Dons, performed less well. Sean Darcy, who wasn’t even nominated, was the most valuable on a per-game basis in his stint as ruck for Fremantle, and the other two who might have merited nominations on season output were Matthew Kennedy and Jarrod Berry. Only Jason Castagna played every game this year.

These scores aren’t necessarily great by league standards – SPP was 157th overall this year, while Burton was 51st best in defensive PAV – which illustrates just how steep the learning curve is, and how hard the road ahead, for even the best young players.

Why didn’t McGrath top the PAV for Rising Stars?

HPN thinks the answer to this question is that McGrath seems to have played as a non-rebounding mid-sized defender type, with a lot of “empty carb” disposals. His main notable characteristics were, according to the AFL website’s article, that he ranked among candidates “first for handballs, second for disposals and second for effective disposals”. A lot of voters for traditional awards, especially those decided post-season, look for counting stats as an easy indication of ability.

PAV doesn’t incorporate raw disposal counts into any of its valuations, and McGrath clearly performed less well than some other Rising Star players in PAV-associated things like clearances, inside-50s, tackles, rebound-50s, etc. His most notable rating was a 4.9 in Defensive PAV, the fifth highest overall, suggesting he did pretty well in terms of one percenters, marks and avoiding giving away free kicks. However, PAV suggests that if a defender was to be chosen, it should have been Burton.

With a more mature group of players around him, such as Heppell, Merrett, Hurley, Goddard, Kelly and, to an extent, Watson, the critical disposals often fell to their hands, whereas Burton was asked to carry a far greater load for Hawthorn, and SPP was asked to do a lot in the centre of the field from day one for Port Adelaide.

We don’t doubt for a second that McGrath may end up the better player of the three vote leaders (he was pick one for a reason), but Essendon had the luxury of easing him into football as a cog with a less-damaging role, and giving him excellent support. McGrath has obviously performed the role with sufficient promise and aplomb to satisfy the voting judges.

Introducing Player Approximate Value (PAV)

One of the oldest questions in global team sport is: what is a player really worth?  To come up with a workable answer for this, we have leant heavily on work undertaken by Bill James, Doug Drinen and Chase Stuart, and looked at several different sporting codes and how they attribute player value within the team environment.

This post will describe in detail the player valuations we’ve derived under a method we’re calling Player Approximate Value (PAV). We’ve given hints of these valuations in past posts such as this one about recent retirees and this one running through statistical “awards”. We are planning to use the values we’ve derived here to replace earlier methods of trade and draft valuations, and will continue running other PAV-based analysis, so you’ll see a lot more of it in future.

Valuing players

Much of modern advanced sport analysis can be traced back to one man: Bill James. From the publication of the first The Bill James Baseball Abstract in 1977, James has created a language to describe the sport beyond its base components, and has emphasised using statistics to support otherwise subjective judgements.

In 1982 James introduced a concept called the value approximation method, a tool to produce something he called Approximate Value. He did so by stating:

“The value approximation method is a tool that is used to make judgements not about individual seasons, but about groups of seasons. The key word is approximation, as this is the one tool in our assortment which makes no attempt to measure anything precisely. The purpose of the value approximation method is to render things large and obvious in a mathematical statement, and thus capable of being put to use so as to reach other conclusions.”

The resulting measure was inexact, but able to generally differentiate bad seasons from good seasons, and good seasons from great. James used basic achievements to apportion value, based on traditional baseball statistics. Over the years James experimented with a series of different player value measures, but he revisited Approximate Value several times, most notably in 2001. However, much of James’s later effort focused on other methods of player valuation, and Approximate Value remains an often overlooked part of his prior work.

In 2008 Doug Drinen, of Pro-Football-Reference, decided to adapt James’s original formula to evaluate which individual college postseason award was most predictive of future NFL success, but was confronted by a lack of comparable data for football players. This initial effort, while a noble attempt, was criticised for using very basic statistics – games played, games started and Pro Bowls played. Whilst the results largely conformed with logic, notable outliers existed – ordinary players who saw out lengthy careers on poor teams.

Unwittingly, we created a similar method to both the original 1982 James formula and the first Drinen formula, which we used to create a Draft Pick Value chart. The method created a common currency that could be used to value the output of players drafted from 1993 to 2004, and to also predict the future output of players (1993 is considered by most to be the first true draft, as it comes two years after the cessation of the traditional under 19 competition and after the various AFL zones were wound back).

This produced the chart linked above.

The most common criticism of the chart was that, like the original Drinen analysis, it was too narrow, ignoring the quality of games played in favour of the quantity. For most players, the relationship between games played and the quality of the player is relatively linear – bad players tend not to play a lot of football before they are delisted. Due to the strict limitations placed on AFL lists, and the mandatory turnover of about 7% of each side each season, players who fail to perform tend not to stay in the AFL. A small modification we made in 2016 was to add a component of quality – namely a weighting by Brownlow Medal votes, which captured the Brownlow-implied value of players selected at each draft position above and beyond just games played.

However, the original formula still had the issue of valuing Doug Hawkins as having a better career than Michael Voss – which is patently ridiculous. And the modified formula, though doing a better job of valuation, still felt slightly incomplete.

Later in 2008 Drinen came up with the measure we know today as Approximate Value, by splitting contributions into positions and determining positional impact on overall success. Whilst it is still an approximate measure, it was far more accurate than any other NFL value measure to date. Approximate Value is still used as a historical comparison tool of player value, worth and contribution across a variety of applications, including draft pick value charts, trade evaluation and the relative worth of players across careers.

What have we done

Player Approximate Value, or PAV for short, is a partial application of the final Drinen version of AV, but applied to the AFL after a range of testing. In the vein of CARMELO and PECOTA, it is unashamedly named after Matthew Pavlich, who happens to be one of the most valuable performers in recent years under the PAV measurement now proudly bearing his name.

Basic AFL statistics are very good at determining a player’s involvement and interaction with play, but relatively poor in evaluating how effective that interaction was. On the other hand, basic statistics are reasonably effective at determining how good a team is both across a season and within each individual game. Drinen’s AV, and now PAV, both combine these two elements.

PAV consists of two components – Team Value and Player Contribution.

Team Value

When developing AV, PFR recognised that the team is the ultimate unit in a team sport, an approach that we fundamentally agree with. PFR split an NFL team’s ability into two components – offence and defence. Both were evaluated on points per drive, adjusted for league average.

Luckily, we accidentally stumbled on a similar approach in 2014 when trying to determine team strength; however, we split strength into three categories corresponding with areas of the field – offence, midfield and defence. Unlike in American football, possession in the AFL does not alternate after a score, and turnovers aren’t always captured in basic statistics. However, after learning from Tony Corke that inside-50s are one of the stats which correlate most strongly with wins, we landed on an approach of utilising them to approximate the “drive” of the NFL.

The formulas are similar to those used in the HPN Team Ratings; all are ratios measured as a percentage of the league average:

  • Team Offence: (Team Points/Team Inside-50s) / League Average
  • Team Midfield: (Team Inside-50s/Opposition Inside-50s)
  • Team Defence: This is a little more complex.
    • Defence Number (DN) = (Team Points Conceded/Team Inside-50s Conceded)/ League Average
    • Team Defence = (100*((2*DN-DN^2)/(2*DN)))*2

All three categories are inherently pace-adjusted, and as such there is no advantage for quick or slow teams in racking up their own stat counts or denying their opposition’s.

Each season is apportioned a total number of PAV points (we’re just saying “PAVs”) in each category, at a rate of 100 * the number of teams in the competition. For example in 2017 there were 1800 Offence PAVs, 1800 Defence PAVs and 1800 Midfield PAVs, or 5400 PAVs overall. This ensures that individual seasons are comparable over time, regardless of the number of teams in the competition at any time.
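As a concrete sketch, here is how the team-value step might look in code. The three ratings follow the formulas above; the final normalisation, which scales each category so that the competition total comes to 100 PAVs per team, is our reading of the apportionment just described and of the Carlton example later in this post.

```python
# Sketch of the team-value step. Ratings follow the formulas above; the
# apportionment of 100 PAVs per team per category is our interpretation.

def team_ratings(team, league_avg_pts_per_i50):
    """team: dict with points_for, points_against, i50_for, i50_against."""
    offence = (team["points_for"] / team["i50_for"]) / league_avg_pts_per_i50 * 100
    midfield = team["i50_for"] / team["i50_against"] * 100
    dn = (team["points_against"] / team["i50_against"]) / league_avg_pts_per_i50
    defence = (100 * (2 * dn - dn ** 2) / (2 * dn)) * 2
    return {"off": offence, "mid": midfield, "def": defence}

def apportion_team_pavs(ratings_by_team):
    """Scale each category so the league total equals 100 PAVs per team."""
    n_teams = len(ratings_by_team)
    totals = {k: sum(r[k] for r in ratings_by_team.values()) for k in ("off", "mid", "def")}
    return {name: {k: r[k] / totals[k] * 100 * n_teams for k in r}
            for name, r in ratings_by_team.items()}
```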

Unfortunately, inside-50s have only been tracked since the 1998 season. For seasons before then, we have utilised points per disposal, which roughly approximates the team strengths of the inside 50 approach. There are some differences but they are relatively marginal overall – with very few club seasons moving by more than 3%.

We feel that these three basic statistics can articulate the strength of a team better than any other approach we have seen, and it happens to match the approach taken when creating AV.

Player Involvement

This is the part where HPN has deviated from the approach of Drinen and James. As positions are not defined and recorded as strictly in Australian rules as in the NFL, it would be impractical at best to use positions as a starting point for developing a player value system.

Instead, we considered that the best way for us as amateurs from the general public to identify a player’s involvement was through those same basic and public statistics. Whereas the team value as calculated above used a relatively small number of statistical categories, player involvement can be much more complicated.

To allocate value, we relied on a number of intuitive decisions, statistical comparisons and peer testing, refining until the results were satisfactory.

We made the first attempt with the guidance of Tony Corke’s work on the statistical factors that correlate with winning margin, then made some subjective decisions from there. This attempt produced “sensible” results and also correlated reasonably with Brownlow Medal votes.

The formulae were then fine-tuned by testing subjective player rankings on a group of peers. The formulas were also tested further against Brownlow Medal votes, All Australian selections, selected best and fairest results and Champion Data’s Official AFL Player Ratings.

Although no source is perfect, PAV was largely able to replicate the judgements of these other sources, especially that of the Official Player Ratings. Generally, if a player has a higher PAV across a season, they will receive more Brownlow Medal votes:

BV v PAV

In the end, PAV and its results were tested on a wider scale via blind testing on the internet (stealing the approach taken by Drinen when he created AV), and the results largely confirmed the valuations produced by PAV. The formulae for each line are:

  • Offensive Score = Total Points + 0.25 x Hit Outs + 3 x Goal Assists + Inside 50s + Marks Inside 50 + Free Kick Differential
  • Defensive Score = 20 x Rebound 50s + 12 x One Percenters + (Marks – 4 x Marks Inside 50 + 2 x Free Kick Differential) – 2/3 x HitOuts
  • Midfield Score = 15 x Inside 50s + 20 x Clearances + 3 x Tackles + 1.5 x Hit Outs + Free Kick Differential

The weightings and multipliers used in each component formula will necessarily look a bit arbitrary, but they are the result of adjustment and tweaking until the output lined up with the other methods of ranking and evaluating players described above.
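For clarity, here is a direct translation of the three component formulas into code. The stat field names are ours; the weights are exactly as listed above.

```python
# The three component formulas above, applied to one player's season stat line.
# free_kick_diff is free kicks for minus free kicks against.

def component_scores(p):
    offensive = (p["points"] + 0.25 * p["hitouts"] + 3 * p["goal_assists"]
                 + p["inside_50s"] + p["marks_inside_50"] + p["free_kick_diff"])
    defensive = (20 * p["rebound_50s"] + 12 * p["one_percenters"]
                 + (p["marks"] - 4 * p["marks_inside_50"] + 2 * p["free_kick_diff"])
                 - (2 / 3) * p["hitouts"])
    midfield = (15 * p["inside_50s"] + 20 * p["clearances"] + 3 * p["tackles"]
                + 1.5 * p["hitouts"] + p["free_kick_diff"])
    return {"off": offensive, "def": defensive, "mid": midfield}
```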

As the collection of several of these measures only commenced in 1998, we have also adapted another formula for the pre-1998 seasons which correlates extremely strongly with the newer one. Whilst we feel it is less accurate, it still largely conforms to the newer formula’s findings. It was created by trying to minimise the deviation between each player’s PAV under the old and new formulas across the last five seasons of AFL football. Around 5% of players have a difference in value of more than one PAV between the new and old formulas.

We will publish the pre-1998 formula in the not-too-distant future.

Putting It Together

The final step combines individual player scores and team strength calculations to produce the final PAV for each player. This is done in two steps.

Firstly, the individual component scores for each team are compiled. Each player’s individual player score is converted to a proportion of total team score, telling us the proportion of value they contributed to that area of the ground.

Secondly, the team value (i.e. team strength as outlined above) is multiplied by the proportion of the component score for each player.

An example will help illustrate this.

In 2016 the Blues midfield earned 96.71 Midfield PAVs across the whole side (being below league average). Bryce Gibbs accrued a Midfield Score of 3984, and the team tallied up 37702 in midfield score in total. As a result, Gibbs contributed 10.567% of the total Midfield Score for Carlton, and receives that part of the 96.71 Midfield PAVs that Carlton had gained – or 10.22 MidPAVs.

These calculations are done for every player in the league for every side. The overall PAV value for each player is merely the three component values added together. For Gibbs in 2016, this is his 10.22 MidPAVs with 6.86 OffPAVs and 3.21 DefPAVs for a total of 20.29 PAVs all up. Which is pretty good.
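In code, the final step is a single line of arithmetic; the numbers below are the Gibbs figures from the example above.

```python
# A player's PAV in a category is their share of the team's component score
# multiplied by the team's PAVs in that category.

def player_pav(player_score, team_score, team_category_pavs):
    return player_score / team_score * team_category_pavs

gibbs_mid = player_pav(3984, 37702, 96.71)  # ≈ 10.22 MidPAVs
gibbs_total = gibbs_mid + 6.86 + 3.21       # ≈ 20.29 PAVs for 2016
```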

What PAV should be able to tell you

Two key advantages of PAV, we feel, are that it can be replicated based entirely on publicly available statistics, and that by using a pre-1998 method, we have derived a fairly long set of historical values.

While HPN intends to publish PAV to a finer degree than PFR, there still remains a great deal of approximation in the approach. This is especially the case for pre-1998 values, which rely on a far smaller statistical base. We cannot definitively state that these are the exact values of each player relative to other players; however we feel that the approximation is closer than any other method that has as long a time series made with publicly available data. It is possible, and indeed likely, that some lower-ranked players are better players than those above them in certain years.

What we are more confident of is that the values are indicative of player performance relative to others across a longer period of time. Or, put another way: in a given year, player X was likely more valuable than player Y, at least to their own team.

As it draws its fundamental values from team rankings, it is much harder to earn a high value in a bad team than in a good one. This scales player value to more highly rate performance in a good side, and specifically it rates highly those players who drive the strongest parts of good sides.

As a group, players with a PAV of 18 should be better (or have had a better year) than those with 16. As a rule of thumb, a season with a PAV of over 20 should be considered a great season, and any PAV over 25 exceptional. This varies slightly by position – an All Australian key position defender may have a lower overall PAV than a non-All Australian midfielder, but with an extremely high rating in the defence component.

Below is a list of the players with the highest season long PAVs between 1988 and 2016:

TopPAVSeasons.JPG

2017 isn’t finalised yet, but the top end of the list to date is populated with Brownlow Medallist years and players considered to be the absolute elite of the league over the past two decades. While there are some year-on-year PAVs that conflict with common opinion, these top-end player-years do not contain any. Yes, that Stynes year was that good.

On a career basis, the top rated players should be fairly uncontroversial:

TopPAVSCareers.JPG

The top ten players on this list not only had successful careers, but incredibly long ones as well. Note that this is current to 2016, so Gary Ablett Jr has more value to come.

Every player made multiple All Australian teams, and a majority were considered at different points of time to be the “best player in the game”. As such, PAV ends up being a measure of not only quantity of effort but also of quality.

What are the weaknesses of PAV?

Like almost any rating system, PAV has blind spots – especially in the early phases of development. As in almost any sport, one of those appears to be valuing truly pure negating defenders. Consider Darren Glass, possibly the finest shutdown KPD of the AFL era. He is somewhat overlooked from an overall perspective by PAV:

Glass.PNG

Glass’s Defence PAV still remains elite during this era, but he provided little to no value to any other part of the Eagles’ performance across the period. It’s worthwhile to compare Glass to the namesake of PAV:

Pavlich.PNG

This is a lesson to sometimes look beyond the headline figure to the components that make it up – for specialist players in particular, the relevant component figure for their specific role matters more than the Overall PAV. We can also see with a player like Pavlich that his shifting role over his career is revealed by PAV. Generally, a component PAV of more than 10 for a specialist player will place them in contention for an All Australian squad selection (cf. Glass above), if not selection in the side itself.

Occasionally a season pops up that defies conventional wisdom, such as Shane Tuck’s highly rated 2005 season, or Adem Yze, who rates so highly via PAV as to suggest he was under-recognised throughout his career.

However, Insight Lane brought a very interesting observation to our attention this week, from Bill James himself:

As noted at the top, we’ll be applying this system throughout the draft and trade period to evaluate trades and draft picks, and probably in a lot of other analysis from here on out, as well. Stay tuned in the coming days for an All-Australian team based on PAV.

In our time developing and testing PAV, it has usually confirmed our conventional thinking, but occasionally surprised us – which makes us think we might be on the right track. With a system comes the ability to analyse, so the goal in developing this approach is to emulate and augment subjective judgments with a systematic valuation, rather than to create a value system alien to an actual “eye test”.

If you have any comments or questions about PAV, please feel free to contact us via twitter (@hurlingpeople), or email us at hurlingpeoplenow [at] gmail [dot] com. We are more than willing to take any feedback on board, and if you want to use or modify the formulas yourself, feel free to do so (just credit us).

Thanks to all that provided help, assistance and the reason for the development of PAV, namely Rob Younger, Matt Cowgill, Ryan Buckland, Tony Corke, James Coventry, Daniel Hoevanaars… and everyone we are forgetting here. We will add more when we remember who we have forgotten.

Alternate universes: the final rounds that might have been

We are into the final round of season 2017, and what a great time to look at the fixture that awaits us and see how those matchups would look if just a few things had broken a bit differently. Join us as we journey into the football multiverse and explore what might have been.

First up, the table below shows the usual HPN team ratings.

ratings r22

We just want to note first of all that Brisbane are currently, adjusted for opponent defensive strength (they don’t get to play themselves, after all, and they have a terrible defence), the best offence in the comp. That is, they have scored more per inside-50, adjusted for opponent, than any other side this year. What a weird season.

The top 8 here is the actual current top 8, bar Essendon sitting very slightly behind West Coast. In all likelihood the Bombers will make finals unless the Eagles can beat the Crows and jump either Melbourne or Essendon – whether through one of those sides losing or by overhauling them on percentage.

The HPN team ratings over the year would expect to see the Swans in the top 4; we don’t need to rehash why that hasn’t happened. Geelong sitting outside the top 4 is about to be a recurring theme on our journey, as alluded to in the title of the post.

So let’s go with some hypothetical ladders, from alternate universes:

What if every losing team had scored another goal?

Below is what the ladder would look like if every losing team had scored another goal, reversing a lot of results. We haven’t recalculated percentages but current percentages have been included as a guide:

plusagoaluniverse

The Tigers, who have been on the wrong side of a number of storied narrow defeats, would sit half a game clear heading into the final round, and they and Adelaide would have had the top two spots sewn up weeks ago. In this universe, Damien Barrett is floating the prospect of Richmond and Adelaide tanking to try to avoid GWS or Sydney and play Port Adelaide instead.

Down in tenth would sit Geelong, out of finals contention as they rued last-minute losses to Fremantle, Hawthorn, Port Adelaide and North Melbourne.

The current North Melbourne vs Brisbane Spoonbowl would instead see the Lions trying to jump Fremantle and yet again escape a wooden spoon.

What if we could bloody kick straight?

A simplistic and somewhat inaccurate measure of luck is scoring shot conversion. All things being equal, the expectation is that accuracy or inaccuracy regresses to the mean over time. Figuring Footy has done some wonderful work fleshing this out by adding scoring expectations, but for this exercise, let’s assume everyone converts scoring shots at the same rate.
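As a minimal sketch of what that adjustment involves, each game can be rescored by holding every team’s scoring shots fixed and applying a single league-wide conversion rate. The 53% rate in the example is an assumed figure for illustration.

```python
# Rescore a team's game assuming scoring shots are converted at the league rate.

def adjusted_score(goals, behinds, league_conversion):
    """league_conversion = league-wide goals / (goals + behinds)."""
    shots = goals + behinds
    return shots * (league_conversion * 6 + (1 - league_conversion) * 1)

# e.g. a wasteful 10.15 (75) becomes roughly 91 points at a 53% conversion rate
print(round(adjusted_score(10, 15, 0.53)))
```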

SSvictoruniverse

Port Adelaide now sit in the top two, with equalised accuracy handing them wins over West Coast and Richmond at the cost of a loss to St Kilda. The Saints, naturally, make the 8 on this measure, as do a Hawthorn presumably not hobbled by Will Langford’s set shots.

The teams dumped from the finals, assuming everyone kicked straight, are Sydney (who would hypothetically still remain in contention this week), and Essendon (who would be long gone). The Bombers crash to 40 points, sitting well out of finals, thanks to draws with Hawthorn and the Bulldogs and losses to Geelong and Collingwood. This would be compensated only by the cold comfort of having beaten Brisbane, in an ever fading “revenge for the 2001 Grand Final” type manner.

We should note that shot quality produced and conceded differs by team. Sydney, for instance, have conceded the equal second-lowest quality chances (they’ve done similar for a few years), and Port Adelaide take a lot of low quality chances, so it’s not surprising they’re kicking a higher number of behinds per goal.

Essendon generate and concede scoring shots of roughly average quality, so they’re probably more likely to have benefited from something approaching pure luck in scoring shot accuracy terms.

What if everyone only played each other once?

In this world, the season is 17 games long and starts in May or has time off for representative clashes or something. Or, as is looking more likely, is the front half of a 17-5 type scenario.

Below we’ve compiled the first result this year for every clash, ignoring double-up return games. We’ve also assumed the upcoming weekend of matches is Round 17, and excluded any previous clashes between teams playing this week (eg the previous GWS-Geelong draw is omitted).

17 game season uniberse

Here, we see teams down to Collingwood still in distant contention for finals, the Pies apparently having been bad in return games this year. In this universe they need to beat Melbourne and rely on unlikely losses by those above them.

The top 8 hasn’t changed, and West Coast are still relying on beating Adelaide, but in this world the Crows need to win to lock down a top two spot while Richmond will know whether top 4 is up for grabs by Saturday night.

In a 17-5 world, the entire bottom six would have been long settled, with these clubs facing little to play for (assuming the points are reset for the final five matches). Additionally, the top 3 would also have faced several weeks of near meaningless footy before the split. If the points aren’t reset in this 17-5 world, several teams would have several more dead rubbers in the last few weeks of the season, and there would be a decent chance that 7th, 8th and maybe 9th would finish with more wins than 5th and 6th.

These are just some of the reasons that the 17-5 proposal is not a good thought bubble – we promise to look at more of them later down the track.

What if teams won exactly as many games as they “should” have?

Now we’re stepping into the realm of abstract footy geometry, where the laws of football premiership ladder physics such as “you can only win whole games” no longer apply.

Each year we run an analysis of the footy fixture’s imbalance incorporating a Pythagorean Expectation assessment of team strength as well as straight wins and losses. Pythagorean Expectation tells us how many games a team “should” have won based on their scores for and against. It’s probably best thought of as a quantification of the intuition that teams with a higher percentage are better. It’s another measure of luck, and tends to punish teams who only win by small margins. We used the method to help project the 2017 ladder as well, and it had Hawthorn finishing 12th.
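For readers unfamiliar with it, the calculation itself is short. The exponent is a tuning parameter; values around 3.9 are commonly used for AFL, although the exact exponent behind the ladder below isn’t stated here.

```python
# Pythagorean expectation: convert points for and against into "deserved" wins.

def pythagorean_wins(points_for, points_against, games, exponent=3.9):
    win_rate = points_for ** exponent / (points_for ** exponent + points_against ** exponent)
    return win_rate * games

# e.g. a side scoring 2000 and conceding 1800 over 21 games "should" win about 12.6
print(round(pythagorean_wins(2000, 1800, 21), 1))
```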

Here, we’ve used it to work out how far over or under each team in 2017 is from the expectations created by their scoring. That ladder is below.

pythaguniverse

Finally, we have a ladder which doesn’t put Brisbane last. Fremantle look like they’ve won three more games than they should have, and on Pythagorean expectations might be expected to have won just the five games this year. Spoonbowl in this world happened already and Freo lost.

Our current top eight remains the top eight in the Pythagorean ideal world.

Port Adelaide, by virtue of the extreme flat track tendencies we documented last week, appear to have won about 1.5 games more than this universe would grant them, while Sydney sit a game and probably percentage inside the top 4, their early season weakness reduced to the abstraction of a slightly dampened balance of scores for and against.

But of course there’s one final source of luck.

What if the fixture was completely fair?

Here, we’ve stuck with Pythagorean expectations but used it to work out the impact, in fractions of a win, of the uneven fixture.

The fixture in an 18 team, 22 game season is impossible to make fair, but in our final bizarre universe, it’s what’s happened.

Each team’s “expected wins impact” is the difference between the strength of their opponent sets (including double-ups) and what would be expected to happen if they played everyone the same number of times (ie, the average of every other team’s strength).
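A rough sketch of that calculation is below; the translation of the strength gap into fractions of a win is our simplification, and assumes opponent strength is already expressed on a win-probability-like scale (such as a Pythagorean expected win rate).

```python
# Expected wins impact of the fixture: compare the average strength of the
# opponents actually faced (double-ups included) with the average strength of
# every other team. Positive output = a softer-than-average draw.

def expected_wins_impact(team, opponents_faced, strength):
    """strength maps each team to a rating on a win-probability-like scale."""
    faced_avg = sum(strength[o] for o in opponents_faced) / len(opponents_faced)
    neutral_avg = sum(s for name, s in strength.items() if name != team) / (len(strength) - 1)
    return (neutral_avg - faced_avg) * len(opponents_faced)
```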

We’re still in “fractions of a win” territory here, but the table below is interesting.

fair fixture universe.PNG

At the top of the ladder, Adelaide and GWS have faced difficult fixtures and would be expected to do even better if they faced the same strength teams as everyone else.

In this universe where wins come in fractions and the fixture is impossibly fair, St Kilda jump into the 8 by a full one third of a win, at the expense of the Bombers. West Coast still sit 9th, while the Bulldogs lurk closer to the eight than they do in reality, a win over the Hawks potentially enough to get them into the finals.

This ladder tells us that the teams most benefited by a soft fixture this season are Gold Coast, Richmond, North Melbourne, and Essendon, to the tune of about half a win each. We’ve noted Richmond’s bad luck with close games above, but perhaps this is balanced by having benefited from the softer draw they got as a bottom-6 team last year.

Port Adelaide have made liars of us

After round 21 there is little movement in relative rankings, but Sydney and GWS rise into our informally-defined historical “premiership” frame.

round 21 ratings

However, it’s the increasingly anomalous Port Adelaide, theoretically a contender, which we want to focus on here.

The popular opinion of Port Adelaide being unable to match it with other good sides is well and truly borne out when we dig into their performance on our strength ratings by opponent. We have in the past broken up statistics by top 8 and bottom 10 and used them to call Josh Kennedy (but also Dean Cox) a flat track bully back in 2015. Then in 2016 we ran an opponent-adjusted Coleman to see who was kicking the goals against tough opponents (turns out: Toby Greene and Josh Jenkins). This time we’ve looked at whole teams.

Simply put, Port Adelaide are the best side in the competition against weak opponents and they’re about as good as North Melbourne against the good teams.

Below is a chart where we have calculated strength ratings through the same method we always use on whole-of-season data, but derived separate ratings for matches against the top half and the bottom half of the competition, as determined by our ratings above.

top and bottom.PNG

Most clubs, predictably, have done better against the bad sides than the good ones. Port Adelaide, however, take this to extremes. They rate as 120% of league average in their performance against the bottom nine sides. Not even Adelaide or Sydney look that good, over the year, in beating up on the weaker teams.

That’s why we’ve been rating Port so highly this year – their performance, even allowing for the scaling we apply for opponent sets, has been abnormally, bizarrely good to the extent that it’s actually outweighed and masked their weaknesses against quality teams. Their sub-97% rating against top sides is 13th in the league, ahead of only North, Carlton, Fremantle and the Queensland sides. This divergence is more than double the size of the variance for any other team.

It appears that the problem mostly strikes the Power in between the arcs. Against bottom sides, their midfield strength is streets ahead of any other side at 141% of the league average, meaning they get nearly three inside-50s for every two conceded. This opportunity imbalance makes their decent defence look better and papers over a struggling forward line. Against quality sides, that falls apart and they get fewer inside-50s than their opponents.

Looking elsewhere, Adelaide stand out as looking stronger against quality opposition, with their midfield and offence faring substantially better than against weaker sides – a couple of whom have, of course, embarrassed them during the year.

The Hawks and two strugglers in North Melbourne and Carlton also seem to acquit themselves better against the top sides than against their own weight class. For North, their inside-50 opportunities dry up against good sides but they make better use of the forward entries – they rate as above league average, offensively, against the top nine teams. For Carlton, unsurprisingly, it’s their stifling defence that steps up, and the same is true of Hawthorn.

St Kilda’s forward efficiency and Richmond’s defensive efficiency have also been a lot higher against top sides, but the converse is true of the two teams’ opposite lines.

At the other end of the table, Geelong, Sydney and especially the Bulldogs are the other finals contenders with the biggest worries about sustaining their output against quality opposition. Sydney’s midfield struggles to control territory, slightly losing the inside-50 battle on average against the top half of the competition while bullying weaker sides (their offensive efficiency against top sides is actually slightly higher, however). The Bulldogs and Geelong share these midfield issues, but their forward lines also struggle under quality defensive heat.

But it really is Port Adelaide who stand out here. Their output against weaker sides is really good and shouldn’t be written off. There’s obviously quality there, and they sit in striking distance of the top 4 with a healthy percentage. However, it wouldn’t be a stretch to call their overall strength rating fraudulent given its composition, and we will be regarding them with a bit of an asterisk from here. Unless they can bridge the gap and produce something against their finals peers, even a top 4 berth is likely to end in ashes.

Some Rise, Some Fall: All Are Flawed in 2017

As the 2017 Home & Away season winds to its inevitable conclusion, movement returns to the HPN Team Ratings.

The Swans are beginning to crest towards the “Premiership Contender” part of the HPN Team Ratings, which we loosely define as an overall team rating of more than 105% and individual component ratings north of 100%. After an extremely sluggish start down back, Sydney is now the third best side in the competition defensively – with a fair chance of leaping over Port into second.

We’ve mentioned this before, but the return of Dane Rampe has played a critical role in that improvement. Some defenders are versatile, some are extremely good at their job, but Rampe is the rare combination of the two. Rampe’s return has allowed Grundy to move to a more negating role, and taken some of the pressure off Lewis Melican, who has blossomed as a result. Having Rampe’s ability to cover ground and contest as a third man up has allowed the other Swans half backs a little more freedom to attack, knowing that there is a safety net behind them.

The Swans still have issues – namely in the non-Franklin, non-Papley parts of their forward line – but they are starting to approach their 2016 form.

Switching with Sydney this week is Geelong, who are a fundamentally different team without Dangerfield and Selwood. Duncan and Hawkins missing this week does not help either. The sprint towards finals has turned into a limp just as Geelong run into one of the harder parts of their schedule.

Port didn’t lose a place this week but they lost significant ground in everyone’s eyes including those of our ratings, with another loss to a top eight side on the resume. No-one doubts the raw talent of the Power forward line, but their ability to score against good defences is becoming concerning.

For that matter, on the form of the last two weeks, Melbourne look more like the Demons of 2008 than the side of earlier this year. The constant shifting of players around the ground has seemingly led to a loss of cohesiveness, with players either running into each other or spoiling each other when they do arrive at the same contest. Time is not on the Demons’ side here either, and if they can’t turn it around against the undermanned Saints this week their season may be over.

Every side left in the battle for the flag this year has a flaw, or several, that may stop them from hoisting the cup. From haphazard forward delivery leading to poor conversion up forward (Richmond), to a loss of the territory battle (Eagles and Bombers), to a forward setup that requires a side’s best midfielder to play forward for massive chunks of games (Bulldogs), each side has an Achilles heel. Even Adelaide, as we pointed out last week.

For many, the Giants present as the most evenly balanced team, but they are yet to get their best 22 on the park at the same time this year. On paper the Giants at full strength are probably the most formidable matchup – but as 2017 has shown, football isn’t played on paper. Even at full strength the Giants seem susceptible to multiple quality tall forwards and quick spreading run, such as the setup employed by Adelaide so effectively.