Introducing Player Approximate Value (PAV)

One of the oldest questions in global team sport is: what is a player really worth?  To come up with a workable answer for this, we have leant heavily on work undertaken by Bill James, Doug Drinen and Chase Stuart, and looked at several different sporting codes and how they attribute player value within the team environment.

This post will describe in detail the player valuations we’ve derived under a method we’re calling Player Approximate Value (PAV). We’ve given hints of these valuations in past posts such as this one about recent retirees and this one running through statistical “awards”. We are planning to use the values we’ve derived here to replace earlier methods of trade and draft valuations, and will continue running other PAV-based analysis, so you’ll see a lot more of it in future.

Valuing players

Much of modern advanced sport analysis can be traced back to one man: Bill James. From the publication of the first The Bill James Baseball Abstract in 1977, James has created a language to describe the sport beyond its base components, and has emphasised using statistics to support otherwise subjective judgements.

In 1982 James introduced a concept called the value approximation method, a tool to produce something he called Approximate Value. He described it as follows:

“The value approximation method is a tool that is used to make judgements not about individual seasons, but about groups of seasons. The key word is approximation, as this is the one tool in our assortment which makes no attempt to measure anything precisely. The purpose of the value approximation method is to render things large and obvious in a mathematical statement, and thus capable of being put to use so as to reach other conclusions.”

The resulting metric was inexact, but able to generally differentiate bad seasons from good seasons, and good seasons from great ones. James used basic achievements to apportion value, based on traditional baseball statistics. Over the years James experimented with a series of different player value measures, but he revisited Approximate Value several times, most notably in 2001. However, much of James’s later effort focused on other methods of player valuation, and Approximate Value remains an often-overlooked part of his body of work.

In 2008 Doug Drinen, of Pro-Football-Reference, decided to adapt James’s original formula to evaluate which individual college postseason award was most predictive of future NFL success, but was confronted by a lack of comparable data for football players. This initial effort, while a noble attempt, was criticised for using very basic statistics – games played, games started and Pro Bowls played. Whilst the results largely conformed with logic, notable outliers existed – ordinary players who saw out lengthy careers on poor teams.

Unwittingly, we created a similar method to both the original 1982 James formula and the first Drinen formula, which we used to create a Draft Pick Value chart. The method created a common currency that could be used to value the output of players drafted from 1993 to 2004, and to also predict the future output of players (1993 is considered by most to be the first true draft, as it comes two years after the cessation of the traditional under 19 competition and after the various AFL zones were wound back).

That method produced the draft pick value chart linked above.

The most common criticism of the chart was that, like the original Drinen analysis, it was too narrow, considering the quantity of games played while ignoring their quality. For most players, the relationship between games played and player quality is relatively linear – bad players tend not to play a lot of football before they are delisted. Due to the strict limitations placed on AFL lists, and the mandatory turnover of about 7% of each list each season, players who fail to perform tend not to stay in the AFL. A small modification we made in 2016 was to add a component of quality – namely a weighting by Brownlow Medal votes, which captured the Brownlow-implied value of players selected at each draft position above and beyond games played alone.

However, the original formula still had the issue of valuing Doug Hawkins as having had a better career than Michael Voss – which is patently ridiculous. And the modified formula, though doing a better job of valuation, still felt slightly incomplete.

Later in 2008 Drinen came up with the measure we know today as Approximate Value, by splitting contributions into positions and determining positional impact on overall success. Whilst it is still an approximate measure, it was far more accurate than any previous NFL value measure. Approximate Value is still used as a historical tool for comparing player value, worth and contribution across a variety of applications, including draft pick value charts, trade evaluation and the relative worth of players across careers.

What have we done

Player Approximate Value, or PAV for short, is a partial application of the final Drinen version of AV, but applied to the AFL after a range of testing. In the vein of CARMELO and PECOTA, it is unashamedly named after Matthew Pavlich, who happens to be one of the most valuable performers in recent years under the PAV measurement now proudly bearing his name.

Basic AFL statistics are very good at determining a player’s involvement and interaction with play, but relatively poor in evaluating how effective that interaction was. On the other hand, basic statistics are reasonably effective at determining how good a team is both across a season and within each individual game. Drinen’s AV, and now PAV, both combine these two elements.

PAV consists of two components – Team Value and Player Involvement.

Team Value

When developing AV, PFR recognised that the team is the ultimate unit in a team sport, an approach we fundamentally agree with. PFR split an NFL team’s ability into two components – offence and defence. Both were evaluated on points per drive, adjusted for league average.

Luckily, we had accidentally stumbled on a similar approach in 2014 when trying to determine team strength. However, we split strength into three categories corresponding to areas of the field – offence, midfield and defence. Unlike in American football, possession in the AFL does not alternate after a score, and turnovers aren’t always captured in basic statistics. However, after learning from Tony Corke that inside-50s are one of the stats that correlate most strongly with wins, we landed on an approach of using them to approximate the “drive” of the NFL.

The formulas are similar to those used in the HPN Team Ratings; all are ratios measured as a percentage of league average (see the sketch after the list):

  • Team Offence: (Team Points / Team Inside-50s) / League Average
  • Team Midfield: Team Inside-50s / Opposition Inside-50s
  • Team Defence: this is a little more complex.
    • Defence Number (DN) = (Team Points Conceded / Team Inside-50s Conceded) / League Average
    • Team Defence = (100*((2*DN-DN^2)/(2*DN)))*2, which simplifies to 200 - 100*DN, so a team conceding at exactly league average rates 100

All three categories are inherently pace-adjusted, and as such there is no advantage to quick or slow teams racking up or denying opposition stat counts.

Each season is apportioned a total number of PAV points (we’re just saying “PAVs”) in each category, at a rate of 100 times the number of teams in the competition. For example, in 2017 there were 1800 Offence PAVs, 1800 Defence PAVs and 1800 Midfield PAVs, or 5400 PAVs overall. This ensures that individual seasons are comparable over time, regardless of the number of teams in the competition at any given time.

Unfortunately, inside-50s have only been tracked since the 1998 season. For seasons before then, we have used points per disposal, which roughly approximates the team strengths produced by the inside-50 approach. There are some differences, but they are relatively marginal overall – very few club seasons move by more than 3%.

We feel that these three basic statistics can articulate the strength of a team better than any other approach we have seen, and it happens to match the approach taken when creating AV.

Player Involvement

This is the part where HPN has deviated from the approach of Drinen and James. As positions in Australian rules football are not defined and recorded as strictly as in the NFL, it would be impractical at best to use positions as a starting point for developing a player value system.

Instead, we considered that the best way for us as amateurs from the general public to identify a player’s involvement was through those same basic and public statistics. Whereas the team value as calculated above used a relatively small number of statistical categories, player involvement can be much more complicated.

To allocate value, we relied on a number of intuitive decisions, statistical comparisons and peer testing, refining until the results were satisfactory.

Our first attempt was guided by Tony Corke’s work on the statistical factors that correlate with winning margin, with some subjective decisions made from there. This attempt produced “sensible” results and also correlated reasonably well with Brownlow Medal votes.

The formulae were then fine-tuned by testing subjective player rankings on a group of peers. They were also tested against Brownlow Medal votes, All Australian selections, selected best and fairest results and Champion Data’s Official AFL Player Ratings.

Although no source is perfect, PAV was largely able to replicate the judgements of these other sources, especially that of the Official Player Ratings. Generally, if a player has a higher PAV across a season, they will receive more Brownlow Medal votes:

[Chart: Brownlow votes v season PAV]

In the end, PAV and its results were tested on a wider scale via blind testing on the internet (stealing the approach taken by Drinen when he created AV), and the results largely confirmed the valuations produced by PAV. The formulae for each line are:

  • Offensive Score = Total Points + 0.25 x Hit Outs + 3 x Goal Assists + Inside 50s + Marks Inside 50 + Free Kick Differential
  • Defensive Score = 20 x Rebound 50s + 12 x One Percenters + (Marks – 4 x Marks Inside 50 + 2 x Free Kick Differential) – 2/3 x Hit Outs
  • Midfield Score = 15 x Inside 50s + 20 x Clearances + 3 x Tackles + 1.5 x Hit Outs + Free Kick Differential

The weightings and multipliers used in each component formula will necessarily look a bit arbitrary, but they are the product of adjustment and tweaking until the outputs lined up with the other methods of ranking and evaluating players described above.

As the collection of several of these measures only commenced in 1998, we have also adapted another formula for the pre-1998 seasons, one which correlates extremely strongly with the newer formula. Whilst we feel it is less accurate, it still largely conforms to the newer formula’s findings. It was created by minimising the deviation between each player’s PAV under the two formulas across the last five seasons of AFL football. Around 5% of players have a difference in value of more than one PAV between the new and old formulas.

We will publish the pre-1998 formula in the not-too-distant future.

Putting It Together

The final step combines individual player scores and team strength calculations to produce the final PAV for each player. This is done in two steps.

Firstly, the individual component scores for each team are compiled. Each player’s individual player score is converted to a proportion of total team score, telling us the proportion of value they contributed to that area of the ground.

Secondly, the team value (i.e. team strength as outlined above) is multiplied by the proportion of the component score for each player.

An example will help illustrate this.

In 2016 the Blues midfield earned 96.71 Midfield PAVs across the whole side (being below league average). Bryce Gibbs accrued a Midfield Score of 3984, and the team tallied up 37702 in midfield score in total. As a result, Gibbs contributed 10.567% of the total Midfield Score for Carlton, and receives that part of the 96.71 Midfield PAVs that Carlton had gained – or 10.22 MidPAVs.

These calculations are done for every player in the league for every side. The overall PAV value for each player is merely the three component values added together. For Gibbs in 2016, this is his 10.22 MidPAVs with 6.86 OffPAVs and 3.21 DefPAVs for a total of 20.29 PAVs all up. Which is pretty good.

What PAV should be able to tell you

Two key advantages of PAV, we feel, are that it can be replicated based entirely on publicly available statistics, and that by using a pre-1998 method, we have derived a fairly long set of historical values.

While HPN intends to publish PAV to a finer degree than PFR publishes AV, there still remains a great deal of approximation in the approach. This is especially the case for pre-1998 values, which rely on a far smaller statistical base. We cannot definitively state that these are the exact values of each player relative to other players; however, we feel the approximation is closer than that of any other method with as long a time series built from publicly available data. It is possible, and indeed likely, that some lower-ranked players are better players than those above them in certain years.

What we are more confident of is that the values are indicative of player performance relative to others across a longer period of time. Or, put another way: in a given year, player X was likely more valuable than player Y, at least to their respective teams.

As PAV draws its fundamental values from team ratings, it is much harder to earn a high value in a bad team than in a good one. This scales player value to rate performance in a good side more highly, and specifically it rewards players in the strong parts of the field of good sides.

As a group, players with a PAV of 18 should be better (or have had a better year) than those with 16. As a rule of thumb, a season with a PAV over 20 should be considered great, and any PAV over 25 exceptional. This varies slightly by position – an All Australian key position defender may have a lower overall PAV than a non-All Australian midfielder, but with an extremely high rating in the defence component.

Below is a list of the players with the highest season long PAVs between 1988 and 2016:

[Table: highest single-season PAVs, 1988–2016]

2017 isn’t finalised yet, but the top end of the list to date is populated with Brownlow Medallist years and players considered to be the absolute elite of the league over the past two decades. While there are some year-on-year PAVs that conflict with common opinion, these top-end player-years do not contain any. Yes, that Stynes year was that good.

On a career basis, the top rated players should be fairly uncontroversial:

[Table: highest career PAVs]

The top ten players on this list not only had successful careers, but also incredibly long careers as well. Note that this is current to 2016, so Gary Ablett Jr has more value to come.

Every player made multiple All Australian teams, and a majority were considered at different points of time to be the “best player in the game”. As such, PAV ends up being a measure of not only quantity of effort but also of quality.

What are the weaknesses of PAV?

Like almost any rating system, PAV has blind spots – especially in the early phases of development. As with rating systems in most sports, there appears to be a slight blind spot in valuing truly pure negating defenders. Consider Darren Glass, possibly the finest shutdown KPD of the AFL era, who is somewhat overlooked by PAV from an overall perspective:

[Chart: Darren Glass career PAVs by component]

Glass’s Defence PAV remains elite for this era, but he provided little to no value to any other part of the Eagles’ performance across the period. It’s worthwhile comparing Glass to the namesake of PAV:

[Chart: Matthew Pavlich career PAVs by component]

This is a lesson to look beyond the headline figure to the components that make it up – especially for specialist players, where the component figure for the player’s specific role matters more than the Overall PAV. We can also see with a player like Pavlich that his shifting role over his career is revealed by PAV. Generally, a component PAV of more than 10 for a specialist player will place them in contention for All Australian squad selection (cf. Glass above), if not selection in the side itself.

Occasionally a season pops up that defies conventional wisdom, such as Shane Tuck’s highly rated 2005 season, or Adem Yze, who rates so highly via PAV as to suggest he was under-recognised throughout his career.

However, Insight Lane brought a very interesting observation from Bill James himself to our attention this week.

As noted at the top, we’ll be applying this system throughout the draft and trade period to evaluate trades and draft picks, and probably in a lot of other analysis from here on out, as well. Stay tuned in the coming days for an All-Australian team based on PAV.

In our time developing and testing PAV, it has usually confirmed our conventional thinking, but occasionally surprised us – which makes us think we might be on the right track. With a system comes the ability to analyse, so the goal in developing this approach is to emulate and augment subjective judgements with a systematic valuation, rather than to create a value system alien to the actual “eye test”.

If you have any comments or questions about PAV, please feel free to contact us via twitter (@hurlingpeople), or email us at hurlingpeoplenow [at] gmail [dot] com. We are more than willing to take any feedback on board, and if you want to use or modify the formulas yourself, feel free to do so (just credit us).

Thanks to all that provided help, assistance and the reason for the development of PAV, namely Rob Younger, Matt Cowgill, Ryan Buckland, Tony Corke, James Coventry, Daniel Hoevanaars… and everyone we are forgetting here. We will add more when we remember who we have forgotten.

Alternate universes: the final rounds that might have been

We are into the final round of season 2017, and what a great time to look at the fixture that awaits us and see how those matchups would look if just a few things had broken a bit differently. Join us as we journey into the football multiverse and explore what might have been.

First up, the table below shows the usual HPN team ratings.

[Table: HPN team ratings, Round 22]

First of all, we just want to note that Brisbane are currently, adjusted for opponent defensive strength (they don’t get to play themselves, after all, and they have a terrible defence), the best offence in the comp. That is, they have scored more per inside-50, adjusted for opponent, than any other side this year. What a weird season.

The top 8 here is the actual current top 8, bar Essendon sitting very slightly behind West Coast. In all likelihood the Bombers will make finals, unless the Eagles can beat the Crows and jump either Melbourne or Essendon – via a loss on their part, or by vaulting them on percentage.

The HPN team ratings over the year would expect to see the Swans in the top 4; we don’t need to rehash why that hasn’t happened. Geelong sitting outside the top 4 is about to be a recurring theme on our journey, as alluded to in the title of the post.

So let’s go with some hypothetical ladders, from alternate universes:

What if every losing team had scored another goal?

Below is what the ladder would look like if every losing team had scored another goal, reversing a lot of results. We haven’t recalculated percentages but current percentages have been included as a guide:

[Ladder: every losing team scores one more goal]

The Tigers, who have been on the wrong side of a number of storied narrow defeats, would sit half a game clear heading into the final round, and they and Adelaide would have had the top two spots sewn up weeks ago. In this universe, Damien Barrett is floating the prospect of Richmond and Adelaide tanking to try to avoid GWS or Sydney and play Port Adelaide instead.

Down in tenth would sit Geelong, out of finals contention, ruing last-minute losses to Fremantle, Hawthorn, Port Adelaide and North Melbourne.

The current North Melbourne vs Brisbane Spoonbowl would instead see the Lions trying to jump Fremantle and yet again escape a wooden spoon.

What if we could bloody kick straight?

A simplistic and somewhat inaccurate measure of luck is scoring shot conversion. All things being equal, the expectation is that inaccuracy or accuracy regresses to the mean over time. Figuring Footy has done some wonderful work fleshing this out by adding scoring expectations, but for this exercise, let’s assume everyone converts scoring shots at the same rate.

[Ladder: league-average scoring shot conversion]

Port Adelaide now sit top 2, their accuracy having secured them wins over West Coast and Richmond at the cost of a loss to St Kilda. The Saints, naturally, make the 8 on this measure, as do a Hawthorn presumably not hobbled by Will Langford’s set shots.

The teams dumped from the finals, assuming everyone kicked straight, are Sydney (who would hypothetically still remain in contention this week), and Essendon (who would be long gone). The Bombers crash to 40 points, sitting well out of finals, thanks to draws with Hawthorn and the Bulldogs and losses to Geelong and Collingwood. This would be compensated only by the cold comfort of having beaten Brisbane, in an ever fading “revenge for the 2001 Grand Final” type manner.

We should note that shot quality produced and conceded differs by team. Sydney, for instance, have conceded the equal second-lowest quality chances (they’ve done similarly for a few years), and Port Adelaide take a lot of low-quality chances, so it’s not surprising they’re kicking more behinds per goal.

Essendon generate and concede scoring shots of roughly average quality, so they’re probably more likely to have benefited from something approaching pure luck in scoring shot accuracy terms.

What if everyone only played each other once?

In this world, the season is 17 games long and starts in May or has time off for representative clashes or something. Or, as is looking more likely, is the front half of a 17-5 type scenario.

Below we’ve compiled the first result this year for every clash, ignoring double-up return games. We’ve also assumed the upcoming weekend of matches is Round 17, and excluded any previous clashes between teams playing this week (eg the previous GWS-Geelong draw is omitted).

[Ladder: 17-game season]

Here, we see teams down to Collingwood still in distant contention for finals, the Pies apparently having been bad in return games this year. In this world they need to beat Melbourne and rely on unlikely losses by the teams above them.

The top 8 hasn’t changed, and West Coast are still relying on beating Adelaide, but in this world the Crows need to win to lock down a top two spot while Richmond will know whether top 4 is up for grabs by Saturday night.

In a 17-5 world, the entire bottom six would have been long settled, with these clubs facing little to play for (assuming the points are reset for the final five matches). Additionally, the top 3 would also have faced several weeks of near meaningless footy before the split. If the points aren’t reset in this 17-5 world, several teams would have several more dead rubbers in the last few weeks of the season, and there would be a decent chance that 7th, 8th and maybe 9th would finish with more wins than 5th and 6th.

These are just some of the reasons that the 17-5 proposal is not a good thought bubble – we promise to look at more of them later down the track.

What if teams won exactly as many games as they “should” have?

Now we’re stepping into the realm of abstract footy geometry, where the laws of football premiership ladder physics such as “you can only win whole games” no longer apply.

Each year we run an analysis of the footy fixture’s imbalance, incorporating a Pythagorean Expectation assessment of team strength as well as straight wins and losses. Pythagorean Expectation tells us how many games a team “should” have won based on their scores for and against. It’s probably best thought of as a quantification of the intuition that teams with a higher percentage are better. It’s another measure of luck, and tends to punish teams who only win by small margins. We also used the method to help project the 2017 ladder, and it had Hawthorn finishing 12th.

Here, we’ve used it to work out how far over or under each team in 2017 is from the expectations created by their scoring. That ladder is below.

[Ladder: Pythagorean expected wins]

Finally, we have a ladder which doesn’t put Brisbane last. Fremantle look like they’ve won three more games than they should have, and on Pythagorean expectations might be expected to have won just the five games this year. Spoonbowl in this world happened already and Freo lost.

Our current top eight remains the top eight in the Pythagorean ideal world.

Port Adelaide, by virtue of the extreme flat track tendencies we documented last week, appear in this universe to have won an extra 1.5 games, while Sydney also sit a game and probably percentage inside the top 4, their early season weakness reduced to the abstraction of a slightly dampened balance of scores for-and-against.

But of course there’s one final source of luck.

What if the fixture was completely fair?

Here, we’ve stuck with Pythagorean expectation, but used it to work out the impact of the uneven fixture, in fractions of a win.

The fixture in an 18 team, 22 game season is impossible to make fair, but in our final bizarre universe, it’s what’s happened.

Each team’s “expected wins impact” is the difference between the strength of their opponent sets (including double-ups) and what would be expected to happen if they played everyone the same number of times (ie, the average of every other team’s strength).

We’re still in “fractions of a win” territory here, but the table below is interesting.

[Table: fair fixture universe]

At the top of the ladder, Adelaide and GWS have faced difficult fixtures and would be expected to do even better if they faced the same strength teams as everyone else.

In this universe where wins come in fractions and the fixture is impossibly fair, St Kilda jump into the 8 by a full one-third of a win, at the expense of the Bombers. West Coast still sit 9th, while the Bulldogs lurk closer to the eight than they do in reality, a win over the Hawks potentially enough to get them into the finals.

This ladder tells us that the teams most benefited by a soft fixture this season are Gold Coast, Richmond, North Melbourne, and Essendon, to the tune of about half a win each. We’ve noted Richmond’s bad luck with close games above, but perhaps this is balanced by having benefited from the softer draw they got as a bottom-6 team last year.

Port Adelaide have made liars of us

After round 21 there is little movement in relative rankings, but Sydney and GWS rise into our informally-defined historical “premiership” frame.

[Table: HPN team ratings, Round 21]

However, it’s the increasingly anomalous Port Adelaide, theoretically a contender, which we want to focus on here.

The popular opinion of Port Adelaide being unable to match it with other good sides is well and truly borne out when we dig into their performance on our strength ratings by opponent. We have in the past broken up statistics by top 8 and bottom 10 and used them to call Josh Kennedy (but also Dean Cox) a flat track bully back in 2015. Then in 2016 we ran an opponent-adjusted Coleman to see who was kicking the goals against tough opponents (turns out: Toby Greene and Josh Jenkins). This time we’ve looked at whole teams.

Simply put, Port Adelaide are the best side in the competition against weak opponents and they’re about as good as North Melbourne against the good teams.

Below is a chart where we have calculated strength ratings through the same method as we always do using whole-of-season data, but separate ratings are derived for matches against the top half and bottom half of the competition as determined by our ratings above.

[Chart: team ratings split by top-half and bottom-half opponents]

Most clubs, predictably, have done better against the bad sides than the good ones. Port Adelaide, however, take this to extremes. They rate as 120% of league average in their performance against the bottom nine sides. Not even Adelaide or Sydney look that good, over the year, in beating up on the weaker teams.

That’s why we’ve been rating Port so highly this year – their performance, even allowing for the scaling we apply for opponent sets, has been abnormally, bizarrely good to the extent that it’s actually outweighed and masked their weaknesses against quality teams. Their sub-97% rating against top sides is 13th in the league, ahead of only North, Carlton, Fremantle and the Queensland sides. This divergence is more than double the size of the variance for any other team.

It appears that the problem mostly strikes the Power between the arcs. Against bottom sides, their midfield strength is streets ahead of any other side at 141% of league average, meaning they get nearly three inside-50s for every two conceded. This opportunity imbalance makes their decent defence look better and papers over a struggling forward line. Against quality sides, that falls apart and they get fewer inside-50s than their opponents.

Looking elsewhere, Adelaide stand out as looking stronger against quality opposition, with their midfield and offence faring substantially better than against weaker sides – a couple of whom have, of course, embarrassed them during the year.

The Hawks and two strugglers in North Melbourne and Carlton also seem to acquit themselves better against the top sides than against their own weight class. For North, their inside-50 opportunities dry up against good sides, but they make better use of the forward entries they do get – they rate as above league average, offensively, against the top nine teams. For Carlton, unsurprisingly, it’s their stifling defence that steps up, and the same is true of Hawthorn.

St Kilda’s forward efficiency and Richmond’s defensive efficiency have also been a lot higher against top sides, but the converse is true of the two teams’ opposite lines.

At the other end of the table, Geelong, Sydney and especially the Bulldogs are the other finals contenders with the biggest worries about sustaining their output against quality opposition. Sydney’s midfield struggles to control territory, slightly losing the inside-50 battle on average against the top half of the competition while bullying weaker sides (their offensive efficiency is actually slightly higher, however). The Bulldogs and Geelong share these midfield issues, but their forward lines also struggle under quality defensive heat.

But it really is Port Adelaide who stand out here. Their output against weaker sides is really good and shouldn’t be written off – there’s obviously quality there, and they sit in striking distance of the top 4 with a healthy percentage. However, it wouldn’t be a stretch to call their overall strength rating fraudulent given its composition, and we will be regarding them with a bit of an asterisk from here. Unless they can bridge the gap and produce something against their finals peers, even a top 4 berth is likely to end in ashes.

Some Rise, Some Fall: All Are Flawed in 2017

As the 2017 Home & Away season winds to its inevitable conclusion, movement returns to the HPN Team Ratings.

The Swans are beginning to crest towards the “Premiership Contender” part of the HPN Team Ratings, which we loosely define as an overall team rating of more than 105% and individual component ratings north of 100%. After an extremely sluggish start down back, Sydney is now the third best side in the competition defensively – with a fair chance of leaping over Port into second.

We’ve mentioned this before, but the return of Dane Rampe has played a critical role in the improvement. Some defenders are versatile, some are extremely good at their job; Rampe is the rare combination of the two. His return has allowed Grundy to move to a more negating role, and taken some of the pressure off Lewis Melican, who has blossomed as a result. Rampe’s ability to cover ground and contest has allowed the other Swans half-backs a little more freedom to attack, knowing that there is a safety net behind them.

The Swans still have issues – namely in the non-Franklin, non-Papley parts of their forward line – but they are starting to approach their 2016 form.

Switching with Sydney this week is Geelong, who are a fundamentally different team without Dangerfield and Selwood. Duncan and Hawkins missing this week does not help either. The sprint towards finals has turned into a limp just as Geelong run into one of the harder parts of their schedule.

Port didn’t lose a place this week but they lost significant ground in everyone’s eyes including those of our ratings, with another loss to a top eight side on the resume. No-one doubts the raw talent of the Power forward line, but their ability to score against good defences is becoming concerning.

For that matter, on the form of the last two weeks, Melbourne look more like the Demons of 2008 than the Demons of earlier this year. The constant shifting of players around the ground has seemingly led to a loss of cohesion, with players running into each other and spoiling one another’s contests. Time is not on the Demons’ side either, and if they can’t turn it around against the undermanned Saints this week their season may be over.

Every side left in the battle for the flag this year has a flaw, or several, that may stop them from hoisting the cup. From haphazard forward delivery leading to poor conversion up forward (Richmond), to a loss of the territory battle (Eagles and Bombers), to a forward set-up that requires a side’s best midfielder to play forward for massive chunks of games (Bulldogs), each side has an Achilles heel. Even Adelaide, as we pointed out last week.

For many, the Giants present as the most evenly balanced team, but they are yet to get their best 22 on the park at the same time this year. On paper the Giants at full strength are probably the most formidable match-up – but as 2017 has shown, football isn’t played on paper. Even at full strength the Giants seem susceptible to multiple quality tall forwards and quick spreading run, such as the set-up employed so effectively by Adelaide.

Who was the best retiree: Riewoldt, Hodge, Mitchell, Thompson or Priddis?

While there were a number of interesting results and upsets last week, the HPN Team Ratings largely stayed unchanged from a ranking point of view. At this stage of the season our method of rating teams gets quite firm in its views, comparing as it does the entire season’s work of each club in order to provide a good basis for historical comparison.

Perhaps the biggest change at the top end is Geelong slowly closing the gap on the top two, who have softened a little. GWS continue to lose touch with the top end, and, missing almost their entire first-choice forward line this week, they have a hard assignment against a mostly fit Melbourne. As Matt Cowgill from The Arc/ESPN outlined this week, the Manuka match-up shapes as one of the most pivotal games this week, alongside almost every other match this round.

In related news: it’s a fantastic time to be a footy fan.

[Table: HPN team ratings]

Richmond made the biggest leap this week, from 8th into 6th, leapfrogging a disappointing Melbourne and swapping places with the Dons. West Coast are only a fraction outside the top 8, and the St Kilda match-up this week looms as a de facto elimination game for both sides.

Now onto the question posed at the top of the column.

The best of the 2017 retirees

There’s a high-calibre group of already-announced retirees, all undisputed champions of the game who nonetheless vary quite markedly in the types of achievements and qualities for which they are recognised.

This week, HPN has decided (with the help of a few friends) to look at different ways to split the careers of these five great players, and try to work out who was the best of the bunch, once all is said and done.

Team Success

Many among us (including Michael Jordan) consider a championship title to be the most relevant thing when determining who was truly the best player of a group. The goal of almost all professional sport is to win at the peak level of competition, with all else being ancillary to this pursuit.

To determine this for these five players, we have graded them on the simplest of scales: two points for a premiership, one point for a grand final loss, none for a draw (sorry Nick).

  1. (tie): Mitchell, Hodge (9 points)
  2. Riewoldt (2 points)
  3. Priddis (1 point)
  4. Thompson (0 points)

Mitchell and Hodge are tied at the top here, as a result of both being teammates during the Hawks’ ultra-successful run between 2008 and 2015. As all Saints fans can remember, St Kilda lost two Grand Finals under the captaincy of Nick Riewoldt, including one that they definitely should have won. Matt Priddis missed out on the Eagles’ 2006 premiership win, even if he was on the list at the time, but played in the 2015 loss to the Hawks. And Scott Thompson has never tasted the limelight on the last Saturday in September (or October).

Individual Awards

Brownlow Medal

Surprisingly, of this group of five players, only two have had the Brownlow Medal hung from their necks. To split any and all ties, we have used total career Brownlow Medal votes as the tiebreaker.

  1. Mitchell (1 medal, 220 votes)
  2. Priddis (1 medal, 146 votes)
  3. Thompson (155 votes)
  4. Riewoldt (149 votes)
  5. Hodge (131 votes)

It turns out that the midfielders’ award is really a midfielders’ award. At the start of the 2016 season Sam Mitchell sat in a tie for first all time (with Gary Ablett Jr) for most Brownlow Medal votes (adjusting for the crazy voting system in the mid-1970s). Mitchell had an incredibly long and consistent career, one which was often masked by the excellence of his teammates. Priddis somehow jagged the 2014 medal in what might not have been his best season, but the medal is his nonetheless.

Among all players who never won a Brownlow, Scott Thompson is one of the highest career vote-getters, behind luminaries such as Leigh Matthews, Brent Harvey, Scott West, Garry Wilson and Kevin Bartlett. That is very good company to be in, and perhaps the dreaded Victorian media bias means Thompson didn’t get the recognition he deserved through his career.

Nick Riewoldt has polled as well as almost any key position forward in history, although he peaked at 17 votes in any one year. And Luke Hodge, who often did his best work off a half-back flank, was frequently overlooked by the umpires on Brownlow night in favour of star teammates.

Club B&F
  1. Riewoldt (7 wins)
  2. Mitchell (5 wins)
  3. (tie) Priddis, Hodge, Thompson (2 wins)

Each club votes differently and may judge its best and fairest award on different criteria, but the awards are still a good way to see how clubs value their own players. All five players took home at least two club champion awards, but Riewoldt is way ahead of the pack with seven.

All-Australian
  1. Riewoldt (five-time AA, three-time AA squad)
  2. Mitchell (three-time AA, four-time AA squad)
  3. Hodge (three-time AA, two-time AA squad)
  4. Priddis (one-time AA, two-time AA squad)
  5. Thompson (one-time AA, one-time AA squad)

Riewoldt stands alone here again, with his performances up forward regularly being recognised as being the best in the game. Thompson suffers here from the glut of elite midfielders that were in the league recently.

Statistics

As we have alluded to in recent weeks, HPN has spent the last year developing a player value system named PAV (after Matthew Pavlich). It is derived entirely from publicly available stats on afltables. We have been teasing it for the past few weeks, and we will publish the methods and formulas after the season is wrapped up and we have some time on our hands.

But for now, we can look at PAV (which is determined by a player’s contribution to a team’s effort in three areas of the ground, weighted by the strength of the team in that area that year) for each of the retirees. Here’s the data and graph for the five players across their careers.

[Chart and table: career PAVs for the five retirees]

For context: a perfectly average team will have 300 PAVs across its list in a given year. A season above 20 is generally a sign of All-Australian contention (depending on position), while a PAV north of 12 generally marks an average contributor. Seasons of 25 PAV or more are relatively rare and outstanding.

Peak PAV
  1. Hodge
  2. Riewoldt
  3. Mitchell
  4. Thompson
  5. Priddis

According to PAV ratings, not only was Luke Hodge’s 2005 season the best single season by any of the retirees, but his 2010 and 2006 seasons were a distant second and third, ahead of any other player-season here. Unlike Brownlow Medal voting, PAV is more agnostic when it comes to rating the value and impact of defenders and forwards, because it assigns values for all three parts of the ground and sums them. This is demonstrated by the relatively high placings of Hodge and Riewoldt.

Below are the component ratings for Hodge, Riewoldt and Mitchell, showing the relative contribution of midfield, offence and defence ratings to each season’s total. Note the shifting roles played by Hodge over the years as defence or midfield contribution rises and falls, compared to the purer midfield and forward roles of Riewoldt and Mitchell.

Riewoldt’s best year, his 2004 season, saw him walk away with multiple media and other voting awards for best player, but he was stiffed by the umpires in the Brownlow (PAV had him as the 3rd best player that year, behind Judd and Akermanis).

In their Brownlow years of 2012 and 2014, PAV rated Mitchell and Priddis as the 12th and 13th most valuable players in the league respectively. Mitchell’s Brownlow was of course the 2012 medal, awarded in retrospect. 2012 was also Thompson’s best year; he was just shaded by Mitchell, rating 13th. We should note, however, that many of these differences are fairly minimal, and since PAV stands for “Player Approximate Value”, the exact order is not necessarily meaningful when scores are similar – 21.8 versus 21.6 is a very minimal difference, and could even come down to the mistaken compilation of a given statistic.

Incidentally, Mitchell’s eventual co-medallist Trent Cotchin was 4th for PAV in 2012, and the ultimately ineligible Jobe Watson rated 5th.

Career PAV
  1. Mitchell
  2. Hodge
  3. Riewoldt
  4. Thompson
  5. Priddis

We have imputed a final 2017 value based on the season to date – these may shift with the final few games of each career, but the shift shouldn’t be significant since most of the season has been played. The margins between the top three are quite slim, but the results should hold.

With the shortest career of the bunch, Priddis was always going to struggle on total career value produced; even so, he still produced more than the average value of a number one draft pick. Of the five players, Thompson got off to the slowest start, but had the longest stretch of “good-to-great” seasons, with nine straight years in which he should have been in All Australian squad contention. That slow start, along with the longest tail of the five, meant the other three greats shaded him.

Subjective Ratings

For this measure, we asked three of our favourite football writers/analysts to rank the players from one to five, on whatever grounds or method they chose. They had no idea of our work above. They are:

Collectively these three ranked them:

  1. Mitchell
  2. Riewoldt
  3. Hodge
  4. Priddis
  5. Thompson

But we note that two of the three are West Coast supporters, so take the last two spots with a small grain of salt. All three surveyed had Mitchell and Riewoldt at 1st and 2nd in that order – as did the other dozen or so people we asked in our day-to-day lives.

In summary

We seem to keep coming back to there being two clear tiers here – Mitchell, Hodge and Riewoldt in some order, then Priddis and Thompson. Mitchell comes out as the closest thing to a consensus “best”, but Riewoldt isn’t far behind.

[Table: combined retiree rankings across all measures]

The biggest outlier method – even more so than team success – turns out to be the Brownlow Medal which we have no compunction about saying quite simply undervalues both Riewoldt and Hodge.

By the same token however, when we look at various uses of our PAV, it becomes apparent that the inclusion of Priddis and Thompson in this comparison isn’t spurious and they aren’t really out of place, even if their recognition as individual greats hasn’t been as forthcoming. As we noted, a potential Victorian media bias – which has foundations in media theory and international sporting debate – may have an impact on the public perceptions of non-Victorian based players.

Something we like about the PAV approach as we’ve tested and analysed it is the way it identifies lesser-lights who had careers or seasons which were comparable to better recognised and more widely noted achievements. That has certainly happened here.

Thompson had a very long career of consistently high value to his (second) club while Priddis, a late starter, still came in and performed at a similar level almost immediately. Every one of these five players had careers which outperformed the expectations of a number one draft pick and it’s no insult to say that Priddis or Thompson are fourth or fifth among this group.