Expected Value for NRL Supercoach

Preface: this was posted on NRL SupercoachTalk in early 2019.

Part of the allure of NRL Supercoach for many, including myself, is the fact that it's numbers based. You can’t argue with numbers, unless you’re one of those obsessives who complain about review scores (ONLY AN 8.8!?).

Thanks to this, we can unequivocally state that Damien Cook is better than Michael Lichaa as we have the numbers to prove it. But what if we wanted to prove that someone was scoring better than the average player in the same spot? Could we quantify whether a player outperformed their own situation, whether they were just making up the numbers or whether they were actually a negative on the field?

Enter the first draft of my Expected Value model for NRL Supercoach. This model looks at the last five years of Supercoach scores, specifically individual player score, opponent, minutes played, and venue (home/away). The good news is that points and minutes played are positively correlated: the more minutes you play, the more points you score. We then use that relationship to work out an expected score for each player per game.
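As a rough illustration of that first step, here's how the minutes-to-points relationship could be sanity-checked from a per-game scores table (a sketch only; the file name and column names are hypothetical stand-ins, not the actual dataset behind this model):

```python
import pandas as pd

# Hypothetical per-game dataset: one row per player per game, with the
# fields described above (score, minutes, position, opponent, venue).
games = pd.read_csv("supercoach_games_2014_2018.csv")

# Overall correlation between minutes played and Supercoach score.
print(games["minutes"].corr(games["score"]))

# The same check split out by position.
print(games.groupby("position").apply(lambda g: g["minutes"].corr(g["score"])))
```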

The basic idea is: for X number of minutes in a specific position against a certain team at home or away, what would the average expected Supercoach score be? This isn’t to be confused with a projected score, which looks at a specific player; this looks at what the average player from the same position would have scored given the same minutes, opponent and venue.
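In code terms, a bare-bones version of that idea might look something like this (a sketch under my own assumptions: the column names, the 10-minute bucketing of game time and the plain mean are illustrative, not the exact method used here):

```python
import pandas as pd

def build_expected_table(games: pd.DataFrame) -> pd.DataFrame:
    """Average score across all players for each combination of position,
    opponent, venue and minutes band -- i.e. what the 'average player'
    scored in that exact situation."""
    games = games.copy()
    # Bucket minutes into 10-minute bands so each group holds enough games.
    games["minutes_band"] = (games["minutes"] // 10) * 10
    return (
        games.groupby(["position", "opponent", "venue", "minutes_band"])["score"]
        .mean()
        .rename("expected_score")
        .reset_index()
    )

def expected_score(table, position, opponent, venue, minutes):
    """Look up the expected score for one game situation."""
    band = (minutes // 10) * 10
    match = table[
        (table["position"] == position)
        & (table["opponent"] == opponent)
        & (table["venue"] == venue)
        & (table["minutes_band"] == band)
    ]
    return float(match["expected_score"].iloc[0]) if not match.empty else None
```

A player's per-game difference is then just their actual score minus that lookup, and their expected average is the mean of those lookups across the games they played.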

To get started, here’s the top 17 by score difference for 2018 (among players with five games or more) based on this model:

Top 17

| Player | Games | Mins/Game | Supercoach Average | Expected Average | Difference |
| --- | --- | --- | --- | --- | --- |
| Tom Trbojevic | 22 | 77.9 | 73.9 | 53.7 | 20.2 |
| James Tedesco | 22 | 79.5 | 73.6 | 54.0 | 19.6 |
| Damien Cook | 22 | 79.1 | 77.6 | 59.3 | 18.3 |
| Latrell Mitchell | 22 | 77.4 | 63.6 | 45.5 | 18.1 |
| Blake Ferguson | 24 | 80.0 | 65.4 | 47.4 | 18.0 |
| Valentine Holmes | 23 | 80.0 | 67.4 | 51.1 | 16.3 |
| Kalyn Ponga | 20 | 76.5 | 65.7 | 50.6 | 15.1 |
| Martin Taupau | 24 | 50.8 | 64.7 | 50.0 | 14.7 |
| Esan Marsters | 24 | 80.3 | 60.8 | 46.7 | 14.1 |
| Robert Jennings | 18 | 76.1 | 56.8 | 43.3 | 13.6 |
| Nathan Cleary | 15 | 76.9 | 61.7 | 48.8 | 12.9 |
| Waqa Blake | 14 | 79.6 | 58.8 | 46.0 | 12.8 |
| Jason Taumalolo | 23 | 63.7 | 70.3 | 57.5 | 12.7 |
| Shaun Johnson | 18 | 79.5 | 64.4 | 52.0 | 12.5 |
| Jai Arrow | 21 | 54.5 | 63.9 | 51.5 | 12.4 |
| Rhyse Martin | 14 | 74.6 | 67.8 | 55.5 | 12.3 |
| Manase Fainu | 9 | 67.3 | 62.2 | 50.6 | 11.6 |

This instantly passes the eye test for me. The top six were unequivocally the best going around last season, and if you didn’t have at least five of them you were in trouble. The rest of the list is filled with some guns and keepers and a few players who finished the season strongly. Jarome Luai would have placed inside the top 10 but only played four games.
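For anyone curious, both this table and the bottom-17 one below are essentially just a sort on the season-level difference; something along these lines (again a sketch with assumed column names, and assuming the per-game expected scores have already been joined back onto the games table):

```python
# One row per player: games played, minutes per game, actual average,
# expected average, and the difference between the two.
season = (
    games.groupby("player")
    .agg(
        games_played=("score", "size"),
        mins_per_game=("minutes", "mean"),
        sc_average=("score", "mean"),
        expected_average=("expected_score", "mean"),
    )
    .query("games_played >= 5")
)
season["difference"] = season["sc_average"] - season["expected_average"]

top_17 = season.sort_values("difference", ascending=False).head(17)
bottom_17 = season.sort_values("difference").head(17)
```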

Now let’s look at the bottom 17 by difference over the same period:

| Player | Games | Mins/Game | Supercoach Average | Expected Average | Difference |
| --- | --- | --- | --- | --- | --- |
| Bradley Abbey | 9 | 77.7 | 21.8 | 46.8 | -25.1 |
| Jack Bird | 8 | 79.6 | 25.8 | 49.0 | -23.2 |
| Lachlan Coote | 9 | 78.0 | 36.7 | 55.3 | -18.6 |
| Brock Lamb | 8 | 63.8 | 24.5 | 42.9 | -18.4 |
| Javid Bowen | 6 | 65.8 | 20.3 | 38.5 | -18.1 |
| Christian Crichton | 18 | 80.0 | 27.6 | 44.9 | -17.3 |
| Cory Denniss | 11 | 80.0 | 30.5 | 47.7 | -17.1 |
| Bradley Parker | 17 | 74.4 | 26.2 | 43.2 | -17.1 |
| Bevan French | 18 | 74.9 | 25.9 | 43.0 | -17.1 |
| Will Matthews | 15 | 50.0 | 23.5 | 40.6 | -17.0 |
| Akuila Uate | 15 | 80.6 | 29.5 | 46.3 | -16.8 |
| Aidan Sezer | 18 | 76.7 | 34.8 | 51.6 | -16.8 |
| Jack Cogger | 12 | 80.0 | 33.2 | 49.6 | -16.4 |
| Benji Marshall | 21 | 79.3 | 33.1 | 49.5 | -16.4 |
| Will Smith | 12 | 54.1 | 29.3 | 45.1 | -15.8 |
| Nick Meaney | 5 | 80.0 | 37.6 | 53.3 | -15.7 |
| Kane Elgey | 11 | 80.4 | 34.7 | 50.2 | -15.5 |

Again, the bottom seventeen players pass the eye test. Brad Abbey didn’t live up to the Supercoach hype that started in 2017, Jack Bird had a horrible season for many reasons, Christian Crichton was very ordinary as a cash cow, and as an Eels fan the less said about Bevan French the better (not that he was the only Eel to disappoint in 2018). If you’re trying to talk yourself into Sezer or Elgey as a point of difference, you may need to think again.

While the top table might suggest that guns always outperform their expected score, there are a few cases where that doesn’t happen. Jake Trbojevic actually scored 0.3 lower than his expected score. This isn’t really a negative, as his expected score was 71 and he was still one of the top players in the game. Locks generally have a higher workload and play longer minutes, resulting in a higher expected score.

Sam Burgess didn’t set the world on fire last year and was 3.7ppg down on his expected scores. His last four weeks of the season were atrocious, 15.1ppg down on expected, which would have placed him just outside the bottom seventeen above. You could argue that Burgess averaged fewer minutes over those four weeks – 66 compared to his season average of over 70. One of the advantages of using expected value, though, is that it adjusts for minutes played: you’re comparing actual output versus expected output in the same number of minutes. So while Sam’s minutes were down, his performances were still poor compared to a typical lock playing those minutes.

Josh Papalii was reasonably close to his expected average, with just a 4.6ppg difference on his 63.7 average. Most of that came from averaging 12 fewer than expected in four starts in the second row for the Raiders; starting at lock he was 7.0ppg above his likely score.

Even with a spectacular month of scores in the middle of the season, Ryan James didn’t get close to his expected average. His 62.6 points per game fell below his expected output of 69.3, which probably highlights just how ordinary he was when he didn’t cross the line.

To illustrate this further, let’s take the specific example of David Nofoaluma in Rounds 6-10 last year. His opponents were Manly (away), Newcastle (home), Parramatta (away), New Zealand (away), and North Queensland (home). Here’s how he scored and what would have been expected of a winger in the same situation:

| Opponent | Position | Venue | Minutes | Expected Score | Actual Score |
| --- | --- | --- | --- | --- | --- |
| Manly | Wing | Away | 80 | 48 | 35 |
| Newcastle | Wing | Home | 80 | 56 | 35 |
| Parramatta | Wing | Away | 80 | 49 | 49 |
| New Zealand | Wing | Away | 70 | 36 | 32 |
| North Queensland | Wing | Home | 80 | 43 | 61 |

Looking at this run of games, Nofoaluma started out poorly, scoring below the expected value of an 80 minute winger in the games against Manly and Newcastle. However, he was dead on his expected value against the Eels and close to it in a sin-bin-affected game against the Warriors, where fewer minutes meant a lower expected score. Finally, he knocked out a solid 61, 18 above what the average winger would score in 80 minutes against the Cowboys.

Over this period, his expected average was 46.4 but Nofoaluma only averaged 42.4, indicating he slightly underperformed, with his price bottoming out for the year after the Cowboys game at $419,200. Looking at the season as a whole, his Supercoach average was 51.7 and his expected average was 44.3, a differential of +7.4 which ranked him seventh among wingers who played five or more games last year. If you jumped on him after that Cowboys game, it most likely paid off, as he averaged 56.4 for the back half of the season, over 12ppg above expected (for the sake of this example let’s ignore the month he spent on the sidelines after his Round 13 injury).
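To make that arithmetic explicit, here's the same five-game stretch as a tiny worked example (the numbers are straight from the table above):

```python
# Rounds 6-10, 2018: (expected, actual) score per game from the table above.
rounds_6_to_10 = [(48, 35), (56, 35), (49, 49), (36, 32), (43, 61)]

expected_avg = sum(exp for exp, _ in rounds_6_to_10) / len(rounds_6_to_10)  # 46.4
actual_avg = sum(act for _, act in rounds_6_to_10) / len(rounds_6_to_10)    # 42.4

print(round(actual_avg - expected_avg, 1))  # -4.0: slightly under expected
```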

If you’re still awake, I know what you’re thinking. “Carlos, I’ve read over 1200 words of this so far and it sounds like a lot of work to tell us that guns are guns and plodders are plodders”. And you’d be right. Sometimes confirming what you already know is the most valuable thing to get out of any research or analysis. It also allows us to quantify just how much better these guns have been performing.

Additionally, this model has confirmed that Waqa Blake was a low-key keeper last season after a slow start and injury. Robert Jennings too, although if you took out his three 100+ performances he was basically average, scoring 1.0ppg less than expected. Sounds like the very definition of rocks and diamonds.

It confirmed that whilst Sam Burgess was a popular pick, his season wasn’t really that impressive, and he probably wouldn’t have been in many squads if he weren’t dual position as there were plenty of better options. And it also confirmed that Robbie Rochow (-9.9ppg) was only useful until he stopped making money, and even then, he wasn’t that useful.

But there is some use outside of confirming keepers and spuds. It can help to further identify players who might be worth taking a punt on this year because they’re undervalued (as noted with Nofoaluma above), or players who should be avoided due to low output for their position. Let’s take a look at a few noteworthy players for 2019.

Joe Stimson played 13 games in the second row for Melbourne last year, averaging 73.8 minutes. His expected average over those games was 52.7, and his actual average was 53.6. That’s without any attacking stats either. He’s very hard to ignore priced at under $400k and possibly playing big minutes.

With dual position eligibility this year, Elliot Whitehead has raised some interest from Supercoaches, but is he worth a spot in your 17? He averaged a tick under 80 minutes in 20 games in the second row last year, for an average of 50.7. Not bad considering the average Supercoach player scores about 44 points a game, right? Unfortunately, Whitehead’s expected average from those games was 57.6. And that’s not counting the two games he played at lock, where he averaged 53 against an expected average of 68. He’ll most likely be a solid addition to your CTW position but nothing more, and the cash is probably better spent elsewhere.

New Parramatta recruit Shaun Lane is an interesting proposition, averaging 65 per game from 13 starts in the backrow last year. That was well above the 48.6 that was expected of him in those games. He even outperformed in his stints on the interchange bench, averaging 40.3 against 36.3 from players in the same situation. At a shade over $500k he looks very tempting, but is he likely to continue this form with the Eels?

Joe Ofahengaue could be an interesting mid-pricer this year. Unfortunately, he underperformed in three games in the front row (45.7 vs 48.5) and probably won’t get the lock position where he excelled (five games at an average of 70.4 vs an expected 60.2). That’s too small a sample size to draw any real conclusions from, but he’s definitely one to keep an eye on.

In putting this together, I’ve discovered there are some apparent drawbacks to this method and some areas to improve on moving forward. The top guns – like Cook, Trbojevic and Tedesco – all score significantly higher than their expected scores. Cook in particular didn’t have an expected value higher than 68 all of last season and had only eight games where he didn’t score at least 10 above his expected score.

This is mainly because the model compares Cook’s performances against those of an average hooker in the same situation, and as we incorporate more games the expected scores will move closer to the overall average. This makes it impractical to use for projecting scores without further modifications to account for players who consistently outscore the average.

Another downside of this method is that it’s retrospective and treats all data points as equal. That’s mostly fine, as the data doesn’t (usually) lie, but someone facing the Roosters of 2018 probably shouldn’t be weighted the same as someone facing the Roosters of 2016. Any future revision would probably down-weight seasons the further they are from the current one.
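One possible fix is a simple recency weight on each season when building the expected-score averages, for example an exponential decay (a sketch only; it assumes each game row carries its season, reuses the minutes_band column from the earlier sketch, and the 0.7 factor is an arbitrary illustration rather than a tuned value):

```python
CURRENT_SEASON = 2019
DECAY = 0.7  # arbitrary illustrative decay factor

def season_weight(season: int) -> float:
    """Older seasons contribute less to the expected-score averages."""
    return DECAY ** (CURRENT_SEASON - season)

# Weighted mean per situation instead of the plain mean used earlier.
games["weight"] = games["season"].apply(season_weight)
expected = (
    games.groupby(["position", "opponent", "venue", "minutes_band"])
    .apply(lambda g: (g["score"] * g["weight"]).sum() / g["weight"].sum())
    .rename("expected_score")
    .reset_index()
)
```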

The final issue is that I’m probably closer to a numerologist than a statistician, and you could poke some holes in this model. Please do, I’d love to improve it and there’s still work that could be done. Feel free to tear it to shreds in the comments below.

Like every piece of Supercoach analysis, it’s not overly useful in isolation and shouldn’t be the only thing you use to make a decision for your team. But as a proof of concept, I feel this has potential as an additional tool to help evaluate players and try to put in context just how good or poor performances are rather than letting them exist in a vacuum.