The new month brings a new metagame update. And this time, there's nothing disruptive to report. No bannings, no weird and still-unexplained data gaps; just a perfectly normal bit of data gathering. Which means that I'll be delivering a straightforward metagame update. Which will be nice, since there's another Modern Horizons coming, and that might throw everything into chaos. It also may not, but doomsaying gets more hits than caution.
The data's down from January, but significantly up from March: 515 decks in April, almost 100 more than March's 420 but short of January's 552. That's still a respectable size, but nothing spectacular. It also feels oddly low, as this was the first month since December to feature a Preliminary with 5 rounds. I believe that the Showcase Challenge pushed out at least one normal Challenge, leading to the lower total, but I can't prove it. It's the only explanation I've got, since April was another All-Access month, which should have increased MTGO play. On the other hand, there may be nothing wrong at all and MTGO play is simply down, whether due to fatigue or something happening on Arena.
I'll also note that I didn't include any non-Wizards events this month. I didn't need them, unlike in March, and I also didn't see any that appeared to be equivalent to a Challenge or even a Preliminary. If I missed something, do let me know. I don't know what I don't know, after all.
To make the tier list, a given deck has to beat the overall average population for the month. The average is my estimate of how many results a given deck “should” produce on MTGO. Being a tiered deck requires being better than “good enough;” in April the average population was 7.92, meaning a deck needed 8 results to beat the average and make Tier 3. This is a pretty standard average as these go. Then we go one standard deviation above average to set the upper limit of Tier 3 and the cutoff for Tier 2. The STDev was 10.85, which means Tier 3 runs to 19, and Tier 2 starts with 20 results and runs to 31. To make Tier 1, a deck therefore needs at least 32 results.
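For anyone who wants to follow along at home, the cutoffs can be reproduced with a quick script. This is a sketch of my reading of the arithmetic (beat the average to make Tier 3, then add a standard deviation and round up for each later cutoff); the function name and structure are my own shorthand, nothing official.

```python
import math

def tier_cutoffs(mean: float, stdev: float) -> dict:
    """Minimum result counts for each tier.

    Tier 3 starts at the first whole number that beats the average;
    each later tier starts one above (previous start + one STDev),
    rounded up.
    """
    tier3 = math.floor(mean) + 1          # must beat the average
    tier2 = math.ceil(tier3 + stdev) + 1  # one STDev past Tier 3's start
    tier1 = math.ceil(tier2 + stdev) + 1  # one more STDev for Tier 1
    return {"tier3": tier3, "tier2": tier2, "tier1": tier1}

# April's population stats: average 7.92, STDev 10.85
print(tier_cutoffs(7.92, 10.85))  # {'tier3': 8, 'tier2': 20, 'tier1': 32}
```

Feeding in the power-ranking stats from later in this article (average 14.28, STDev 20.29) yields 15, 37, and 59, matching the cutoffs quoted there, so the same rule appears to drive both lists.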
I've been approached a few times about using a confidence interval instead. That's what the old system used, and it is somewhat more statistical. I'm not opposed in theory, and will probably have to use the confidence interval once paper events come back. For now though, it's a bit more work for no real gain. A couple extra decks may sneak into Tier 3 depending on the data, and the exact tier composition will likely change, but the order of the decks will not, and that's more important.
The Tier Data
April's data being more complete than March's means that the individual decks were up from 61 to 65. Not a large increase, but the data isn't back to January's level, much less earlier months. Along with the total archetypes increasing, the tiered decks are up from 17 to 20, again just shy of January's mark. I'm constantly wondering if the wild swings in the number of archetypes are indicative of actual metagame shifts or player bias. I'm hoping it's the former because that's the whole point of this exercise. However, I can't discount players simply preferring certain decks regardless of the metagame nor that they're recursively metagaming. MTGO's competitive players are a pretty small and self-selecting group, after all. Not at all impossible that this is just measuring the biases of a small population. But there's nothing better at the moment. Hopefully that will change soon.
| Deck Name | Total # | Total % |
|---|---|---|
| Jund Death Shadow | 34 | 6.60 |
| Niv 2 Light | 16 | 3.11 |
So, yeah, Heliod Company was on top. By a lot. Enough to be statistically and convincingly Tier 0. I actually checked to see whether it qualified as an outlier, and the results were inconclusive. The typical mathematical measurements put all of Tier 1 into outlier territory, the narrower ones put Heliod right on the edge, and the regressions said yes or no depending on how I entered the data. The best-fit lines pegged Heliod as a clear outlier, but on a hunch I checked, and it sat right on the exponential decay line. And when I removed Heliod Company from the data, the average and STDev didn't change enough to make a difference. My intuition says that yes, Heliod Company was an outlier in April, but intuition isn't evidence. I do, however, have evidence of something odd about Heliod from the other metagame measurements.
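To illustrate the sort of check involved: one of the "typical mathematical measurements" for outliers is the 1.5 × IQR rule, which flags anything more than one and a half interquartile ranges above the third quartile. The snippet below is a generic sketch with made-up example numbers, not my actual April data.

```python
import statistics

def iqr_outliers(counts: list[int]) -> list[int]:
    """Flag values falling outside 1.5 IQRs of the middle 50% of the data."""
    q1, _, q3 = statistics.quantiles(counts, n=4, method="inclusive")
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [c for c in counts if c < low or c > high]

# Hypothetical deck counts, purely for illustration
print(iqr_outliers([5, 6, 7, 8, 30]))  # [30]
```

The catch, as noted above, is that different cutoff multipliers and different quartile conventions can move a borderline deck in or out of outlier territory, which is exactly why my results were inconclusive.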
Elsewhere in Modern
So, what's happening with the decks that aren't Heliod Company? Prowess, primarily. Izzet Prowess was the second-best deck, followed closely by Jund Shadow, a deck with many Prowess elements. In Tier 3 there's Mono-Red Prowess and the newly minted Boros Prowess. There were also a few Rakdos and Grixis versions that didn't make the list. Put all the Prowess decks together and they'd exceed even Heliod Company by quite a bit. I'm inclined to think that the real power in the metagame is Monastery Swiftspear, not Heliod, Sun-Crowned. Prowess being so popular means that Eldrazi Tron is back in a big way. The deck's core strategy isn't particularly good against Prowess, but E-Tron is the only deck maindecking Chalice of the Void, which is very good against Prowess. This is a typical fluctuation: E-Tron always does well when Prowess is up and falls off as soon as Chalice stops being good.
Speaking of aggregating decks, if I put the 5-Color decks together they would have just missed Tier 1 with 31 results. This is not an entirely ridiculous thing to do as 5-Color Scapeshift is Niv 2 Light, but without Niv-Mizzet Reborn, 80 cards, and Yorion, the Sky Nomad. And there was a very sudden switch between the decks. Up until the 16th, Niv was on track for Tier 1 placement. Then it seems players just got tired of playing 80 cards and dropped down to a more streamlined Bring to Light package serving Scapeshift. By the 22nd, Niv stopped appearing at all, and only Scapeshift remained. Had Niv remained the 5-Color deck or the diet begun earlier, then one of the decks would have been more than mid-Tier 3. Something to watch.
Tracking the metagame in terms of population is standard practice. However, how do results actually factor in? Better decks should also have better results. In an effort to measure this, I use a power ranking system in addition to the prevalence list. By doing so I measure the relative strengths of each deck within the metagame. The population method gives a deck that consistently just squeaks into the Top 32 the same weight as one that regularly Top 8's. A power ranking rewards good results, moving the winningest decks to the top of the pile and better reflecting their metagame potential. Of course, the more popular decks will necessarily earn more points, but weighting by finish still separates the merely popular from the genuinely successful.
Points are awarded based on the population of the event. Preliminaries award points for record (1 for 3 wins, 2 for 4 wins) and Challenges are scored 3 points for Top 8, 2 for Top 16, 1 for Top 32. If I can find them, non-Wizards events will be awarded points according to how similar they are to Challenges or Preliminaries. Super Qualifiers and similar level events get an extra point if they’re over 200 players, and a fifth for over 400 players. There were 2 events that awarded 4 points in April and one which awarded 5 points. And that Super Qualifier had an outsized impact on the data.
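The scoring can be sketched as a small function. The event labels and function name are my own shorthand; note also that, working backward from the 25-point Super Qualifier performance discussed later in this article, the size bonus appears to apply to every scoring finish, which is the assumption baked in below.

```python
def score_finish(event: str, finish: str, players: int = 0) -> int:
    """Points for one scoring finish, per the system described above.

    Challenges: Top 8 = 3, Top 16 = 2, Top 32 = 1.
    Preliminaries: 4-0 = 2, 3-1 = 1.
    Large Challenge-style events (Super Qualifiers, etc.): +1 point
    over 200 players, +2 over 400 -- assumed to apply per finish.
    """
    if event == "preliminary":
        base = {"4-0": 2, "3-1": 1}[finish]
        bonus = 0  # Preliminaries get no size bonus
    else:  # Challenge-style events
        base = {"top8": 3, "top16": 2, "top32": 1}[finish]
        bonus = 2 if players > 400 else 1 if players > 200 else 0
    return base + bonus

# Heliod Company at the April 5 Super Qualifier (400+ players):
# three Top 8s, one Top 16, two Top 32s
finishes = ["top8"] * 3 + ["top16"] + ["top32"] * 2
print(sum(score_finish("challenge", f, players=401) for f in finishes))  # 25
```

Under this reading, the 5-point events are simply Challenges that broke 400 players, which matches the one such event April produced.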
The Power Tiers
The total points in April were up from March as I'd expect, from 760 to 928. Just like the population data, that's pretty average. The average points were 14.28, so 15 makes Tier 3. The STDev was 20.29, up noticeably from March, so Tier 3 runs to 36 points. Tier 2 starts with 37 points and runs to 58. Tier 1 requires at least 59 points. The new Jeskai Underworld Breach deck, Mentor Breach, which snuck onto the population tier was mainly a 3-1 Preliminary deck and so didn't get the points necessary to make the power tier. There was nothing to replace it, so this tier list is smaller.
| Deck Name | Total Points | Total % |
|---|---|---|
| Jund Death Shadow | 66 | 7.11 |
| Niv 2 Light | 34 | 3.66 |
I'm tempted to copy-paste everything I said about Heliod's absurd lead from the population section. It's to be expected that the most popular deck by a lot would also win the most points by a lot.
What's more interesting is how the rest of the list has changed. E-Tron was kicked out of Tier 1 and UW fell into Tier 3, indicating decks that are popular but not especially successful. UW is just under the cut to Tier 2, but considering that it was just over the line for Tier 2 in population I think the point stands. These are predatory decks and when their prey is sparse, they don't do well.
There was so much turmoil in Tier 3 that I can't really track it all. However, I find it interesting that Niv 2 Light has substantially more points than 5-Color Scapeshift considering that Niv only had one more deck place in April. This suggests that it was the more powerful deck, or at least the deck that better rewarded gifted or dedicated pilots, which muddies the waters for me about it being replaced. I will note that Niv is much harder to pilot than the newer version.
Average Power Rankings
Finally, we come to the average power rankings. These are found by taking each deck's total points and dividing by its total number of results, yielding points per deck. I use this to measure strength against popularity. Measuring deck strength is hard; the power rankings certainly help, and the averages serve to show how justified a deck's popularity is.
However, more popular decks will still necessarily earn a lot of points. This is where the averaging comes in. Decks that earn a lot of points simply because they post a lot of results will score worse than decks that win more events, indicating which deck actually performs better. A high average indicates lots of high finishes, while a low average results from mediocre performances spread over a large population. Lower-tier decks typically do very well here, likely because their pilots are enthusiasts. So be careful about reading too much into the results.
The Real Story
When considering the average points, the key is to look at how far off a deck is from the baseline stat (the overall total points divided by the overall population). The closer a deck's average to the baseline, the more likely it is to be performing at its “true” potential; a deck exactly at the baseline performs exactly as well as expected. The further a deck sits from the baseline, the more it under- or over-performs. On the low end, the deck's placing was mainly due to population rather than power, which suggests it's overrated. A high-scoring deck is the opposite.
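The baseline arithmetic is simple enough to check yourself. A quick sketch using April's overall totals from this article (928 points across 515 decks) and Jund Death Shadow's numbers (66 points from 34 results):

```python
def average_power(points: int, decks: int) -> float:
    """Points per deck, rounded to two places as in the tables."""
    return round(points / decks, 2)

# April's overall totals: 928 points across 515 decks
baseline = average_power(928, 515)
print(baseline)  # 1.8

# Jund Death Shadow: 66 points from 34 results
deck_avg = average_power(66, 34)
print("over-performing" if deck_avg > baseline else "under-performing")
```

Since 66 / 34 ≈ 1.94 sits above the 1.80 baseline, Jund Death Shadow lands on the over-performing side of the ledger.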
| Deck Name | Average Points | Tier |
|---|---|---|
| Niv 2 Light | 2.13 | 3 |
| Jund Death Shadow | 1.94 | 1 |
The baseline is up from March, which is consistent with the higher population and point totals. As usual, the top slots are occupied mainly by Tier 3 decks. However, Niv 2 Light was very close to Tier 2, which further muddies the waters around its apparent replacement, especially when 5-Color Scapeshift is only just above baseline. Burn being the second-best deck was also surprising, but makes sense in retrospect since Eidolon of the Great Revel is quite strong against Prowess. I'd also like to call attention to Boros Prowess's utterly abysmal showing. The deck is not living up to its hype.
The Truth about Heliod
However, the big story is the one I've been building up to this entire article. Heliod Company, the deck that I suspected to be an outlier from the population and power rankings, didn't make the first page of the average power chart. It's a thoroughly medium performance, and it slotted in just above the baseline. By itself, this would indicate that Heliod was only on top due to insane popularity. However, I had a hunch to investigate, and it proved correct: the April 5 Super Qualifier had an outsized influence on Heliod Company's performance.
Heliod Company put three pilots into the Top 8 of that tournament, one more into the Top 16, and two into the Top 32. That's 25 points from one event, by far the best single-day performance for a deck since I started this new system. It was also the absolute high point of the month for Heliod Company. After that the deck lost a lot of steam and its average points began falling: rather than lots of 3-point performances, it was gathering single points. I don't know why that happened, but it absolutely happened. This made me suspect that Heliod isn't really an outlier so much as that event was.
So I tested my theory by making a copy of the overall data and removing every result from the April 5 Super Qualifier. My suspicions were confirmed. All the top decks are affected by the cut, but at 82 points, Heliod Company stops being a potential outlier. Izzet Prowess is still number two with 65 points, which remains a big gap, though not necessarily an atypical one. The more significant finding was the average power. Every deck lost a few points and the baseline fell to 1.71, but Heliod's average power fell to 1.54, the second-worst result in the data. It's clear that Heliod is a winners' deck: a lot of its success comes down not to the deck itself, but to who's been playing it. As such, I'd worry more about the consistently high performances of Jund Shadow or Izzet Prowess.
A Snapshot in Time
That's it for April's update. MTGO's metagame continues to churn, but I'm starting to tease out the evidence that it isn't actually representative. Hopefully, the pandemic will be sufficiently under control that paper Magic can start to return in May, just in time for Modern Horizons 2. And then we get to see how different the metagame truly is.