First Tuesday of a new month? Must be time for the metagame update. Yes, I realize that there wasn't one last time, but those were special circumstances. A major ban invalidated more than half of February's data, and there's really nothing statistically valid in ten days of results. Of course, March wasn't much better. Thanks to a glitch, Wizards failed to post results for thirteen days. This did affect the data, but I found some workarounds.
I also have a public service announcement: Mythic Event tokens are back on MTGO. For those unaware, these $25 tokens unlock every card on MTGO for a limited time (except a few promos). This will last until April 14 and is an excellent chance to really explore Modern. Play Money Tribal or that weird deck you'd never pay money for just to see how it works. Or jump on the bandwagon and play the best deck. In any case, this is the best chance to brew and mess around with everything Modern can offer. Or be me and fail to learn Vintage. I understand less about it after a week of playing than I did when I started.
The Big Hole
So first of all, I need to address the aforementioned hole. For reasons Wizards never explained, no decks from MTGO were posted from March 11 until March 23. Thirteen days is a huge chunk of data to lose. I was genuinely worried that all of March's data would be lost and I'd have to skip the update just like in February. Even when the glitch was fixed, the gap was large enough to call the data's validity into question. Fortunately, it didn't come to that, as others were as frustrated as I was. u/bamzing on reddit apparently went onto Twitter to track down players from the Challenges and find out their decklists, which I would never have even considered. Their work means that at least some of the lost data has been recovered.
I also went on a stroll through Google results, looking for private MTGO Modern tournaments that were posting results. I found a few that seemed both competitive and open enough to use, and they fleshed out the missing weeks at least somewhat. The overall results are still well down from January, but at least I have enough to feel confident presenting the data. Just keep in mind that this data isn't as robust and descriptive as what I'm used to.
A hole in the data is a big problem in terms of statistical concerns, but it's manageable. What isn't replaceable is the story that the missing Preliminaries would have told. Prior to March 10, Jund Shadow was far and away the most popular deck in Modern, with Burn and Amulet Titan hot on its heels. Heliod Company had been putting up results in the Challenges, but it was absent from the Preliminaries. It did very well in those Challenges, but there was nothing to indicate that Company was good elsewhere, suggesting it was a metagame deck against the Premier players but not the overall metagame.
All that changed with the gap. With only Challenge results to go on, Heliod Company shot up the rankings (particularly the power rankings). Once the data returned, the Preliminary results began to increasingly mirror the Challenges, and with that Company stayed in the upper tier. Thus, I'm left wondering if this shift is the direct result of the gap or a natural metagame evolution. If it's the latter, then a gradual increase of Heliod decks would have shown up in the missing Prelim results, and the current results reflect the "true" metagame as it evolved. If it's the former, then players saw Heliod do well in the only available results, assumed that it was the best deck, and reacted accordingly, so the results I'm recording for April reflect an "artificial" metagame. In other words, the metagame's gone recursive.
Unless Wizards releases the missing data or I conduct an MTGO-wide survey of Modern players, I'll never know which is correct. I'm bringing this up for players to be aware of as I actually discuss the results so that they can take an appropriate grain of salt before digesting them.
As mentioned, the data's down from January. There were 552 decks in January, but thanks to the gap I only have 420 in March (nice!). This is the smallest data set for a full month I've worked with, which again isn't the end of the world. It just means that there will be more questions this time than in previous updates.
To make the tier list, a given deck has to beat the overall average population for the month. The average is my estimate for how many results a given deck “should” produce on MTGO. To be a tiered deck requires being better than “good enough;” in March the average population was 6.89, meaning a deck needed 7 results to beat the average and make Tier 3. It's odd that this threshold is the same as January's, and it's low by the standards of previous months. Then we go one standard deviation above to set the limit of Tier 3 and the cutoff to Tier 2. The STDev was 10.05, so Tier 3 runs to 17, and Tier 2 starts with 18 results and runs to 28. Subsequently, to make Tier 1, 29 results are required.
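For anyone who wants to check my math, the cutoffs sketch out like this (a minimal Python sketch; the strict-beat rounding and the iterated-stdev rule are my reading of the method, and the numbers are March's population stats):

```python
import math

def tier_cutoffs(mean, stdev):
    """Minimum results needed for each tier: beat the month's average
    for Tier 3, then beat one standard deviation above each prior
    minimum for the next tier up."""
    tier3 = math.floor(mean) + 1           # beat 6.89 -> 7
    tier2 = math.floor(tier3 + stdev) + 1  # beat 17.05 -> 18
    tier1 = math.floor(tier2 + stdev) + 1  # beat 28.05 -> 29
    return tier3, tier2, tier1

# March's population stats: average 6.89, standard deviation 10.05
print(tier_cutoffs(6.89, 10.05))  # → (7, 18, 29)
```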
The Tier Data
Along with the total population being down 132 decks, the count of individual archetypes is down, though not by as much as I'd expected. 61 distinct decks were recorded, and 17 crossed the threshold to make the lists. I'm certain that more decks would have qualified without the data hole, but I also doubt that the archetype gap would have closed. Given the typical Preliminary, I'd have needed to see at least one previously unseen deck in every Prelim to meet January's total, which is a testament to how diverse that meta actually was.
| Deck Name | Total # | Total % |
|---|---|---|
| Jund Death's Shadow | 50 | 11.90 |
| Death and Taxes | 19 | 4.52 |
| Niv 2 Light | 12 | 2.86 |
Tier 1 is only two decks, and they're leading everyone else by a lot. This does suggest there's a winner's metagame on MTGO, because the rest of the data is fairly normal. Mono-Green Tron just missed the cutoff for Tier 1, and I'm inclined to think that, with more data, both it and Amulet Titan would have made it. I think Burn and Eldrazi Tron are actually Tier 2 rather than Tier 3 for the same reason. Both Infect and Crab Mill just missed making Tier 3, but I'm less certain that either would have made it with more data. Crab Mill had a few results early on and then disappeared, while Infect just appeared every so often. Mill missing Tier 3 is therefore likely correct (remember, more data changes the thresholds), while Infect is a random bullet, so who knows?
It's interesting to note that 4-Color Omnath is still hanging around despite being nuked. Apparently, Money Tribal really is that powerful. What's curious is that it stands separate from Niv 2 Light despite a tremendous amount of overlap. I would guess that Omnath has a (slightly) more stable manabase in exchange for Niv's higher power, but considering that both decks lean on Wrenn and Six to make it work, that seems unlikely. Maybe inertia is to blame, since Niv is so much more powerful than Omnath while sharing the manabase concerns. And Yorion, Sky Nomad forgives many slow-deck sins.
A Winner's Tier 1?
Jund Shadow and Heliod Company are effectively tied for most popular deck. The next-most popular deck posted just over half as many results and missed the cut for Tier 1. There's clear polarization here, especially since lower Tier 2 and Tier 3 show a nice gradual decline; the trend line looks kind of like a reversed asymptote. That naturally made me ask why, and while I can't say with certainty (as previously noted), I do have a theory: I think this is a Pros vs. Joes scenario and not the "real" metagame. See, I think that there's an element of recursive metagaming and small-population dynamics which is warping MTGO. In essence, there's a limited number of consistent Premier-level players who are certain that Jund Shadow and Heliod are the best decks, and they're driving the data. If there were paper events or more non-MTGO data, this apparent warp might disappear.
To understand where I'm coming from, first read this article by Frank Karsten. The key insight is that in a Rock, Paper, Scissors metagame where Rock is paramount, the correct deck to pick to make Top 8 is Paper, but the best deck to win is Scissors. Thus, my decision is based not on which deck is actually the best, but on which deck I think I need to win the event. Take that logic and apply it to a metagame with a relatively low population. Right after the bans, red decks were everywhere. This meant that Auriok Champion spiked in popularity, and the deck which ran it maindeck surged. In response, Jund Shadow adapted to mitigate Champion without giving up anything against the Prowess decks. As a result, the top players first gravitated toward and then fixated on those two decks, anticipating each move and countermove, because that smallish group of players can (theoretically) keep tabs on what everyone else is doing. Without outsiders to challenge their narrative or provide a contrary data point, that narrative reigns and becomes the metagame, even if it wouldn't hold in a more open metagame with a more diverse population.
I believe that the internal metagame of the Premier players is driving the data because my observations in League play don't back up the Jund Shadow vs. Heliod vs. Everything Else narrative that the data suggests. I've been playing Heliod Company (thanks to the Mythic token) and playing against it. Heliod's felt good, but not phenomenal. The deck is hard to play online, and a lot of lines aren't particularly overpowering. It's the whole being greater than the sum of its parts, combined with some Oops, I Win! combos, that makes it good. However, I can also see how a more experienced player could improve the deck's win percentage, and why better players would pick up the deck. Thus a self-fulfilling prophecy is born. I can't prove it, of course, but this is the theory I'm working under.
Tracking the metagame in terms of population is standard practice. However, how do results actually factor in? Better decks should also have better results. In an effort to measure this, I use a power ranking system in addition to the prevalence list. By doing so I measure the relative strengths of each deck within the metagame. The population method gives a deck that consistently just squeaks into the Top 32 the same weight as one that regularly Top 8's. Using a power ranking rewards good results and moves the winningest decks to the top of the pile.
Points are awarded based on the population of the event. Preliminaries award points for record (1 for 3 wins, 2 for 4 wins) and Challenges are scored 3 points for Top 8, 2 for Top 16, 1 for Top 32. For March, the non-Wizards events I found were most similar to Challenges and awarded points accordingly. Super Qualifiers and similar events get an extra point if they’re over 200 players, and another one for over 400. There was only one event that awarded 5 points in March.
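The scoring scheme sketches out like this (the event-type and placement labels are names I made up for illustration; only the point values come from the scheme above, and treating the size bonus as applying only to scoring finishes is my assumption):

```python
def event_points(event, placement=None, wins=0, players=0):
    """Score one result. Preliminaries pay on record; Challenges and
    Challenge-like events pay on placement, with size bonuses for
    Super Qualifiers and similar large events."""
    if event == "prelim":
        return {3: 1, 4: 2}.get(wins, 0)  # 1 point for 3 wins, 2 for 4
    pts = {"top8": 3, "top16": 2, "top32": 1}.get(placement, 0)
    if event == "super_qualifier" and pts:
        pts += players > 200  # +1 for over 200 players
        pts += players > 400  # +1 more for over 400
    return pts

# March's lone 5-point result: a Top 8 in a 400+ player event
print(event_points("super_qualifier", "top8", players=450))  # → 5
```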
The Power Tiers
The total points in March were also down, from 1017 to 760, thanks more to the loss of events than anything else. The average points were 12.46, so 13 makes Tier 3. The STDev was 19.65, down noticeably from January, so Tier 3 runs to 32 points. Tier 2 starts with 33 points and runs to 46. Tier 1 requires at least 47 points. As is a bit of a tradition, the total number of decks stayed the same, but one deck fell off Tier 3 and was replaced.
| Deck Name | Total Points | Total % |
|---|---|---|
| Jund Death's Shadow | 93 | 12.10 |
| Death and Taxes | 44 | 5.79 |
| Niv 2 Light | 26 | 3.42 |
| 4-Color Living End | 13 | 1.71 |
Thanks to some very good Challenge results, 4-Color Living End just made Tier 3 despite being well under the population cutoff. Keep an eye on this deck; it's angling to play spoiler for Heliod Company. Dredge fell off, which is surprising given that graveyard hate is down. Interestingly, Tron is still just below the Tier 1 cutoff while Amulet actually cleared the hurdle. I think this speaks to the dedication of Amulet's player base more than any positioning advantages.
Heliod manages to beat out Jund Shadow for top place, thanks again to above-average Challenge results. I don't think this actually means that Heliod is performing better, given the population results and the context of Heliod's points (specifically, Jund Shadow puts up more results on average, but Heliod places higher on average), but I could be wrong. It also tends to reinforce my winner's-metagame theory.
Average Power Rankings
Finally, we come to the average power rankings. These are found by taking total points earned and dividing it by total decks, which measures points per deck. I use this to measure strength vs. popularity. Measuring deck strength is hard. Using the power rankings certainly helps, and serves to show how justified a deck’s popularity is.
However, more popular decks will still necessarily earn a lot of points. This is where the averaging comes in. Decks that earn a lot of points simply because they post a lot of results will do worse than decks that win more events, indicating which deck actually performs better. A higher average indicates lots of high finishes, while a low average results from mediocre performances and high population. Lower-tier decks typically do very well here, likely because their pilots are enthusiasts. So be careful about reading too much into the results.
The Real Story
When considering the average points, the key is looking at how far off a deck is from the Baseline stat (the overall average of points/population). The closer a deck's performance is to the Baseline, the more likely it is to be performing close to its “true” potential. A deck that is exactly average would therefore perform exactly as well as expected. The further away a deck sits, the more it under- or over-performs. On the low end, the deck's placing was mainly due to population rather than power, which suggests it's overrated. A high-scoring deck is the opposite.
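The calculation itself is simple enough to sketch in a few lines of Python, using March's totals and two decks pulled from the earlier tables (the over/under labeling is just my shorthand for being above or below the Baseline):

```python
total_decks = 420   # March's total population
total_points = 760  # March's total points

baseline = total_points / total_decks  # ≈ 1.81 points per deck overall

# (population, points) pairs taken from the two tier tables
decks = {"Death and Taxes": (19, 44), "Niv 2 Light": (12, 26)}

for name, (pop, pts) in decks.items():
    avg = pts / pop
    verdict = "over" if avg > baseline else "under"
    print(f"{name}: {avg:.2f} average points, {verdict}-performing")
```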
| Deck Name | Average Points | Tier |
|---|---|---|
| 4-Color Living End | 2.6 | 3 |
| Death and Taxes | 2.32 | 2 |
| Niv 2 Light | 2.17 | 3 |
| Jund Death's Shadow | 1.84 | 1 |
Again, the baseline is quite low, both in absolute and relative terms. The latter is going to happen when I'm awarding more points for high finishes. The former is less explainable, especially given that singleton-point decks were down quite a bit in March.
As previously mentioned, 4-Color Living End did disproportionately well in a few events, which pumped up its average points a lot. Of the more popular decks, Death and Taxes did extremely well, and that makes me all warm and fuzzy. Niv 2 Light significantly outshone 4-Color Omnath here as well, which strongly suggests that those on the "Ignore Blood Moon" plan are likely to move toward just getting all the value soon. Also, congratulations are in order to UW Control players! You performed exactly average in March. That takes talent /s.
And Now, We Watch
With the metagame starting to take shape and a glimmer of hope that in-person events can return soon, we just have to wait and see what happens. Can Heliod prove that it really is the new format boogeyman, or will the metagame unite to drive it off? I'll have the answer with the next update.