April ’24 Metagame Update: Holding Pattern


April was a weird month for Modern. On the one hand, it remains the most popular competitive format on Magic Online (MTGO) by a very wide margin. On the other, it's Pioneer RCQ season, so most paper attention is elsewhere. The data reflects this dichotomy: the online results far outnumber the paper ones. As always, though, take the online results with a grain of salt. There are only a few thousand total players, and the events are dominated by a small core of regulars, so their results are heavily affected by groupthink.

Outliers Return

The March ban and the ensuing chaos and uncertainty left MTGO without any outliers. Now that the chaos has largely subsided, the outliers are back in force, with the top four decks all clear outliers. When you see the data, it won't be very surprising. Per my longstanding policy, outliers are removed from the data analysis but reported in their correct position on the Tier List. I almost wish this weren't the case, as including the MTGO outliers makes the data look more even and normalized.

As for the paper results, I've removed Yawgmoth as an outlier, but this call is less clear. I run a number of different outlier tests on the data and remove the decks the majority agree on. Normally the tests are quite clear and completely agree with each other. This time, none of them agreed: depending on the test, I could have removed anywhere from zero to four decks. As Yawgmoth was the only deck that the tests flagging outliers agreed on, I removed it. However, this data set is clearly a weird one.
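For readers wondering what a majority-vote outlier check looks like in practice, here's an illustrative Python sketch. The three tests shown (z-score, IQR fences, and a MAD-based modified z-score) are stand-ins chosen for the example, not necessarily the exact battery I run:

```python
import statistics

# Illustrative majority-vote outlier removal. The tests and their cutoffs
# are assumptions for this sketch, not the article's actual methodology.

def zscore_outliers(xs, cut=2.5):
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return {x for x in xs if sd and abs(x - mu) / sd > cut}

def iqr_outliers(xs, k=1.5):
    q1, _, q3 = statistics.quantiles(xs, n=4)
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return {x for x in xs if x < lo or x > hi}

def mad_outliers(xs, cut=3.5):
    med = statistics.median(xs)
    mad = statistics.median([abs(x - med) for x in xs])
    return {x for x in xs if mad and 0.6745 * abs(x - med) / mad > cut}

def majority_outliers(xs):
    """Remove only the values that at least two of the three tests flag."""
    votes = [zscore_outliers(xs), iqr_outliers(xs), mad_outliers(xs)]
    return {x for x in set(xs) if sum(x in v for v in votes) >= 2}
```

In a typical month the tests agree (a 218-result deck among counts in the teens gets flagged by all three); in a month like this one, each test draws its fence differently and the vote is what breaks the tie.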

April Population Metagame

To make the tier list, a given deck has to beat the overall average population for the month. The average is my estimate of how many results a given deck "should" produce in a given month. Being a tiered deck means being at least "good enough": every deck that beats the average number of results makes the tier list.

Then we go one standard deviation (STdev) above average to set the limit of Tier 3 and the cutoff for Tier 2. This mathematically defines Tier 3 as those decks clustered near the average. Tier 2 goes from the cutoff to the next standard deviation. These are decks that perform well above average. Tier 1 consists of those decks at least two standard deviations above the mean result, encompassing the truly exceptional performing decks.
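For the curious, that math sketches out like this in Python. This is a reconstruction of the method described above, not my actual spreadsheet; the rounding helper applies the convention I use throughout of rounding down only when the decimal part is under .20.

```python
def rnd(x):
    """Round down only when the decimal part is under .20, else round up."""
    frac = x - int(x)
    return int(x) if frac < 0.20 else int(x) + 1

def tier_bounds(mean, stdev):
    """Return inclusive (Tier 3 range, Tier 2 range, Tier 1 minimum)."""
    start = rnd(mean)   # a deck must beat the average to make the list
    step = rnd(stdev)   # each tier spans one standard deviation
    tier3 = (start, start + step)
    tier2 = (start + step + 1, start + 2 * step + 1)
    tier1_min = start + 2 * step + 2
    return tier3, tier2, tier1_min

# April's MTGO population numbers: mean 10.43, STDev 18.04
print(tier_bounds(10.43, 18.04))  # ((11, 29), (30, 48), 49)
```

Plugging in the paper numbers (5.84 and 9.75) reproduces the 6–16 / 17–27 / 28+ bands given below.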

The MTGO data nearly exclusively comes from official Preliminary and Challenge results. Leagues are excluded, as they add analytically useless bulk data to both the population and power tiers. The paper data comes from any source I can find, with all reported events being counted.

While the MTGO events report predictable numbers, paper events can report anything from only the winner to all the results. In the latter case, if match results aren't included, I'll take as much of the Top 32 as possible. If match results are reported, I'll take winning record up to Top 32, and then any additional decks tied with 32nd place, as tiebreakers are a magic most foul and black.

A Note on the Data

Daybreak is now releasing the total results from every MTGO Preliminary, Challenge, and League 5-0. After some experimentation, I'm sticking to just using the Challenge Top 32 results and 3-1 or better from the Preliminaries. The first reason is that, ultimately, nothing changed. The population metagame list didn't change between my normal method and the experimental versions. Various treatments for the power metagame did change the order of the tier list, but the composition varied only marginally.

The second reason was that dealing with all that data is significantly more work, even with automation. I'm not a great programmer but setting up and training the bots and then auditing the results took significantly longer than my current system, and I'd have to redo it monthly. Since it made little difference, I'm not going to make more work for myself. There are other sites that put together winrates with all the new data anyway, so I don't feel that anything's being lost. It also means that comparing the paper to MTGO results is easier.

The MTGO Population Data

April's adjusted average population for MTGO was 10.43. I always round down if the decimal is less than .20, so Tier 3 begins with decks posting 11 results. The STDev was 18.04; add 18, and Tier 3 runs to 29 results. Each subsequent tier starts at the next whole number after the previous cutoff, so Tier 2 starts at 30 results and runs to 48. Tier 1 therefore requires at least 49 results.

April is the largest MTGO sample this year. January had 1,400 decks, February 1,225, and March 1,042, but April is a whopping 1,664. Daybreak continues to add events, so the numbers continue to rise. I'm now taking bets on when MTGO's sample size hits 2,000.

However, that high sample size did not translate into real diversity. The total number of unique decks in my data set is up from 86 to 103, but the unique-deck ratio fell from .082 to .062. Diversity is therefore relatively down compared to March. 26 decks made the Tier List, up from 21 in March, which again isn't a great increase given the much higher overall population.

| Deck Name | Total # | Total % |
| --- | --- | --- |
| Tier 1 | | |
| Rakdos Scam | 218 | 13.10 |
| Counter Cat | 171 | 10.28 |
| Amulet Titan | 140 | 8.41 |
| Living End | 87 | 5.23 |
| Mono-Green Tron | 74 | 4.45 |
| 4-Color Creativity | 68 | 4.09 |
| Izzet Prowess | 60 | 3.61 |
| Izzet Murktide | 57 | 3.43 |
| Jund Creativity | 54 | 3.25 |
| Goryo Blink | 51 | 3.06 |
| Tier 2 | | |
| UW Control | 40 | 2.40 |
| Rack Scam | 34 | 2.04 |
| Domain Zoo | 31 | 1.86 |
| Tier 3 | | |
| Bant Rhinos | 26 | 1.56 |
| Temur Prowess | 24 | 1.44 |
| Hardened Scales | 21 | 1.26 |
| Wrenn White Blue | 19 | 1.14 |
| Domain Murktide | 18 | 1.08 |
| Mono-White Emeria | 18 | 1.08 |
| Jund Saga | 12 | 0.72 |
| UW Urzablade | 11 | 0.66 |
This is the worst distribution ever, but then again, there are four outliers. Don't read into this one too much.

If I were as lazy as the other sites and lumped Counter Cat, Domain Murktide, Domain Zoo, and everything similar together under the Domain Zoo banner, that deck would be #1 by quite a large margin. Despite keeping the Murktide and Counter Cat categories separate, there were so many hard-to-classify similar decks that I gave up and dumped them into Domain Zoo this month. Even so, Counter Cat remains the most popular variant by far, and many players seem to be abandoning the Murktide variant for Counter Cat, as I predicted in March.

Rakdos Scam regained the top spot on MTGO, though I think that's a fluke. The pattern I saw for MTGO was players falling back on their old warhorses, as witnessed by the return of Creativity decks. MTGO's spikes don't have a clear Best Deck anymore, so they're seeking familiarity. We'll see if this lasts until Modern Horizons 3.

The Paper Population Data

Paper's dataset is recovering but still down. January had 803 decks, February 890, March 311, and April is up to 559. Real diversity is also solid, though March's weirdness makes it look bad by comparison: March had 63 unique decks for a ratio of .203, while April has 88 for .16. This doesn't represent an actual fall in diversity; March's ratio was inflated by its low population.

Tiered decks rose from 17 to 24, which is where I'd expect a healthy metagame of this size to be. The adjusted average population was 5.84, so 6 results make the list. The adjusted STDev was 9.75, so the increment is 10. Therefore, Tier 3 runs from 6 to 16, Tier 2 is 17 to 27, and Tier 1 is 28 and over.

| Deck Name | Total # | Total % |
| --- | --- | --- |
| Tier 1 | | |
| Amulet Titan | 49 | 8.77 |
| Counter Cat | 47 | 8.41 |
| Rakdos Scam | 46 | 8.23 |
| Goryo Blink | 32 | 5.72 |
| Tier 2 | | |
| 4-Color Creativity | 24 | 4.29 |
| Izzet Murktide | 22 | 3.94 |
| UW Control | 19 | 3.40 |
| Tier 3 | | |
| Hammer Time | 13 | 2.32 |
| Jund Creativity | 13 | 2.32 |
| Mono-Green Tron | 12 | 2.15 |
| Mono-White Emeria | 11 | 1.97 |
| Living End | 11 | 1.97 |
| Bant Rhinos | 10 | 1.79 |
| Hardened Scales | 10 | 1.79 |
| Domain Zoo | 9 | 1.61 |
| Wrenn White Blue | 9 | 1.61 |
| Izzet Prowess | 9 | 1.61 |
| Domain Murktide | 7 | 1.25 |
| Bring to Light | 6 | 1.07 |
| Mono-Black Scam | 6 | 1.07 |
| 4-Color Control | 6 | 1.07 |
This is a more typical distribution, highlighting the outlier problem on MTGO.

I didn't expect any outliers to pop out of the paper tests; instead, only one test found none. Stats can be wild, yo. Goryo Blink is rapidly falling from Tier 1 in both play mediums, and I expect that to continue. Its most powerful plan is easy prey for graveyard hate, and the Ephemerate-elementals plan isn't enough to carry a deck on its own. If it were, Bant Blink and/or Griefblade would have been top-tier decks by now.

April Power Metagame

Tracking the metagame in terms of population is standard practice. But how do results actually factor in? Better decks should also have better results. In an effort to measure this, I use a power ranking system in addition to the prevalence list. By doing so, I measure the relative strengths of each deck within the metagame so that a deck that just squeaks into Top 32 isn't valued the same as one that Top 8's. This better reflects metagame potential.

For the MTGO data, points are awarded based on the population of the event. Preliminaries award points based on record (1 for 3 wins, 2 for 4 wins, 3 for 5), and Challenges are scored 3 points for the Top 8, 2 for Top 16, and 1 for Top 32. If I can find them, non-Wizards events will be awarded points the same as Challenges or Preliminaries depending on what the event in question reports/behaves like. Super Qualifiers and similar higher-level events get an extra point and so do other events if they’re over 200 players, with a fifth point for going over 400 players.
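In code, one reading of that scoring looks like this. This is a hedged sketch, not my actual tooling; the event labels and the way the Super Qualifier and size bonuses stack are my assumptions about the rules above.

```python
def mtgo_points(event, result, players=None):
    """Points for one MTGO finish. For Preliminaries, `result` is match
    wins; for Challenge-style events, it's the finish bracket."""
    if event == "preliminary":
        return {3: 1, 4: 2, 5: 3}[result]       # points by record
    points = {"top8": 3, "top16": 2, "top32": 1}[result]
    if event == "super_qualifier":
        points += 1                              # higher-level event bonus
    elif players and players > 200:
        points += 1                              # large-event fourth point
    if players and players > 400:
        points += 1                              # fifth point past 400 players
    return points

print(mtgo_points("preliminary", 4))             # 2
print(mtgo_points("challenge", "top8"))          # 3
```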

Due to paper reporting being inconsistent and frequently full of data gaps compared to MTGO, its points work differently. I award points based on the size of the tournament rather than placement. For events with no reported starting population or up to 32 players, one point is awarded to every deck. Events with 33 players up to 128 players get two points. From 129 players up to 512 players get three. Above 512 is four points, and five points will be reserved for Modern Pro Tours.
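The paper bands are a straightforward lookup; here's the same scheme as a Python sketch (again a reconstruction of the description above, not my actual tooling):

```python
def paper_points(players=None):
    """Points per reported deck, by event size; None means size unreported."""
    if players is None or players <= 32:
        return 1
    if players <= 128:
        return 2
    if players <= 512:
        return 3
    return 4  # the fifth point is reserved for Modern Pro Tours

print(paper_points(None), paper_points(75), paper_points(300))  # 1 2 3
```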

The MTGO Power Tiers

As with the population numbers, total points are up, from 1,674 in March to 2,770. The adjusted average was 17.05 points, so 17 points made Tier 3. The STDev was 29.80; add 30 to the starting point, and Tier 3 runs to 47 points. Tier 2 starts at 48 points and runs to 78. Tier 1 requires at least 79 points. Jund Saga failed to make the power tier and wasn't replaced.

| Deck Name | Total Points | Total % |
| --- | --- | --- |
| Tier 1 | | |
| Rakdos Scam | 382 | 13.79 |
| Counter Cat | 276 | 9.96 |
| Amulet Titan | 235 | 8.48 |
| Living End | 147 | 5.31 |
| Mono-Green Tron | 114 | 4.12 |
| Izzet Prowess | 105 | 3.79 |
| 4-Color Creativity | 103 | 3.72 |
| Izzet Murktide | 98 | 3.54 |
| Goryo Blink | 85 | 3.07 |
| Jund Creativity | 84 | 3.03 |
| Tier 2 | | |
| UW Control | 65 | 2.35 |
| Rack Scam | 59 | 2.13 |
| Domain Zoo | 54 | 1.95 |
| Tier 3 | | |
| Temur Prowess | 45 | 1.62 |
| Bant Rhinos | 44 | 1.59 |
| Hardened Scales | 34 | 1.23 |
| Domain Murktide | 31 | 1.12 |
| Wrenn White Blue | 30 | 1.08 |
| Mono-White Emeria | 27 | 0.97 |
| UW Urzablade | 21 | 0.76 |
As usual, the distribution gets worse when points are considered, but again, four outliers.

There's a lot of movement inside the tiers but none between them. Given the huge gaps, that's not surprising. I've noticed that big gaps are quite common on MTGO but much less so in paper. The usual suspects of groupthink and a small playerbase could be the entire explanation, but I wonder how much to blame the rental services. Players rent a deck for a month and play it constantly, far more than you usually see in paper. If they're effectively locked into one deck for the whole month, that could explain the huge swings and gaps you see all the time in the MTGO data.

The Paper Power Tiers

Points are massively up, from 519 to 1,163; there were a lot of 3-point events in April. The adjusted average was 12.23 points, setting the cutoff at 13 points. The STDev was 21.14, so add 21 to the starting point and Tier 3 runs to 34 points. Tier 2 starts at 35 points and runs to 56. Tier 1 requires at least 57 points. There's a lot of movement within the tiers, while Bring to Light and 4-Color Control fell off the list.

| Deck Name | Total Points | Total % |
| --- | --- | --- |
| Tier 1 | | |
| Amulet Titan | 109 | 9.37 |
| Counter Cat | 107 | 9.20 |
| Rakdos Scam | 95 | 8.17 |
| Goryo Blink | 63 | 5.42 |
| Tier 2 | | |
| 4-Color Creativity | 53 | 4.56 |
| Izzet Murktide | 47 | 4.04 |
| UW Control | 38 | 3.27 |
| Tier 3 | | |
| Jund Creativity | 31 | 2.67 |
| Hammer Time | 30 | 2.58 |
| Mono-White Emeria | 23 | 1.98 |
| Domain Zoo | 23 | 1.98 |
| Hardened Scales | 22 | 1.89 |
| Izzet Prowess | 22 | 1.89 |
| Wrenn White Blue | 22 | 1.89 |
| Living End | 20 | 1.72 |
| Bant Rhinos | 20 | 1.72 |
| Mono-Green Tron | 16 | 1.38 |
| Mono-Black Scam | 15 | 1.29 |
| Domain Murktide | 14 | 1.20 |
Ok, low tiers: if you're going to grow, do so at Tier 1's expense, not Tier 2's.

Composite Metagame

That's a lot of data, but what does it all mean? When Modern Nexus first started, we had a statistical method to combine the MTGO and paper data, but the math of that system doesn't work without big paper events. I tried. Instead, I'm using an averaging system: I average a deck's MTGO tiers, separately average its paper tiers, then average those two numbers for the final tier placement.

This generates a lot of partial Tiers. That's not a bug, but a feature. The nuance separates the solidly Tiered decks from the more flexible ones and shows the true relative power differences between the decks. Every deck in the paper and MTGO results is on the table, and when they don't appear in a given category, they're marked N/A. This is treated as a 4 for averaging purposes.
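As a worked example, the averaging method can be sketched like this (a reconstruction of the scheme just described, with N/A counted as 4):

```python
def composite_tier(mtgo, paper):
    """Average the (population, power) tiers per medium, then average the
    two mediums. None stands in for N/A and counts as a 4."""
    def avg(tiers):
        return sum(4 if t is None else t for t in tiers) / len(tiers)
    return (avg(mtgo) + avg(paper)) / 2

# Rack Scam in April: Tier 2 in both MTGO lists, absent from paper.
print(composite_tier((2, 2), (None, None)))  # 3.0
```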

| Deck Name | MTGO Pop Tier | MTGO Power Tier | MTGO Average Tier | Paper Pop Tier | Paper Power Tier | Paper Average Tier | Composite Tier |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Rakdos Scam | 1 | 1 | 1 | 1 | 1 | 1 | 1.00 |
| Counter Cat | 1 | 1 | 1 | 1 | 1 | 1 | 1.00 |
| Amulet Titan | 1 | 1 | 1 | 1 | 1 | 1 | 1.00 |
| Goryo Blink | 1 | 1 | 1 | 1 | 1 | 1 | 1.00 |
| 4-Color Creativity | 1 | 1 | 1 | 2 | 2 | 2 | 1.50 |
| Izzet Murktide | 1 | 1 | 1 | 2 | 2 | 2 | 1.50 |
| Living End | 1 | 1 | 1 | 3 | 3 | 3 | 2.00 |
| Mono-Green Tron | 1 | 1 | 1 | 3 | 3 | 3 | 2.00 |
| Izzet Prowess | 1 | 1 | 1 | 3 | 3 | 3 | 2.00 |
| Jund Creativity | 1 | 1 | 1 | 3 | 3 | 3 | 2.00 |
| UW Control | 2 | 2 | 2 | 2 | 2 | 2 | 2.00 |
| Domain Zoo | 2 | 2 | 2 | 3 | 3 | 3 | 2.50 |
| Rack Scam | 2 | 2 | 2 | N/A | N/A | N/A | 3.00 |
| Bant Rhinos | 3 | 3 | 3 | 3 | 3 | 3 | 3.00 |
| Hardened Scales | 3 | 3 | 3 | 3 | 3 | 3 | 3.00 |
| Wrenn White Blue | 3 | 3 | 3 | 3 | 3 | 3 | 3.00 |
| Domain Murktide | 3 | 3 | 3 | 3 | 3 | 3 | 3.00 |
| Mono-White Emeria | 3 | 3 | 3 | 3 | 3 | 3 | 3.00 |
| Temur Prowess | 3 | 3 | 3 | N/A | N/A | N/A | 3.50 |
| UW Urzablade | 3 | 3 | 3 | N/A | N/A | N/A | 3.50 |
| Hammer Time | N/A | N/A | N/A | 3 | 3 | 3 | 3.50 |
| Bring to Light | N/A | N/A | N/A | 3 | 3 | 3 | 3.50 |
| Jund Saga | 3 | N/A | 3.5 | N/A | N/A | N/A | 3.75 |
| Mono-Black Scam | N/A | N/A | N/A | 3 | N/A | 3.5 | 3.75 |
| 4-Color Control | N/A | N/A | N/A | 3 | N/A | 3.5 | 3.75 |

Average Power Rankings

Finally, we come to the average power rankings. These are found by taking the total points earned and dividing them by total decks, to measure points per deck. I use this to measure strength vs. popularity. Measuring deck strength is hard. There is no Wins-Above-Replacement metric for Magic, and I'm not certain that one could be credibly devised. The game is too complex, and even then, power is very contextual.

Using the power rankings certainly helps and serves to show how justified a deck’s popularity is. However, more popular decks will still necessarily earn a lot of points. Therefore, the top tier doesn't move much between population and power and obscures whether its decks really earned their position. 

This is where the averaging comes in. Decks that earn a lot of points because they get a lot of results will do worse than decks that win more events, indicating which deck actually performs better.

A higher average indicates lots of high finishes, whereas low averages result from mediocre performances and a high population. Lower-tier decks typically do very well here, likely due to their pilots being enthusiasts. Bear this in mind and be careful about reading too much into these results. However, as a general rule, decks that place above the baseline average are over-performing, and vice versa.

How far above or below that average a deck sits indicates how justified its position on the power tiers is. Decks well above the baseline are undervalued, while decks well below it are very popular but not necessarily good.

The Real Story

When considering the average points, the key is looking at how far off a deck is from the Baseline stat (the overall average of points/population). The closer a deck’s performance to the Baseline, the more likely it is to be performing close to its "true" potential.

A deck that is exactly average would therefore perform exactly as well as expected. The greater the deviation from the average, the more a deck under or over-performs. On the low end, a deck’s placing was mainly due to population rather than power, which suggests it’s overrated. A high-scoring deck is the opposite of this.
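The arithmetic here is just total points over total decks. A quick Python check against April's MTGO numbers (treating the Baseline as overall points divided by overall population, which is my reading of the stat):

```python
def average_points(total_points, total_decks):
    """Points per deck, rounded to two places as in the tables below."""
    return round(total_points / total_decks, 2)

# April MTGO: Rakdos Scam earned 382 points from 218 decks.
baseline = average_points(2770, 1664)   # overall baseline: ~1.66
scam = average_points(382, 218)         # 1.75, above baseline
print(scam, baseline)
```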

I'll begin with the averages for MTGO:

| Deck Name | Average Points | Power Tier |
| --- | --- | --- |
| UW Urzablade | 1.91 | 3 |
| Temur Prowess | 1.88 | 3 |
| Rakdos Scam | 1.75 | 1 |
| Izzet Prowess | 1.75 | 1 |
| Domain Zoo | 1.74 | 2 |
| Rack Scam | 1.73 | 2 |
| Izzet Murktide | 1.72 | 1 |
| Domain Murktide | 1.72 | 3 |
| Living End | 1.69 | 1 |
| Bant Rhinos | 1.69 | 3 |
| Amulet Titan | 1.68 | 1 |
| Goryo Blink | 1.67 | 1 |
| UW Control | 1.62 | 2 |
| Hardened Scales | 1.62 | 3 |
| Counter Cat | 1.61 | 1 |
| Wrenn White Blue | 1.58 | 3 |
| Jund Creativity | 1.56 | 1 |
| Mono-Green Tron | 1.54 | 1 |
| 4-Color Creativity | 1.51 | 1 |
| Mono-White Emeria | 1.50 | 3 |
| Jund Saga | 1.33 | N/A |

Yawgmoth just barely beats Scam to be April's MTGO Deck of the Month.

Now the paper averages:

| Deck Name | Average Points | Power Tier |
| --- | --- | --- |
| Domain Zoo | 2.56 | 3 |
| Mono-Black Scam | 2.50 | 3 |
| Izzet Prowess | 2.44 | 3 |
| Wrenn White Blue | 2.44 | 3 |
| Jund Creativity | 2.38 | 3 |
| Hammer Time | 2.31 | 3 |
| Counter Cat | 2.28 | 1 |
| Amulet Titan | 2.22 | 1 |
| 4-Color Creativity | 2.21 | 2 |
| Hardened Scales | 2.20 | 3 |
| Izzet Murktide | 2.14 | 2 |
| Mono-White Emeria | 2.09 | 3 |
| Rakdos Scam | 2.06 | 1 |
| UW Control | 2.00 | 2 |
| Bant Rhinos | 2.00 | 3 |
| Domain Murktide | 2.00 | 3 |
| Goryo Blink | 1.97 | 1 |
| Bring to Light | 1.83 | N/A |
| Living End | 1.82 | 3 |
| 4-Color Control | 1.50 | N/A |
| Mono-Green Tron | 1.33 | 3 |

Counter Cat takes home the crown for paper for the second month running. I thought that was a Living End thing, but here we are.


April's metagame looks to me like a holding pattern. MTGO players moved back to old favorites, while in paper, where there's less competitive pressure right now, players branched out and experimented. That's no bad thing, but it does mean the utility of this data is somewhat limited. The bottom line is that Modern is on hold until MH3 comes out, and I expect metagame stagnation until then.

The most significant development is the return of Prowess to prominence. Slickshot Show-Off drove a lot of players back to the deck and is probably behind the fall-off in Izzet Murktide numbers. There were a number of variants, but Izzet Prowess outperformed them by a wide margin. Ignore how Goldfish lists the deck; a sideboard card and Jegantha, the Wellspring don't count as a color identity.

I wouldn't expect Prowess to gain the place it held prior to MH2. All the new removal that was responsible for its downfall then is still around, and once players adjust to the deck's new playstyle it will lose some ground. However, the deck's best plot turns are sufficiently broken that I don't think it will be a flash in the pan, though again we'll have to wait and see what MH3 does.

The Leaks

On that note, Wizards had to acknowledge a number of MH3 leaks recently. There's been a ton of speculation already, but it's important to temper expectations. We've only seen about two dozen confirmed cards, and there's plenty of room for something far more interesting to drop. That said, we have confirmation of a strong Eldrazi and artifact theme in the set. This will cause a major shakeup and a lot of brewing to happen, which is why current Modern players are apparently twiddling their thumbs.

Financial Considerations

Obviously, any Modern finance decisions must be made with an eye toward MH3. There aren't a lot of opportunities in the current metagame. With Kappa Cannoneer confirmed, players are understandably focused on Affinity-type decks, so any staple for that archetype is a good speculative buy before the big rush around MH3's release. However, I would also stock up on Shatterstorm and Kataki, War's Wage, as they're the best counters to that theoretical deck.

The other big opportunity is Isochron Scepter. Orim's Chant was its best friend back in Extended and is coming to Modern in MH3. With the band back together, old-timers will be wanting to relive the glory days. I don't think it will work out well for them, as there are more answers now than in the bygone days. However, it will certainly be a sought-after card in the near future. Pick up some while the price is stable and be ready to move them quickly. I think the bubble on Scepter will burst quickly post release.
