Comments on: Testing Punishing Fire: Quantitative Data https://www.quietspeculation.com/2019/05/testing-punishing-fire-quantitative-data/ Play More, Win More, Pay Less Sat, 18 May 2019 17:07:26 +0000 hourly 1 By: David Ernenwein https://www.quietspeculation.com/2019/05/testing-punishing-fire-quantitative-data/#comment-2130070 Sat, 18 May 2019 17:07:26 +0000 http://quietspeculation.com/?p=20049#comment-2130070 In reply to Cale Winslow.

I’ll be going into detail in the next article, but in short: the risk is very high and the reward very low. In the right, super-grindy shell, Fire would be very good. Even if the deck itself were merely A Deck in Modern, it would still eat into tournament time. That means Fire might not be oppressive to the metagame, but it would be to tournament Magic, and that’s a pretty good reason to keep it banned.

]]>
By: Cale Winslow https://www.quietspeculation.com/2019/05/testing-punishing-fire-quantitative-data/#comment-2130069 Sat, 18 May 2019 15:04:06 +0000 http://quietspeculation.com/?p=20049#comment-2130069 This is the insanely high quality content that Modern Nexus has come to be known for.

Would you agree an unban would be Thopter/Sword 2.0? Hope we don’t see it too much, but it’s basically irrelevant to the format now; safe to release as an extra tool for one tier 4 brew?

Or is your distaste for the card sufficient to make you think it’s not worth the risk? (I felt this way about Bitterblossom.)

]]>
By: David Ernenwein https://www.quietspeculation.com/2019/05/testing-punishing-fire-quantitative-data/#comment-2130068 Fri, 17 May 2019 01:23:07 +0000 http://quietspeculation.com/?p=20049#comment-2130068 In reply to Ben Buyer.

A distinction: I report a z-test. I run a number of statistical tests once the data’s in, including t-tests, and use several data-analysis tools to deal with the variance/standard-deviation problems. To some extent I don’t need to know the “true” deviation because this is a controlled experiment with a hypothesis test, but I’m aware of the statistical issues, so I try to confirm my test results.

Most people have very limited experience with statistics, and the z-test is usually the one they actually know, or at least vaguely remember, from a semester in high school. It makes sense to use the test readers are most likely to be familiar with, regardless of what I actually run.

I’ve also never had a situation where switching tests yielded divergent results (i.e. a significant result becoming non-significant). Instead it’s been small differences in magnitude, like p=.04854 instead of p=.049, so it makes no practical difference.

In the beginning, I had some trouble where the t-test gave weird results because the data set was binary. The z-tests didn’t care, so I used them exclusively, but I’ve since fixed that programming error, so these days it’s all about ease of understanding.
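To illustrate the point about the two tests agreeing in practice, here is a minimal sketch comparing a two-proportion z-test with Welch’s t-test on simulated binary win/loss records. The win rates, sample sizes, and seed are all hypothetical, not the article’s actual data; the point is only that on reasonably large binary samples the two p-values land very close together.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical win/loss records (1 = win): control vs. Punishing Fire deck
control = rng.binomial(1, 0.52, size=200)
punishing = rng.binomial(1, 0.58, size=200)

# Two-proportion z-test with a pooled standard error
p1, p2 = control.mean(), punishing.mean()
n1, n2 = len(control), len(punishing)
pooled = (control.sum() + punishing.sum()) / (n1 + n2)
se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
p_z = 2 * stats.norm.sf(abs(z))  # two-sided p-value from the normal tail

# Welch's t-test on the same binary data (no equal-variance assumption)
t, p_t = stats.ttest_ind(punishing, control, equal_var=False)

print(f"z-test p = {p_z:.4f}, t-test p = {p_t:.4f}")
```

With a few hundred games per arm, the difference between the two p-values is typically in the third decimal place, matching the comment above about magnitude-only differences.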

]]>
By: Ben Buyer https://www.quietspeculation.com/2019/05/testing-punishing-fire-quantitative-data/#comment-2130067 Thu, 16 May 2019 21:46:19 +0000 http://quietspeculation.com/?p=20049#comment-2130067 Although this was an excellent read, as always, I have a statistical question: why did you run a z-test on your means instead of a t-test? I can’t believe you have the true/population standard deviation of both the control’s and Punishing Jund’s win rates (I’d be surprised if you had even one), so the z-test should yield inaccurate results. I know this is a relatively informal project for which extreme precision isn’t paramount, but since you took the time to carefully explain p-values, I’d think this sort of thing matters to you. Thanks in advance for any reply you offer.

]]>
By: Jeffrey Kabbe https://www.quietspeculation.com/2019/05/testing-punishing-fire-quantitative-data/#comment-2130066 Tue, 14 May 2019 20:13:04 +0000 http://quietspeculation.com/?p=20049#comment-2130066 Hey. Thanks for the analysis. FYI, there are a few times when you wrote p > where I think you meant p < 0.1 (particularly in the first part of your article).

]]>