Source: Deep Learning on Medium

This is cool work! One quick note…

> Ask a thousand humans to guess how many jelly beans are in a big jar of jelly beans. Average the absolute error of each guess. Then average the guesses, and take the absolute error of *that*. The latter will be less wrong.

http://wisdomofcrowds.blogspot.com/2009/12/chapter-one-part-i.html

The “Wisdom of the Crowd” has less to do with the crowd, and everything to do with the loss function you cite. When it comes to guessing the number of jelly beans, the absolute error of the average of any guesses cannot be more than the average of their absolute errors. It’s just a simple mathematical inequality (the triangle inequality, applied to each guess’s deviation from the true count).
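A quick sketch of the inequality with made-up numbers (the true count and the two guesses here are arbitrary, chosen just to illustrate):

```python
# Hypothetical jelly-bean jar: true count and two guesses are invented for illustration.
true_count = 1000
guesses = [850, 1200]

# Average of the individual absolute errors: (150 + 200) / 2
mean_abs_error = sum(abs(g - true_count) for g in guesses) / len(guesses)

# Absolute error of the averaged guess: |1025 - 1000|
abs_error_of_mean = abs(sum(guesses) / len(guesses) - true_count)

# Triangle inequality guarantees this holds for ANY guesses and any true count.
assert abs_error_of_mean <= mean_abs_error
print(mean_abs_error, abs_error_of_mean)  # 175.0 25.0
```

The gap can be dramatic when guesses scatter on both sides of the truth, and the two quantities are equal only when every guess errs in the same direction.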

This is intuitively obvious to us because we so often work with concepts of error that behave like this. If you average two things, the result can’t be *further* from the target than both of them individually! However, there’s no reason that this *has* to be the case for every notion of “error.”

In fact, I would assume draft picks are a near-perfect example of when this *wouldn’t* always hold. If player A has a leak where they hard-force aggro and overvalue Ill Gotten Gains in Orzhov, and player B has a leak where they hard-force control and overvalue Clear the Mind, a draft program that *averages* the two strategies will be significantly worse off than one that copies either player, idiosyncrasies and all. We tend to be biased in consistent ways that likely form some sort of cohesive strategy.
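To make the counterexample concrete, here’s a toy model of my own construction (not from the original post): represent a drafter’s strategy as a number from 0 (hard aggro) to 1 (hard control), and suppose deck quality rewards committing to a coherent archetype, making the payoff non-convex:

```python
# Toy model (my own assumption): strategy is a number in [0, 1],
# 0 = hard-forced aggro, 1 = hard-forced control.
def deck_quality(strategy: float) -> float:
    # Quality = closeness to a coherent archetype; committing to either
    # extreme is good, sitting in the middle is a muddled pile.
    return max(1 - strategy, strategy)

player_a = 0.1                      # hard-forces aggro
player_b = 0.9                      # hard-forces control
blend = (player_a + player_b) / 2   # 0.5 — the averaged "strategy"

# Averaging two coherent strategies is worse than either one alone.
assert deck_quality(blend) < deck_quality(player_a)
assert deck_quality(blend) < deck_quality(player_b)
```

The jelly-bean case works because absolute error is convex in the guess; deck quality, in this sketch, is the opposite, so averaging lands you in the worst spot.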

Obviously, in total, this is a somewhat minor effect. More importantly, the main use of draft bots is to simulate human players. The fact that the bot ends up with an iffy deck matters less than the fact that it picks and prioritizes cards in a fashion that feels consistent with a human draft pod (two bots that average player A and player B above would probably feel very similar at the table to the players themselves). That bit just stuck out to me because I wouldn’t cite the “wisdom of the crowd” when MTG drafting is basically a perfect counterexample to this (usually true) idea.