"In fact, if you read what's been written in the past ten years, it's hard to find anything that doesn't advocate a Bayesian approach."
- Nate Silver1
Bayesian inference is a widely heralded branch of statistics that deals with updating beliefs using new data. It's a natural fit for fantasy football -- each season starts with a set of beliefs (predraft rankings) that need to be updated with new data (each week's games). During the season, it's the perfect approach to help you:
- Acquire and start the right guys.
- Stop chasing after performance2.
- Distinguish between quality, repeatable performances and lucky breaks3.
This site gives you Bayesian-derived projections, rankings and distributions for every player, every week. It's not a crystal ball -- the Bayesian approach embraces uncertainty -- but it is a mathematically sound way of reducing in-season guesswork and distinguishing between luck and skill.
Keep reading to learn how it all works, or dive into our weekly rankings to get started. Interested in how we've done historically4? Check out the archive for every prediction and actual result going back to 2009.
How it works.
STEP #1 - START W/ PRESEASON EXPECTATIONS.
Bayesian Fantasy Football starts with the basic assumption that the higher a player is drafted, the better they'll perform5. In mathematical terms, these are our priors, our initial best guesses at how players will do heading into the season.
For example, two years ago Arian Foster was, on average, the first player picked. What does that mean for week one? Historically, given what we know -- he's the number one fantasy pick, he's playing a home game against a mediocre Miami defense -- we guess Arian has a distribution (an assortment of possible point outcomes) that looks something like this (note: for more on this, see the fantasy probability primer here).
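A prior like this can be sketched as a simple probability distribution. The numbers below are purely illustrative (a mean of 17 points with a standard deviation of 7 for a #1 overall pick), not the site's actual model:

```python
import numpy as np

# Hypothetical week-one prior for a #1 overall pick: weekly fantasy
# points modeled as a normal distribution. Mean and SD are assumed
# for illustration only.
mu_prior, sigma_prior = 17.0, 7.0

rng = np.random.default_rng(0)
samples = rng.normal(mu_prior, sigma_prior, size=100_000)

# Questions like "how likely is a 20+ point week?" fall straight
# out of the distribution.
p_20_plus = float(np.mean(samples >= 20))
```

The point isn't the single projected number but the whole spread of outcomes the prior assigns.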
That's going into week one. He actually ended up with 20.5 points6. Not bad. But that raises another question -- what does this 20.5 points mean for week two?
STEP #2 - OBSERVE DATA / UPDATE BELIEFS.
Once we start observing actual games, we need to reconcile what we're seeing (e.g. Arian Foster scored 20.5 points) with what we thought initially (e.g. I drafted Arian Foster #1 overall). For most owners, this generally involves a lot of second guessing and stumbling in the dark. "Is Reggie Bush really that good or did he just get lucky? How much longer do I have to keep starting Chris Johnson?"
In short -- "I expected this guy to do this, and now he did that. What does it mean going forward?"
Bayesian Inference provides a mathematical framework to answer this very question. It's a natural way to update expectations over time. We take (1) our initial guess about player performance (preseason rankings), and (2) the actual week's results, and get (3) a new best guess distribution to use the following week.
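Under the common simplifying assumption that weekly scores are normally distributed with known observation noise, this update has a clean closed form (the normal-normal conjugate update). The numbers here are illustrative, not the site's actual parameters:

```python
# Sketch of one Bayesian update: combine a normal prior with one
# observed weekly score. Assumes normal scores with known noise;
# all specific numbers are hypothetical.

def update_normal(mu_prior, var_prior, observation, var_obs):
    """Return the posterior mean and variance after one observation."""
    precision = 1.0 / var_prior + 1.0 / var_obs
    var_post = 1.0 / precision
    mu_post = var_post * (mu_prior / var_prior + observation / var_obs)
    return mu_post, var_post

# Prior: mean 17, SD 7 (drafted #1 overall). Data: a 20.5-point week,
# with an assumed per-game noise SD of 8.
mu_post, var_post = update_normal(17.0, 7.0**2, 20.5, 8.0**2)
```

The posterior mean lands between the preseason guess (17) and the observed score (20.5), pulled toward the data in proportion to how uncertain the prior was -- which is exactly the "reconcile what we saw with what we expected" step in plain math.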
So we do this for week one and end up with a new set of distributions, which we use to decide who to start, trade, add or drop going into week two. Then we observe week two's results. What next?
STEP #3 - REPEAT.
Nothing has changed. We take our previous Bayesian-derived distribution, observe another set of weekly results, and combine the two into a new, even better estimate, and so on. That's the beauty of Bayesian projections: they're set up to get better as the season goes on.
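The whole season-long loop is just that same update applied repeatedly, with each week's posterior serving as the next week's prior. A minimal sketch, again with hypothetical numbers and the same normal-normal assumption:

```python
# Sequential updating over a season: last week's posterior becomes
# this week's prior. Scores and parameters are illustrative only.

def update(mu, var, obs, var_obs=64.0):
    """One conjugate normal update with assumed noise variance 64."""
    var_post = 1.0 / (1.0 / var + 1.0 / var_obs)
    mu_post = var_post * (mu / var + obs / var_obs)
    return mu_post, var_post

mu, var = 17.0, 49.0                     # preseason prior: mean 17, SD 7
weekly_scores = [20.5, 12.0, 25.0, 8.5]  # hypothetical weekly results

for score in weekly_scores:
    mu, var = update(mu, var, score)
```

Note that the variance shrinks with every observed week, which is the mathematical version of "the projections get better as the season goes on."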