In April, a writer for Bleacher Report (BR) attempted to critique a new pick value chart for the NFL draft that I created and that HSAC published. There are many legitimate criticisms of my work that are worth exploring, and that many of our commenters have brought up: the need to include replacement level, the strength of Career Approximate Value (CAV) as a statistic, or the skew of the underlying data. These are not the criticisms levied in BR's article. This post counters BR's various critiques in hopes of ending what some have called "HSAC's first beef".
BR's article has four main components. First, they question the logic of creating a draft value chart at all; they then levy three claims: that pick x is worth more than the career performance of the average player picked at pick x; that past failures at pick x have no bearing on the value of pick x; and that while predicting a player's reaction to his first contract is hard, that difficulty does not diminish the value of rookies.
BR appears to question the validity of having a system to evaluate draft picks because “every team will value picks differently”. They argue that because teams have different rankings of players on their board, certain picks are worth more to some teams than others. This claim may be true in very specific cases: the Colts probably valued the first overall pick more than the Packers did this year because the Colts needed a quarterback (Andrew Luck) and the Packers already had one (Aaron Rodgers). However, BR’s claim implies that teams deviate significantly from Jimmy Johnson’s chart, which lists the accepted market values of draft picks. As Cade Massey and Richard Thaler have shown, teams do not stray from the chart. So while the true value of NFL draft picks (what I tried to find) has been “elusive”, the market value of those picks has not. The problem now is that the market values are systematically wrong.
In their next criticism, BR draws the wrong conclusion from an important phenomenon. They write that a team with the 10th overall pick expects the player they pick to perform better than the historically average 10th overall pick. This expectation is called overconfidence, and it is rampant in the NFL draft. In a different article, I found that players selected as the result of a trade up (players that teams are especially confident in) performed worse than players who were selected normally (for details, read the article). Thus teams tend to be overconfident in their ability to draft the best players, and overvalue the right to choose early in the draft. Teams should not rely on "the expected value of the player acquired in the mind of the team" because that mental estimate is probably inflated. Teams can avoid this bias by using the average value of players historically picked in that slot.
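The idea can be sketched in a few lines. This is a minimal illustration with invented CAV numbers, not HSAC's actual data: the debiased estimate of a slot's worth is simply the historical average of players taken there, regardless of how highly a team rates this year's prospect.

```python
# Sketch: value a draft slot by the historical average of players picked there,
# rather than by the team's (likely overconfident) projection of its own pick.
# All CAV numbers below are invented for illustration only.

def slot_value(historical_cav):
    """Unbiased estimate of a slot: mean career value of past players picked there."""
    return sum(historical_cav) / len(historical_cav)

# Hypothetical Career Approximate Values of past 10th overall picks
past_tenth_picks = [62, 18, 45, 7, 80, 33, 51, 24]

team_projection = 90  # "our guy will beat the average" -- the overconfidence BR describes
unbiased_estimate = slot_value(past_tenth_picks)

print(f"Historical average at pick 10: {unbiased_estimate:.1f}")  # 40.0
print(f"Team's own projection: {team_projection}")                # 90
```

The gap between the two numbers is exactly the bias at issue: the team's projection is an opinion, while the historical average is a base rate.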
While BR argues otherwise, past failures at a pick slot are important in large part because they help combat the overconfidence bias. I will borrow BR's analogy: downtown real estate. If a string of bars rents a prime spot and every one fails, the underlying potential of the real estate is not impacted. However, the high rent surely contributed to those bars failing; at half the rent, those bars would probably still be in business. If I am an entrepreneur thinking of starting a bar on this piece of real estate, I will want to know the base rate of success for bars here. If the base rate of success is low, then the risk is higher, and vice versa.
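The analogy's arithmetic can be made explicit. With invented numbers (the base rate, revenue, and rents below are purely hypothetical), expected profit is the base rate of success times the payoff, minus the rent, so halving the rent can flip a losing proposition into a winning one:

```python
# Sketch of the real-estate analogy with invented numbers: whether a bar is
# worth opening depends on both the base rate of success and the rent it pays.

def expected_profit(base_rate, revenue_if_success, rent):
    """Expected annual profit: success probability times revenue, minus rent."""
    return base_rate * revenue_if_success - rent

base_rate = 0.3    # hypothetical: 30% of bars in this spot succeed
revenue = 500_000  # hypothetical annual revenue of a successful bar

print(expected_profit(base_rate, revenue, rent=200_000))  # full rent: -50000.0
print(expected_profit(base_rate, revenue, rent=100_000))  # half rent:  50000.0
```

The football parallel: the "rent" is what a team pays (in trade value or contract) for a high pick, and the base rate is how players picked there have historically fared.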
To put this analogy in football terms, while the underlying potential of high picks remains high even if teams fail to fulfill that potential, teams should remember prior failures so that they are not overconfident in themselves.
Finally, BR again draws a strange conclusion from a real experience: it is difficult to predict a player’s reaction to his first big contract. As BR writes, “predicting how young, talented football players will perform in the pros is an inexact science.” For this reason, there is more uncertainty around rookies, which should lessen the value of the earliest picks (as I found).
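One generic way to see why uncertainty lessens value (this is an illustrative sketch, not the method behind the HSAC chart) is a simple risk discount: a risk-averse team subtracts a penalty proportional to the spread of historical outcomes from the mean. The numbers and the risk-aversion parameter below are hypothetical.

```python
import statistics

# Illustrative only: a risk-averse valuation discounts a pick by the
# variability of historical outcomes at that slot, so boom-or-bust early
# picks are worth less than their raw average suggests.

def risk_adjusted_value(historical_cav, risk_aversion=0.5):
    """Mean career value minus a penalty proportional to its standard deviation."""
    mean = statistics.mean(historical_cav)
    spread = statistics.stdev(historical_cav)
    return mean - risk_aversion * spread

# Hypothetical outcomes: the early pick averages higher but varies wildly
pick_1 = [95, 5, 110, 20, 70]   # boom-or-bust
pick_20 = [40, 35, 50, 30, 45]  # steadier

print(risk_adjusted_value(pick_1))   # well below the raw mean of 60
print(risk_adjusted_value(pick_20))  # close to the raw mean of 40
```

Under this toy model, the gap between the early and late pick shrinks once uncertainty is priced in, which is the direction of the effect described above.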
Instead of drawing this conclusion, however, BR muddles through two separate, unrelated claims: 1) the uncertainty of a rookie's first contract doesn't lessen the value of early draft picks as much as I found, and 2) because teams generally pick better players earlier, the entire process is valid. To support claim 1, BR needs to provide evidence, which they fail to do; they merely restate claim 2, which is unrelated. Beyond being tangential, claim 2 seems naive. Yes, teams generally pick better players earlier. However, you can almost make a Pro Bowl team out of undrafted players.* Claim 2 ignores that the scouting process remains far from perfect.
One phrase from BR’s critique sums up the underlying issue in the article: “[the HSAC chart] just feels wrong on its face”. Just because something is counterintuitive does not make it false. The introduction of sabermetrics into Major League Baseball just felt wrong on its face to many people inside the MLB because of their traditional beliefs in how to run a baseball team. That’s the beauty of empiricism: even if a conclusion feels wrong, through careful analysis, one can demonstrate that it’s right.
*Roster of undrafted players:
QB: Tony Romo, RB: Arian Foster, Fred Jackson, FB: Vonta Leach, WR: Wes Welker, Brandon Lloyd, Miles Austin, Victor Cruz, TE: Antonio Gates, LT: Jason Peters, G: Brian Waters, Brandon Moore, C: Jeff Saturday, RT: Tyson Clabo
DT: Pat Williams, DE: Cullen Jenkins, Mike DeVito, LB: James Harrison, Antonio Pierce, London Fletcher, Cameron Wake, CB: Brent Grimes, Tramon Williams, S: Jim Leonhard, Ryan Clark
K: Adam Vinatieri, P: Britton Colquitt, KR: Josh Cribbs