
Predicting QB draft position: Developing a model, and testing it against the 2014 draft


They Say It’s Your Birthday

Age can be important when trying to forecast an NFL prospect’s draft position. On average, younger skill position players are selected before older players. But scouts and GMs have a lot more information available to them than a guy’s birthday when they’re making decisions about who to draft. Incorporating some of that data into our prediction models could improve our forecasts significantly.

Unfortunately, most of the good info on NFL prospects isn’t available to the public. But there are still some options. Over at cfbstats.com, there are several sets of data breaking down every play of every NCAA FBS and FCS game from 2005-2013. With these data, we can use multiple regression modeling to try and predict a prospect’s draft position. Let’s start with the QBs.

The Method

We’re going to compare a series of models1 predicting a QB’s draft position. In each model, we’ll add a new predictor or set of predictors, so that we can get an idea of how much these new predictors change the accuracy of our predictions. Then, once we’ve developed a model that seems reasonable, we’re going to test its predictions for the QBs from the 2014 draft class.

For each model, a table will present the following: the predictor; the parameter estimate for the effect; an F-value (an effect size that accounts for parsimony; bigger F’s mean stronger predictors); the unique percentage of error reduced by the inclusion of each predictor; and the p-value, which is the probability of finding an F-value or R-squared as big as we did purely because of chance (lower p’s are good).

A few notes about the data and models: first, all of the data was collected from cfbstats.com, and all continuous variables have been log-transformed, which ensures that they better follow a normal distribution;2 second, quadratic and interaction terms will be entered into the models when appropriate, in order to test whether the effects of a predictor differ across different levels of that predictor, or across different levels of other predictors;3 third, undrafted players were included and assigned pick 330 – Terrelle Pryor, the only QB in our sample from the supplemental draft, was assigned pick 98, to reflect the fact that Oakland had to give up a 3rd round pick in the following draft.
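If you want to follow along at home, here’s a rough sketch of that prep work in Python. It is not the exact code used for these models, and the file and column names are only illustrative.

```python
import numpy as np
import pandas as pd

# Hypothetical flat file of 2005-2013 QB prospects built from cfbstats.com
# and Combine/Pro Day listings; file and column names are illustrative only.
qbs = pd.read_csv("qb_prospects_2005_2013.csv")

# Undrafted players are assigned pick 330; Terrelle Pryor (supplemental draft)
# is assigned pick 98 to reflect the 3rd-round pick Oakland gave up.
qbs["pick"] = qbs["pick"].fillna(330)
qbs.loc[qbs["name"] == "Terrelle Pryor", "pick"] = 98

# Log-transform the dependent variable and the continuous predictors so they
# better approximate normal distributions.
for col in ["pick", "age", "height", "weight", "games"]:
    qbs["log_" + col] = np.log(qbs[col])
```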

Model 1: Age, Height, Weight

Our sample included 113 QBs from Division I colleges, who went on to be signed by an NFL team for at least one regular season game. The first model included age, height, and weight. All predictors with p<.05 are reported in the table below, along with the model’s R-squared value.

| Effect | Estimate | F-value | % error reduced | p-value |
|--------|----------|---------|-----------------|---------|
| Weight | -9.232 | 11.402 | 8.0% | 0.0011 |
| Age | 12.365 | 11.038 | 7.7% | 0.0013 |
| Age² | -72.915 | 5.572 | 3.9% | 0.0203 |

R-squared = .337, p < .0001

Not a bad start! An R-squared of .337 means the model reduced about 33.7% more error than the baseline average model. The age effects accounted for most of the error reduced by this model, confirming that younger QBs were drafted before older QBs (especially for QBs who are younger than average). In case you didn’t read my previous article predicting draft position by age, you can interpret the effect “Age,” also referred to as the linear effect of age, as meaning that as QBs get older, they get drafted later. The effect “Age²,” also referred to as the quadratic or curvilinear effect of age, means that the relationship between a QB’s age and draft position changes across the range of ages.

In this case, because the estimate for the linear effect of age is positive but the quadratic effect is negative, we can say that on average, younger QBs were drafted before older QBs, but that the effect was weaker among older players than younger players. For example, age effects like these predict that age wouldn’t be much of a factor in determining whether a 28-year-old QB gets drafted before a 29-year-old, but would matter quite a bit when predicting whether a 23-year-old gets drafted before a 25-year-old.

A QB’s weight was also important: heavier QBs were drafted earlier than lighter QBs. JaMarcus Russell may have something to do with that, but it probably also has something to do with a QB’s strength, i.e. heavier QBs are probably stronger as well. Benjamin Morris (@skepticalsports) just had a post on FiveThirtyEight where he also found that weight was more important than height in predicting draft position.

Finally, based on this first model, height was not a significant predictor of a QB’s draft position. There’s probably some kind of selection bias, since shorter guys aren’t generally the ones playing QB in the first place, but it also means that this model wouldn’t penalize guys like Russell Wilson for being shorter than average.
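Here’s a rough sketch of how a model like Model 1 could be fit with statsmodels. Again, this isn’t the exact code behind these tables (for one thing, the sketch centers log age before squaring, so the estimates won’t line up exactly with the table above).

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Center log age before squaring (an assumption), then fit Model 1 with OLS.
qbs["log_age_c"] = qbs["log_age"] - qbs["log_age"].mean()

m1 = smf.ols(
    "log_pick ~ log_age_c + I(log_age_c ** 2) + log_height + log_weight",
    data=qbs,
).fit()

print(m1.rsquared)                   # compare to the reported R-squared of .337
print(sm.stats.anova_lm(m1, typ=2))  # per-predictor F-tests and p-values
```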

Model 2: Adding Combine Stats

A correlation analysis revealed that the Combine/Pro Day measures, with the exception of Bench Press Reps (which the vast majority of QBs don’t complete), were significantly correlated with one another.4 This is problematic, because it can be very challenging to disentangle the influence of effects that are all related to one another. In cases like this, it is not appropriate to enter each of the predictors separately into a prediction model.

The solution is to do some dimension reduction, i.e. reducing a bunch of predictors into a single predictor. In this case, the drills most strongly correlated with one another were the 40-yard dash, vertical jump, and broad jump. We can simply combine those terms to create a composite measure that we’ll call Combine Stats.5 This allows the model to account for more of the variability in QB draft position captured by the Combine/Pro Day measures, without weighing it down with unnecessary predictors.
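Here’s one way the composite could be built, following the footnote’s recipe of subtracting the jump scores from the 40 time. Standardizing the drills first, and the column names, are illustrative choices rather than exactly what was done here.

```python
# Check how the Combine/Pro Day drills correlate with one another.
drills = ["forty", "vertical", "broad"]          # hypothetical column names
print(qbs[drills].corr())

# Put the drills on a common scale, then subtract the jump scores from the
# 40 time, so that lower composite values mean a better athlete.
z = (qbs[drills] - qbs[drills].mean()) / qbs[drills].std()
qbs["combine_stats"] = z["forty"] - z["vertical"] - z["broad"]
```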

| Effect | Estimate | F-value | % error reduced | p-value |
|--------|----------|---------|-----------------|---------|
| Age | 12.879 | 8.753 | 7.9% | 0.0044 |
| Age² | -116.589 | 10.085 | 9.1% | 0.0024 |
| Combine Stats | 2.17 | 4.624 | 4.2% | 0.0356 |
| Weight | -6.563 | 4.209 | 3.8% | 0.0446 |

R-squared = .457, p < .0001

The Combine Stats accounted for a significant amount of the error reduced from the previous model, and took a bite out of the weight effect in the process. This isn’t too surprising, since weight could be acting as a proxy for strength/athleticism. This model was still mostly driven by a QB’s age, however. After controlling for measures of size and athleticism, younger QBs were drafted before older QBs.

Model 3: Adding Automatic BCS School, Eligibility

This next model (which will be the last model without college performance predictors) includes some categorical predictors, coded 1 for yes and 0 for no: AutoBCS, if the QB went to a college in a conference with an automatic BCS bid; and Eligibility, if that QB had any years of NCAA eligibility remaining before he left school.
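Coding those indicators is simple enough; here’s a sketch with illustrative column names.

```python
# Conferences that held automatic BCS bids during the 2005-2013 sample window.
AUTO_BCS = {"ACC", "Big East", "Big Ten", "Big 12", "Pac-10", "Pac-12", "SEC"}

# 1 if the QB's school played in an automatic-bid conference, else 0.
qbs["auto_bcs"] = qbs["conference"].isin(AUTO_BCS).astype(int)

# 1 if the QB left school with any NCAA eligibility remaining, else 0.
qbs["eligibility"] = (qbs["eligibility_years_left"] > 0).astype(int)
```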

Presumably, QBs from schools with automatic BCS bids would be drafted before QBs from less prestigious programs. The same might be true of prospects who left college with eligibility remaining – if they were good enough to play in the pros, they didn’t have a reason to stay in school. Though it could go the other way instead – QBs with eligibility remaining might be problem cases who didn’t exactly choose to enter the draft when they did. Here’s the table:

| Effect | Estimate | F-value | % error reduced | p-value |
|--------|----------|---------|-----------------|---------|
| Age | 12.829 | 6.868 | 6.0% | 0.0112 |
| Age² | -115.837 | 9.449 | 8.3% | 0.0032 |
| Combine Stats | 2.185 | 4.791 | 4.2% | 0.0326 |
| AutoBCS | 0.345 | 4.02 | 3.5% | 0.0496 |

R-squared = .493, p < .0001

The R-squared value increased a bit, but not very much. Knowing whether a QB came from a school in a conference with an automatic BCS bid reduced a significant chunk of error, but not as much as the Combine Stats. The fact that eligibility was not a significant predictor is important, especially in the presence of strong age effects. Younger QBs were drafted before older QBs, irrespective of whether they had any remaining NCAA eligibility. This may not be true for other position groups – it makes a lot more sense for a RB to leave college with eligibility on the table, since RBs tend to accumulate wear and tear at a faster rate than QBs, and staying in school for too long would hurt their long-term value. But for QBs at least, staying that extra year may not really impact draft value that much.

The disappearance of weight is a little peculiar; most of its variability appears to have been captured by Combine Stats and AutoBCS, suggesting that the heavy QBs who are often drafted early probably came from BCS conferences and were Combine/Pro Day freaks.

Model 4: Adding college experience

Most of those effects in the previous models showed up in an earlier analysis predicting draft position by age for all prospects across position groups, so they probably weren’t too surprising. The previous article didn’t include any stats from prospects’ college careers, however, so the next few models could be more exciting. The first model will test if overall experience (measured in games played) has an effect on draft position. Here’s the table:

| Effect | Estimate | F-value | % error reduced | p-value |
|--------|----------|---------|-----------------|---------|
| Age | 12.02 | 7.136 | 5.5% | 0.0097 |
| Age² | -93.876 | 8.927 | 6.8% | 0.0041 |
| Games² | 0.62 | 8.372 | 6.4% | 0.0053 |
| Weight | -7.765 | 7.356 | 5.6% | 0.0087 |
| AutoBCS | 0.369 | 5.422 | 4.2% | 0.0233 |
| Combine Stats | 1.758 | 4.407 | 3.4% | 0.04 |

R-squared = .540, p < .0001

The R-squared increased, indicating that this model reduced 54% more error than the baseline. There wasn’t a linear effect of games, but there was a significant quadratic effect. This means that QBs who were on the fringes for number of games played (i.e. they played a lot more or a lot fewer games than the average) were drafted later than average QBs. There could be a number of reasons for this: QBs with too little experience may have been injured in college; QBs with too much experience may have more wear on their tires. In any case, the model prefers QBs with closer to average levels of game experience.

Once again, age effects held up strong. And after controlling for games played, weight returned as a significant predictor, in addition to Combine Stats and AutoBCS. To summarize the findings so far, this model prefers younger, heavier prospects from schools in conferences with automatic BCS bids, who also did well on Combine/Pro Day drills, and who had close to average number of game appearances.
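For anyone who wants to replicate the model-comparison logic, here’s a sketch of an incremental F-test between a model without the experience terms and one that adds them. The predictor sets only approximate Models 3 and 4, and the column names are the illustrative ones from the earlier snippets.

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

base = ("log_pick ~ log_age_c + I(log_age_c ** 2) + log_weight"
        " + combine_stats + auto_bcs + eligibility")

m3 = smf.ols(base, data=qbs).fit()
m4 = smf.ols(base + " + log_games + I(log_games ** 2)", data=qbs).fit()

# Incremental F-test: does adding the games-played terms significantly
# reduce error relative to the simpler model?
print(anova_lm(m3, m4))
print(m3.rsquared, m4.rsquared)
```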

Model 5: Adding Yards per passing attempt (YPA)

Playing in more games was a significant predictor of draft position – does it matter how the QB actually performed in those games? Before just throwing in a bunch of predictors, though, remember what happened with Combine Stats – the Combine/Pro Day predictors were all related to each other, so we had to create a new predictor, forged from the others. This is also true of individual QB offensive statistics. Pass attempts, completions, yards, TDs, INTs, and almost all statistics derived from them are strongly correlated, even more so than the Combine/Pro Day measures.6 Adding them all at once would be an exercise in futility.
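Here’s a quick sketch of that correlation check, along with the per-attempt rate that comes up next (column names are illustrative).

```python
import numpy as np

# The raw passing counting stats are very strongly related to one another.
counting = ["pass_attempts", "pass_yards", "pass_tds", "interceptions"]
print(qbs[counting].corr())

# Yards per attempt, which folds usage and production into one number.
qbs["ypa"] = qbs["pass_yards"] / qbs["pass_attempts"]
qbs["log_ypa"] = np.log(qbs["ypa"])

# YPA is far less entangled with the counting stats than they are with each other.
print(qbs[counting + ["ypa"]].corr()["ypa"])
```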

The one statistic that stuck out was yards per attempt (YPA). YPA accounts for usage and production, and isn’t correlated with all of the other passing measures. It is likely that YPA captures something unique about a QB’s ability, which makes it a good candidate for a useful predictor. Here’s the table:

| Effect | Estimate | F-value | % error reduced | p-value |
|--------|----------|---------|-----------------|---------|
| Age | 13.947 | 10.921 | 7.9% | 0.0016 |
| Age² | -102.773 | 11.25 | 8.1% | 0.0014 |
| Weight | -7.764 | 13.525 | 9.8% | 0.0005 |
| YPA | -2.083 | 5.233 | 3.8% | 0.0257 |

R-squared = .566, p < .0001

This model was a slight improvement over the previous model, accounting for 56.6% more error than baseline. YPA was a significant predictor of draft position – QBs who were more efficient converting their pass attempts into yards were selected earlier in the draft. This seems pretty obvious, but it’s important to see that the effect showed up even after controlling for a mess of other predictors. The effect of YPA also accounted for much of the effects of games played and Combine Stats, though AutoBCS was marginally significant (p=.0521).

Age and weight STILL drove this model, however. This doesn’t necessarily mean that a QB’s college stats aren’t important. Rather, it means that the way this model tried to use those stats didn’t improve its accuracy beyond our previous model iterations.

Model 6: Adding team stats

It’s possible that QBs on high-powered offenses are drafted before QBs from lesser offenses, irrespective of that QB’s own performance within the offense. In this model – the last model where we’ll add new predictors – team stats will be included. Specifically, total team offensive plays, yards, and TDs. Here’s the table.

| Effect | Estimate | F-value | % error reduced | p-value |
|--------|----------|---------|-----------------|---------|
| Age | 12.4 | 8.713 | 6.0% | 0.0046 |
| Age² | -89.662 | 8.33 | 5.8% | 0.0055 |
| Weight | -8.326 | 15.927 | 11.0% | 0.0002 |
| Team TDs | -2.791 | 4.604 | 3.2% | 0.0361 |

R-squared = .598, p < .0001

Controlling for team offensive stats brought the R-squared up to .598, reducing nearly 60% of the error from baseline. Total team TDs turned out to be a significant predictor: QBs from teams with more prolific offenses were drafted earlier, on average. This is probably due to a couple of factors. First, QBs touch the ball on every play, so better QBs probably run offenses that score more TDs, whether or not the QB creates them directly. Second, teams that score a lot of TDs probably run game plans and schemes that are more efficient or innovative than the competition, so GMs and coaches may be motivated to grab QBs from those systems in earlier rounds in hopes of bringing some of that efficiency/innovation to the NFL. Total team yards and AutoBCS were marginally significant effects (p’s = .0601 and .0703, respectively), again supporting the idea that QBs from good systems were drafted earlier.

The interesting finding is that weight and age still account for most of the error reduced by this model. Even after controlling for a slew of non-football measures, along with individual and team college stats, these models consistently predicted that younger and heavier QBs were taken earlier than older and skinnier QBs.

Model 7: Just let the computer do it

The model comparison method employed up to this point is good science: we had hypotheses about our predictors, and only included predictors for a good reason. That’s the way researchers in the social sciences are taught to build models, but it isn’t the only way. Most statistics software packages have automatic regression modeling tools, which test different combinations of predictors until finding a model with the best fit.

These models often include hard-to-interpret predictors built from complicated interactions or additional transformations; in contrast, the highest-order terms included in the previous models were quadratic, which have fairly straightforward interpretations, and the predictors never underwent any additional transformations after being converted to natural logarithms.

Let’s see if a model using computer-selected predictors makes any more sense than the models we’ve built so far. We’ll dump all of the predictors listed above into a program that will figure out which combination yields the highest R-squared with the fewest predictors. Here’s the table.

| Effect | Estimate | F-value | % error reduced | p-value |
|--------|----------|---------|-----------------|---------|
| Age | 19.197 | 17.289 | 10.8% | 0.0001 |
| Age² | -90.851 | 7.022 | 4.4% | 0.0105 |
| Weight | -7.941 | 15.523 | 9.7% | 0.0002 |
| Team Total TDs | -1.707 | 10.547 | 6.6% | 0.002 |
| Combine Stats * Games | -8.43 | 6.503 | 3.8% | 0.017 |
| Eligibility * Total Team Plays | 0.597 | 5.531 | 3.5% | 0.0223 |
| AutoBCS | 0.354 | 5.184 | 3.2% | 0.0267 |

R-squared = .657, p < .0001

The computer model included a lot more terms than the models above, which is probably why the R-squared increased, indicating that this model accounted for nearly 66% more error than baseline. Age and weight are still the most important factors, followed by Total Team TDs, just like in the last model. AutoBCS was a significant predictor in this model as well. This is some confirmation that the previous models were moving in the right direction.

However, this model now includes some interaction terms. The first, Combine Stats * Games, has a negative parameter estimate, meaning that experienced players with good Combine Stats were drafted earlier, on average. The second interaction, Eligibility * Total Team Plays, means that players who left prolific offenses with eligibility remaining were also drafted earlier. This actually goes against the conclusion we drew from model 3. Age is still a stronger predictor than remaining eligibility, but it does play a significant part.
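I won’t pretend the sketch below matches whatever routine the software actually runs, but a greedy forward selection on adjusted R-squared gives the flavor. Note that it only searches the main and quadratic terms, not the interactions the real tool evidently considered, and the column names are the illustrative ones from earlier.

```python
import statsmodels.formula.api as smf

# Candidate terms; the actual software also considered interaction terms,
# which this sketch leaves out for brevity.
candidates = [
    "log_age_c", "I(log_age_c ** 2)", "log_height", "log_weight",
    "combine_stats", "auto_bcs", "eligibility",
    "log_games", "I(log_games ** 2)", "log_ypa",
    "log_team_plays", "log_team_yards", "log_team_tds",
]

selected, best_adj_r2 = [], float("-inf")
improved = True
while improved:
    improved = False
    for term in (c for c in candidates if c not in selected):
        formula = "log_pick ~ " + " + ".join(selected + [term])
        fit = smf.ols(formula, data=qbs).fit()
        if fit.rsquared_adj > best_adj_r2:
            best_adj_r2, best_term = fit.rsquared_adj, term
            improved = True
    if improved:
        selected.append(best_term)   # keep the single best new term, then repeat

print(selected)
print(round(best_adj_r2, 3))
```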

Testing the models against the 2014 draft

Since we didn’t include QBs from this last draft when constructing the model, we can use them as an objective test of how the model works on new data. Here’s a figure, plotting the predictions of the best model we built and the computerized model, along with the actual draft positions for each QB selected this year.

[Figure: predicted vs. actual 2014 draft position for each drafted QB, from the best model we built and from the computer model.]

Neither of the models did great. Both got two of the first three picks correct, but in the wrong order: one liked Manziel to go first, the other liked Bridgewater. Both models predicted Derek Carr to go a lot later than he actually did; both predicted Zach Mettenberger would go much earlier than he did (injuries probably played a big part in that); and both models predicted David Fales would go undrafted, probably driven by the fact that he came from smaller schools without automatic BCS bids. The biggest disagreement between the two models came on Tajh Boyd: the computer thought he was drafted too soon, and the best model we built predicted he would go much later. Here’s a table with the same info.

| QB | Actual Draft Position | Best Model We Built | Computer Model |
|----|----------------------|---------------------|----------------|
| Blake Bortles | 3 | 27 | 85 |
| Johnny Manziel | 22 | 12 | 19 |
| Teddy Bridgewater | 32 | 20 | 17 |
| Derek Carr | 36 | 203 | 173 |
| Jimmy Garoppolo | 62 | 66 | 66 |
| Logan Thomas | 120 | 37 | 59 |
| Tom Savage | 135 | 80 | 127 |
| Aaron Murray | 163 | 193 | 178 |
| AJ McCarron | 164 | 161 | 84 |
| Zach Mettenberger | 178 | 43 | 47 |
| David Fales | 183 | 330 | 330 |
| Keith Wenning | 194 | 330 | 306 |
| Tajh Boyd | 213 | 159 | 274 |
| Garrett Gilbert | 214 | 149 | 230 |

[Ed. note: fixed to include Mettenberger.]
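For what it’s worth, here’s a sketch of how a hold-out comparison like this could be scored. It assumes the 2014 class has been prepped exactly like the training data, and best_model stands in for whichever fitted model you want to test; it is not the code that generated the table above.

```python
import numpy as np
import pandas as pd

# Hypothetical file of 2014 QBs, prepped identically to the training data.
qbs_2014 = pd.read_csv("qb_prospects_2014.csv")

# best_model: whichever fitted OLS result from the earlier sketches you prefer.
# Predictions come back on the log scale, so exponentiate to get a pick number.
qbs_2014["predicted_pick"] = np.exp(best_model.predict(qbs_2014))

# Two simple scores: average miss in picks, and how well the ordering matches.
mae = (qbs_2014["predicted_pick"] - qbs_2014["pick"]).abs().mean()
rank_corr = qbs_2014[["predicted_pick", "pick"]].corr(method="spearman").iloc[0, 1]
print(round(mae, 1), round(rank_corr, 2))
```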

What we learned

We slowly built up a model predicting QB draft slot from college performance and non-football measurables, and found that age and weight, along with the caliber of the team’s offense, were the best predictors of when QBs would be drafted. A computer generated model predicted essentially the same thing, though it paid more attention to non-football factors like eligibility remaining and Combine stats.

Neither the best model we built nor the computer model put much stock on individual QB performance statistics from their college careers: the only individual statistic that surfaced as a significant predictor of draft position was yards per attempt, and even YPA didn’t matter much after controlling for a team’s offensive production.

Moving forward

Obviously, these models weren’t perfect – you saw how they did for the 2014 draft class! The R-squared values of the models suggest that there’s still about 35% of the error from baseline left to be reduced. Fortunately, there are many ways to improve these models in the future.

To start, these models didn’t include every last iota of publicly available information. For instance, hand and arm measurements, direct measures of strength or throwing velocity, and high school statistics are data that could have been scraped from various websites. Including more data would probably go a long way in reducing error in future models.

Also, we can do a better job of efficiently incorporating all of the information we have available. The major reason that the models didn’t improve much after adding individual college statistics like passing attempts, yards, yards per game, TDs, TD:INT ratio, completion percentage, etc. is because all of those factors are highly related to one another: QBs with more attempts tend to have more yards, which means more TDs, which generally means more INTs as well; QBs with higher completion percentages get to play in more games, which means more attempts, more yards, and the cycle continues.

We got around that problem by using YPA, because it incorporated elements of several different facets of the passing game (yards and attempts, which necessarily includes completions and is highly correlated with TDs). But other, smarter people have devised more clever statistics to gauge a QB’s performance, like passer rating, QBR, value over average stats, etc. Using one of those statistics, or developing a new one, would probably go a long way toward improving future models.

Finally, these models failed to account for QB demand. Teams at the top of the draft don’t always need QBs, and the number of teams interested in drafting a QB varies from year to year as well. This probably won’t help the prediction models too much, since the year to year variability probably isn’t that great, but it is more than zero. Future models could adjust the probability of a QB being taken in a given spot based on the specific team in that draft position.

No model will ever be perfect, especially a model of something that we can’t predict with 100% accuracy (unless you’re the person actually making the pick, there will always be some prediction error). But this exercise set a reasonable bar for future models to try and surpass.

  1. You can check out the RotoViz glossary for a primer on regression modeling.
  2. This also makes the parameter estimates harder to interpret, since they’re using log-transformed predictors to model a log-transformed dependent variable.
  3. Quadratic and interaction effects will be discussed when they are significant predictors.
  4. Absolute values of the correlation coefficients ranged from .402 to .702, p’s < .0005, with the exception of the correlation between the vertical jump and cone drill, which had an absolute value of .370, p = .0021.
  5. Since higher jump scores are good, and lower 40 times are good, we’re subtracting the jump scores from the 40 times to create this composite score.
  6. Correlation coefficients > .95, p’s < .0001 for passing measures. In fact, this extends beyond passing stats, into total yards, proportion-of-offense statistics, yards per game, etc.
