Overdispersion in learning to play darts

Suppose a player plays T=100 games of darts, each game consisting of ten throws. Assume the player gets better at hitting the dartboard over time, with the per-throw success probability following an S-curve across the 100 games but held constant within any single game.


In [183]:
n <- 10                                  # throws per game
T <- 100                                 # number of games
x <- rep(0, T)                           # hits in each game
linear <- seq(-5, 5, length.out = T)     # linear predictor, one value per game
ps <- 1 / (1 + exp(-linear))             # logistic S-curve of success probabilities
for (i in 1:T) {                         # 1:T, not 0:T; R vectors are 1-indexed
  x[i] <- rbinom(1, n, ps[i])            # hits in game i ~ Binomial(n, ps[i])
}
par(mfrow = c(1, 2))
plot(x, main = "no. of hits", xlab = "Game number")
plot(ps, main = "prob of hit", xlab = "Game number")


In [184]:
phat <- mean(x) / n   # pooled estimate of the per-throw success probability
phat


Out[184]:
0.48

If the number of hits in each of the 100 games were binomial(n, phat) with a single, constant success probability (here phat = 0.48), we'd expect a variance of


In [185]:
n*phat*(1-phat)


Out[185]:
2.496

But the observed variance across the 100 games is much higher:


In [186]:
var(x)


Out[186]:
15.7373737373737
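
The gap is the overdispersion: the success probability is not constant across games, so the hit counts are a mixture of binomials with different p's. By the law of total variance, Var(X) = E[Var(X | p)] + Var(E[X | p]) = n*E[p(1-p)] + n^2*Var(p), and the second term is exactly what the constant-p calculation above leaves out. As a rough check, using the ps vector from the simulation (a sketch, left unevaluated here), this decomposition should land close to the observed variance:


In [ ]:
# law of total variance: within-game binomial variance plus between-game spread of n*ps
n * mean(ps * (1 - ps)) + n^2 * var(ps)
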

In [187]:
hist(x)
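

A one-number summary is the dispersion ratio: the observed variance divided by the variance a single binomial(n, phat) would have. Values well above 1 indicate overdispersion. (A sketch using the quantities computed above, left unevaluated here.)


In [ ]:
# dispersion ratio: observed variance relative to the constant-p binomial variance
var(x) / (n * phat * (1 - phat))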


