Chapter 13. Bayesian Estimation in Hierarchical Models

  • Psygrammer (싸이그래머) / Cognitive Modeling: Part 2 - Mathematical Psychology [1]
  • 김무성

Contents

  • The Ideas of Hierarchical Bayesian Estimation
  • Example: Shrinkage and Multiple Comparisons of Baseball Batting Abilities
  • Example: Clinical Individual Differences in Attention Allocation
  • Model Comparison as a Case of Estimation in Hierarchical Models
  • Conclusion

The Ideas of Hierarchical Bayesian Estimation

  • Hierarchical Models Have Parameters with Hierarchical Meaning
  • Advantages of the Bayesian Approach
  • Some Mathematics and Mechanics of Bayesian Estimation

Bayesian estimation provides an entire distribution of credibility over the space of parameter values, not merely a single “best” value.

  • The distribution precisely captures our uncertainty about the parameter estimate.
  • The essence of Bayesian estimation is to formally describe how uncertainty changes when new data are taken into account.

Hierarchical Models Have Parameters with Hierarchical Meaning

Examples

  • a type of trick coin, manufactured by the Acme Toy Company
  • childhood obesity - weights of children, different schools, different school lunch programs, unknown socioeconomic statuses.

In general, a model is hierarchical if the probability of one parameter can be conceived to depend on the value of another parameter.

  • Expressed formally, suppose the observed data, denoted D, are described by a model with two parameters, denoted α and β.
  • likelihood: p(D|α,β)
  • prior: p(α,β)
  • The posterior is proportional to the product p(D|α,β)p(α,β).
    • The model is hierarchical when this product refactors as
      • p(D|α,β)p(α,β) = p(D|α)p(α|β)p(β),
      • that is, when the data depend directly only on α, and the distribution of α in turn depends on β.
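The factorization can be illustrated numerically with a grid sketch of the trick-coin example: data about one coin also update belief about the factory-level parameter. All grid values and constants below are illustrative, not from the chapter:

```python
import numpy as np
from scipy.stats import beta as beta_dist, binom

# Hypothetical two-level "trick coin" example on a discrete grid:
# beta is the factory's tendency; alpha is an individual coin's bias.
betas = np.array([0.25, 0.50, 0.75])        # candidate factory tendencies
p_beta = np.full(3, 1 / 3)                  # flat prior p(beta)
alphas = np.linspace(0.01, 0.99, 99)        # grid of coin biases

def p_alpha_given_beta(beta, kappa=20.0):
    # Beta-distribution prior on alpha, centered at beta with concentration kappa.
    dens = beta_dist.pdf(alphas, beta * kappa, (1 - beta) * kappa)
    return dens / dens.sum()                # normalize on the grid

# Data D: 7 heads in 10 flips; the likelihood depends only on alpha.
like = binom.pmf(7, 10, alphas)             # p(D | alpha)

# Joint posterior over (alpha, beta), proportional to p(D|alpha) p(alpha|beta) p(beta).
joint = np.array([like * p_alpha_given_beta(b) * pb
                  for b, pb in zip(betas, p_beta)])
post = joint / joint.sum()

# Marginal posterior over beta: one coin's flips inform the factory level too.
print(post.sum(axis=1))
```

With 7 heads in 10 flips, the marginal posterior shifts toward the higher factory tendency, even though β never appears in the likelihood directly.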

One of the primary applications of hierarchical models is describing data from individuals within groups.

  • individual-level
  • group-level
  • The individual-level and group-level parameters are estimated simultaneously.

Advantages of the Bayesian Approach

  • Bayesian methods provide tremendous flexibility in designing models that are appropriate for describing the data at hand, and Bayesian methods provide a complete representation of parameter uncertainty (i.e., the posterior distribution) that can be directly interpreted.
  • In a frequentist approach, although it may be possible to find a maximum-likelihood estimate (MLE) of parameter values in a hierarchical nonlinear model, the subsequent task of interpreting the uncertainty of the MLE can be very difficult.

Some Mathematics and Mechanics of Bayesian Estimation

  • In some simple situations, the mathematical form of the posterior distribution can be analytically derived.
  • A large class of algorithms for generating a representative random sample from a distribution is known collectively as Markov chain Monte Carlo (MCMC) methods.
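A minimal random-walk Metropolis sketch of the idea, drawing samples from the posterior of a coin bias after 7 heads in 10 flips under a flat prior (i.e., a Beta(8, 4) posterior); the tuning constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # Unnormalized log posterior of Beta(8, 4): 7 hits, 3 misses, flat prior.
    if not 0 < theta < 1:
        return -np.inf
    return 7 * np.log(theta) + 3 * np.log(1 - theta)

theta = 0.5                                    # arbitrary starting value
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.1)          # symmetric random-walk proposal
    # Accept with probability min(1, p(prop)/p(theta)); otherwise stay put.
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

samples = np.array(chain[1000:])               # drop burn-in steps
print(samples.mean())                          # near the Beta(8,4) mean, 8/12
```

The retained samples approximate the posterior: their mean, quantiles, and histogram can be read off directly, which is exactly how the chapter's posterior summaries are produced (there via JAGS rather than hand-rolled code).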

Example: Shrinkage and Multiple Comparisons of Baseball Batting Abilities

  • The Data
  • The Descriptive Model with Its Meaningful Parameters
  • Results: Interpreting the Posterior Distribution
  • Shrinkage and Multiple Comparisons

An important goal for enthusiasts of baseball is estimating each player’s ability to bat the ball.

There are nine players in the field at once, who specialize in different positions.

Therefore, based on the structure of the game, we know that players with different primary positions are likely to have different batting abilities.

The Data

  • The data consist of records from
    • 948 players
    • in the 2012 regular season of Major League Baseball
    • who had at least one at-bat.
    • For player i,
      • we have his number of opportunities at bat, ABi ,
      • his number of hits Hi, and
      • his primary position when in the field pp(i).
  • In the data, there were
    • 324 pitchers
      • with a median of 4.0 at-bats,
    • 103 catchers
      • with a median of 170.0 at-bats, and
    • 60 right fielders
      • with a median of 340.5 at-bats,
    • along with 461 players in six other positions.

The Descriptive Model with Its Meaningful Parameters

  • We want to estimate, for each player, his underlying probability θi of hitting the ball when at bat.
  • The primary data to inform our estimate of θi are
    • the player’s number of hits, Hi, and
    • his number of opportunities at bat, ABi.
  • But the estimate will also be informed by
    • our knowledge of the player’s primary position, pp(i), and
    • by the data from all the other players (i.e., their hits, at-bats, and positions).
  • For example,
    • if we know that player i is a pitcher,
    • and we know that pitchers tend to have θ values around 0.13 (because of all the other data),
    • then our estimate of θi should be anchored near 0.13 and
    • adjusted by the specific hits and at-bats of the individual player.
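With a conjugate beta prior, this anchoring has a closed form. A hypothetical numeric sketch (the concentration κ = 100 and the 4-for-20 record are made up for illustration; in the real model the position-level prior is itself estimated from the other players):

```python
# Conjugate beta-binomial update: the posterior mean lands between the
# position-level prior mean and the player's own raw batting average.
kappa = 100.0                                # illustrative concentration for pitchers
mu = 0.13                                    # position-level mean batting ability
a0, b0 = mu * kappa, (1 - mu) * kappa        # Beta(a0, b0) prior on theta_i

H, AB = 4, 20                                # hypothetical pitcher: 4 hits in 20 at-bats
a1, b1 = a0 + H, b0 + (AB - H)               # posterior is Beta(a1, b1)

prior_mean = a0 / (a0 + b0)                  # 0.13
raw_average = H / AB                         # 0.20
post_mean = a1 / (a1 + b1)                   # pulled part-way: shrinkage
print(prior_mean, raw_average, post_mean)
```

The posterior mean, 17/120 ≈ 0.142, sits between 0.13 and 0.20: the few at-bats move the estimate only modestly away from the pitcher-typical ability.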

We will construct a hierarchical model that

- rationally shares information
    - across players within positions, and
    - across positions within all major league players.

  • We denote the ith player’s underlying probability of getting a hit as θi.
    • Then the number of hits Hi out of ABi at-bats is a random draw from a binomial distribution that has success rate θi, as illustrated at the bottom of Figure 13.1.
    • The arrow pointing to Hi is labeled with a “∼” symbol to indicate that the number of hits is a random variable distributed as a binomial distribution.
  • To formally express our prior belief that
    • different primary positions emphasize
      • different skills and hence have
      • different batting abilities,
        • we assume that the player abilities θi come from
          • distributions specific to each position.
  • We model the distribution of θi’s for a position as a beta distribution,
    • which is a natural distribution for describing values that fall between zero and one, and is often used in this sort of application.
    • The mean of the beta distribution for primary position pp is denoted μpp, and
    • the narrowness of the distribution is denoted κpp.
    • The value of μpp represents the typical batting ability of players in primary position pp,
    • and the value of κpp represents how tightly clustered the abilities are across players in primary position pp.
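The generative structure just described can be sketched as a forward simulation (the μpp and κpp values below are illustrative, not estimates from the actual 2012 data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative position-level parameters: mean ability mu_pp and
# concentration kappa_pp for two primary positions.
positions = {"pitcher": (0.13, 50.0), "catcher": (0.24, 50.0)}

for pp, (mu_pp, kappa_pp) in positions.items():
    # Player abilities theta_i ~ Beta(mu_pp*kappa_pp, (1-mu_pp)*kappa_pp).
    theta = rng.beta(mu_pp * kappa_pp, (1 - mu_pp) * kappa_pp, size=5)
    AB = np.array([4, 50, 120, 200, 340])      # at-bats per player
    H = rng.binomial(AB, theta)                # hits: H_i ~ Binomial(AB_i, theta_i)
    print(pp, theta.round(3), H)
```

Bayesian estimation runs this generative story in reverse: given the observed Hi and ABi, it infers credible values of every θi, μpp, and κpp at once.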

There are 970 parameters in the model altogether: 948 individual θi, plus μpp and κpp for each of the nine primary positions, plus μμ and κμ across positions, plus sκ and rκ. The Bayesian analysis yields credible combinations of the parameters in this 970-dimensional joint parameter space.

Results: Interpreting the Posterior Distribution

  • check of robustness against changes in top-level prior constants
  • comparisons of positions
  • comparisons of individual players

Notes

MCMC

  • We used MCMC chains with total saved length of 15,000 after adaptation of 1,000 steps and burn-in of 1,000 steps, using 3 parallel chains called from the runjags package (Denwood, 2013), thinned by 30 merely to keep a modest file size for the saved chain.

posterior

  • The diagnostics (see Box 1) assured us that the chains were adequate to provide an accurate and high-resolution representation of the posterior distribution.

ESS

  • The effective sample size (ESS) for all the reported parameters and differences exceeded 6,000, with nearly all exceeding 10,000.
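ESS discounts the raw chain length for autocorrelation, roughly ESS = N / (1 + 2 Σk ρk). A minimal sketch of that estimate (the 0.05 truncation rule below is a simplification of standard practice):

```python
import numpy as np

def effective_sample_size(chain, max_lag=200):
    # ESS = N / (1 + 2 * sum of autocorrelations rho_k over positive lags).
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    var = np.dot(x, x) / n
    acf_sum = 0.0
    for k in range(1, max_lag):
        rho = np.dot(x[:-k], x[k:]) / (n * var)
        if rho < 0.05:                  # truncate once autocorrelation has died out
            break
        acf_sum += rho
    return n / (1 + 2 * acf_sum)

rng = np.random.default_rng(1)
iid = rng.normal(size=10000)            # independent draws: ESS near 10000

ar = np.empty(10000)                    # strongly autocorrelated AR(1) chain
ar[0] = 0.0
for t in range(1, 10000):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()

print(effective_sample_size(iid), effective_sample_size(ar))
```

The autocorrelated chain's ESS is a small fraction of its nominal length, which is why the chapter reports ESS rather than raw chain length as the measure of posterior resolution.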

check of robustness against changes in top-level prior constants

  • Because we wanted the top-level prior distribution to be noncommittal and have minimal influence on the posterior, we reran the analysis with different constants in the top-level gamma distributions to check whether they had any notable influence on the resulting posterior distribution.
    • Whether all gamma distributions used shape and rate constants of 0.1 and 0.1, or 0.001 and 0.001, the results were essentially identical. The results reported here are for gamma constants of 0.001 and 0.001.
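The diffuseness of those constants can be checked directly; scipy parameterizes the gamma by shape and scale, so the rate constants convert as scale = 1/rate (a small sketch, not the chapter's actual code):

```python
from scipy.stats import gamma

# Both candidate top-level priors, Gamma(shape, rate); scipy uses scale = 1/rate.
for shape, rate in [(0.1, 0.1), (0.001, 0.001)]:
    d = gamma(a=shape, scale=1.0 / rate)
    q = d.ppf([0.025, 0.5, 0.975])
    # Each prior has mean shape/rate = 1 but spreads its mass over many
    # orders of magnitude, so it is noncommittal about the parameter's scale.
    print(shape, rate, d.mean(), q)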

comparisons of positions

comparisons of individual players

Shrinkage and Multiple Comparisons

Example: Clinical Individual Differences in Attention Allocation

  • The Data
  • The Descriptive Model with Its Meaningful Parameters
  • Results: Interpreting the Posterior Distribution

The Data

The Descriptive Model with Its Meaningful Parameters

  • hierarchical structure

hierarchical structure

Results: Interpreting the Posterior Distribution

  • check of robustness against changes in top-level prior constants
  • comparison across groups of attention to body size
  • comparisons across individual women’s attention to body size

check of robustness against changes in top-level prior constants

comparison across groups of attention to body size

comparisons across individual women’s attention to body size

Model Comparison as a Case of Estimation in Hierarchical Models

Conclusion

References