A Gibbs Sampler for the (Extended) Marginal Rasch Model.

Maris G, Bechger T, San Martin E - Psychometrika (2015)

Bottom Line: In their seminal work on characterizing the manifest probabilities of latent trait models, Cressie and Holland give a theoretically important characterization of the marginal Rasch model. Such an approach would be highly efficient as its computational cost does not depend on the number of respondents, which makes it suitable for large-scale educational measurement. In this paper, such an approach will be developed and its operating characteristics illustrated with simulated data.


ABSTRACT
In their seminal work on characterizing the manifest probabilities of latent trait models, Cressie and Holland give a theoretically important characterization of the marginal Rasch model. Because their representation of the marginal Rasch model does not involve any latent trait, nor any specific distribution of a latent trait, it opens up the possibility of constructing a Markov chain Monte Carlo method for Bayesian inference for the marginal Rasch model that does not rely on data augmentation. Such an approach would be highly efficient as its computational cost does not depend on the number of respondents, which makes it suitable for large-scale educational measurement. In this paper, such an approach will be developed and its operating characteristics illustrated with simulated data.

© Copyright Policy - OpenAccess

Fig. 5: The solid line (in both panels) gives the log full conditional in a NEAT design. In the left panel, the dashed line gives the log of our proposal. In the right panel, the dashed line gives the upper hull and the dotted line the lower hull for the adaptive rejection sampling density.

Mentions: As a proposal distribution we consider the following distribution:
$$g(\delta_i) \propto \frac{\exp\left(-[x_{+i}+\alpha_i]\delta_i\right)}{\left(1+c\exp(-\delta_i)\right)^{m_{xy}+m_{xz}}}$$
the logarithm of which has linear tails with the same slope, which is recognized to be of the same form as the full conditional distribution for $\delta_i$ found with a complete design (i.e., Eq. 16 with a transformation of variables). We propose to choose the parameter $c$ in such a way that the derivative of the logarithm of the proposal distribution with respect to $\delta_i$ matches the value found for the target full conditional distribution, at its current value in the Markov chain. This proposal distribution closely matches the target full conditional distribution, as is illustrated in Figure 5 (left panel), which ensures that the resulting Metropolis-within-Gibbs algorithm will converge rapidly to its invariant distribution. For comparison, the right panel of Figure 5 gives the outer and inner hull for an adaptive rejection sampler based on three support points. Based on this comparison, we expect our Metropolis algorithm to outperform the adaptive rejection sampler, although either algorithm will work.
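The scheme described above can be sketched in code. The following is a minimal illustrative Python sketch, not code from the paper: writing $a = x_{+i}+\alpha_i$ and $m = m_{xy}+m_{xz}$ (assuming $0 < a < m$), the substitution $t = c\,e^{-\delta}$ turns $g$ into a Beta-prime$(a,\,m-a)$ density in $t$, so the proposal can be sampled exactly via a Beta draw. The slope-matching choice of $c$ solves $-a + m\,p = \text{slope}$ for $p = c\,e^{-\delta}/(1+c\,e^{-\delta})$. The function names and the numerical-derivative shortcut for the target's slope are our own; the target `log_target` stands in for the NEAT-design full conditional, which is not fully specified in this excerpt.

```python
import numpy as np

def log_proposal(d, a, m, c):
    """log g(d) up to a constant: -a*d - m*log(1 + c*exp(-d))."""
    return -a * d - m * np.log1p(c * np.exp(-d))

def sample_proposal(a, m, c, rng):
    """Exact draw from g via t = c*exp(-d) ~ BetaPrime(a, m - a):
    x ~ Beta(a, m - a), t = x/(1-x), d = log(c) - log(t)."""
    x = rng.beta(a, m - a)
    t = x / (1.0 - x)
    return np.log(c) - np.log(t)

def mh_step(d_cur, log_target, a, m, rng):
    """One Metropolis-within-Gibbs step: pick c so the slope of log g
    matches the slope of log_target at d_cur, propose, accept/reject."""
    eps = 1e-6  # numerical slope of the target at the current value
    slope = (log_target(d_cur + eps) - log_target(d_cur - eps)) / (2 * eps)
    # solve -a + m*p = slope for p = c*exp(-d)/(1 + c*exp(-d))
    p = np.clip((slope + a) / m, 1e-12, 1 - 1e-12)
    c = p / (1.0 - p) * np.exp(d_cur)
    d_new = sample_proposal(a, m, c, rng)
    log_ratio = (log_target(d_new) - log_target(d_cur)
                 + log_proposal(d_cur, a, m, c)
                 - log_proposal(d_new, a, m, c))
    return d_new if np.log(rng.uniform()) < log_ratio else d_cur
```

Note that when the target itself lies in the family of $g$, slope matching recovers the target's own $c$, the Metropolis ratio is one, and every proposal is an independent exact draw; this is why a close match in Figure 5 (left panel) implies fast convergence.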

