Diffusion-based spatial priors for imaging.
Affiliation: The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London, WC1N 3BG, UK. l.harrison@fil.ion.ucl.ac.uk
ABSTRACT
We describe a Bayesian scheme to analyze images, which uses spatial priors encoded by a diffusion kernel, based on a weighted graph Laplacian. This provides a general framework to formulate a spatial model, whose parameters can be optimized. The application we have in mind is a spatiotemporal model for imaging data. We illustrate the method on a random effects analysis of fMRI contrast images from multiple subjects; this simplifies exposition of the model and enables a clear description of its salient features. Typically, imaging data are smoothed using a fixed Gaussian kernel as a pre-processing step before applying a mass-univariate statistical model (e.g., a general linear model) to provide images of parameter estimates. An alternative is to include smoothness in a multivariate statistical model (Penny, W.D., Trujillo-Barreto, N.J., Friston, K.J., 2005. Bayesian fMRI time series analysis with spatial priors. Neuroimage 24, 350-362). The advantage of the latter is that each parameter field is smoothed automatically, according to a measure of uncertainty, given the data. In this work, we investigate the use of diffusion kernels to encode spatial correlations among parameter estimates. Nonlinear diffusion has a long history in image processing; in particular, flows that depend on local image geometry (Romeny, B.M.T., 1994. Geometry-driven Diffusion in Computer Vision. Kluwer Academic Publishers) can be used as adaptive filters. This can furnish a non-stationary smoothing process that preserves features, which would otherwise be lost with a fixed Gaussian kernel. We describe a Bayesian framework that incorporates non-stationary, adaptive smoothing into a generative model to extract spatial features in parameter estimates. Critically, this means adaptive smoothing becomes an integral part of estimation and inference. We illustrate the method using synthetic and real fMRI data.
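The core idea of smoothing with a diffusion kernel can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it smooths a vectorised image y with the kernel K = exp(-tL), where L is a symmetric weighted graph Laplacian, via an eigendecomposition. In the paper the diffusion time and edge weights are optimized within the generative model rather than fixed by hand; here t is simply an assumed free parameter.

```python
import numpy as np

def diffusion_smooth(y, L, t=1.0):
    """Apply the diffusion kernel K = exp(-t L) to a vectorised image y.

    L must be a symmetric weighted graph Laplacian. We use the symmetric
    eigendecomposition L = U diag(lam) U^T, so that
    K = U diag(exp(-t * lam)) U^T.
    """
    lam, U = np.linalg.eigh(L)
    K = U @ np.diag(np.exp(-t * lam)) @ U.T
    return K @ y
```

Because the rows of a graph Laplacian sum to zero, the constant vector is in the null space of L, so diffusion preserves the image mean while progressively attenuating components along eigenvectors with large eigenvalues.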
We consider the parameter estimates as a function on a graph, Γ, with vertices, edges and weights, Γ = (V, E, W). The vertices are indexed 1 to N, and pairs of vertices k, n ∈ V are connected by edges, Ekn. If two vertices are connected, i.e., are neighbors, we write k ∼ n. Consider a regular 2D mesh with spatial coordinates u1 and u2. Fig. 1a shows a surface plot of OLS parameter estimates of synthetic data described later (see Fig. 4) to illustrate the construction of a weighted graph Laplacian. To simplify the discussion we concentrate on a small region over a 3 × 3 grid, or stencil (see inset). Pairs of numbers u1, u2 indicate a vertex or pixel location, where each number corresponds to a spatial dimension. The function has a value at each pixel (voxel if in 3D anatomical space) given by its parameter estimate μ(u), so that three numbers locate a pixel at which a parameter has a specific value, u1, u2, μ(u1, u2). These are coordinates of the parameter estimate at a pixel in Euclidean space, ℝ³, which decomposes into ‘anatomical’ and ‘feature’ space coordinates (lower right of Fig. 1a). In this case these have dimensions 2 and 1 respectively. The 2D image is considered as a 2D sub-manifold of this 3D embedding space (Sochen et al., 1998), which provides a general framework that is easily extended to 3D anatomical space and feature dimensions greater than one. We represent the kth pixel by vk. Distance between two pixels is taken as the shortest distance along the 2D sub-manifold of parameter estimates embedded in ℝ³. This is a geodesic distance between points on the sub-manifold, ds(vk, vn). This is shown schematically in Fig. 1c between neighboring pixels. The shortest distance is easy to compute for direct neighbors (example shown in red); however, if the stencil were larger, fast marching algorithms (Sethian, 1999) may be used to compute the shortest path between two points on the sub-manifold.
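For direct neighbors, the geodesic distance described above reduces to the Euclidean distance in the embedding space, with the feature (intensity) coordinate scaled by the parameter a. A minimal sketch, assuming unit grid spacing h and a single feature dimension (the function name and arguments are illustrative, not from the paper):

```python
import numpy as np

def geodesic_step(mu, k, n, a=1.0, h=1.0):
    """Approximate geodesic distance ds(v_k, v_n) between two
    NEIGHBORING pixels k and n on the sub-manifold of parameter
    estimates mu embedded in R^3.

    mu : 2D array of parameter estimates (one value per pixel)
    k, n : (row, col) index tuples of two neighboring pixels
    a : scaling of the feature coordinate; a = 0 recovers plain
        Euclidean distance on the 2D anatomical domain
    h : grid spacing
    """
    du = h * (np.asarray(n, dtype=float) - np.asarray(k, dtype=float))
    dmu = mu[n] - mu[k]  # step along the feature coordinate
    return np.sqrt(du @ du + a * dmu**2)
```

For non-adjacent vertices on a larger stencil this local step would be chained along the shortest path (e.g., by a fast marching scheme), as noted in the text.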
Note that the displacement along the feature coordinates is scaled by the parameter a, such that if a = 0, then ds reduces to distance on the 2D domain and is no longer a function of image intensity (see subsection on special cases). The construction of a weighted graph Laplacian starts by specifying weights of edges between vertices, wkn. These are a function of the geodesic distance, ds(vk, vn), and are important for specifying non-stationary diffusion. This is shown in Fig. 1b for the 3 × 3 stencil in Fig. 1a.