Nonnegative matrix factorization with Gaussian process priors.

Schmidt MN, Laurberg H - Comput Intell Neurosci (2008)



Affiliation: Department of Informatics and Mathematical Modelling, Technical University of Denmark, Richard Petersens Plads, DTU-Building 321, 2800 Lyngby, Denmark. mns@imm.dtu.dk

ABSTRACT
We present a general method for including prior knowledge in a nonnegative matrix factorization (NMF), based on Gaussian process priors. We assume that the nonnegative factors in the NMF are linked by a strictly increasing function to an underlying Gaussian process specified by its covariance function. This allows us to find NMF decompositions that agree with our prior knowledge of the distribution of the factors, such as sparseness, smoothness, and symmetries. The method is demonstrated with an example from chemical shift brain imaging.
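The construction described in the abstract can be illustrated in a few lines of code. The sketch below is an illustration under stated assumptions, not the authors' implementation: it draws a sample from a zero-mean Gaussian process with a squared-exponential covariance (chosen here purely for illustration) and maps it through a strictly increasing link function, here exp(), to obtain a smooth, nonnegative factor. Under this kind of prior, choosing the covariance function shapes the NMF factors, e.g., toward smoothness.

```python
import numpy as np

def se_covariance(t, length_scale=0.1, variance=1.0, jitter=1e-8):
    # Squared-exponential covariance: nearby points are strongly correlated,
    # which encodes a preference for smooth factors. (Illustrative choice;
    # any valid covariance function could be used.)
    d = t[:, None] - t[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2) + jitter * np.eye(len(t))

def link(u):
    # A strictly increasing map from the real line to the nonnegative reals.
    # exp() is one convenient choice; the method only requires monotonicity.
    return np.exp(u)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
L = np.linalg.cholesky(se_covariance(t))

u = L @ rng.standard_normal(t.size)  # u ~ N(0, K): a smooth real-valued GP draw
f = link(u)                          # f >= 0: a smooth, nonnegative factor
```

Stacking such draws column-wise gives nonnegative factor matrices whose structure (smoothness, correlations, symmetries) is controlled entirely by the covariance function of the underlying Gaussian process.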


Figure 3: Toy example: root mean squared error (RMSE) with respect to the noisy data, the underlying noise-free data, and the true underlying nonnegative factors. The CNMF solution fits the noisy data slightly better, but the GPP-NMF solution fits the underlying data much better.

Mentions: Measures of root mean squared error (RMSE) of the four decompositions are given in Figure 3. All four methods fit the noisy data almost equally well. (Note that, due to the additive noise with variance 25, a perfect fit to the underlying factors would result in an RMSE of 5 with respect to the noisy data.) The LS-NMF fits the data worst, due to the truncation of negative data points, and the CNMF fits the data best, due to overfitting. With respect to the noise-free data and the underlying factors, the RMSE is worst for the LS-NMF and best for the GPP-NMF with the correct prior. The GPP-NMF with the incorrect prior is better than both LS-NMF and CNMF in this case. This shows that, in this situation, it is better to use a prior that is not perfectly correct than to use no prior at all, as in the LS-NMF and CNMF methods (which corresponds to a flat prior over the nonnegative reals with no correlations).
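The parenthetical noise-floor claim follows from the noise variance alone: if the observed data are the underlying data plus zero-mean noise of variance 25, then even an exact recovery of the underlying data leaves a mean squared residual of 25 against the observations, i.e., an RMSE of sqrt(25) = 5. A quick numerical check, with made-up stand-in data:

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.random((100, 100)) * 10                # stand-in for the noise-free data
noisy = clean + rng.normal(0.0, 5.0, clean.shape)  # additive noise with variance 25

# RMSE of a "perfect" reconstruction (the clean data itself) against the noisy data:
print(np.sqrt(np.mean((noisy - clean) ** 2)))      # approx. 5 -- the noise floor
```

Fitting the noisy data with an RMSE below this floor, as CNMF does, therefore indicates that the model is absorbing noise rather than recovering the underlying factors.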

