Effect of thematic map misclassification on landscape multi-metric assessment.

Kleindl WJ, Powell SL, Hauer FR - Environ Monit Assess (2015)

Bottom Line: However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, resulting in four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimate floodplain condition of sites with mixed land use.


Affiliation: Flathead Lake Biological Station and Montana Institute on Ecosystems, University of Montana, Missoula, MT, 59812, USA, b.kleindl@naiadllc.com.

ABSTRACT
Advancements in remote sensing and computational tools have increased our awareness of large-scale environmental problems, thereby creating a need for monitoring, assessment, and management at these scales. Over the last decade, several watershed and regional multi-metric indices have been developed to assist decision-makers with planning actions at these scales. However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Here, we examined the sensitivity of a landscape-scale multi-metric index (MMI) to error from thematic land-cover misclassification and the implications of this uncertainty for resource management decisions. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, resulting in four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimate floodplain condition of sites with mixed land use.



Fig4: Naive data (stars) and distribution boxplots of simulated fragmentation (a), perturbation (b) scores averaged from the buffer and floodplain results, and index (c) scores with 10 % autocorrelation filters (black) and 20 % autocorrelation filters (gray)

Mentions: For the error simulation model, user probability matrices (Tables 3 and 4) and autocorrelation filters were used in the confusion frequency simulations to provide distributions of metric and index scores with 95 % confidence intervals (Fig. 4). The simulated and naive results closely match the LULC gradient across the study area (Fig. 2). A pairwise Wilcoxon signed rank test was applied to all simulated index sites, using both the 10 and 20 % autocorrelation filters, under the null hypothesis that there were no differences between the simulated sites. For sites N-2 and N-3, the test gave no evidence of a difference in mean index score (p value equal to 1.0) with the 10 % filter, but strong evidence that all sites differed (p value < 0.001) with the 20 % filter. Sites N-2 and N-3 both had a naive score of 1.0, and all other naive scores differed. All remaining site pairs showed strong evidence of a difference (p value < 0.001) under both filters.
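The confusion-frequency simulation described above can be sketched in miniature: re-draw each mapped pixel's class according to the user-probability row for its mapped class, recompute the metric, and repeat to build a score distribution around the error-naive value. The two-class map, probability values, and stand-in metric below are illustrative assumptions for this sketch, not values from the study:

```python
import random

# Hypothetical land-cover classes: 0 = natural, 1 = human-modified.
# User-probability matrix: row = mapped class; columns = probability that the
# true class is each class. Values are illustrative, not from Tables 3 and 4.
USER_PROB = {
    0: [0.85, 0.15],  # a pixel mapped "natural" is truly natural 85 % of the time
    1: [0.10, 0.90],  # a pixel mapped "modified" is truly modified 90 % of the time
}

def perturbation_metric(cover):
    """Share of pixels in human-modified cover (a simple stand-in metric)."""
    return sum(cover) / len(cover)

def simulate_scores(mapped, n_sims=2000, seed=42):
    """Monte Carlo confusion-frequency simulation: re-draw each pixel's class
    from the user-probability row for its mapped class, recompute the metric,
    and collect the resulting score distribution."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_sims):
        realization = [0 if rng.random() < USER_PROB[c][0] else 1 for c in mapped]
        scores.append(perturbation_metric(realization))
    return scores

# A mapped site: 80 natural pixels, 20 modified (error-naive score = 0.20).
mapped = [0] * 80 + [1] * 20
naive = perturbation_metric(mapped)
scores = sorted(simulate_scores(mapped))
lo, hi = scores[int(0.025 * len(scores))], scores[int(0.975 * len(scores))]
print(f"naive={naive:.2f}  simulated 95% CI=[{lo:.2f}, {hi:.2f}]")
```

Even this toy example reproduces the paper's qualitative point: the simulated distribution can be biased away from the error-naive score, here because the larger "natural" class contributes more misclassified pixels than the smaller "modified" class. (This sketch omits the spatial autocorrelation filters, which constrain where misclassified pixels may fall.)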

