An automated sleep-state classification algorithm for quantifying sleep timing and sleep-dependent dynamics of electroencephalographic and cerebral metabolic parameters.

Rempe MJ, Clegern WC, Wisor JP - Nat Sci Sleep (2015)

Bottom Line: Automated state scoring can minimize the burden associated with state scoring and thereby facilitate the use of shorter epoch durations. Error associated with mathematical modeling of the temporal dynamics of both EEG slow-wave activity and cerebral lactate either did not differ significantly between automated and visual state scoring, or was reduced with automated scoring relative to manual classification. Machine scoring is as effective as human scoring in detecting experimental effects in rodent sleep studies.


Affiliation: Mathematics and Computer Science, Whitworth University, Spokane, WA, USA; College of Medical Sciences and Sleep and Performance Research Center, Washington State University, Spokane, WA, USA.

ABSTRACT

Introduction: Rodent sleep research uses electroencephalography (EEG) and electromyography (EMG) to determine the sleep state of an animal at any given time. EEG and EMG signals, typically sampled at >100 Hz, are segmented arbitrarily into epochs of equal duration (usually 2-10 seconds), and each epoch is scored as wake, slow-wave sleep (SWS), or rapid-eye-movement sleep (REMS) on the basis of visual inspection. Automated state scoring can minimize the burden associated with state scoring and thereby facilitate the use of shorter epoch durations.
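As a point of reference, the following Python sketch shows one way fixed-duration epochs could be cut from a continuously sampled signal. The 256 Hz sampling rate, the 10-second epoch length, and the synthetic data are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def segment_into_epochs(signal, fs=256, epoch_s=10):
    """Split a 1-D signal sampled at fs Hz into non-overlapping epochs.

    fs and epoch_s are illustrative; the paper describes epochs of
    2-10 seconds and sampling rates above 100 Hz.
    """
    samples_per_epoch = int(fs * epoch_s)
    n_epochs = len(signal) // samples_per_epoch           # drop any trailing partial epoch
    trimmed = signal[:n_epochs * samples_per_epoch]
    return trimmed.reshape(n_epochs, samples_per_epoch)   # shape: (n_epochs, samples_per_epoch)

# Example: one hour of synthetic EEG at 256 Hz split into 10-second epochs
eeg = np.random.randn(256 * 3600)
epochs = segment_into_epochs(eeg, fs=256, epoch_s=10)
print(epochs.shape)  # (360, 2560)
```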

Methods: We developed a semiautomated state-scoring procedure that uses a combination of principal component analysis and naïve Bayes classification, with the EEG and EMG as inputs. We validated this algorithm against human-scored sleep-state scoring of data from C57BL/6J and BALB/CJ mice. We then applied a general homeostatic model to characterize the state-dependent dynamics of sleep slow-wave activity and cerebral glycolytic flux, measured as lactate concentration.
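The abstract does not specify the feature set or software used, so the Python sketch below is only a rough illustration of the two-stage structure described (principal component analysis feeding a naïve Bayes classifier). It uses scikit-learn on hypothetical per-epoch EEG/EMG features with randomly generated labels; it is not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: one row of EEG/EMG-derived features per epoch
# (e.g., band powers plus EMG amplitude) with human-assigned labels
# 0 = wake, 1 = SWS, 2 = REMS. Shapes and feature choices are assumptions.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 12))
y_train = rng.integers(0, 3, size=5000)

# PCA for dimensionality reduction followed by a naive Bayes classifier,
# mirroring the two-stage structure named in the Methods.
clf = make_pipeline(StandardScaler(), PCA(n_components=4), GaussianNB())
clf.fit(X_train, y_train)

# Score new epochs
X_new = rng.normal(size=(10, 12))
predicted_states = clf.predict(X_new)
print(predicted_states)
```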

Results: More than 89% of epochs scored as wake or SWS by the human were scored as the same state by the machine, whether scoring was done in 2-second or 10-second epochs. The majority of epochs scored as REMS by the human were also scored as REMS by the machine. However, of epochs scored as REMS by the human, more than 10% were scored as SWS by the machine and 18% (10-second epochs) to 28% (2-second epochs) were scored as wake. These biases were not strain-specific, as strain differences in sleep-state timing relative to the light/dark cycle, EEG power spectral profiles, and the homeostatic dynamics of both slow waves and lactate were detected equally effectively with the automated and manual scoring methods. Error associated with mathematical modeling of the temporal dynamics of both EEG slow-wave activity and cerebral lactate either did not differ significantly between automated and visual state scoring, or was reduced with automated scoring relative to manual classification.

Conclusions: Machine scoring is as effective as human scoring in detecting experimental effects in rodent sleep studies. Automated scoring is an efficient alternative to visual inspection in studies of strain differences in sleep and the temporal dynamics of sleep-related physiological parameters.


f3-nss-7-085 (Figure 3): Agreement statistics between human scoring and machine scoring. Notes: For the data scored in 10-second epochs, the agreement statistics compare the human and machine scoring for the entire 40–48-hour recording. For the data in 2-second epochs, the comparison is between the 8,640 epochs that were scored by hand and the same 8,640 epochs scored by the automated scoring algorithm. Lines inside boxes represent median values. The lower end of each box indicates the first quartile of the data (Q1), and the upper end represents the third quartile (Q3). To draw the whiskers, we calculated the interquartile range (IQR), which is the distance between Q1 and Q3. The lower whisker indicates the lowest data point within 1.5 IQR of Q1; the upper whisker indicates the largest data point within 1.5 IQR of Q3. Outliers more than 1.5 IQR but less than 3 IQR above Q3 or below Q1 are represented with open circles. Abbreviations: B6, C57BL/6J mice; BA, BALB/CJ mice; REMS, rapid-eye-movement sleep; SWS, slow-wave sleep.
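For readers unfamiliar with this box-and-whisker convention, the short Python sketch below computes the quartiles, whisker endpoints, and outliers following the 1.5 IQR rule stated in the caption. The agreement values in the example are made up; they are not data from the figure.

```python
import numpy as np

def tukey_box_stats(values):
    """Quartiles, whisker endpoints, and outliers using the 1.5*IQR rule
    described in the figure caption. Input values are illustrative only."""
    values = np.asarray(values)
    q1, median, q3 = np.percentile(values, [25, 50, 75])
    iqr = q3 - q1
    in_range = values[(values >= q1 - 1.5 * iqr) & (values <= q3 + 1.5 * iqr)]
    lower_whisker = in_range.min()   # lowest data point within 1.5*IQR of Q1
    upper_whisker = in_range.max()   # largest data point within 1.5*IQR of Q3
    outliers = values[(values < lower_whisker) | (values > upper_whisker)]
    return q1, median, q3, lower_whisker, upper_whisker, outliers

# Made-up agreement fractions for one group of animals
agreement = [0.88, 0.91, 0.93, 0.90, 0.89, 0.95, 0.72]
print(tukey_box_stats(agreement))
```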

Mentions: Visual inspection of the machine scoring compared with the human scoring (based on the EEG and EMG signals) showed a high degree of agreement (Figure 2). To quantify agreement between the two scoring methods, we computed Cohen's kappa, as well as global agreement and agreement for each state (Figure 3). Global agreement and kappa were high for both strains and both epoch lengths. We also pooled the 10-second epoch data and the 2-second epoch data from both strains and constructed a confusion matrix for each (Table 1), indicating agreement between human scoring and the machine algorithm for each state. The percentage of correct classifications for each state appears in the entries along the diagonal of the matrix, and misclassifications appear in the off-diagonal entries. For example, the first data row of Table 1 indicates that of all epochs scored as wake by the human scorer, 89.54% were also scored as wake by the algorithm, 9.71% were scored as SWS, and 0.75% were scored as REMS.
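For illustration, the Python sketch below computes the same kinds of agreement statistics (global agreement, Cohen's kappa, and a row-normalized confusion matrix) from two per-epoch label sequences using scikit-learn. The human and machine scores here are fabricated stand-ins for the real data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

states = ["wake", "SWS", "REMS"]

# Fabricated example scores; in practice these would be the per-epoch
# human and machine state labels for the same recording.
rng = np.random.default_rng(1)
human = rng.choice(states, size=8640)
machine = np.where(rng.random(8640) < 0.9, human, rng.choice(states, size=8640))

# Global agreement: fraction of epochs scored identically by both methods.
global_agreement = np.mean(human == machine)

# Cohen's kappa corrects that agreement for chance.
kappa = cohen_kappa_score(human, machine)

# Row-normalized confusion matrix: each row is a human-assigned state and
# shows the percentage of those epochs assigned to each state by the machine,
# analogous to Table 1 of the paper.
cm = confusion_matrix(human, machine, labels=states, normalize="true") * 100

print(f"global agreement: {global_agreement:.3f}, kappa: {kappa:.3f}")
print(np.round(cm, 2))
```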

