Automatic Tissue Differentiation Based on Confocal Endomicroscopic Images for Intraoperative Guidance in Neurosurgery.

Kamen A, Sun S, Wan S, Kluckner S, Chen T, Gigler AM, Simon E, Fleischer M, Javed M, Daali S, Igressa A, Charalampaki P - Biomed Res Int (2016)

Bottom Line: One major challenge is to categorize these images reliably during surgery as quickly as possible. We have verified this method on two types of brain tumors: glioblastoma and meningioma. We achieved an average cross-validation accuracy of better than 83%.

View Article: PubMed Central - PubMed

Affiliation: Siemens Healthcare, Technology Center, Princeton, NJ 08540, USA.

ABSTRACT
Intraoperative diagnosis of tumors and definition of tumor borders using fast histopathology are often not sufficiently informative, primarily because the tissue architecture is altered during the sample preparation step. Confocal laser endomicroscopy (CLE) provides microscopic information about tissue in real time on the cellular and subcellular levels, where tissue characterization is possible. One major challenge is to categorize these images reliably during surgery as quickly as possible. To address this, we propose an automated tissue differentiation algorithm based on machine learning. During a training phase, a large number of image frames with known tissue types are analyzed and the most discriminant image-based signatures for the various tissue types are identified. During the procedure, the algorithm uses the learned image features to assign the proper tissue type to each acquired image frame. We have verified this method on two types of brain tumors: glioblastoma and meningioma. The algorithm was trained using 117 image sequences containing over 27,000 images captured from more than 20 patients. We achieved an average cross-validation accuracy of better than 83%. We believe this algorithm could be a useful component of an intraoperative pathology system for guiding the resection procedure based on cellular-level information.
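
For illustration only, the kind of cross-validated evaluation reported above could be run as in the sketch below. This is not the authors' protocol: the per-patient grouping (via GroupKFold), the RBF-kernel SVM, and the synthetic data are assumptions made for the example.

# Hedged sketch: cross-validated accuracy for a two-class tissue classifier.
# Grouping folds by patient ID is an assumption, made to avoid mixing frames of
# the same patient across training and test folds; the paper does not specify
# its splitting strategy here.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.svm import SVC

def estimate_accuracy(features, labels, patient_ids, n_splits=5):
    """Mean accuracy over folds that keep each patient within a single fold."""
    clf = SVC(kernel="rbf")
    scores = cross_val_score(clf, features, labels,
                             groups=patient_ids,
                             cv=GroupKFold(n_splits=n_splits))
    return scores.mean()

# Toy usage: in the real pipeline each frame would first be reduced to an
# m-dimensional codebook histogram (see the pipeline sketch further below).
rng = np.random.default_rng(0)
X = rng.random((200, 64))                # placeholder feature histograms
y = rng.integers(0, 2, size=200)         # 0 = glioblastoma, 1 = meningioma (toy labels)
groups = rng.integers(0, 20, size=200)   # placeholder patient IDs
print(estimate_accuracy(X, y, groups))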


Figure 4: Illustration of the image recognition system for tissue classification.

Mentions: Our classification pipeline comprises three parts: offline unsupervised codebook learning, offline supervised classifier training, and online image and video classification. The online classification system is shown in Figure 4. Its core components are local feature extraction, feature coding, feature pooling, and classification. Local feature points are detected on the input image, and descriptors such as SIFT [7] and HOG [8] are extracted at each feature point. To encode local features, codebooks are learned offline: a codebook with m entries quantizes each descriptor to generate the "code" layer. In our preferred configuration, a hierarchical K-means clustering method is used. For supervised classification, each image is then converted into an m-dimensional code represented as a histogram, where each bin encodes the occurrence of a quantized feature descriptor. Finally, a classifier is trained on the coded features. In one configuration a support vector machine (SVM) [9] is used, but the system is not limited to an SVM classifier; a random forest classifier [10], for example, can be used instead. Two variations of our system are considered: (1) if the input images form a video stream, the system can incorporate visual cues from adjacent (prior) frames, which significantly improves recognition performance; (2) if an input image is low-contrast and contains little categorical information, the system can automatically discard it from further processing. Both variations increase the robustness of the overall system.
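
As a concrete illustration of the pipeline above, the sketch below implements a minimal bag-of-visual-words classifier. It is not the authors' implementation: dense HOG patch descriptors, a flat MiniBatchKMeans codebook (standing in for hierarchical K-means), the codebook size, and the RBF-kernel SVM are all illustrative assumptions.

# Minimal bag-of-visual-words sketch of the described pipeline (assumptions noted above).
import numpy as np
from sklearn.cluster import MiniBatchKMeans   # stand-in for hierarchical K-means
from sklearn.svm import SVC
from skimage.feature import hog
from skimage.util import view_as_windows

CODEBOOK_SIZE = 256   # "m" codebook entries (assumed value)
PATCH = 32            # local patch size for descriptors (assumed value)

def local_descriptors(image):
    """Local feature extraction: one HOG descriptor per patch on a dense grid."""
    patches = view_as_windows(image, (PATCH, PATCH), step=PATCH)
    return np.asarray([hog(p, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                       for row in patches for p in row])

def build_codebook(training_images):
    """Offline, unsupervised: cluster all local descriptors into an m-entry codebook."""
    descriptors = np.vstack([local_descriptors(im) for im in training_images])
    return MiniBatchKMeans(n_clusters=CODEBOOK_SIZE, random_state=0).fit(descriptors)

def encode(image, codebook):
    """Feature coding and pooling: quantize each descriptor, pool into an m-bin histogram."""
    words = codebook.predict(local_descriptors(image))
    hist, _ = np.histogram(words, bins=np.arange(CODEBOOK_SIZE + 1))
    return hist / max(hist.sum(), 1)   # normalize so frame size does not matter

def train_classifier(training_images, labels, codebook):
    """Offline, supervised: train a classifier (here an SVM) on the pooled histograms."""
    X = np.vstack([encode(im, codebook) for im in training_images])
    return SVC(kernel="rbf").fit(X, labels)

def classify_frame(image, codebook, classifier):
    """Online: assign a tissue label to a newly acquired image frame."""
    return classifier.predict(encode(image, codebook).reshape(1, -1))[0]

The video-stream variation mentioned above could, for example, be approximated by averaging the histograms of the current frame and a few prior frames before classification, and the low-contrast check by discarding frames whose intensity variance falls below a threshold; both are assumptions for this sketch, not details from the paper.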


Automatic Tissue Differentiation Based on Confocal Endomicroscopic Images for Intraoperative Guidance in Neurosurgery.

Kamen A, Sun S, Wan S, Kluckner S, Chen T, Gigler AM, Simon E, Fleischer M, Javed M, Daali S, Igressa A, Charalampaki P - Biomed Res Int (2016)

Illustration of the image recognition system for tissue classification.
© Copyright Policy
Related In: Results  -  Collection

License
Show All Figures
getmorefigures.php?uid=PMC4835625&req=5

fig4: Illustration of the image recognition system for tissue classification.
Mentions: Our classification pipeline includes three parts: offline unsupervised codebook learning, offline supervised classifier training, and online image and video classification. The online classification system is shown in Figure 4. The core components are local feature extraction, feature coding, feature pooling, and classification. Local feature points are detected on the input image and descriptors such as “SIFT” [7] and “HOG” [8] are extracted from each feature point. To encode local features, codebooks are learned offline. A codebook with m entries is applied to quantize each descriptor and generate the “code” layer. As a preferred embodiment, hierarchical K-means clustering method is utilized. For the supervised classification, each image is then converted into an m-dimensional code represented as a histogram, where each bin encodes the occurrence of a quantized feature descriptor. Finally, a classifier is trained using the coded features. As one preferred embodiment, support vector machine (SVM) [9] is utilized. Note that our system is not limited to SVM classifier. For example, as another embodiment, random forest classifier [10] can be utilized alternatively. Two variations of our system are considered. (1) If input images are considered as video streams, our system is able to incorporate the visual cues from adjacent (prior) image frames. This significantly improves the performance of our recognition system. (2) If input images are low-contrast and contain little categorical information, our system can automatically discard those images from further processing. These two variations increase the robustness of the overall system.

Bottom Line: One major challenge is to categorize these images reliably during the surgery as quickly as possible.We have verified this method on the example of two types of brain tumors: glioblastoma and meningioma.We achieved an average cross validation accuracy of better than 83%.

View Article: PubMed Central - PubMed

Affiliation: Siemens Healthcare, Technology Center, Princeton, NJ 08540, USA.

ABSTRACT
Diagnosis of tumor and definition of tumor borders intraoperatively using fast histopathology is often not sufficiently informative primarily due to tissue architecture alteration during sample preparation step. Confocal laser microscopy (CLE) provides microscopic information of tissue in real-time on cellular and subcellular levels, where tissue characterization is possible. One major challenge is to categorize these images reliably during the surgery as quickly as possible. To address this, we propose an automated tissue differentiation algorithm based on the machine learning concept. During a training phase, a large number of image frames with known tissue types are analyzed and the most discriminant image-based signatures for various tissue types are identified. During the procedure, the algorithm uses the learnt image features to assign a proper tissue type to the acquired image frame. We have verified this method on the example of two types of brain tumors: glioblastoma and meningioma. The algorithm was trained using 117 image sequences containing over 27 thousand images captured from more than 20 patients. We achieved an average cross validation accuracy of better than 83%. We believe this algorithm could be a useful component to an intraoperative pathology system for guiding the resection procedure based on cellular level information.

No MeSH data available.


Related in: MedlinePlus