Registration of OCT fundus images with color fundus photographs based on blood vessel ridges.

Li Y, Gregori G, Knighton RW, Lujan BJ, Rosenfeld PJ - Opt Express (2011)

Bottom Line: Blood vessel ridges are taken as features for registration. Based on this distance, a similarity function between the image pair is defined. The average root mean square errors for the affine model are 31 µm (normal eyes) and 59 µm (eyes with disease).


Affiliation: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida 33136, USA. yli@med.miami.edu

ABSTRACT
This paper proposes an algorithm to register OCT fundus images (OFIs) with color fundus photographs (CFPs), making it possible to correlate retinal features across the two imaging modalities. Blood vessel ridges are taken as features for registration. A specially defined distance, incorporating the normal directions of blood vessel ridge pixels, is used to measure the distance between each pair of pixels to be matched in the image pair. Based on this distance, a similarity function between the image pair is defined. A brute-force search is used for coarse registration, followed by an Iterative Closest Point (ICP) algorithm for more accurate registration. The registration algorithm was tested on a sample set containing images of both normal eyes and eyes with pathologies. Three transformation models (similarity, affine, and quadratic) were tested on all image pairs. The experimental results showed that the registration algorithm worked well: the average root mean square errors for the affine model were 31 µm for normal eyes and 59 µm for eyes with disease. The proposed algorithm can be easily adapted to the registration of retinal images from other modalities.
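The abstract describes a two-stage pipeline: a brute-force search over translations for coarse alignment, followed by ICP refinement under a transformation model. The sketch below is a minimal illustration of that structure, assuming the blood vessel ridge pixels of the two images are given as (N, 2) coordinate arrays; the scoring rule, search range, and the helper names coarse_translation and icp_affine are illustrative assumptions, not the authors' implementation (which scores candidates with the modified distance of Eq. (2) and the similarity function S(M), rather than the plain Euclidean nearest-neighbor distance used here).

```python
# Minimal sketch of brute-force coarse translation search + ICP affine refinement,
# under the assumptions stated above (not the authors' code).
import numpy as np
from scipy.spatial import cKDTree

def coarse_translation(fixed_pts, moving_pts, t_range=50, step=5):
    """Brute-force search over integer translations; score each candidate by the
    negative mean nearest-neighbor distance between ridge pixels (a stand-in for S(M))."""
    tree = cKDTree(fixed_pts)
    best_t, best_score = (0, 0), -np.inf
    for tx in range(-t_range, t_range + 1, step):
        for ty in range(-t_range, t_range + 1, step):
            d, _ = tree.query(moving_pts + np.array([tx, ty]))
            score = -d.mean()
            if score > best_score:
                best_score, best_t = score, (tx, ty)
    return np.array(best_t, dtype=float)

def icp_affine(fixed_pts, moving_pts, n_iter=30):
    """ICP refinement: alternate nearest-neighbor matching with a least-squares
    affine fit (2x2 matrix A plus translation vector b)."""
    tree = cKDTree(fixed_pts)
    A, b = np.eye(2), np.zeros(2)
    for _ in range(n_iter):
        warped = moving_pts @ A.T + b
        _, idx = tree.query(warped)
        matched = fixed_pts[idx]
        # Solve matched ~= [moving_pts, 1] @ P in the least-squares sense.
        X = np.hstack([moving_pts, np.ones((len(moving_pts), 1))])
        P, *_ = np.linalg.lstsq(X, matched, rcond=None)
        A, b = P[:2].T, P[2]
    return A, b
```

In this sketch, coarse_translation would initialize the alignment and icp_affine would refine it; the quadratic model mentioned in the abstract would simply add second-order terms to the design matrix X.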



g004: Estimation of translation parameters by similarity maps, where the x and y axes correspond to the translation differences along these axes. The side bars display the color map of the similarities. (a) S1 similarity map, where the peripheral bright pixels are spurious similarity peaks. (b) S similarity map with the correction term introduced, where the brightest pixel near the center corresponds to the desired translation.

Mentions: For one pair of matching pixels, the modified distance y is determined by the Euclidean distance x and the angle θ (the difference between the normal directions of the two pixels along their respective vessel ridge curves). A penalty factor based on θ is designed to impose a large distance on a pair of matching pixels if their normal directions are not close. The modified distance y is defined as:

y = \begin{cases} x, & \theta \le \tfrac{1}{16}\pi \\ x + \dfrac{\theta - \tfrac{1}{16}\pi}{\tfrac{5}{16}\pi}\,(2 \times th_y), & \tfrac{1}{16}\pi < \theta < \tfrac{3}{8}\pi \\ \infty, & \theta \ge \tfrac{3}{8}\pi \end{cases} \qquad (2)

The correction term A_{actual overlap}/A_{desired overlap} is introduced in the definition of S(M) to correct potential artificial peaks in the similarity function when the overlap area is very small (see Fig. 4).
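The following is a minimal sketch of the modified distance of Eq. (2): the Euclidean distance x is kept unchanged when the normal directions of the matched ridge pixels agree closely, penalized linearly for moderate angle differences, and set to infinity when the directions differ too much. The threshold name th_y follows the text; its default value here is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def modified_distance(x, theta, th_y=10.0):
    """x: Euclidean distance between the matched pixels.
    theta: absolute difference between their ridge normal directions (radians).
    th_y: distance threshold from the paper (value assumed for illustration)."""
    lo, hi = np.pi / 16, 3 * np.pi / 8          # angle thresholds from Eq. (2)
    if theta <= lo:
        return x                                 # directions agree: no penalty
    if theta >= hi:
        return np.inf                            # directions too different: reject
    # Linear penalty ramp between the two angle thresholds, scaled by 2*th_y
    # (note hi - lo = 5*pi/16, matching the denominator in Eq. (2)).
    return x + (theta - lo) / (hi - lo) * (2 * th_y)
```

In the similarity function S(M), the correction term A_{actual overlap}/A_{desired overlap} then suppresses the spurious peaks that appear when only a small portion of the images overlaps, as illustrated in Fig. 4.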

