Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

Ehsan S, Clark AF - Sensors (Basel) (2015)

Bottom Line: The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (by nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms that allow a substantial decrease (at least 44.44%) in memory requirements.


Affiliation: School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK. sehsan@essex.ac.uk.

ABSTRACT
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms, based on the decomposition of these recursive equations, that allow calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (by nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms that allow a substantial decrease (at least 44.44%) in memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
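As background for the recursive equations mentioned above, the following Python sketch (not taken from the paper) shows the standard serial integral image recurrence, ii(x, y) = ii(x−1, y) + ii(x, y−1) − ii(x−1, y−1) + p(x, y), and the four-lookup box sum that lets SURF-style detectors evaluate rectangular features at constant cost regardless of filter size. The paper's hardware algorithms decompose this same recurrence for row-parallel evaluation, which is not reproduced here; all function names below are illustrative.

# Minimal software sketch of the standard integral image recurrence and a
# constant-time box sum. This is a reference implementation for clarity,
# not the paper's hardware architecture.

def integral_image(img):
    """img: 2-D list of pixel values; returns the same-size integral image."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            above = ii[y - 1][x] if y > 0 else 0
            left = ii[y][x - 1] if x > 0 else 0
            diag = ii[y - 1][x - 1] if y > 0 and x > 0 else 0
            # Recursive equation: ii(x, y) = ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1) + p(x, y)
            ii[y][x] = above + left - diag + img[y][x]
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, using at most four lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

if __name__ == "__main__":
    img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    ii = integral_image(img)
    assert box_sum(ii, 0, 0, 2, 2) == 45            # whole image
    assert box_sum(ii, 1, 1, 2, 2) == 5 + 6 + 8 + 9  # bottom-right 2 x 2 box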



Figure 12 (sensors-15-16804-f012): A sample 3 × 3 integral image block for the proposed method. The shaded region shows the integral image values that need to be stored.

Mentions: Unlike the methods in [30], the proposed technique attempts to reduce the depth of the memory required to store an integral image. For this particular method, the width of the memory (in bits) is assumed to be log2(length of the image × width of the image × maximum pixel value), rounded up to the next integer. The first step is to make both the length and the width of the integral image multiples of 3. For example, if the integral image dimensions are 360 × 240, the length and width are already multiples of 3 and nothing needs to be done. Otherwise, the last rows and/or columns of the integral image are discarded to achieve this; in the worst case, the last two rows and the last two columns need to be eliminated. The whole integral image is then divided into blocks of 3 × 3 integral image values. Figure 12 depicts a single such block. The shaded integral image values in Figure 12 are the ones the proposed method stores in memory; the remaining four values at the corners are discarded. Despite not storing these four corner integral image values, the 3 × 3 integral image block can be perfectly reconstructed from the stored values by utilizing the fact that:

(23) a = b + d − e + (input pixel value at e)
(24) c = b + f − e − (input pixel value at f)
(25) g = d + h − e − (input pixel value at h)
(26) i = h + f − e + (input pixel value at i)
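As a rough illustration of this reconstruction step (a Python sketch, not code from the paper), the function below applies Equations (23)–(26) to recover the four discarded corner values of one 3 × 3 block from the five stored values and the corresponding input pixels. The names a–i and p_e, p_f, p_h, p_i are illustrative labels for the block layout assumed for Figure 12 (a b c / d e f / g h i, row by row).

# Hypothetical sketch of the corner reconstruction for one 3 x 3 block,
# following Equations (23)-(26). Only the cross (b, d, e, f, h) is stored;
# the corners (a, c, g, i) are recovered on demand.

def reconstruct_corners(b, d, e, f, h, p_e, p_f, p_h, p_i):
    """Recover the discarded corner integral image values (a, c, g, i)
    from the five stored values and four input pixel values."""
    a = b + d - e + p_e   # Equation (23)
    c = b + f - e - p_f   # Equation (24)
    g = d + h - e - p_h   # Equation (25)
    i = h + f - e + p_i   # Equation (26)
    return a, c, g, i

if __name__ == "__main__":
    # 3 x 3 input image placed at the origin, so its integral image values
    # are exactly the block values a..i:
    #   image        integral image
    #   1 2 3        a b c    1  3  6
    #   4 5 6   ->   d e f =  5 12 21
    #   7 8 9        g h i   12 27 45
    a, c, g, i = reconstruct_corners(b=3, d=5, e=12, f=21, h=27,
                                     p_e=5, p_f=6, p_h=8, p_i=9)
    assert (a, c, g, i) == (1, 6, 12, 45)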

