Tool for Semiautomatic Labeling of Moving Objects in Video Sequences: TSLAB.

Cuevas C, Yáñez EM, García N - Sensors (Basel) (2015)

Bottom Line: An advanced and user-friendly tool for the fast labeling of moving objects captured with surveillance sensors is proposed and made publicly available. The labeling can be performed easily and quickly thanks to a user-friendly graphical user interface that automates many common operations. This interface also includes semiautomatic advanced tools that simplify the labeling tasks and drastically reduce the time required to obtain high-quality results.

View Article: PubMed Central - PubMed

Affiliation: Grupo de Tratamiento de Imágenes, Universidad Politécnica de Madrid (UPM), E-28040 Madrid, Spain. ccr@gti.ssr.upm.es.

ABSTRACT
An advanced and user-friendly tool for the fast labeling of moving objects captured with surveillance sensors is proposed and made publicly available. This tool allows the creation of three kinds of labels: moving objects, shadows and occlusions. These labels are created at both the pixel level and the object level, which makes them suitable for assessing the quality of both moving object detection strategies and tracking algorithms. The labeling can be performed easily and quickly thanks to a user-friendly graphical user interface that automates many common operations. This interface also includes semiautomatic advanced tools that simplify the labeling tasks and drastically reduce the time required to obtain high-quality results.
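
As an illustration only, the following sketch (in Python, with hypothetical names; the abstract does not specify TSLAB's actual output format) shows one way the two granularities of labels could be represented: pixel-level binary masks for each of the three label types, plus object-level records suitable for tracking evaluation.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class ObjectLabel:
    object_id: int                      # object identity, needed to assess tracking
    label_type: str                     # "moving_object", "shadow" or "occlusion"
    bbox: Tuple[int, int, int, int]     # (x, y, width, height) in pixels

@dataclass
class FrameGroundTruth:
    frame_index: int
    pixel_masks: Dict[str, np.ndarray]  # label type -> H x W boolean mask
    objects: List[ObjectLabel] = field(default_factory=list)

# Example frame: empty pixel masks plus one object-level moving-object label.
frame = FrameGroundTruth(
    frame_index=0,
    pixel_masks={
        "moving_object": np.zeros((480, 640), dtype=bool),
        "shadow": np.zeros((480, 640), dtype=bool),
        "occlusion": np.zeros((480, 640), dtype=bool),
    },
    objects=[ObjectLabel(object_id=1, label_type="moving_object", bbox=(100, 80, 40, 120))],
)

Pixel-level masks support per-pixel evaluation of moving object detection, while the object-level records carry the identities needed to evaluate tracking algorithms.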

No MeSH data available.




f3-sensors-15-15159: Layers obtained from an image with two moving objects partially occluded. (a) Original image; (b) layers of the left object (VMO in red and SMO in green); (c) layers of the right object (VMO in red and OMO in blue); (d) layers of both objects (VMO in red, SMO in yellow and OMO in cyan).

Mentions: Figure 3 illustrates an image with two moving objects (Figure 3a) and three ground-truth masks containing the three possible types of layers. In Figure 3b, the data corresponding to the moving object on the left have been highlighted: red for the VMO layer and green for the SMO layer. As a visual reference, the silhouette of the other moving object has also been included in this mask. Analogously, the layers corresponding to the moving object on the right of the image have been highlighted in Figure 3c: red for the VMO layer and blue for the OMO layer. Again, the silhouette of the other moving object has been included as a visual reference. Finally, Figure 3d depicts the total mask, where red pixels belong to a VMO layer, cyan pixels are part of both a VMO layer and an OMO layer, and yellow pixels contain data from both a VMO layer and an SMO layer.
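
The color coding of the total mask can be reproduced with a simple overlay. The sketch below (Python/NumPy; the function and mask names are hypothetical and this is not TSLAB code) assigns the caption's colors directly to each layer combination, given boolean masks for the VMO, SMO and OMO layers of the frame.

import numpy as np

def paint_total_mask(vmo: np.ndarray, smo: np.ndarray, omo: np.ndarray) -> np.ndarray:
    # vmo, smo, omo: H x W boolean masks for the visible, shadowed and
    # occluded layers of all objects in the frame.
    overlay = np.zeros(vmo.shape + (3,), dtype=np.uint8)
    overlay[vmo] = (255, 0, 0)           # VMO only: red
    overlay[vmo & smo] = (255, 255, 0)   # VMO and SMO: yellow
    overlay[vmo & omo] = (0, 255, 255)   # VMO and OMO: cyan
    return overlay

# Toy 2 x 2 frame: the top-left pixel belongs to a VMO layer only, the
# top-right to both a VMO and an SMO layer, and the bottom-left to both a
# VMO and an OMO layer.
vmo = np.array([[True, True], [True, False]])
smo = np.array([[False, True], [False, False]])
omo = np.array([[False, False], [True, False]])
print(paint_total_mask(vmo, smo, omo))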

