A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

Chen YL, Chiang HH, Chiang CY, Liu CM, Yuan SM, Wang JH - Sensors (Basel) (2012)

Bottom Line: The proposed system processes the road-scene frames in front of the host car, captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.


Affiliation: Department of Computer Science and Information Engineering, National Taipei University of Technology, Taipei 10608, Taiwan. ylchen@csie.ntut.edu.tw

ABSTRACT
This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented as a set of embedded software components and modules, and integrates these modules into a component-based system framework on an embedded heterogeneous dual-core platform. To this end, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car, captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.



Figure 4 (f4-sensors-12-02373): The detection region and virtual horizon for bright object extraction in Figure 2.

Mentions: To screen out non-vehicle illuminant objects such as street lamps and traffic lights, which appear above the midpoint of the vertical y-axis (the virtual "horizon"), and to reduce computational cost, the bright object extraction process is performed only on the bright components located below the virtual horizon (Figure 4). As Figure 5 shows, after applying the bright object segmentation module to the sample image in Figure 2, the pixels of bright objects are successfully separated into thresholded object planes under real illumination conditions.
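
The virtual-horizon gating described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a grayscale frame held in a NumPy array, a fixed horizon at half the image height, and a hypothetical fixed intensity threshold (the paper derives its thresholds from the actual illumination conditions); the function and parameter names are placeholders for illustration only.

    import numpy as np

    def extract_bright_objects(frame, horizon_ratio=0.5, intensity_threshold=200):
        """Threshold bright pixels only below a virtual horizon.

        frame: 2-D uint8 grayscale road-scene image (row index grows downward).
        horizon_ratio: fraction of the image height used as the virtual horizon;
                       0.5 mirrors "half of the vertical y-axis" in the text.
        intensity_threshold: illustrative fixed threshold (assumption).
        Returns a binary object plane with the same size as the input frame.
        """
        height, _ = frame.shape
        horizon_row = int(height * horizon_ratio)

        # Bright components above the horizon (street lamps, traffic lights)
        # are ignored, which avoids false detections and saves computation.
        object_plane = np.zeros_like(frame, dtype=np.uint8)
        detection_region = frame[horizon_row:, :]
        object_plane[horizon_row:, :] = (detection_region >= intensity_threshold).astype(np.uint8) * 255
        return object_plane

Applied to a captured frame, this yields a binary plane analogous to the thresholded object planes of Figure 5, with everything above the virtual horizon masked out.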

