Cognitive Learning, Monitoring and Assistance of Industrial Workflows Using Egocentric Sensor Networks.

Bleser G, Damen D, Behera A, Hendeby G, Mura K, Miezal M, Gee A, Petersen N, Maçães G, Domingues H, Gorecky D, Almeida L, Mayol-Cuevas W, Calway A, Cohn AG, Hogg DC, Stricker D - PLoS ONE (2015)

Bottom Line: The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user's pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed.


Affiliation: Department Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Germany; Department of Computer Science, Technical University of Kaiserslautern, Kaiserslautern, Germany.

ABSTRACT
Today, the workflows involved in industrial assembly and production activities are becoming increasingly complex. Performing these workflows efficiently and safely is demanding for workers, in particular when it comes to infrequent or repetitive tasks. This burden on workers can be eased by introducing smart assistance systems. This article presents a scalable concept and an integrated system demonstrator designed for this purpose. The basic idea is to learn workflows from observing multiple expert operators and then transfer the learnt workflow models to novice users. Being entirely learning-based, the proposed system can be applied to various tasks and domains. This idea has been realized in a prototype that combines components pushing the state of the art in hardware and software, designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user's pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network; no external instrumentation is needed. The feasibility of the chosen approach for the complete action-perception-feedback loop is demonstrated on three increasingly complex datasets representing manual industrial tasks. These limited-size datasets highlight the potential of the chosen technology as a combined entity and also point out limitations of the system.
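To make component 3 above concrete, the following minimal Python sketch shows one way qualitative spatiotemporal pairwise relations could be derived from tracked 2D positions. It is an illustration under assumed names and thresholds (Track, relate, touch_eps, move_eps), not the authors' implementation.

    # Illustrative sketch only: qualitative spatiotemporal pairwise relations
    # from tracked 2D positions. All names and thresholds are assumptions.
    from dataclasses import dataclass
    import math

    @dataclass
    class Track:
        name: str
        positions: list  # [(x, y), ...], one entry per frame, workspace coords

    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def relate(a, b, t, touch_eps=2.0, move_eps=2.0):
        """Qualitative relation of the object pair (a, b) at frame t > 0."""
        d_now = distance(a.positions[t], b.positions[t])
        d_prev = distance(a.positions[t - 1], b.positions[t - 1])
        if d_now < touch_eps:
            return "touching"
        if d_prev - d_now > move_eps:
            return "approaching"
        if d_now - d_prev > move_eps:
            return "receding"
        return "stable"

    # Example: the hand closes in on the spanner and finally touches it.
    hand = Track("hand", [(0, 0), (4, 0), (8, 0), (9, 0)])
    spanner = Track("spanner", [(10, 0), (10, 0), (10, 0), (9, 0)])
    print([relate(hand, spanner, t) for t in range(1, 4)])
    # -> ['approaching', 'approaching', 'touching']

A primitive workflow event such as "pick up spanner" could then be recovered as a characteristic relation sequence for an object pair, as the usage example at the end of the sketch shows.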


Fig 21 (pone.0127769.g021). Object recognition and tracking for the Ball valve task. Frames showing the Ball valve task for 6 sequences from 3 different operators across the task's primitive events. False negative frames (yellow-bounded) indicate cases where the object failed to be recognized due to a significantly different grasp between learning and testing, or due to occlusion. False positive cases (red-bounded) are due to ambiguous grasps of objects, particularly the spanner and the screwdriver, when seen from an overhead camera.

Mentions: Fig 21 shows sequences of frames from multiple operators for the Ball valve task, along with recognition results. Table 5 presents quantitative results for task-relevant object tracking on the Labeling & Packaging task. Failures to recognize objects usually occur when the object is mostly occluded during task performance. Another limitation is the use of cluster-based tracking: the tracker currently maintains a single identity for each cluster. For example, when the pen is writing on the box, only one cluster is tracked, so the pen's identity is often ignored by the tracker, even though the tracker otherwise achieves good recognition results (see the illustrative sketch below). Another source of failure, visible particularly in Fig 21, is ambiguous grasps of small objects (for example the hand-held pliers and the hand-held spanner), or a grasp during learning that differed from the grasp while performing the task. This could be improved by adding more discriminative features for similar objects and by learning more views of the objects during task performance. These improvements are left for future work.
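To illustrate the single-identity-per-cluster limitation described above, here is a minimal, hypothetical Python sketch of a greedy cluster merger. The function names, the containment-based overlap measure, and the largest-member labelling rule are assumptions made for illustration, not the paper's tracker.

    # Hypothetical sketch of a cluster-based tracker's identity merging.
    def area(b):  # box as (x1, y1, x2, y2)
        return (b[2] - b[0]) * (b[3] - b[1])

    def overlap_ratio(a, b):
        """Intersection area over the smaller box's area (captures containment)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        small = min(area(a), area(b))
        return inter / small if small else 0.0

    def cluster(detections, thresh=0.5):
        """Greedily merge overlapping detections; each cluster keeps a SINGLE
        identity, taken from its largest member -- the limitation above."""
        clusters = []
        for label, box in detections:
            for c in clusters:
                if overlap_ratio(box, c["box"]) > thresh:
                    if area(box) > area(c["box"]):
                        c["label"] = label  # dominant object keeps the identity
                    c["box"] = (min(c["box"][0], box[0]), min(c["box"][1], box[1]),
                                max(c["box"][2], box[2]), max(c["box"][3], box[3]))
                    break
            else:
                clusters.append({"label": label, "box": box})
        return clusters

    # The small pen overlapping the large box collapses into one cluster
    # labelled "box", so the pen's identity is lost.
    print(cluster([("box", (0, 0, 100, 60)), ("pen", (40, 20, 55, 30))]))
    # -> [{'label': 'box', 'box': (0, 0, 100, 60)}]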

