Cognitive Learning, Monitoring and Assistance of Industrial Workflows Using Egocentric Sensor Networks.

Bleser G, Damen D, Behera A, Hendeby G, Mura K, Miezal M, Gee A, Petersen N, Maçães G, Domingues H, Gorecky D, Almeida L, Mayol-Cuevas W, Calway A, Cohn AG, Hogg DC, Stricker D - PLoS ONE (2015)

Bottom Line: The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user's pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed.


Affiliation: Department Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Germany; Department of Computer Science, Technical University of Kaiserslautern, Kaiserslautern, Germany.

ABSTRACT
Today, the workflows involved in industrial assembly and production activities are becoming increasingly complex. Performing these workflows efficiently and safely is demanding for workers, in particular when it comes to infrequent or repetitive tasks. This burden on the workers can be eased by introducing smart assistance systems. This article presents a scalable concept and an integrated system demonstrator designed for this purpose. The basic idea is to learn workflows from observing multiple expert operators and then transfer the learnt workflow models to novice users. Being entirely learning-based, the proposed system can be applied to various tasks and domains. The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user's pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed. The feasibility of the chosen approach for the complete action-perception-feedback loop is demonstrated on three increasingly complex datasets representing manual industrial tasks. These limited-size datasets highlight the potential of the chosen technologies as a combined entity and also point out limitations of the system.
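As a rough illustration of item 3 above, the sketch below derives qualitative pairwise relations (near/far, approaching/receding) from two tracked object trajectories. This is a minimal sketch under assumptions of this editor: the function names, relation vocabulary, and distance threshold are invented for illustration and are not taken from the authors' implementation.

```python
import numpy as np

# Hypothetical illustration: derive qualitative pairwise spatiotemporal
# relations from tracked 3D object positions in a common workspace frame
# (e.g., obtained from egocentric mapping and localization). Names and
# thresholds below are assumptions, not the paper's implementation.

NEAR_THRESHOLD = 0.15  # metres; illustrative value


def pairwise_relations(traj_a, traj_b, near_threshold=NEAR_THRESHOLD):
    """Return a per-frame list of (distance_relation, motion_relation).

    traj_a, traj_b: (T, 3) arrays of object positions over T frames.
    """
    traj_a = np.asarray(traj_a, dtype=float)
    traj_b = np.asarray(traj_b, dtype=float)
    dists = np.linalg.norm(traj_a - traj_b, axis=1)  # per-frame distance
    relations = []
    for t in range(len(dists)):
        dist_rel = "near" if dists[t] < near_threshold else "far"
        if t == 0 or dists[t] == dists[t - 1]:
            motion_rel = "stable"
        elif dists[t] < dists[t - 1]:
            motion_rel = "approaching"
        else:
            motion_rel = "receding"
        relations.append((dist_rel, motion_rel))
    return relations


# Example: a hand moving toward a stationary box.
hand = np.linspace([0.5, 0.0, 0.0], [0.05, 0.0, 0.0], num=6)
box = np.tile([0.0, 0.0, 0.0], (6, 1))
print(pairwise_relations(hand, box))
```

Sequences of such per-frame relations, aggregated over object pairs, are the kind of feature from which workflow models could plausibly be learned.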

No MeSH data available.



pone.0127769.g014: Predicted atomic events vs. ground truth for the leave-one-subject-out evaluation of the Labeling & Packaging dataset. Each panel shows the predictions of the atomic events in sequences belonging to the left-out subject (left to right, top to bottom: subjects 1, 2, 3, 4). The bottom bars show the ground truth and the top bars show the prediction. Vertical lines separate consecutive workflow sequences. Different colors indicate different atomic events.

Mentions: As a complement to the quantitative results presented above, Fig 14 provides a more global, qualitative view of the performance of the proposed framework and its live activity monitoring capabilities. It illustrates the quality of results obtained in the off-line leave-one-subject-out experiment on the Labeling & Packaging dataset (Object IMU, column 6 in Table 4). In the figure, vertical lines separate the different workflow executions of each subject. Each atomic event is assigned a unique color, and the bottom bar of each subject shows the ground truth. At a given instant, the prediction is correct if the colors of the two bars are identical. The figure shows that the prediction often jumps to a wrong atomic event for a short period of time, e.g., due to misclassifying a known event or an irrelevant action. However, after sufficient information has been observed, the system recovers to the correct event. The experiments also showed that the current atomic event is often confused with the previous or the next event. This is a typical synchronization error for sequential data and is partly due to the manual assignment of ground truth labels: it is difficult for humans to assign boundaries consistently between consecutive events. The effect can be seen in Fig 15, which shows confusion matrices for the on-line workflow monitoring. In each matrix, the diagonal elements are clearly dominant, which reflects the accuracy of the proposed method. Moreover, the false positives are often either the previous or the next atomic event, reflecting the above-mentioned synchronization error. Given that the proposed monitoring system aims to guide users through workflows while ensuring that each relevant atomic task is properly completed, reducing such synchronization errors requires further investigation. A detailed evaluation of this synchronization error is presented in [66].
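To make this evaluation concrete, the following sketch computes a frame-wise confusion matrix (cf. Fig 15) and a boundary-tolerant accuracy that forgives small label misalignments around event transitions. The tolerance window and all function names are assumptions of this sketch, not the paper's evaluation protocol.

```python
import numpy as np

# Illustrative only: frame-wise comparison of predicted vs. ground-truth
# atomic-event labels, as visualized in Fig 14, plus a confusion matrix
# as in Fig 15. The tolerance window for boundary (synchronization)
# errors is an assumption of this sketch, not the paper's metric.


def confusion_matrix(truth, pred, num_events):
    """Accumulate frame-wise (truth, prediction) label pairs."""
    cm = np.zeros((num_events, num_events), dtype=int)
    for t, p in zip(truth, pred):
        cm[t, p] += 1
    return cm


def boundary_tolerant_accuracy(truth, pred, tol=5):
    """Count a frame correct if the predicted label occurs in the ground
    truth within +/- tol frames, absorbing small boundary misalignments
    such as those caused by manual annotation."""
    truth = np.asarray(truth)
    pred = np.asarray(pred)
    correct = 0
    for t, p in enumerate(pred):
        lo, hi = max(0, t - tol), min(len(truth), t + tol + 1)
        if p in truth[lo:hi]:
            correct += 1
    return correct / len(pred)


# Example with 3 atomic events; the prediction lags the truth by one frame.
truth = [0, 0, 1, 1, 1, 2, 2, 2]
pred = [0, 0, 0, 1, 1, 1, 2, 2]
print(confusion_matrix(truth, pred, num_events=3))
print(boundary_tolerant_accuracy(truth, pred, tol=1))  # -> 1.0
```

With tol=0 the measure reduces to plain frame-wise accuracy; increasing tol absorbs the previous/next-event confusions described above without rewarding predictions that are wrong far from an event boundary.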

