Cognitive Learning, Monitoring and Assistance of Industrial Workflows Using Egocentric Sensor Networks.

Bleser G, Damen D, Behera A, Hendeby G, Mura K, Miezal M, Gee A, Petersen N, Maçães G, Domingues H, Gorecky D, Almeida L, Mayol-Cuevas W, Calway A, Cohn AG, Hogg DC, Stricker D - PLoS ONE (2015)

Bottom Line: The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user's pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed.


Affiliation: Department Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Germany; Department of Computer Science, Technical University of Kaiserslautern, Kaiserslautern, Germany.

ABSTRACT
Today, the workflows involved in industrial assembly and production activities are becoming increasingly complex. Performing these workflows efficiently and safely is demanding for workers, in particular when it comes to infrequent or repetitive tasks. This burden on the workers can be eased by introducing smart assistance systems. This article presents a scalable concept and an integrated system demonstrator designed for this purpose. The basic idea is to learn workflows from observing multiple expert operators and then to transfer the learnt workflow models to novice users. Being entirely learning-based, the proposed system can be applied to various tasks and domains. The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user's pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed. The feasibility of the chosen approach for the complete action-perception-feedback loop is demonstrated on three increasingly complex datasets representing manual industrial tasks. These limited-size datasets highlight the potential of the chosen technology as a combined entity as well as point out limitations of the system.
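
To make the abstract's third component more concrete, the following minimal Python sketch illustrates how qualitative spatiotemporal pairwise relations could be deduced from tracked object positions. The relation vocabulary (near/far, approaching/departing) and the distance threshold are illustrative assumptions for this sketch only; the paper's actual relation set and workflow learning machinery are not reproduced here.

    # Illustrative sketch: deducing qualitative pairwise relations from
    # tracked 3D object positions over time. The relation names and the
    # distance threshold are assumptions for illustration, not the
    # authors' implementation.
    import numpy as np

    NEAR_THRESHOLD = 0.15  # metres; illustrative value

    def pairwise_relations(track_a, track_b):
        """track_a, track_b: (T, 3) arrays of per-frame object positions.
        Returns one qualitative relation string per frame transition."""
        dist = np.linalg.norm(track_a - track_b, axis=1)
        relations = []
        for t in range(1, len(dist)):
            prox = "near" if dist[t] < NEAR_THRESHOLD else "far"
            trend = "approaching" if dist[t] < dist[t - 1] else "departing"
            relations.append(f"{prox},{trend}")
        return relations

Sequences of such symbolic relations, rather than raw trajectories, are what make the monitoring domain-independent: the same vocabulary applies regardless of the specific tools or tasks being observed.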

No MeSH data available.


pone.0127769.g023 (Fig 23): Object recognition results. Confusion matrix showing the accuracy when objects are learnt from a different operator for the Nails & Screws (left) and the Labeling & Packaging (right) tasks.

Mentions: To show the importance of real-time learning of personal grips, the distinctiveness of the descriptors was evaluated by learning from the manipulation sequences of one operator and then testing on other operators. The results are presented in a confusion matrix (see Fig 23). Each cell in the matrix measures the ability to recognize hand-held tools in the sequences of one operator (row) when learning is done from the grips of another operator (column). For the second operator in the Nails & Screws task, for example, the accuracy drops from 67.0% to at most 29.4% when a different individual's manipulation sequences are used for learning object grips. For the Labeling & Packaging task, the maximum drop was from 67.5% to 39.5% for the third operator. These results highlight the importance of the learning-based approach for object grips, as each user has a different way of manipulating tools, which affects the visual recognition of objects. Learning person-specific object manipulations is performed online and is required only once per operator. Once the objects are learnt, the operator can use them for multiple workflows. While the low-level learning of objects is person-dependent, the workflow monitoring (cf. Section 3.1) does not need to be adapted or changed for new operators. We thus believe that this learning-based approach for person-specific object manipulations enables learning and recognition of workflows from individuals with varied grips as well as different tool shapes and sizes. It should be noted that the system assumes the operator knows how to grip the tools and does not need guidance on handling them. We find this compromise acceptable, as it is the workflow that we wish to monitor rather than fine-grained object gripping.
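
The cross-operator evaluation protocol can be summarized with a minimal Python sketch. Here, operators, learn_grips and recognise are hypothetical placeholders for the per-operator data and the paper's grip-learning and recognition components, which the text does not specify at this level of detail; only the row/column convention follows the description above.

    # Minimal sketch of the cross-operator evaluation (not the authors'
    # implementation). learn_grips() and recognise() are hypothetical
    # stand-ins for descriptor learning and hand-held object recognition.
    import numpy as np

    def cross_operator_confusion(operators, learn_grips, recognise):
        """Accuracy matrix: cell (row, col) holds the recognition accuracy
        on the test frames of operator `row` using grip models learnt from
        operator `col`; the diagonal is the same-operator baseline."""
        n = len(operators)
        acc = np.zeros((n, n))
        for col, trainer in enumerate(operators):
            # grips are learnt online, once per operator
            models = learn_grips(trainer["train_frames"])
            for row, tester in enumerate(operators):
                frames = tester["test_frames"]
                hits = sum(recognise(models, f["image"]) == f["label"]
                           for f in frames)
                acc[row, col] = hits / len(frames)
        return acc

On data like that of Fig 23, the diagonal of this matrix would correspond to the same-operator accuracies (e.g., 67.0%) and the off-diagonal cells to the cross-operator drops reported above.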

