Design and implementation of a generalized laboratory data model.

Wendl MC, Smith S, Pohl CS, Dooling DJ, Chinwalla AT, Crouse K, Hepler T, Leong S, Carmichael L, Nhan M, Oberkfell BJ, Mardis ER, Hillier LW, Wilson RK - BMC Bioinformatics (2007)

Bottom Line: The implementation described here has served as the Washington University Genome Sequencing Center's primary information system for several years. It handles all transactions underlying a throughput rate of about 9 million sequencing reactions of various kinds per month and has handily weathered a number of major pipeline reconfigurations. The basic data model can be readily adapted to other high-volume processing environments.

View Article: PubMed Central - HTML - PubMed

Affiliation: Genome Sequencing Center, Washington University, St. Louis, MO 63108, USA. mwendl@wustl.edu

ABSTRACT

Background: Investigators in the biological sciences continue to exploit laboratory automation methods and have dramatically increased the rates at which they can generate data. In many environments, the methods themselves also evolve in a rapid and fluid manner. These observations point to the importance of robust information management systems in the modern laboratory. Designing and implementing such systems is non-trivial and it appears that in many cases a database project ultimately proves unserviceable.

Results: We describe a general modeling framework for laboratory data and its implementation as an information management system. The model utilizes several abstraction techniques, focusing especially on the concepts of inheritance and meta-data. Traditional approaches commingle event-oriented data with regular entity data in ad hoc ways. Instead, we define distinct regular entity and event schemas, but fully integrate these via a standardized interface. The design allows straightforward definition of a "processing pipeline" as a sequence of events, obviating the need for separate workflow management systems. A layer above the event-oriented schema integrates events into a workflow by defining "processing directives", which act as automated project managers of items in the system. Directives can be added or modified in an almost trivial fashion, i.e., without the need for schema modification or re-certification of applications. Association between regular entities and events is managed via simple "many-to-many" relationships. We describe the programming interface, as well as techniques for handling input/output, process control, and state transitions.
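
The following is a minimal Python sketch of the arrangement described above: regular entities and events kept in distinct structures, a many-to-many bridge associating them, and a "processing directive" that decides which event type follows a completed one. All class, field, and step names are invented for illustration and do not reproduce the paper's actual schema.

```python
# Illustrative sketch only; names are assumptions, not the published schema.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional


@dataclass
class Entity:
    """A regular entity, e.g. a DNA sample or a sequencing plate."""
    entity_id: int
    entity_type: str
    attributes: Dict[str, str] = field(default_factory=dict)


@dataclass
class Event:
    """An event (process step) with its own parameters."""
    event_id: int
    event_type: str
    parameters: Dict[str, str] = field(default_factory=dict)


@dataclass
class EntityEventLink:
    """Many-to-many association between entities and events."""
    entity_id: int
    event_id: int


# A directive maps a completed event to the next event type (or None at the
# end of the pipeline). Adding or changing a directive touches only this
# mapping, not the entity or event definitions, which echoes the claim that
# directives require no schema modification.
ProcessingDirective = Callable[[Event], Optional[str]]


def linear_sequencing_directive(completed: Event) -> Optional[str]:
    """Illustrative three-step pipeline expressed as a directive."""
    next_step = {
        "pick_colony": "prepare_dna",
        "prepare_dna": "run_sequencing_reaction",
        "run_sequencing_reaction": None,  # end of pipeline
    }
    return next_step.get(completed.event_type)
```

In this sketch, reconfiguring the pipeline amounts to swapping the mapping inside the directive; the entity and event definitions are untouched, which is the spirit of the "automated project manager" role the abstract assigns to directives.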

Conclusion: The implementation described here has served as the Washington University Genome Sequencing Center's primary information system for several years. It handles all transactions underlying a throughput rate of about 9 million sequencing reactions of various kinds per month and has handily weathered a number of major pipeline reconfigurations. The basic data model can be readily adapted to other high-volume processing environments.


Figure 3: Possible state transitions for an instance of an event.

Mentions: Events themselves can be thought of as state transition machines that respond to external cues and commands [27]. Fig. 3 shows a map of the state transitions that we have found useful for our particular circumstances. The novelty here is that states fall into two categories: those that are not actually recorded in the physical database ('is_initiated') and those that are recorded (all the rest). This separation provides an extra opportunity for validating information and directives at the initiation phase of a process. Specifically, at the application level, a step is created in the 'is_initiated' state, i.e., it has a reserved ID from the database and can accept parameters and processing directives as attachments. If the additions are valid, the step can proceed to subsequent, recordable states; otherwise, it is aborted. The remaining states are summarized as follows.
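
A minimal Python sketch of this state handling is shown below. Only the 'is_initiated' state name comes from the text; the recordable state names used here (scheduled, running, successful, failed, abandoned) and the validation logic are assumed placeholders, not necessarily those of the actual system.

```python
# Hedged sketch: 'is_initiated' is from the text; other state names are assumed.
from enum import Enum, auto
from typing import Dict, List, Set


class StepState(Enum):
    IS_INITIATED = auto()  # exists only in application memory, never recorded
    SCHEDULED = auto()     # recordable states (names assumed) begin here
    RUNNING = auto()
    SUCCESSFUL = auto()
    FAILED = auto()
    ABANDONED = auto()


# Allowed transitions; any move not listed here is rejected.
TRANSITIONS: Dict[StepState, Set[StepState]] = {
    StepState.IS_INITIATED: {StepState.SCHEDULED},
    StepState.SCHEDULED: {StepState.RUNNING, StepState.ABANDONED},
    StepState.RUNNING: {StepState.SUCCESSFUL, StepState.FAILED},
    StepState.FAILED: {StepState.SCHEDULED, StepState.ABANDONED},  # e.g. retry
}


class ProcessStep:
    """An event instance: created with a reserved ID but not yet recorded."""

    def __init__(self, reserved_id: int):
        self.step_id = reserved_id
        self.state = StepState.IS_INITIATED
        self.parameters: Dict[str, str] = {}
        self.directives: List[str] = []
        self.recorded_states: List[StepState] = []  # stand-in for database rows

    def attach(self, parameters: Dict[str, str], directives: List[str]) -> None:
        """Attach parameters and processing directives during initiation."""
        self.parameters.update(parameters)
        self.directives.extend(directives)

    def _is_valid(self) -> bool:
        # Placeholder validation; the real system would check the attachments
        # against the event definition before allowing the step to proceed.
        return bool(self.parameters) and bool(self.directives)

    def advance(self, new_state: StepState) -> None:
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        if self.state is StepState.IS_INITIATED and not self._is_valid():
            raise ValueError("invalid attachments; step aborted without a record")
        self.state = new_state
        self.recorded_states.append(new_state)  # only post-initiation states persist
```

Calling advance() on a freshly initiated step either produces the first recorded state or raises without writing anything, which mirrors the abort path described in the text.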

