Probabilistic Analysis of Pattern Formation in Monotonic Self-Assembly.
Bottom Line:
Self-assembly systems that assemble patterns that are similar to the targets in a significant percentage are "strong" assemblers. Efficiency is a composite measure of the accuracy and purity of an assembler. Finally, some general results are established that, for efficient assembly, imply that every target pattern is guaranteed to be assembled with a minimum common positive probability, regardless of its size, and that a trichotomy exists to characterize the global behavior of typical efficient, monotonic self-assembly systems in the literature.
View Article:
PubMed Central - PubMed
Affiliation: Department of Computer Science, University of Memphis, Memphis, TN, United States of America.
ABSTRACT
Inspired by biological systems, self-assembly aims to construct complex structures. It functions through piece-wise, local interactions among component parts and has the potential to produce novel materials and devices at the nanoscale. Algorithmic self-assembly models the product of self-assembly as the output of some computational process, and attempts to control the process of assembly algorithmically. Though providing fundamental insights, these computational models have yet to fully account for the randomness that is inherent in experimental realizations, which tend to be based on trial and error methods. In order to develop a method of analysis that addresses experimental parameters, such as error and yield, this work focuses on the capability of assembly systems to produce a pre-determined set of target patterns, either accurately or perhaps only approximately. Self-assembly systems that assemble patterns that are similar to the targets in a significant percentage are "strong" assemblers. In addition, assemblers should predominantly produce target patterns, with a small percentage of errors or junk. These definitions approximate notions of yield and purity in chemistry and manufacturing. By combining these definitions, a criterion for efficient assembly is developed that can be used to compare the ability of different assembly systems to produce a given target set. Efficiency is a composite measure of the accuracy and purity of an assembler. Typical examples in algorithmic assembly are assessed in the context of these metrics. In addition to validating the method, they also provide some insight that might be used to guide experimentation. 
Finally, some general results are established that, for efficient assembly, imply that every target pattern is guaranteed to be assembled with a minimum common positive probability, regardless of its size, and that a trichotomy exists to characterize the global behavior of typical efficient, monotonic self-assembly systems in the literature.
Mentions: This section addresses the primary question: What is the appropriate definition of assembly of a target set of patterns P given that the assembler may produce a set of patterns A that potentially contains nontarget (or junk) patterns not in P? These two sets of interest are ideally the same, but in practice they may not be (Fig 2). At first, one might be tempted to require only that the assembler produce a positive fraction of all target patterns of every given size n, i.e., to impose the condition that for some fraction p > 0, ∀n: |An ∩ Pn| ≥ p |Pn|, (1) where An and Pn denote the set of patterns of size n in A and P, respectively. A set of patterns P is weakly probabilistically assemblable if this condition holds for some assembler and some p > 0. Thus, when an assembler weakly probabilistically assembles P with probability p = 1, it must assemble all the target patterns of every size n. In the aTAM, tile assembly systems are usually designed to implement an algorithm that produces the target set P with probability p = 1 because the patterns assembled are all in P and all the target patterns are eventually generated. In this case, A = P. In general, however, Condition Eq (1) is too weak to impose a hard constraint. For example, an assembler that produces every possible pattern of a given dimension weakly assembles any target set of patterns P with probability p = 1, since in that case An ∩ Pn = Pn. Yet, the assembler has no idea what the set of target patterns is, so it will generally produce an inordinate number of patterns not in P and cannot be considered "high yield" or "efficient". We also note that if one begins with any given assembler, it will likewise weakly assemble the full set of patterns it produces with p = 1 if the target set is fixed to be that set a posteriori. In contrast, an experimentalist wishing to self-assemble a product often has a particular set of target patterns in mind before proceeding to find an appropriate process in the lab to produce them.
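Condition (1) can be made concrete on finite samples. The sketch below is not from the paper: it assumes, purely for illustration, that a pattern is represented as a frozenset of occupied cells and that a pattern's size is its number of cells. It checks whether a produced set A meets the threshold p against a target set P at each size.

```python
# Illustrative sketch only: patterns are modeled as frozensets of cells,
# with len(pattern) as the pattern's size. This representation is an
# assumption, not the paper's formalism.

def weak_assembly_fraction(A, P, n):
    """Return |An ∩ Pn| / |Pn| for patterns of size n."""
    An = {a for a in A if len(a) == n}
    Pn = {q for q in P if len(q) == n}
    if not Pn:
        return 1.0  # vacuously satisfied: no targets of size n
    return len(An & Pn) / len(Pn)

def weakly_assembles(A, P, sizes, p):
    """Condition (1): for every size n, |An ∩ Pn| >= p * |Pn|."""
    return all(weak_assembly_fraction(A, P, n) >= p for n in sizes)
```

Note that, as the text observes, an assembler producing every possible pattern trivially passes this check with p = 1, which is why the condition alone is too weak.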
More desirable assemblers are expected not only to produce a significant fraction of the target set, but also to have some “idea” of what the target set of patterns is like. Therefore, we also require that no pattern produced be significantly different from all patterns of the same size in the target set. Such a concept can be obtained by adding a second condition as in Definition 3.
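The second requirement can also be sketched in code. The exact statement of Definition 3 is not reproduced in this excerpt, so the following is a hypothetical formalization: "similar" is modeled as at least a fraction s of cells agreeing with some same-size target, using the same assumed frozenset-of-cells representation as above.

```python
# Hypothetical formalization (Definition 3's exact condition is not given
# in this excerpt): every produced pattern must agree with SOME same-size
# target on at least a fraction s of its cells.

def similarity(a, q):
    """Fraction of cells of a that also appear in q (assumes len(a) == len(q))."""
    return len(a & q) / len(a)

def no_pattern_far_from_targets(A, P, s):
    """True iff every a in A has a same-size target q with similarity >= s.

    If no target of a's size exists, a is counted as far from all targets.
    """
    for a in A:
        Pn = [q for q in P if len(q) == len(a)]
        if not any(similarity(a, q) >= s for q in Pn):
            return False
    return True
```

Combining this check with Condition (1) mirrors the two-sided criterion the text builds toward: the assembler must cover a fraction of the targets, and nothing it produces may stray far from them.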