A compression algorithm for the combination of PDF sets.

Carrazza S, Latorre JI, Rojo J, Watt G - Eur Phys J C Part Fields (2015)

Bottom Line: We illustrate our strategy with the combination and compression of the recent NNPDF3.0, CT14 and MMHT14 NNLO PDF sets. The resulting compressed Monte Carlo PDF sets are validated at the level of parton luminosities and LHC inclusive cross sections and differential distributions. We determine that around 100 replicas provide an adequate representation of the probability distribution for the original combined PDF set, suitable for general applications to LHC phenomenology.


Affiliation: Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, Via Celoria 16, 20133 Milan, Italy.

ABSTRACT

The current PDF4LHC recommendation to estimate uncertainties due to parton distribution functions (PDFs) in theoretical predictions for LHC processes involves the combination of separate predictions computed using PDF sets from different groups, each of which comprises a relatively large number of either Hessian eigenvectors or Monte Carlo (MC) replicas. While many fixed-order and parton shower programs allow the evaluation of PDF uncertainties for a single PDF set at no additional CPU cost, this feature is not universal, and, moreover, the a posteriori combination of the predictions using at least three different PDF sets is still required. In this work, we present a strategy for the statistical combination of individual PDF sets, based on the MC representation of Hessian sets, followed by a compression algorithm for the reduction of the number of MC replicas. We illustrate our strategy with the combination and compression of the recent NNPDF3.0, CT14 and MMHT14 NNLO PDF sets. The resulting compressed Monte Carlo PDF sets are validated at the level of parton luminosities and LHC inclusive cross sections and differential distributions. We determine that around 100 replicas provide an adequate representation of the probability distribution for the original combined PDF set, suitable for general applications to LHC phenomenology.
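As an illustration of the first step of this strategy, the conversion of a Hessian set into Monte Carlo replicas, the sketch below draws one Gaussian random number per eigenvector and shifts the central member along the corresponding plus/minus direction. This is only a minimal sketch of a Watt-Thorne-style prescription on a fixed (x, Q) grid, not the authors' code; the array names (f0, f_plus, f_minus) and the function hessian_to_mc_replicas are illustrative placeholders.

```python
import numpy as np

def hessian_to_mc_replicas(f0, f_plus, f_minus, n_rep, seed=0):
    """Generate Monte Carlo replicas from a Hessian PDF set (illustrative sketch).

    f0      : array (n_points,)        -- central member sampled on an (x, Q) grid
    f_plus  : array (n_eig, n_points)  -- "+" eigenvector members on the same grid
    f_minus : array (n_eig, n_points)  -- "-" eigenvector members on the same grid
    n_rep   : number of Monte Carlo replicas to generate
    """
    rng = np.random.default_rng(seed)
    n_eig = f_plus.shape[0]
    replicas = np.empty((n_rep, f0.size))
    for r in range(n_rep):
        # one Gaussian random number per eigenvector direction
        R = rng.standard_normal(n_eig)
        # pick the "+" or "-" member according to the sign of the fluctuation
        shift = np.where(R[:, None] >= 0.0, f_plus - f0, f_minus - f0)
        # shift the central member along every eigenvector direction
        replicas[r] = f0 + (np.abs(R)[:, None] * shift).sum(axis=0)
    return replicas
```

A usage example would pass grids read from an LHAPDF-style Hessian set and choose n_rep comparable to the sizes used in the paper (several hundred replicas per set before combination).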




Fig. 5: Schematic representation of the compression strategy used in this work: a prior PDF set and the number of compressed replicas are the inputs of a genetic algorithm (GA), which selects the subset of replicas that minimizes the ERF between the prior and the compressed set.

Mentions: A schematic diagram of our compression strategy is shown in Fig. 5. The prior set of Monte Carlo PDF replicas, the desired number of compressed replicas, and the value of the factorization scale Q at which the PDFs are evaluated are the required inputs of the compression algorithm. Note that it is enough to sample the PDFs in a range of values of Bjorken-x at a single fixed scale, since the DGLAP equation uniquely determines the evolution to higher scales. The minimization of the error function is performed using genetic algorithms (GAs), similarly to the neural-network training of the NNPDF fits. The GAs work as usual by finding candidate subsets of replicas leading to smaller values of the error function, Eq. (5), until a suitable convergence criterion is satisfied. The output of this algorithm is the list of replicas from the prior set that minimizes the error function. These replicas define the CMC-PDFs for each specific number of compressed replicas. The final step of the process is a series of validation tests in which the CMC-PDFs are compared to the prior set in terms of parton distributions at different scales, parton luminosities, and LHC cross sections, in a fully automated way.
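To make the selection step of Fig. 5 concrete, the following minimal sketch evolves a single candidate subset of replica indices with an accept-if-better mutation loop, using a toy error function built only from means and standard deviations. The actual ERF of Eq. (5) compares further statistical estimators of the prior and compressed ensembles, and the names here (erf_toy, compress_ga) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def erf_toy(prior, subset_idx):
    """Toy error function: squared differences of mean and standard deviation
    between the compressed subset and the prior, summed over all grid points."""
    sub = prior[subset_idx]
    d_mean = (sub.mean(axis=0) - prior.mean(axis=0)) ** 2
    d_std = (sub.std(axis=0) - prior.std(axis=0)) ** 2
    return d_mean.sum() + d_std.sum()

def compress_ga(prior, n_comp, n_generations=2000, n_mutations=5, seed=0):
    """Minimal GA-style sketch: mutate a few entries of the index subset and
    keep the mutation only if it lowers the error function."""
    rng = np.random.default_rng(seed)
    n_rep = prior.shape[0]
    best = rng.choice(n_rep, size=n_comp, replace=False)
    best_erf = erf_toy(prior, best)
    for _ in range(n_generations):
        trial = best.copy()
        for _ in range(n_mutations):
            slot = rng.integers(n_comp)          # which slot of the subset to change
            candidate = rng.integers(n_rep)      # which prior replica to try
            if candidate not in trial:           # avoid duplicate indices in the subset
                trial[slot] = candidate
        trial_erf = erf_toy(prior, trial)
        if trial_erf < best_erf:                 # accept only improving mutations
            best, best_erf = trial, trial_erf
    return np.sort(best), best_erf

# Usage sketch with synthetic data: 1000 prior replicas on 50 grid points,
# compressed down to 100 replicas.
# prior = np.random.default_rng(1).normal(size=(1000, 50))
# idx, erf = compress_ga(prior, n_comp=100)
```

The accept-if-better loop stands in for the full GA machinery (populations, crossover, tuned mutation rates) used in the NNPDF-style training; the key point it illustrates is that only the list of selected indices, not the replicas themselves, is optimized.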

