A compression algorithm for the combination of PDF sets.

Carrazza S, Latorre JI, Rojo J, Watt G - Eur Phys J C Part Fields (2015)

Bottom Line: We illustrate our strategy with the combination and compression of the recent NNPDF3.0, CT14 and MMHT14 NNLO PDF sets. The resulting compressed Monte Carlo PDF sets are validated at the level of parton luminosities and LHC inclusive cross sections and differential distributions. We determine that around 100 replicas provide an adequate representation of the probability distribution for the original combined PDF set, suitable for general applications to LHC phenomenology.


Affiliation: Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, Via Celoria 16, 20133 Milan, Italy.

ABSTRACT

The current PDF4LHC recommendation to estimate uncertainties due to parton distribution functions (PDFs) in theoretical predictions for LHC processes involves the combination of separate predictions computed using PDF sets from different groups, each of which comprises a relatively large number of either Hessian eigenvectors or Monte Carlo (MC) replicas. While many fixed-order and parton shower programs allow the evaluation of PDF uncertainties for a single PDF set at no additional CPU cost, this feature is not universal, and, moreover, the a posteriori combination of the predictions using at least three different PDF sets is still required. In this work, we present a strategy for the statistical combination of individual PDF sets, based on the MC representation of Hessian sets, followed by a compression algorithm for the reduction of the number of MC replicas. We illustrate our strategy with the combination and compression of the recent NNPDF3.0, CT14 and MMHT14 NNLO PDF sets. The resulting compressed Monte Carlo PDF sets are validated at the level of parton luminosities and LHC inclusive cross sections and differential distributions. We determine that around 100 replicas provide an adequate representation of the probability distribution for the original combined PDF set, suitable for general applications to LHC phenomenology.
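The compression step described in the abstract can be illustrated with a toy sketch: given a Monte Carlo ensemble, select a subset of replicas whose statistical estimators best reproduce those of the full set. The error function below is a deliberately simplified stand-in for the full ERF of Eq. (5) (which also includes higher moments, correlations, and Kolmogorov distances), and the random search is a stand-in for the paper's genetic algorithm; all names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "prior" ensemble: 1000 replicas of a PDF sampled on 50 x-points.
prior = rng.normal(loc=1.0, scale=0.2, size=(1000, 50))

def erf(subset, full):
    """Simplified error function: mismatch of the mean and standard
    deviation between the compressed subset and the full ensemble.
    The paper's ERF, Eq. (5), sums several such contributions
    (skewness, kurtosis, correlations, ...)."""
    dm = np.mean((subset.mean(axis=0) - full.mean(axis=0)) ** 2)
    ds = np.mean((subset.std(axis=0) - full.std(axis=0)) ** 2)
    return dm + ds

def compress(full, n_keep, n_trials=2000, rng=rng):
    """Stochastic search over subsets of size n_keep (the paper uses a
    genetic algorithm; random search is enough for a sketch)."""
    best_idx, best_val = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(full), size=n_keep, replace=False)
        val = erf(full[idx], full)
        if val < best_val:
            best_idx, best_val = idx, val
    return best_idx, best_val

idx, val = compress(prior, n_keep=100)

# The compressed set should beat a typical random partition of equal size:
rand_val = np.mean([erf(prior[rng.choice(1000, 100, replace=False)], prior)
                    for _ in range(100)])
```

The comparison against `rand_val` mirrors the validation strategy of the paper: a compression is only useful if it outperforms a random selection of the same number of replicas.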



© Copyright Policy - OpenAccess

Fig7: The various contributions to the ERF, Eq. (5), for the compression of the NNPDF3.0 NLO set. For each value of the number of compressed replicas, we show the value of each contribution to the ERF for the best-fit result of the compression algorithm (red points). We compare the results of the compression with the values of the ERF averaged over random partitions of replicas of the same size (blue points), as well as the 50, 68, and 90 % confidence-level intervals computed over these random partitions. The dashed horizontal line is the 68 % lower band of the ERF for the average of the random partitions, and is inserted for illustration purposes only.

Mentions: In order to quantify the performance of the compression algorithm, and to compare it with that of a random selection of the reduced set of replicas, Fig. 7 shows the various contributions to the ERF, Eq. (5), for the compression of the NNPDF3.0 NLO set. For each value of the number of compressed replicas, we show the value of each contribution to the ERF for the best-fit result of the compression algorithm (red points). We compare the results of the compression with the values of the ERF averaged over random partitions of replicas of the same size (blue points), as well as the 50, 68, and 90 % confidence-level intervals computed over these random partitions.
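The random-partition baseline used in Fig. 7 can be sketched as follows: draw many random subsets of a given size, evaluate the error function on each, and extract percentile-based confidence-level intervals. As before, `erf` is a simplified stand-in for the full Eq. (5), and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
prior = rng.normal(1.0, 0.2, size=(1000, 50))   # toy replica ensemble

def erf(subset, full):
    # Simplified stand-in for Eq. (5): mean + std mismatch only.
    return (np.mean((subset.mean(0) - full.mean(0)) ** 2)
            + np.mean((subset.std(0) - full.std(0)) ** 2))

def random_partition_intervals(full, n_keep, n_part=1000, rng=rng):
    """ERF values over n_part random partitions of n_keep replicas,
    summarised by central 50/68/90 % confidence-level intervals."""
    vals = np.array([erf(full[rng.choice(len(full), n_keep, replace=False)],
                         full)
                     for _ in range(n_part)])
    cls = {}
    for cl in (50, 68, 90):
        lo, hi = np.percentile(vals, [50 - cl / 2, 50 + cl / 2])
        cls[cl] = (lo, hi)
    return vals.mean(), cls

mean_val, intervals = random_partition_intervals(prior, n_keep=100)
```

A compression is then judged successful when its ERF falls below these random-partition bands, as in the red-vs-blue comparison of Fig. 7.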

