MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

Idris M, Hussain S, Siddiqi MH, Hassan W, Syed Muhammad Bilal H, Lee S - PLoS ONE (2015)

Bottom Line: The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.


Affiliation: Ubiquitous Computing Lab., Department of Computer Engineering, Kyung Hee University, Yongin-si, Gyeonggi-do, Republic of Korea.

ABSTRACT
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. MRPack uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
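The multi-key idea summarized in the abstract can be illustrated with a short, hypothetical Hadoop sketch. This is not the authors' implementation: the class names, the example algorithm set (word count and character count), and the "#" key separator are assumptions for illustration only. The sketch shows how map output keys could be tagged with an algorithm identifier so that a single MR job carries intermediate data for several algorithms, and the reducer recovers the tag to apply the matching reduce logic.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class MultiAlgorithmSketch {

    // Mapper: runs every registered algorithm over each input record and tags
    // the emitted key with that algorithm's identifier (the "multi-key").
    public static class MultiAlgoMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final String[] ALGORITHMS = {"wordcount", "charcount"}; // illustrative set

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            for (String algo : ALGORITHMS) {
                if (algo.equals("wordcount")) {
                    // Word count: one tagged key per token.
                    for (String token : line.toString().split("\\s+")) {
                        if (!token.isEmpty()) {
                            context.write(new Text(algo + "#" + token), new IntWritable(1));
                        }
                    }
                } else {
                    // Character count: characters per input line under a single tagged key.
                    context.write(new Text(algo + "#chars"),
                                  new IntWritable(line.toString().length()));
                }
            }
        }
    }

    // Reducer: recovers the algorithm tag from the key and applies the matching
    // reduce logic, so one job produces results for all algorithms in one pass.
    public static class MultiAlgoReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text taggedKey, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            String algo = taggedKey.toString().split("#", 2)[0];
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            // Both example algorithms reduce by summation; a fuller design would branch on the tag.
            if (algo.equals("wordcount") || algo.equals("charcount")) {
                context.write(taggedKey, new IntWritable(sum));
            }
        }
    }
}

A production design along these lines would likely pair the tagged keys with a custom partitioner so that no single algorithm's keys overload one reducer, in the spirit of the skew mitigation the abstract mentions.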



Fig 5 (pone.0136259.g005): Analysis based on changing data size w.r.t. overall job execution time.

Mentions: In this subsection, we evaluate the performance based on changing data size with a constant cluster size. Both MRPack and MapReduce are executed on varying datasets and the same cluster of eight nodes. In both cases, performance improves after a certain dataset-size threshold, as shown in Fig 5. However, generic MapReduce executes a single algorithm per pass/job, whereas MRPack executes multiple algorithms in a single pass/job. An explicit comparison between the two with regard to data size shows a significant performance improvement for MRPack.
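For context, the following is a minimal driver sketch of how such a data-size comparison can be scripted; it is not the authors' evaluation harness. The input paths and the reuse of the mapper/reducer classes from the earlier sketch are assumptions. It times one Hadoop job per input directory of increasing size and prints the elapsed wall-clock time.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class DataSizeExperiment {
    public static void main(String[] args) throws Exception {
        // Hypothetical input directories of growing size (e.g., 1 GB, 2 GB, 4 GB, 8 GB).
        String[] inputs = {"/data/1gb", "/data/2gb", "/data/4gb", "/data/8gb"};

        for (String input : inputs) {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "multi-algo-sketch " + input);
            job.setJarByClass(DataSizeExperiment.class);
            job.setMapperClass(MultiAlgorithmSketch.MultiAlgoMapper.class);
            job.setReducerClass(MultiAlgorithmSketch.MultiAlgoReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(input));
            FileOutputFormat.setOutputPath(job, new Path(input + "-out"));

            long start = System.currentTimeMillis();
            job.waitForCompletion(true);                  // run the job to completion
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(input + "\t" + elapsed + " ms");
        }
    }
}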

