MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

Idris M, Hussain S, Siddiqi MH, Hassan W, Syed Muhammad Bilal H, Lee S - PLoS ONE (2015)

Bottom Line: The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.

View Article: PubMed Central - PubMed

Affiliation: Ubiquitous Computing Lab., Department of Computer Engineering, Kyung Hee University, Yongin-si, Gyeonggi-do, Republic of Korea.

ABSTRACT
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR support execution of only a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. MRPack uses the available computing resources by dynamically managing task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.

No MeSH data available.


pone.0136259.g002 (Fig 2): Composite Key Structure: This structure shows key modeling in MRPack, where the composite key is used to differentiate the algorithms.

Mentions: In MRPack, maintaining and managing different keys is a difficult task. Data aggregation, partitioning, and sorting are all based on keys. To manage keys efficiently and overcome the challenge of skewed data, we design a hierarchical and composite key structure. In this scheme, the base class is a general abstract class, and for each algorithm we extend the base class to handle that algorithm's special case. We apply polymorphism and composition techniques to handle the keys. The general structure of the keys is shown in Fig 2. The key for each algorithm depends on its specification and requirements.
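
To make the composite-key idea concrete, the sketch below shows one way such a hierarchy could be written against Hadoop's Java API. It is an illustrative assumption rather than the authors' actual implementation: the class names (AlgorithmKey, WordCountKey) and the choice of an algorithm identifier plus an algorithm-specific payload are hypothetical, but the pattern of an abstract WritableComparable base class extended per algorithm reflects the polymorphism-and-composition approach described above.

    // Illustrative sketch only: class and field names are hypothetical,
    // not taken from the MRPack source. The pattern is an abstract
    // WritableComparable base key carrying an algorithm identifier,
    // extended with algorithm-specific fields so that one MR job can
    // partition and sort intermediate data for several algorithms.
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableComparable;

    // General abstract base class for all composite keys.
    abstract class AlgorithmKey implements WritableComparable<AlgorithmKey> {
        protected int algorithmId;   // differentiates which algorithm emitted the record

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeInt(algorithmId);
            writeBody(out);          // subclass serializes its own fields
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            algorithmId = in.readInt();
            readBody(in);
        }

        @Override
        public int compareTo(AlgorithmKey other) {
            // Sort first by algorithm, then by the algorithm-specific part of the key.
            int byAlgorithm = Integer.compare(algorithmId, other.algorithmId);
            return byAlgorithm != 0 ? byAlgorithm : compareBody(other);
        }

        protected abstract void writeBody(DataOutput out) throws IOException;
        protected abstract void readBody(DataInput in) throws IOException;
        protected abstract int compareBody(AlgorithmKey other);
    }

    // Hypothetical extension for one algorithm: a word-count-style key.
    class WordCountKey extends AlgorithmKey {
        private String word = "";

        public WordCountKey() { this.algorithmId = 1; }   // no-arg constructor required by Hadoop

        public WordCountKey(String word) {
            this();
            this.word = word;
        }

        @Override
        protected void writeBody(DataOutput out) throws IOException {
            out.writeUTF(word);
        }

        @Override
        protected void readBody(DataInput in) throws IOException {
            word = in.readUTF();
        }

        @Override
        protected int compareBody(AlgorithmKey other) {
            // Only called when algorithmId matches, so the cast is safe.
            return word.compareTo(((WordCountKey) other).word);
        }
    }

In practice a matching hashCode() or a custom Partitioner keyed on algorithmId would also be needed so that records from different algorithms reach the intended reducers; the paper's own partitioning and skew-mitigation details are not reproduced here.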

