Partial storage optimization and load control strategy of cloud data centers.

Al Nuaimi K, Mohamed N, Al Nuaimi M, Al-Jaroodi J - ScientificWorldJournal (2015)

Bottom Line: Our focus is to improve performance and optimize storage usage by providing DaaS on the cloud. Reducing the space needed helps reduce the cost of providing that space. Moreover, performance also increases, since multiple cloud servers collaborate to deliver the data to cloud clients faster.

View Article: PubMed Central - PubMed

Affiliation: UAE University, P.O. Box 15551, Al Ain, UAE.

ABSTRACT
We present a novel approach to solving cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed helps reduce the cost of providing that space. Moreover, performance also increases, since multiple cloud servers collaborate to deliver the data to cloud clients faster.

No MeSH data available.



fig6: Cloud file redundant data removal process.

Mentions: To implement this technique we use the workflows shown in Figures 5 and 6. Figure 5 describes the workflow for downloading a file by the cloud client. To download a file, the client initiates a request to the cloud. The cloud controller then checks whether the file was downloaded before; if so, there will be data about the file partitions that were previously downloaded and which servers provided them. This history helps in selecting which server should provide which partition. The controller retrieves the required data from the database and assigns the servers that already hold the file partitions to the task. After the data is downloaded from all the servers, the client receives the requested file. However, each file needs a first-time download to build this history. Therefore, the alternative workflow is selected when a file is downloaded for the first time: the file size in bytes is fetched and the block size is determined (Pseudocode 1). Then, servers are assigned based on their availability and processing speeds. Once the dual-direction download completes from all servers for the first time, the client and the database are updated. The database must always reflect which servers processed each partition, so that the controller can later decide which partitions to keep on each server and which to remove.
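The first-time workflow described above (fetch the file size, determine a block size, assign partitions to servers, download each partition concurrently, and record the assignment history for the controller) can be sketched as follows. This is a minimal illustration and not the authors' implementation: the function names, the equal-size block split, and the in-memory stand-in for remote servers are assumptions, and the dual-direction transfer is simulated by reading a partition from both ends toward the middle.

```python
from concurrent.futures import ThreadPoolExecutor

def determine_block_size(file_size: int, num_servers: int) -> int:
    """One block per server (stand-in for Pseudocode 1); ceiling
    division so the final block absorbs any remainder."""
    return -(-file_size // num_servers)

def assign_partitions(file_size: int, servers: list) -> list:
    """Assign each server a (server, start, end) byte range. This plan
    doubles as the download history the controller stores in the DB."""
    block = determine_block_size(file_size, len(servers))
    plan = []
    for i, server in enumerate(servers):
        start = i * block
        end = min(start + block, file_size)
        plan.append((server, start, end))
    return plan

def dual_direction_fetch(data: bytes, start: int, end: int) -> bytes:
    """Simulate a dual-direction transfer: read the partition from the
    head forward and from the tail backward until the readers meet,
    then reassemble the partition in order."""
    mid = (start + end) // 2
    head = data[start:mid]   # forward reader
    tail = data[mid:end]     # backward reader, reassembled in order
    return head + tail

def first_time_download(data: bytes, servers: list):
    """First-time workflow: plan partitions, fetch them concurrently,
    and return the reassembled file plus the history to persist."""
    plan = assign_partitions(len(data), servers)
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(
            lambda p: dual_direction_fetch(data, p[1], p[2]), plan))
    return b"".join(parts), plan  # plan = history for the controller DB
```

On a later request for the same file, the controller would skip `assign_partitions` and reuse the stored plan, assigning each partition to the server that already holds it.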

