Please use this identifier to cite or link to this item: https://hdl.handle.net/10321/4875
Title: Energy-efficient resource management framework for cloud data centers
Authors: Sibiya, Khulekani 
Issue Date: May-2023
Abstract: 
The continuing global surge in various cloud services, IoT, and Edge (Fog) computing has led
to a sharp increase in the demand for data centers. By definition, a data center is a physical
facility that corporations and organizations use to house their critical applications and data. A data
center's design is based on a network of computing and storage resources that enables the delivery of shared applications and data.
Notable advantages of data centers include, but are not limited to, their ability to provide services to end users at affordable rates under various plans as per contractual agreements.
They also offer a robust hardware as well as software ecosystem. In operational terms, data
centers offer reliable and enhanced system performance by carefully distributing
traffic loads uniformly across the cluster nodes. In that way, end users are relieved of
maintenance responsibilities. Data centers also afford instant scalability in response to users' changing capacity demands. To enhance the fail-safe abilities of data centers, backup systems are
incorporated. A notable drawback of data centers is their high power consumption, which drives up
both CAPEX and OPEX costs. For example, it is prohibitively costly to erect robust cooling systems for
a large-scale data center, and the same cooling system ought to be scalable to accommodate future
expansion of the data center as new services requiring additional hardware are
incorporated. Scaling the energy supply capacity is therefore quite a challenge. Consequently, how to
maximize power utilization and optimize performance per power budget is critical for data centers to deliver sufficient computation capability. Overall, the operational costs of data centers are
directly linked to the resource management algorithms used to assign virtual machines
(VMs) to physical hardware servers, and to the degree of flexibility with which VMs can be relocated elsewhere in case
of emergencies, usually associated with power losses or excessive heating of system elements.
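To make that placement decision concrete, the sketch below shows a simple power-aware first-fit heuristic in Java: each VM is assigned to the feasible host whose estimated power increase is smallest, under a linear utilization-to-power model. This is only an illustration of the kind of problem involved; the class names, the linear power model, and the capacity check are assumptions made for this sketch and do not represent the scheme proposed in the thesis.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative power-aware first-fit VM placement (not the thesis scheme).
 * Each VM goes to the feasible host whose power draw rises the least.
 */
public class PowerAwarePlacementSketch {

    static class Host {
        final int id;
        final int totalMips;      // CPU capacity
        int usedMips = 0;         // currently allocated capacity
        final double idleW;       // power at 0% utilization (assumed)
        final double maxW;        // power at 100% utilization (assumed)

        Host(int id, int totalMips, double idleW, double maxW) {
            this.id = id; this.totalMips = totalMips;
            this.idleW = idleW; this.maxW = maxW;
        }

        // Simple linear utilization-to-power model (an assumption).
        double powerAt(int mips) {
            return idleW + (maxW - idleW) * mips / totalMips;
        }
    }

    static class Vm {
        final int id;
        final int mips;
        Vm(int id, int mips) { this.id = id; this.mips = mips; }
    }

    /** Assign each VM to the host with the smallest marginal power increase. */
    static void place(List<Vm> vms, List<Host> hosts) {
        for (Vm vm : vms) {
            Host best = null;
            double bestDelta = Double.MAX_VALUE;
            for (Host h : hosts) {
                if (h.usedMips + vm.mips > h.totalMips) continue;   // capacity check
                double delta = h.powerAt(h.usedMips + vm.mips) - h.powerAt(h.usedMips);
                if (delta < bestDelta) { bestDelta = delta; best = h; }
            }
            if (best == null) {
                System.out.println("VM " + vm.id + ": no host with spare capacity");
                continue;
            }
            best.usedMips += vm.mips;
            System.out.printf("VM %d -> Host %d (+%.1f W)%n", vm.id, best.id, bestDelta);
        }
    }

    public static void main(String[] args) {
        List<Host> hosts = new ArrayList<>();
        hosts.add(new Host(0, 10000, 90, 250));
        hosts.add(new Host(1, 10000, 70, 220));
        List<Vm> vms = new ArrayList<>();
        vms.add(new Vm(0, 2500));
        vms.add(new Vm(1, 4000));
        vms.add(new Vm(2, 6000));
        place(vms, hosts);
    }
}
```

In practice, a cost function of this kind would also weigh cooling-related energy, SLA penalties, and the cost of later VM migrations, which is the direction the scheme described below takes.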
The main contribution of this thesis is in proposing and analyzing an SLA-based,
distributed, hierarchical resource allocation and optimization scheme that considers constraints
such as energy consumption and cooling-related energy consumption, in addition to the scalability issue. We also incorporate a load-balancing algorithm to minimize the operational costs of
the proposed scheme. We utilize CloudSim, a customizable tool that supports the
modeling and creation of several VMs (as well as the mapping of tasks to appropriate VMs), for the
scheme's performance evaluation. The results obtained show that the scheme significantly reduces the operational costs of the overall cloud data center system while at the same time
ensuring energy efficiency.
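For readers unfamiliar with CloudSim, the fragment below is a minimal sketch of how a simulation is typically wired together with the CloudSim 3.x API: one datacenter with a single host, one broker, one VM, and one cloudlet (task) mapped to that VM. All parameter values are placeholders, and the energy-aware and cooling-aware allocation policies developed in the thesis are not shown here.

```java
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

// Minimal CloudSim 3.x sketch; placeholder parameters, not the thesis setup.
public class MinimalCloudSimSketch {

    public static void main(String[] args) {
        try {
            // Initialize the CloudSim library: 1 user, no event tracing.
            CloudSim.init(1, Calendar.getInstance(), false);

            // One host with a single processing element (PE) of 1000 MIPS.
            List<Pe> peList = new ArrayList<Pe>();
            peList.add(new Pe(0, new PeProvisionerSimple(1000)));
            List<Host> hostList = new ArrayList<Host>();
            hostList.add(new Host(0,
                    new RamProvisionerSimple(2048),     // 2 GB RAM
                    new BwProvisionerSimple(10000),     // bandwidth
                    1000000,                            // storage (MB)
                    peList,
                    new VmSchedulerTimeShared(peList)));

            // Datacenter characteristics: architecture, OS, VMM and cost model.
            DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                    "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
            new Datacenter("Datacenter_0", characteristics,
                    new VmAllocationPolicySimple(hostList),
                    new LinkedList<Storage>(), 0);

            // Broker that submits VMs and cloudlets on behalf of the user.
            DatacenterBroker broker = new DatacenterBroker("Broker_0");
            int brokerId = broker.getId();

            // One VM, then one cloudlet bound to that VM.
            List<Vm> vmList = new ArrayList<Vm>();
            vmList.add(new Vm(0, brokerId, 1000, 1, 512, 1000, 10000, "Xen",
                    new CloudletSchedulerTimeShared()));
            broker.submitVmList(vmList);

            List<Cloudlet> cloudletList = new ArrayList<Cloudlet>();
            UtilizationModel full = new UtilizationModelFull();
            Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
            cloudlet.setUserId(brokerId);
            cloudlet.setVmId(0);
            cloudletList.add(cloudlet);
            broker.submitCloudletList(cloudletList);

            // Run the simulation and report per-cloudlet execution times.
            CloudSim.startSimulation();
            CloudSim.stopSimulation();
            List<Cloudlet> finished = broker.getCloudletReceivedList();
            for (Cloudlet c : finished) {
                System.out.println("Cloudlet " + c.getCloudletId()
                        + " finished on VM " + c.getVmId()
                        + " in " + c.getActualCPUTime() + " s");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

An energy-oriented study would typically swap VmAllocationPolicySimple for a custom allocation policy and attach power models to the hosts; those details belong to the thesis itself and are omitted from this sketch.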
Description: 
A thesis submitted to the Faculty of Engineering and the Built Environment in fulfillment of the requirements for the degree of Master in Engineering (M. Eng), Durban University of Technology, Durban, South Africa, 2022.
URI: https://hdl.handle.net/10321/4875
DOI: https://doi.org/10.51415/10321/4875
Appears in Collections:Theses and dissertations (Engineering and Built Environment)

Files in This Item:
File: Sibiya_K_2023.pdf (4.27 MB, Adobe PDF)