DESIGN AND IMPLEMENTATION OF SERVICE ORIENTED ARCHITECTURE ON CLOUD COMPUTING PLATFORM USING VIRTUALIZATION TECHNIQUE.
OJO, Moses Abiodun
BEING A RESEARCH PROJECT SUBMITTED TO
THE DEPARTMENT OF TELECOMMUNICATION SCIENCE
FACULTY OF COMMUNICATION AND INFORMATION SCIENCES
UNIVERSITY OF ILORIN, ILORIN, NIGERIA.
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR
THE AWARD OF BACHELOR OF SCIENCE (B.Sc.) DEGREE
IN TELECOMMUNICATION SCIENCE.
Cloud computing has taken the IT world by storm; indeed, it is ubiquitous [1][2]. Often viewed as the utopia of utility computing, it offers elasticity and monetary benefits second to none [1]. The term has its critics, however: in 2008, Oracle CEO Larry Ellison chastised the hype around cloud computing, saying the term was overused and being applied to everything in the computer world [3].
The Berkeley RAD Lab explains cloud computing as follows: it comprises the applications delivered as services over the Internet or an intranet, together with the hardware and software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS), while the datacenter hardware and software is what we will term a Cloud [3]. When a Cloud is offered in a pay-as-you-go manner to the public, we call it a public cloud; the service being sold is Utility Computing. In contrast, the term private cloud refers to the internal datacenters of an organization, not made accessible to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing [3].
As a key service-delivery platform in the field of service computing, cloud computing provides a large-scale distributed computing paradigm that enables resource sharing driven by economies of scale: a pool of abstracted, virtualized, dynamically scalable computing infrastructure, middleware, application development platforms, and services is delivered on demand to external users over the Internet, or to private users over an intranet [4].
In addition, networking is a key element of cloud infrastructure, providing data communications both inside a cloud data center and among data centers distributed across different locations. Experience has shown that networking performance has a significant impact on the quality of cloud services; in many cases, data communications become a bottleneck that prevents clouds from supporting high-end applications. Networks with quality-of-service (QoS) capabilities are therefore an indispensable ingredient of high-performance cloud computing [5]. The significant role that networking plays in cloud computing calls for a holistic view of both computing and networking resources in a cloud environment. Such a view requires the underlying networking infrastructure to be open and visible to upper-layer applications in the cloud, enabling combined control, management, and optimization of computing and networking resources for cloud service provisioning. Researchers in the networking area address this complexity by virtualizing all the required resources, both network infrastructure and software services; this approach is called network virtualization. Network virtualization is regarded as a fundamental attribute of the cloud computing paradigm and plays a vital role in next-generation networks [2].
There are four types of resources in cloud computing that can be shared and consumed over the Internet or an intranet. The first is infrastructure resources, which include computing power, storage, and machine provisioning. For example, Amazon EC2 provides a web service interface to easily request and configure capacity online [6]. The Xdrive Box service provides online storage to users [7]. Microsoft SkyDrive provides a free storage service with an integrated offline and online model that keeps privacy-sensitive files on local hard drives while enabling people to access those files remotely [8]. In the area of computing-power sharing, the Grid computing initiative has made it its major focus to use clustering and parallel computing technologies to share computing power, based on scheduling tasks onto computers while they are idle. The second type of resource is software resources, including middleware and development resources. The middleware comprises cloud-centric operating systems, application servers, databases, and others; the development resources include design platforms, development tools, testing tools, deployment tools, and open-source reference projects. The third type is application resources. The leading companies in the information industry are gradually moving applications and related data to the Internet, with software applications delivered through the Software as a Service (SaaS) model [9]. The fourth type is platform resources, known as Platform as a Service (PaaS). This is another application delivery model that supplies all the resources required to build and run applications and services entirely from the Internet or an intranet, without having to download or install software on a user's personal computer. PaaS services include application design, development, simulation and testing, deployment, and hosting [2].
This chapter presents the aim of the project, the problem statement, an overview of what is meant by service-oriented architecture, and the methodology adopted to accomplish the stated objectives.
1.2 STATEMENT OF PROBLEM
Following from the preamble above, service-oriented architecture can be deployed on cloud computing service models, providing large-scale distributed computing services that enable resource sharing: a pool of virtualized, scalable computing infrastructure is delivered on demand to users over the Internet, or to private users over an intranet. The need therefore arises for IT establishments, tertiary institutions, and information technology laboratories to migrate to it, eliminating issues such as users' hardware incompatibility, software and hardware architecture differences (64-bit versus 32-bit), the need to install software on each host on the network, and the shortage of physical hardware resources in technology laboratories, libraries, and data centers.
1.3 AIM OF THE PROJECT
The aim of this project is to implement the Software-as-a-Service (SaaS) model on a cloud computing infrastructure using virtualization techniques.
1.4 OBJECTIVES OF THE PROJECT
The objectives of this project are as follows:
1. To deploy a dedicated virtualization server on the server side.
2. To deploy a cloud computing infrastructure for the virtualization server.
3. To investigate the QoS metrics, such as throughput, latency, delay, and jitter, that affect services in a cloud computing environment.
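As a rough illustration of objective 3, the QoS metrics listed above can be derived from per-packet measurements; a minimal sketch in Python, where the latency samples, byte count, and duration are hypothetical values, and jitter is taken as the mean absolute difference between consecutive latency samples:

```python
from statistics import mean

def qos_metrics(latencies_ms, bytes_received, duration_s):
    """Derive basic QoS metrics from per-packet latency samples."""
    avg_latency = mean(latencies_ms)
    # Jitter: mean absolute difference between consecutive latency samples.
    jitter = mean(abs(a - b) for a, b in zip(latencies_ms[1:], latencies_ms))
    # Throughput: bits delivered per second over the measurement window.
    throughput_bps = bytes_received * 8 / duration_s
    return avg_latency, jitter, throughput_bps

avg, jit, thr = qos_metrics([20.0, 22.0, 19.0, 25.0],
                            bytes_received=1_000_000, duration_s=2.0)
```

In a real measurement campaign these samples would come from tools such as ping or a packet capture, but the arithmetic is the same.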
1.5 SCOPE OF THE STUDY
The scope of this project is to design and implement a service-oriented network infrastructure within the Faculty of Communication and Information Sciences, University of Ilorin, providing a private cloud computing service for lecturers and students.
To achieve the aim and objectives of this project, a dedicated virtualization server with cloud computing infrastructure must be set up in the TCS Lab, and virtual machines created for the academic software frequently used by students (Genx Probe, Opnet, Riverbed Modeler, and Microsoft Windows Server). The virtualization server will also be attached to a wireless access point, providing wireless access in the lab, user mobility, wider coverage, and remote access outside the lab. Finally, testing, network analysis, and QoS measurements will be carried out to investigate the feasibility of the project.
A comprehensive, stage-by-stage account of this implementation is given in the subsequent chapters.
Figure 1.1: Architectural Design of Cloud Computing Infrastructure.
2.0 SERVICE ORIENTED ARCHITECTURE (SOA)
A service-oriented architecture (SOA) is a style of software design in which services are provided to users through a communication protocol over a network. The core principle of service-oriented architecture is independence from vendors, products, and technologies [10]. A service is a discrete unit of functionality that can be accessed remotely and acted upon independently. In SOA, services use protocols that describe how they pass and parse messages using description metadata; this metadata describes both the functional characteristics of the service and its quality-of-service characteristics [11].
The idea of IT services granting access to business functionality dates back to the Network Services Model from The Burton Group in 1991 [12]. In this approach, network services, including file, print, data, directory, messaging, and security, were designed for use within the corporate intranet. The term 'service-oriented architecture' was coined by Roy Schulte and Yefim Natis [13][14] in 1996 to describe a style of multi-tier computing that helps organizations share logic and data among various applications and usage modes; in their work, SOA refers to a software architecture that builds a topology of interfaces, interface implementations, and interface calls. The Applications Consortium recommended in 1998 [15] that 'organizations should plan to move to a multi-tier, service-oriented architecture (SOA) in which strategic applications are partitioned between user services, business services, data services, and legacy services.' The predominant programming models at the time were distributed object models, notably CORBA (Common Object Request Broker Architecture) and DCOM (Distributed Component Object Model). To avoid the complications seen with CORBA and DCOM, SOA uses Extensible Markup Language (XML) as the basis for exchanging IT service information, via the protocol known as XML Remote Procedure Call (XML-RPC).
With the emergence of XML, the paradigm for exchanging data and services has moved from software objects to XML documents. Tangible benefits of service-oriented architecture include facilitating the manageable growth of large-scale enterprise systems, organizing large-scale networks of systems to enable and facilitate interoperation, and reducing the costs of activities requiring inter-organizational cooperation [16].
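The XML-RPC exchange described in the previous paragraph can be demonstrated with Python's standard library, which ships both a server and a client; a minimal sketch, where the exposed `add` procedure is an invented example service:

```python
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

# Expose a trivial service over XML-RPC: arguments and results
# travel between client and server as XML documents over HTTP.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client invokes the remote procedure as if it were a local call;
# marshalling to and from XML happens behind the scenes.
port = server.server_address[1]
proxy = ServerProxy(f"http://localhost:{port}")
result = proxy.add(2, 3)
server.shutdown()
```

Binding to port 0 lets the operating system pick a free port, so the sketch runs anywhere without configuration.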
2.1 SERVICE-ORIENTED ARCHITECTURE AND CLOUD COMPUTING
SOA and cloud computing are related: SOA is an architectural style that guides business solutions in creating, organizing, and reusing their computing components, while cloud computing is a set of enabling technologies that provides a larger, more flexible platform on which enterprises can build their SOA solutions. In other words, SOA and cloud computing coexist, complement, and support each other, leading to the architecture called Service-Oriented Cloud Computing Architecture (SOCCA) [17].
A service differs from a traditional software object in that it is independent, self-described, reusable, and highly portable. Services range from simple arithmetic calculations to complicated programs executing in distributed environments. Using standard description languages such as the Web Service Description Language (WSDL), a service can expose its interface to the outside world for service discovery, for instance via Representational State Transfer (REST) protocols, or be invoked privately or as a composition of multiple services. The advantages of this computing paradigm are evident: organizations can develop massively distributed software systems by assembling basic services dynamically [18]. These services may come from different service providers and use markup-language techniques, such as XML, to exchange program information and data.
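The discovery-and-composition idea above can be illustrated with a toy in-process registry standing in for a WSDL/UDDI directory; all the service names and the tax rate below are invented for illustration:

```python
# A minimal in-process service registry standing in for real discovery
# infrastructure (WSDL documents, UDDI, or a REST endpoint catalogue).
registry = {}

def publish(name, func):
    registry[name] = func

def discover(name):
    return registry[name]

# Two basic services, notionally from different providers.
publish("tax.compute", lambda amount: amount * 0.075)
publish("invoice.total", lambda amount, tax: amount + tax)

# A composite service assembled dynamically from discovered building blocks.
def checkout(amount):
    tax = discover("tax.compute")(amount)
    return discover("invoice.total")(amount, tax)

total = checkout(100.0)
```

The point of the sketch is structural: `checkout` knows only service names, not implementations, so either building block could be swapped for one from another provider without changing the composition.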
Five decades ago, in 1961, computing pioneer John McCarthy forecast that computation might someday be organized as a public utility [19]. Cloud computing is that realization: the model facilitates the delivery of computing on demand, much like other public utilities such as electricity and gas. Cloud computing is not, however, an entirely new concept; other computing paradigms such as utility computing, grid computing, and on-demand computing preceded it in addressing the problem of organizing computational power as a publicly available and easily accessible resource [19].
Figure 2.1: Service-Oriented Cloud Computing Architecture [17].
2.1.2 LAYERED ARCHITECTURE OF SOCCA
SOCCA is a layered architecture, shown in Figure 2.1 [17]:
i. Individual Cloud Provider Layer: This layer resembles current cloud implementations. Each cloud provider builds its own data center that powers the cloud services it offers. Each cloud may have its own proprietary virtualization technology or use open-source virtualization technology, such as Citrix or Eucalyptus [20]. Similar to the Market-Oriented Cloud Architecture proposed in [21], within each individual cloud a request dispatcher works with the Virtual Machine Monitor and Service Governance to allocate requests to the available resources. The difference from current cloud implementations is that the cloud computing resources in SOCCA are componentized into independent services, such as a Storage Service, Computing Service, and Communication Service, with open standardized interfaces, so they can be combined with services from other cloud providers to build a cross-platform virtual computer on the clouds. To achieve maximum interoperability, open standards need to be implemented. For example, SQL is the de facto standard for RDBMS data management, and many database vendors have their own implementations; a cloud version of SQL needs to be defined so that the data-manipulation logic of an application that works on one cloud also works on other clouds [17].
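The request dispatcher described above can be sketched as a least-loaded allocator over a pool of virtual machines; the VM names and request costs below are invented:

```python
import heapq

class Dispatcher:
    """Allocate each incoming request to the least-loaded virtual machine."""
    def __init__(self, vm_names):
        # Min-heap of (current_load, vm_name) pairs: the cheapest VM is on top.
        self.heap = [(0, name) for name in vm_names]
        heapq.heapify(self.heap)

    def dispatch(self, cost):
        load, vm = heapq.heappop(self.heap)      # least-loaded VM
        heapq.heappush(self.heap, (load + cost, vm))
        return vm

d = Dispatcher(["vm1", "vm2"])
assignments = [d.dispatch(c) for c in (5, 3, 1)]  # vm1 gets 5; vm2 gets 3 then 1
```

A real dispatcher would also consult the Virtual Machine Monitor for live load figures and Service Governance for policy, but the allocation logic reduces to this kind of priority queue.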
ii. Cloud Ontology Mapping Layer: Cloud providers might not adhere to the standards rigidly, and they may implement extra elements that are not included in the standards. The Cloud Ontology Mapping Layer exists to mask the variances among individual cloud providers, and it can assist the migration of a cloud application from one cloud to another. Several important ontology systems are needed [17]:
1. Storage Ontology: defines the concepts and terms related to data manipulation on the clouds, such as data insert, data delete, and data select.
2. Computing Ontology: defines the concepts and terms related to distributed computing on the clouds, such as the Map/Reduce framework.
3. Communication Ontology: defines the concepts and terms related to communication schemes among the clouds, such as data encoding schemes and message routing.
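A minimal sketch of what such an ontology mapping might look like, with two hypothetical providers whose proprietary storage vocabularies are resolved to the shared ontology terms above:

```python
# Hypothetical per-provider vocabularies mapped onto shared storage-ontology terms.
ONTOLOGY_MAP = {
    "providerA": {"putRow": "data_insert", "dropRow": "data_delete", "fetch": "data_select"},
    "providerB": {"write": "data_insert", "erase": "data_delete", "query": "data_select"},
}

def to_ontology(provider, native_op):
    """Mask provider differences by resolving a native call to its ontology term."""
    return ONTOLOGY_MAP[provider][native_op]

# The same application logic works against either cloud once mapped.
op_a = to_ontology("providerA", "putRow")
op_b = to_ontology("providerB", "write")
```

Migration between clouds then amounts to swapping the mapping table rather than rewriting the application's data-manipulation logic.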
iii. Cloud Broker Layer: Cloud brokers serve as the agents between individual cloud providers and the SOA layer. Each major cloud service has an associated service broker type. Generally, cloud brokers need to fulfill the following tasks [17]:
i. Cloud Provider Information Publishing: Individual cloud providers publish specifications and pricing information for the cloud brokers. Important provider information includes [17]:
• Cloud Provider Basic Information: Company Name, Company Address, Company Website, Company Contact Information, etc.
• Resource Type and Specifications: whether it is a computing, storage, or communication resource, and its requirements and limitations. For example, for a data storage service, the data transmission rate can be as high as 2 Gb/s.
• Pricing Information: how the services are charged. This varies the most among cloud providers. For example, at present Google does not charge for the first 500 MB of storage and charges $0.15 per GB thereafter, while Amazon charges $0.11 per GB monthly for its EBS Volumes service [17].
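Using the prices quoted above, a broker could compare monthly storage cost across the two providers; a sketch in which the tiered-versus-flat schemes follow the figures in the text and the 10 GB workload is an arbitrary example:

```python
def google_storage_cost(gb):
    # First 500 MB free, then $0.15 per GB (figures as quoted in the text).
    billable = max(0.0, gb - 0.5)
    return billable * 0.15

def amazon_ebs_cost(gb):
    # Flat $0.11 per GB per month (figure as quoted in the text).
    return gb * 0.11

# The broker picks the cheaper provider for a given workload.
cheaper = min(("Google", google_storage_cost(10)),
              ("Amazon", amazon_ebs_cost(10)),
              key=lambda pair: pair[1])
```

For a 10 GB workload the flat scheme wins; for very small workloads the free tier would win instead, which is exactly the comparison a broker automates.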
ii. Ranking: Like the service brokers in SOA, cloud brokers rank the published cloud resources. Services can be ranked along several dimensions, such as price, reliability, availability, and security. Ranking can be based on user ratings or on past service performance records.
iii. Dynamic SLA Negotiation: Business is often dynamic, and the IT infrastructure has to adapt to accommodate business needs in order to achieve optimal ROI (return on investment). The IT resources a business demands often cannot be precisely projected in advance. Cloud service brokers can help cloud users and cloud providers negotiate SLAs dynamically [17].
iv. On-Demand Provision Model: Most services experience recurring variations in demand as well as occasional unforeseen bursts due to external events. The only way to provide services "on demand" is to provision for them in advance. Accurate demand estimation and provisioning therefore become critical to the success of cloud computing, lessening over-provisioning and saving money through utility computing [22][17].
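Provisioning "in advance" amounts to estimating demand and reserving headroom; a deliberately naive sketch, where the sliding-window size and headroom factor are arbitrary assumptions rather than anything prescribed by the model:

```python
def provision(history, headroom=1.2, window=3):
    """Capacity to reserve for the next period: recent peak demand plus headroom."""
    recent = history[-window:]          # only the most recent observations matter
    return max(recent) * headroom       # peak of the window, scaled up as a buffer

# Recurring variation with one burst: provision slightly above the recent peak.
capacity = provision([40, 55, 48, 90, 60])
```

Real provisioners use far more sophisticated forecasts (seasonality, trend, burst detection), but they share this shape: predict demand from history, then add a safety margin.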
2.1.3 SOA Layer: This layer takes full advantage of the existing infrastructure of traditional SOA. Many existing SOA frameworks, such as CCSOA [23], UCSOA [24], GSE [25], and UISOA [26], can be integrated into this layer. Figure 2.1 shows a possible SOA layer for SOCCA. As in CCSOA, not only services but also many other artifacts can be published and shared [17].
2.1.4 MULTI-TENANCY ARCHITECTURE (MTA)
As shown in Figure 2.1, SOCCA allows three distinct multi-tenancy patterns. In [27], the authors discussed the first two: Multiple Application Instances (MAI) and Single Application Instance (SAI). As they point out, the former does not scale as well as the latter, but it provides better separation among tenants. Within SOCCA, a new multi-tenancy pattern becomes possible: Single Application Instance, Multiple Service Instances (SAIMSI). The motivation behind this design is that workloads are often not distributed equally among application components, so the performance of a single application instance is constrained by the components with the lowest throughput. Moreover, to improve scalability, redundancy should be reduced as much as possible, in contrast to the Multiple Application Instances pattern [17].
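The SAIMSI pattern can be sketched by replicating each service only as much as its own throughput requires, rather than cloning the whole application; all demand and throughput figures below are invented:

```python
from math import ceil

def instances_needed(demand_rps, throughput_rps):
    """Per-service instance counts: replicate only the bottleneck components."""
    return {svc: ceil(demand_rps / tput) for svc, tput in throughput_rps.items()}

# One application instance serving 300 requests/s; each component is sized
# independently according to its measured single-instance throughput.
plan = instances_needed(300, {"auth": 150, "search": 60, "billing": 300})
```

Here only the slow `search` component is replicated five times, while `billing` needs a single instance; an MAI deployment would instead have cloned all three components together.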
2.2 CLOUD COMPUTING
Cloud computing emerged as an umbrella term for a class of on-demand computing services initially offered by commercial providers such as Amazon, Google, and Microsoft. It denotes a model in which computing infrastructure is viewed as a "cloud," from which businesses and individuals access applications on demand from anywhere in the world [28]. The core idea behind this model is offering computing, storage, and software "as a service" [29].
As early as the 1960s, John McCarthy envisioned that computing facilities would be provided to the general public like a utility [39]. The term "cloud" has likewise been used in various contexts, but it was only after Google's CEO Eric Schmidt used the word in 2006 to describe the business model of providing services across the Internet that the term really started to gain recognition. Since then, "cloud computing" has been used mainly as a marketing term in a variety of contexts to represent many different ideas, and the lack of a standard definition generated a fair amount of skepticism and confusion. For this reason, there has recently been work on standardizing the definition of cloud computing. The definition adopted in this project is that of the National Institute of Standards and Technology (NIST) [32], as it covers all the essential aspects of cloud computing: "Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" [30].
Most of the technologies used by cloud computing, such as virtualization and utility-based computing, are not new. Rather, cloud computing leverages these existing technologies to meet the technological and economic requirements of today's demand for information technology [31]. The cloud computing model is composed of the following building blocks [30]:
• Fundamental characteristics
• Layered models
• Service models
• Implementation models.
2.2.1 FUNDAMENTAL CHARACTERISTICS OF CLOUD COMPUTING:
The following are the fundamental characteristics of cloud computing:
• On-demand self-service
• Comprehensive Network Access
• Resource Pooling
• Swift Elasticity
• Measured Service
i. On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically, without requiring human interaction with each service provider [30].
ii. Comprehensive Network Access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations) [30].
iii. Resource Pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources, but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth [32].
iv. Swift Elasticity: Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time [30].
v. Measured Service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and the consumer of the utilized service [32].
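The measured-service characteristic reduces to metering per-consumer resource use and reporting it transparently; a minimal sketch, where the tenant names, resource types, and unit rates are all invented:

```python
from collections import defaultdict

class Meter:
    """Record per-tenant resource usage and report a transparent bill."""
    RATES = {"storage_gb_h": 0.0002, "cpu_h": 0.05}  # hypothetical unit prices

    def __init__(self):
        self.usage = defaultdict(float)

    def record(self, tenant, resource, amount):
        # Every unit consumed is metered against the tenant who consumed it.
        self.usage[(tenant, resource)] += amount

    def bill(self, tenant):
        # The bill is fully reconstructible from the metered usage, giving
        # both provider and consumer the same transparent view.
        return sum(self.RATES[res] * amt
                   for (t, res), amt in self.usage.items() if t == tenant)

m = Meter()
m.record("alice", "cpu_h", 10)
m.record("alice", "storage_gb_h", 500)
cost = m.bill("alice")
```

Production metering operates at much finer granularity and feeds autoscaling as well as billing, but the abstraction-level metering described by NIST is exactly this record-then-aggregate loop.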
2.2.2 LAYERED MODELS OF CLOUD COMPUTING
The architecture of a cloud computing environment can be divided into four layers, as shown in Figure 2.2 [31]:
• Hardware Layer
• Infrastructure Layer
• Platform Layer
• Application layer.
Figure 2.2: Cloud Computing Layers [31].
i. The Hardware Layer: This is the layer in charge of managing the physical resources of the cloud, including servers, routers, switches, and power and cooling systems. In practice, the hardware layer is typically implemented in data centers. A data center usually contains thousands of servers that are organized in racks and interconnected through switches, routers, or other devices. Typical issues at the hardware layer include hardware configuration, fault tolerance, traffic management, and power and cooling resource management [31].
ii. The Infrastructure Layer: Also known as the virtualization layer, this layer creates a pool of storage and computing resources by partitioning the hardware resources using virtualization technologies such as Xen, KVM, and VMware. The infrastructure layer is an essential element of cloud computing, since many key features, such as dynamic resource allocation, are only made available through virtualization technologies [31].
iii. The Platform Layer: Built on top of the infrastructure layer, the platform layer consists of operating systems and application frameworks. The purpose of the platform layer is to minimize the burden of deploying applications directly into VM containers. For instance, Google App Engine operates at the platform layer to provide API support for implementing the storage, database, and business logic of typical web applications [31].
iv. The Application Layer: At the top of the hierarchy, the application layer comprises the actual cloud applications. Unlike traditional applications, cloud applications can leverage automatic scaling to achieve better performance, availability, and lower operating cost [31].
2.2.3 CLOUD COMPUTING SERVICE MODELS:
There are three prominent service models of cloud computing, as follows [32]:
• Infrastructure-as-a-Service (IaaS)
• Platform-as-a-Service (PaaS)
• Software-as-a-Service (SaaS).
i. Infrastructure-as-a-Service (IaaS): The capability provided to the consumer is the provisioning of processing, storage, networks, and other fundamental computing resources on which the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure, but has control over operating systems, storage, and deployed applications, and possibly limited control of select networking components [30].
ii. Platform-as-a-Service (PaaS): The capability provided to the consumer is the deployment onto the cloud infrastructure of consumer-created or acquired applications built using programming languages, libraries, services, and tools supported by the provider [30]. The consumer does not manage or control the underlying cloud infrastructure, including the network, servers, operating systems, or storage, but has control over the deployed applications and possibly the configuration settings of the application-hosting environment [30].
iii. Software-as-a-Service (SaaS): This is an application hosted on a remote server and accessed through an intranet or the Internet [2]. The capability provided to the consumer is the use of the provider's applications running on a cloud infrastructure.
The applications are accessible from various client devices through either a web browser or a program interface (e.g., Citrix XenCenter). The consumer does not manage or control the underlying cloud infrastructure, including the network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings [32].
Figure 2.3: Software-as-a-Service (SaaS) Application Delivered to Client [2].
2.2.4 CLOUD COMPUTING IMPLEMENTATION MODELS:
Not all cloud computing resources are the same [31]; cloud computing therefore has a number of distinct deployment models. A deployment model is a specific method of providing a service [1]. The various ways of deploying cloud computing resources are as follows:
• Public Cloud
• Community Cloud
• Hybrid Cloud.
• Private Cloud
• Virtual Private Cloud [30][31].
i. Public Cloud: The cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them, and it exists on the premises of the cloud provider [30]. Thus, public clouds are clouds in which the provider delivers a cloud service to any user who wishes to access it; there are no conditions regarding the user's affiliation, and the service is delivered to any user willing to accept the payment model [1].
ii. Community Cloud: The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises [30].
iii. Hybrid Cloud: The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities but are bound together by standardized technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds) [30].
iv. Private Cloud: The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises [30].
v. Virtual Private Cloud: An alternative solution to addressing the limitations of both public and private clouds is the Virtual Private Cloud (VPC). A VPC is essentially a platform running on top of public clouds. The main difference is that a VPC leverages virtual private network (VPN) technology, which allows service providers to design their own topology and security settings, such as firewall rules. VPC is a more holistic design, since it virtualizes not only servers and applications but the underlying communication network as well. Additionally, for most companies a VPC provides a seamless transition from a proprietary service infrastructure to a cloud-based infrastructure, owing to the virtualized network layer [31].
2.3 CLOUD COMPUTING AND RELATED TECHNOLOGIES
The roots of cloud computing can be traced by studying and linking the development of several technologies, which are as follows [29]:
• Distributed Computing (clusters, grids)
• Systems Management (autonomic computing, data center automation)
• Hardware (virtualization, multi-core chips)
• Internet technologies (Web services, service-oriented architectures, Web 2.0) [29].
Figure 2.4: Convergence of the technology fields that significantly advanced and contributed to the advent of cloud computing [29].
2.3.1 DISTRIBUTED COMPUTING
Distributed computing here comprises grid computing and utility computing, each of which is discussed in detail below:
i. Grid Computing: Grid computing is a distributed computing paradigm that coordinates networked resources to achieve a common computational objective. The development of Grid computing was originally driven by scientific applications, which are usually computation-intensive. Cloud computing is similar to Grid computing in that it also employs distributed resources to achieve application-level objectives; however, cloud computing goes one step further by leveraging virtualization technologies at multiple levels (hardware and application platform) to realize resource sharing and dynamic resource provisioning [31].
ii. Utility Computing: Utility computing denotes the model of providing resources on demand and charging clients based on usage rather than a flat rate. Cloud computing can be perceived as a realization of utility computing; it adopts a utility-based pricing scheme entirely for economic reasons. With on-demand resource provisioning and utility-based pricing, service providers can truly maximize resource utilization and minimize their operating costs [31].
2.3.2 SYSTEM MANAGEMENT
Systems management comprises autonomic computing and data center automation.
i. Autonomic Computing: Originally coined by IBM in 2001, autonomic computing aims at building computing systems capable of self-management, i.e., reacting to internal and external observations without human intervention. The goal of autonomic computing is to overcome the management complexity of today's computer systems. Although cloud computing exhibits certain autonomic features, such as automatic resource provisioning, its objective is to lower the resource cost rather than to reduce system complexity [31].
ii. Data Center Automation: Data centers of cloud computing providers must be managed efficiently. In this sense, data center automation performs tasks such as management of the service levels of running applications, management of data center capacity, proactive disaster recovery, and automation of VM provisioning [33].
2.3.3 INTERNET TECHNOLOGIES
This comprises Service-Oriented Architecture (SOA), Web services, Web 2.0, and mashups.
i. SOA: Software resources are packaged as "services," which are well-defined, self-contained modules that provide standard business functionality and are independent of the state or context of other services. Services are described in a standard definition language and have a published interface [34].
ii. Web Services: The emergence of Web services (WS) open standards has significantly contributed to advances in the domain of software integration [34]. Web services can glue together applications running on different messaging product platforms, enabling information from one application to be made available to others, and enabling internal applications to be made available over the Internet [34].
iii. Mashup: In the client’s Web, information and services may be programmatically aggregated, acting as building blocks of complex compositions, called service mashups 34.
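A mashup programmatically aggregates the outputs of independent services into a new composition. The offline sketch below uses two hypothetical in-process functions as stand-ins for remote service calls:

```python
# Two stand-in "services"; in a real mashup these would be remote API calls.
def weather_service(city):
    data = {"Ilorin": {"temp_c": 31}, "Lagos": {"temp_c": 29}}
    return data[city]

def traffic_service(city):
    data = {"Ilorin": {"congestion": "low"}, "Lagos": {"congestion": "high"}}
    return data[city]

def city_dashboard(city):
    """Mashup: compose both services into one aggregated view,
    treating each service as a building block."""
    return {"city": city, **weather_service(city), **traffic_service(city)}

print(city_dashboard("Ilorin"))  # combined weather + traffic view
```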
2.3.4 HARDWARE INFRASTRUCTURE
This is the physical infrastructure and resources of the data center, including physical servers, routers, switches, and power and cooling systems. Hardware infrastructure is delivered to cloud computing through virtualization 2 31. Thus, it leads us to the concept of virtualization.
2.4 VIRTUALIZATION
Virtualization is a technology that abstracts away the details of physical hardware and provides virtualized resources for high-level applications 31. It is the act of creating a virtual version of something, including 35:
• Computer Hardware Virtualization
• Platform Virtualization
• Storage Virtualization
• Network Resources Virtualization 35.
A virtualized server is commonly called a virtual machine (VM). Virtualization forms the foundation of cloud computing, as it provides the capability of pooling computing resources from clusters of servers and dynamically assigning or reassigning virtual resources to applications on-demand 31.
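This pooling-and-assignment idea can be sketched as a simple first-fit allocator (host names and core capacities hypothetical): the CPU cores of a cluster are pooled, and virtual resources are assigned to VMs and reclaimed on demand:

```python
# Sketch of dynamic resource assignment from a pooled cluster.
class ResourcePool:
    def __init__(self, hosts):
        # Pool the capacity (CPU cores) of all physical hosts.
        self.free = dict(hosts)   # host -> free cores
        self.placements = {}      # vm -> (host, cores)

    def assign(self, vm, cores):
        """Place a VM on any host with enough free cores (first fit)."""
        for host, avail in self.free.items():
            if avail >= cores:
                self.free[host] -= cores
                self.placements[vm] = (host, cores)
                return host
        raise RuntimeError("pool exhausted")

    def release(self, vm):
        """Reclaim a VM's cores back into the pool on demand."""
        host, cores = self.placements.pop(vm)
        self.free[host] += cores

pool = ResourcePool({"host-a": 8, "host-b": 16})
print(pool.assign("vm1", 6))   # fits on host-a, leaving 2 cores free there
print(pool.assign("vm2", 4))   # host-a too full -> placed on host-b
pool.release("vm1")            # host-a's cores return to the pool
```

Production schedulers use far more sophisticated placement policies, but the principle of dynamically assigning and reassigning pooled resources is the same.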
i. Computer Hardware Virtualization: Hardware virtualization allows running multiple operating systems and software stacks on a single physical platform. As shown in Figure 2.5, a software layer, the virtual machine monitor (VMM), also called a hypervisor, mediates access to the physical hardware, presenting to each guest operating system a virtual machine (VM), which is a set of virtual platform interfaces 36.
Figure 2.5: A hardware virtualized server hosting three virtual machines, each one running a separate operating system and user-level software stack 31.
ii. Storage Virtualization: Virtualizing storage means abstracting logical storage from physical storage. By consolidating all available storage devices in a data center, it allows creating virtual disks independent from device and location. Storage devices are commonly organized in a storage area network (SAN) and attached to servers via protocols such as Fibre Channel, iSCSI, and NFS; a storage controller provides the layer of abstraction between virtual and physical storage 37.
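The abstraction layer described above can be sketched as a controller that maps each virtual disk onto extents carved from the consolidated physical devices (device names and sizes hypothetical):

```python
# Sketch of a storage controller mapping virtual disks onto pooled devices.
class StorageController:
    def __init__(self, devices):
        self.free = dict(devices)   # physical device -> free GB
        self.vdisks = {}            # virtual disk -> [(device, GB), ...]

    def create_vdisk(self, name, size_gb):
        """Carve a virtual disk out of whatever physical space is free;
        the consumer never sees which devices back it."""
        extents, needed = [], size_gb
        for dev, avail in self.free.items():
            take = min(avail, needed)
            if take:
                self.free[dev] -= take
                extents.append((dev, take))
                needed -= take
            if needed == 0:
                break
        if needed:
            raise RuntimeError("not enough physical capacity")
        self.vdisks[name] = extents
        return extents

san = StorageController({"array-1": 100, "array-2": 200})
# A 150 GB virtual disk transparently spans two physical arrays.
print(san.create_vdisk("vol0", 150))  # [('array-1', 100), ('array-2', 50)]
```

The virtual disk is independent of device and location exactly as the text describes: "vol0" can span arrays, and the mapping could be changed without the consumer noticing.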
iii. Network Resources Virtualization: Virtual networks allow creating an isolated network on top of a physical infrastructure independently from physical topology and locations 38. A virtual LAN (VLAN) allows isolating traffic that shares a switched network, allowing VMs to be grouped into the same broadcast domain. Additionally, a VLAN can be configured to block traffic originated from VMs from other networks. Similarly, the VPN (virtual private network) concept is used to describe a secure and private overlay network on top of a public network (most commonly the public Internet) 39.
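VLAN isolation can be sketched as tagging each VM with a VLAN ID and delivering frames only within the same broadcast domain (the VM names and IDs below are hypothetical):

```python
# Sketch of VLAN-based traffic isolation on a shared switched network.
vlan_of = {"vm-a": 10, "vm-b": 10, "vm-c": 20}  # hypothetical VM -> VLAN ID

def can_deliver(src, dst):
    """A switch forwards a frame only if source and destination share
    a VLAN, blocking traffic originating from other broadcast domains."""
    return vlan_of[src] == vlan_of[dst]

print(can_deliver("vm-a", "vm-b"))  # True: both in VLAN 10
print(can_deliver("vm-a", "vm-c"))  # False: VLAN 10 isolated from VLAN 20
```

Real 802.1Q switches implement this by tagging Ethernet frames with the VLAN ID, but the membership check above captures the isolation semantics.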
There are several ways to implement virtualization. Two leading approaches are full virtualization and para-virtualization 9.
2.4.1 FULL VIRTUALIZATION
Full virtualization is a system in which a complete installation of one machine runs on another, with the result that all software on the server executes within a virtual machine. Full virtualization is designed to provide a total abstraction of the underlying physical system and creates a complete virtual system in which guest operating systems can execute. No alteration is required in the guest OS or applications; because they are unaware of the virtualized environment, they execute on the VM just as they would on a physical system 40.
Figure 2.6: Full Virtualization 2.
2.4.2 PARA-VIRTUALIZATION
Para-virtualization presents each VM with an abstraction of the hardware that is similar but not identical to the underlying physical hardware. Para-virtualization techniques require modifications to the guest operating systems that are running on the VMs. As a result, the guest operating systems are aware that they are executing on a VM, allowing for near-native performance 40.
Figure 2.7: Para-virtualization 2.
2.4.3 VIRTUALIZATION DEPLOYMENT TECHNIQUE
There are two ways of deploying virtualization:
i. Host Operating System-Based.
ii. Bare-Metal Hypervisor 41.
2.4.3.1 HOST OPERATING SYSTEM-BASED VIRTUALIZATION
A host-based virtualization system requires a host operating system (such as Windows or Linux) to be installed on the computer, with the virtualization software running on top of it. Examples include:
• VMware Server
• VMware Workstation 41
Figure 2.8: Host Operating System Based Virtualization 42
2.4.3.2 BARE METAL HYPERVISOR
A bare-metal hypervisor system does not require a host operating system; the hypervisor is installed directly on the hardware and itself serves as the operating system 41.