Debunking Hybrid Cloud Misconceptions Whitepaper (Part 1)

by: Katie Sloane on Dec 14, 2017

Photo Credit: © kras99 - stock.adobe.com

The following is an excerpt from our whitepaper, "Debunking Hybrid Cloud Misconceptions."

Download Whitepaper

The hybrid cloud is a computing infrastructure that spans at least one public and one private cloud. In this model, private cloud services may be hosted on-premises or off-premises, in a dedicated section of a cloud service provider's data center. By connecting these deployments, many enterprises combine public and private cloud services while integrating seamlessly across environments. A robust, dedicated connection from local server rooms to cloud infrastructure bridges the gap between environments, with automated layers and resources that give management transparency into the infrastructure.

Despite the many benefits of the hybrid cloud model, widespread misconceptions remain, including:

  • Hybrid cloud cannot provide the agility or performance of on-premises infrastructure
  • It is more expensive than the public cloud
  • It is more difficult to manage and less secure than a purely on-premises environment
  • All hybrid cloud environments are the same


Throughout this whitepaper, we aim to debunk these misconceptions. Before doing so, let's examine a brief timeline of the events from which the need for the hybrid cloud emerged.

The Rise of the Hybrid Cloud

While hybrid cloud technology may sound cutting-edge, cloud computing dates back to 1950s mainframe computing. Users accessed mainframes via terminals, as it was not practical to purchase a computer for each employee, and end users did not require the storage capacity or processing power of a mainframe. In the 1960s, Joseph Carl Robnett Licklider coined the term "Intergalactic Computer Network": a vision in which everyone in the world could be interconnected and access data anywhere, anytime. This vision was the basis for the internet, the heart of cloud computing.

It wasn’t until around 1970 that virtual machines entered the computing scene. Using virtualization via hypervisors, more than one operating system could be used simultaneously within one environment. This allowed servers to host multiple applications, whose workloads could be shifted as necessary, ultimately lowering the cost of cloud implementation. This jump from mainframe usage to virtualization was a massive stride forward in computing technology.

In the 1980s, a large shift was made to distributed computing with smaller servers, which together could provide more memory, CPU, and storage than any single mainframe. Then, in the 1990s, companies began offering virtualized private network connections where previously only dedicated point-to-point data connections had been available, letting users share access to the same physical infrastructure at a reduced cost. In 1999, Salesforce began delivering applications through a simple website. The applications were delivered over the internet, and the vision of computing sold as a utility came to fruition.

2006 marked the launch of Amazon Web Services' Elastic Compute Cloud (EC2), one of the first commercial cloud computing services, which enabled users to rent virtual machines on which to run their own applications. Cloud computing truly came to life in 2009, when browser-based applications such as Google Apps took hold. End users could then perform tasks in the cloud that had previously been possible only on local machines, likely prompting the development of Office 365 and OneDrive, and ultimately popularizing cloud usage.

As the increasingly convenient cloud model took off, its imperfections and oversights came to light. While cost-efficiency and scalability were self-evident, security and reliability suffered. As cloud providers experienced outages and breaches, and fell out of compliance with regulatory requirements, it became clear that an all-public-cloud approach was putting businesses at great risk. The demand for security, reliability, and performance gave rise to a fluid bridge between the cloud and traditional, on-premises IT resources: the hybrid cloud.

Debunking Misconceptions: Hybrid Cloud Cannot Provide the Agility or Performance of On-Premises Infrastructure

As the cloud has advanced, the tools and services for managing it have become increasingly robust. Cloud environments were once considered less reliable than on-premises systems. While that may have rung true in the past, today most cloud providers strive to deliver at least 99% availability to customers. As for performance, resources can be fine-tuned to suit workloads in a hybrid cloud setting. The ability to bundle components into workloads and move them across public and private clouds has made it easier to optimize applications. The number of ways workloads can be distributed is nearly unlimited, giving you the power to manage performance as you see fit.

Agility

The hybrid cloud is known for its elasticity: its ability to grow and shrink with workload demand. In a business environment, a hybrid cloud model can grow in parallel with the business itself. The ability to provision large numbers of virtual machines in minutes, giving applications the resources they need at the click of a button or an API call, helps businesses expand faster and reduce time-to-market.
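The elastic scaling described above can be sketched as a simple rule: divide observed load by per-instance capacity, then clamp the result to a configured floor and ceiling. The function and parameters below are illustrative, not any provider's actual API:

```python
import math

def desired_instance_count(requests_per_sec: float,
                           capacity_per_instance: float,
                           min_instances: int = 2,
                           max_instances: int = 100) -> int:
    """Return how many instances a workload needs at the current load.

    Real autoscalers (e.g. AWS Auto Scaling, Kubernetes HPA) apply the
    same basic idea: size to observed demand, bounded by a minimum for
    availability and a maximum for cost control.
    """
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))
```

For example, at 950 requests/sec with instances that each handle 100, the rule yields 10 instances; as traffic drops, the count shrinks back toward the configured minimum.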

CIOs need to opt for technology strategies that help their business achieve positive outcomes. The flexibility offered by the hybrid cloud provides the control of a private cloud along with the ability to use the public cloud when necessary. Replicating a production environment in development and testing is an ideal way to facilitate application lifecycle management and to test at scale without any risk to live workloads and without an additional data center. Planning for the unforeseen success or failure of a project enables companies to keep encouraging developmental innovation.

Workload Management

A business runs a myriad of project and workload types, and the differing requirements across workloads are a strong motivator for adopting a hybrid cloud model. Batch workloads, for instance, generally run in the background and process large amounts of data. Due to their sheer volume, these workloads require significant computing resources. While they are large enough to monopolize resources, they are predictable and can run on a set schedule. Because they are resource-taxing, the public cloud's nearly unlimited scale might be ideal for this type of workload.

Another type of workload with varying computational requirements is the transactional workload, which might be responsible for automating billing and order processing. In the case of e-commerce, processing may traverse partners and suppliers, and compliance requirements make such workloads better suited to the private cloud. High-performance workloads require very specific computing power; they might suit the public cloud, where resources are essentially unlimited and can be tuned for performance. Database workloads, probably the most common type, can affect every moving part of an environment, and they must be adjusted for the services that need access to specific data. These might be best suited to bare-metal hardware, discussed later in this paper, to optimize performance.
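The placement heuristics above can be summarized as a simple decision function. The categories and targets here are a sketch of the reasoning in the text; real placement decisions weigh many more factors (cost, data gravity, latency, licensing):

```python
def place_workload(kind: str, compliance_sensitive: bool = False) -> str:
    """Suggest a deployment target for a workload.

    Follows the rough heuristics discussed in this section:
    compliance-sensitive work stays private; large, scalable work
    goes public; databases favor dedicated hardware.
    """
    if compliance_sensitive:
        return "private cloud"      # e.g. transactional billing data
    if kind in ("batch", "high-performance"):
        return "public cloud"       # scheduled or scale-hungry workloads
    if kind == "database":
        return "bare metal"         # predictable I/O and performance
    return "private cloud"          # conservative default
```

A hybrid model matters precisely because no single answer fits: the same business might route its nightly batch jobs, billing pipeline, and database through three different targets.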

Workload isolation ensures that workloads do not, and cannot, interfere with each other. Applications running in parallel can be ported across different clouds, and resources can be isolated by computational requirements, separating groups of CPU and memory capacity. Problematic elements in one workload, such as a process causing a CPU or memory leak, won't affect other running workloads. Policies can be defined to address any necessary interactions between workloads, which becomes critical in multi-tenant environments. In addition to workload isolation, network isolation separates applications, as any communication between resources runs through separate network connections.
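The isolation guarantee can be illustrated with a toy quota model: each workload draws only from its own fixed allocation, so a runaway process in one workload cannot consume another's share. Real platforms enforce this with hypervisor limits or mechanisms such as Linux cgroups; the classes and numbers below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ResourceQuota:
    cpus: float
    memory_gb: float

class WorkloadPool:
    """Toy model of per-workload resource isolation."""

    def __init__(self):
        self.quotas = {}   # workload name -> fixed allocation
        self.usage = {}    # workload name -> current consumption

    def assign(self, workload: str, quota: ResourceQuota) -> None:
        self.quotas[workload] = quota
        self.usage[workload] = ResourceQuota(0.0, 0.0)

    def request(self, workload: str, cpus: float, memory_gb: float) -> bool:
        """Grant resources only if the workload stays inside its quota."""
        q, u = self.quotas[workload], self.usage[workload]
        if u.cpus + cpus > q.cpus or u.memory_gb + memory_gb > q.memory_gb:
            return False   # denied: one workload cannot overrun its share
        u.cpus += cpus
        u.memory_gb += memory_gb
        return True
```

Here a workload with a memory leak simply hits its own ceiling and is refused further allocations, while its neighbors continue to be served from their untouched quotas.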


Tags: Managed Services
