Success with Hybrid Cloud: Best Practices for Planning Your Hybrid Cloud
First published on CloudBlogs on Dec 05, 2013

This might seem like an overly obvious statement, but it is worth emphasizing: The early planning stages of your Hybrid Cloud strategy are some of the most important discussions that IT leaders and implementers can have.

The majority of the CIOs I’ve spoken to over the last year tell me that they are considering a multi-year transformation to a fully hybrid environment – and that transition is going to start by assessing how their application portfolio will evolve within this kind of infrastructure.

This planning process is critical, and in this post (the first of a five-part look at “Best Practices” for planning, building, deploying, and maintaining a Hybrid Cloud) I will dive into some of the most important questions and considerations every organization needs to examine when planning for a Hybrid environment.

The goal of this planning period is to evaluate the specific IT needs of your business and identify how a Hybrid Cloud environment can address these needs. Every organization is going to have different needs and different available resources for addressing them, and this is what makes a thoughtful and honest discussion within your IT team so important.

Planning a Hybrid Cloud is also an opportunity to rethink the way you operate. If there are inefficiencies you’ve identified in the past, this planning process is an opportunity to isolate them and characterize improved processes. In other words, resist the temptation to migrate how you do things today to your new hybrid environment – instead, focus on how this kind of infrastructure can improve how your organization operates at every level.

As this discussion develops, I believe the flexible, adaptable nature of the Microsoft Hybrid Cloud will seem like an ideal way to support both your current needs and your future plans.

In this post I’ll share five steps that I think are critical in the planning process of any Hybrid Cloud. These steps take into account workload requirements, geographic restrictions, infrastructure limitations, cost considerations, non-cost considerations, and relationships.

Step 1:  Carefully and objectively assess today's environment and workloads so that you can know what you have to work with, what you want to do with it, and what your industry will require.

To do this, your IT team and other key decision makers within your organization need to collaboratively take the time to identify your organization’s desired IT architecture, app needs (design, security/privacy, response times), and the workloads you’ll be running.

Workload Requirements

There are a lot of potential workloads running in any given datacenter, and every organization presents unique factors that affect how these workloads are performing. I’ve written previously about how Microsoft apps (like SQL, SharePoint, Exchange) run best on Microsoft platforms (like Windows Server, Hyper-V, System Center, and SQL Server), and this is very good news for Microsoft Hybrid Cloud environments. Organizations operating a Microsoft Hybrid Cloud environment can choose to run these critical Microsoft workloads in their own datacenter, within a service provider cloud, or in Windows Azure – all while enjoying the confidence that comes from knowing that it is the same Windows Server Hyper-V that will power their business in every case. Plus, with our networking solutions, we enable you to easily extend your datacenter into Windows Azure so that you have the flexibility to choose where you’d like to deploy your workloads.

Hybrid Clouds also enable a great use case for temporary requirements like dev/test projects. By testing an app in the public cloud first you can validate that it is effective and operational before deploying it to the production environment in your private cloud. The amount of money this can save dev and test organizations (in terms of having to set up test labs, for example) is pretty enormous. On top of these savings, the elastic, bottomless capacity that the public cloud (like Windows Azure) provides can give your dev/test orgs the ability to simulate and run real-world test cases that are not constrained by the burden of maintaining the test environments themselves (not to mention that your organization just converted this capex into opex while paying only for the capacity that you used!).
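To make that capex-to-opex point concrete, here is a minimal back-of-the-envelope sketch comparing a dedicated on-premises test lab with renting cloud capacity only while a test run is active. Every figure in it is a hypothetical placeholder – substitute your own hardware quotes and your provider’s published rates.

```python
# Hypothetical comparison of a dedicated test lab (capex) versus pay-per-use
# public cloud capacity (opex) for dev/test. Every figure is a placeholder.

LAB_HARDWARE_COST = 120_000.00     # up-front purchase of test-lab servers/storage
LAB_ANNUAL_UPKEEP = 18_000.00      # power, cooling, support contracts per year
LAB_LIFETIME_YEARS = 3             # depreciation window for the hardware

CLOUD_RATE_PER_VM_HOUR = 0.45      # assumed hourly rate for a test-sized VM
VMS_PER_TEST_RUN = 20              # VMs spun up for a load-test run
HOURS_PER_TEST_RUN = 8             # how long each run keeps the VMs alive
TEST_RUNS_PER_YEAR = 150           # runs executed per year


def lab_cost_per_year() -> float:
    """Amortized yearly cost of owning and operating the on-premises lab."""
    return LAB_HARDWARE_COST / LAB_LIFETIME_YEARS + LAB_ANNUAL_UPKEEP


def cloud_cost_per_year() -> float:
    """Yearly cost of renting capacity only while test runs are active."""
    return CLOUD_RATE_PER_VM_HOUR * VMS_PER_TEST_RUN * HOURS_PER_TEST_RUN * TEST_RUNS_PER_YEAR


if __name__ == "__main__":
    lab, cloud = lab_cost_per_year(), cloud_cost_per_year()
    print(f"On-premises test lab: ${lab:,.0f}/year")
    print(f"Pay-per-use cloud:    ${cloud:,.0f}/year")
    print(f"Estimated savings:    ${lab - cloud:,.0f}/year")
```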

In recent weeks here in the US there has been a big example of a situation where this type of arrangement could have saved countless hours and dollars: The healthcare.gov website. If the companies responsible for the dev/test work for healthcare.gov had used a public cloud to simply load test the site, they could likely have avoided the unpleasant surprises caused by the flood of traffic (and the resulting crash and embarrassment). Under-utilizing test resources still happens far too often, and it seems this was a big issue with the healthcare site (as noted by CNN, Reuters, Ad Age, Politico, WaPo, USA Today, Forrester, and many others). In a Hybrid environment, having to sacrifice testing because of limited in-house resources is a thing of the past.

To see a positive example of this principle in action, check out how Telenor, a large telecom company in Norway, is using Windows Azure infrastructure for their SharePoint workloads and dev/test environments.

It’s also important to be mindful of how you initially select your workloads – i.e., identify whether you are primarily offloading a specific application (this is where you get great cost/scalability benefits) or whether you need transient environments such as dev/test (where the benefit is that a Hybrid environment is a great place to test your apps before a broad deployment).
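One way to structure that selection is to score each candidate workload on a few simple attributes and bucket it accordingly. The sketch below is an illustrative triage helper, not an official Microsoft rubric – the attributes and thresholds are assumptions you would tune for your own environment.

```python
# Illustrative triage of candidate workloads into "steady-state offload" vs.
# "transient dev/test" buckets. The attributes and rules are assumptions made
# to structure the planning conversation, not an official sizing methodology.

from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    lifespan_days: int          # expected lifetime of the environment
    steady_state: bool          # runs continuously vs. spun up on demand
    data_residency_bound: bool  # must stay in a specific building/country


def recommend_target(w: Workload) -> str:
    """Return a first-pass placement recommendation for planning discussions."""
    if w.data_residency_bound:
        return "private cloud (residency constraint)"
    if not w.steady_state and w.lifespan_days <= 90:
        return "public cloud (transient dev/test)"
    if w.steady_state:
        return "public cloud or service provider (steady-state offload)"
    return "review case by case"


if __name__ == "__main__":
    candidates = [
        Workload("SharePoint load-test farm", lifespan_days=30, steady_state=False, data_residency_bound=False),
        Workload("Payroll database", lifespan_days=3650, steady_state=True, data_residency_bound=True),
        Workload("Public marketing site", lifespan_days=3650, steady_state=True, data_residency_bound=False),
    ]
    for w in candidates:
        print(f"{w.name}: {recommend_target(w)}")
```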

As you begin to take advantage of Hybrid deployments, remember that you do not have to move everything immediately. Instead, take your time to select a few specific applications, get your feet wet, and start identifying where the big opportunities are for your organization. With Microsoft’s solutions, you can be assured of the flexibility necessary to bring your workloads back on-premises (if and when you need to) or to be aggressive about migrating your workloads to a public cloud faster (to maximize the scale and reach benefits it provides).

Geographic Restrictions

Where your public and private data sits can be a major factor depending on your industry or the regulatory climate around your company’s offices. Taking these factors into account is an important element of the planning process because there are countless situations around the world that require sensitive information to never leave a specific building or to never cross the borders of a specific country.

These requirements are something we have aggressively and directly addressed by building out Windows Azure globally to provide organizations with worldwide reach without needing to invest in datacenter capacity where it isn’t required.

Another geographic consideration is housing your workloads as close to your customers as possible. During the holiday season in the US, for example, many East Coast retailers will rent space on West Coast servers to meet the needs of consumers in that region of the country. For a company focused on fast and smooth end-user experiences, keeping this data as close as possible to the customers who are accessing it makes a big impact on performance. With a Hybrid Cloud model, this kind of seasonally specific geo-location is just an extension of your permanent datacenter. You can see a list of Microsoft datacenter locations housing customer data here.
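As a simple illustration of residency-aware placement, the sketch below picks the lowest-latency region that still satisfies a “data must stay in country X” rule. The region names and latency figures are placeholders – the authoritative list of Microsoft datacenter locations is the one linked above.

```python
# Sketch of residency-aware region selection: keep workloads close to users
# while honoring "data may not leave country X" rules. Region names and
# latency figures are placeholders, not an actual datacenter list.

from typing import Optional, Set

REGIONS = [
    {"name": "US West",      "country": "US", "latency_ms_to_users": 30},
    {"name": "US East",      "country": "US", "latency_ms_to_users": 80},
    {"name": "North Europe", "country": "IE", "latency_ms_to_users": 140},
]


def pick_region(allowed_countries: Optional[Set[str]], regions=REGIONS) -> dict:
    """Choose the lowest-latency region that satisfies the residency rule.

    allowed_countries=None means the workload has no residency restriction.
    """
    candidates = [
        r for r in regions
        if allowed_countries is None or r["country"] in allowed_countries
    ]
    if not candidates:
        raise ValueError("No compliant region; keep this workload on-premises.")
    return min(candidates, key=lambda r: r["latency_ms_to_users"])


if __name__ == "__main__":
    print(pick_region(None)["name"])     # unrestricted workload
    print(pick_region({"US"})["name"])   # must stay in the US
```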

Be mindful, however, of the flip side of geo-location: For the sake of business continuity, put enough geographic distance between your primary and secondary datacenter locations that a single wide-ranging disaster cannot affect both. We have chosen our datacenter locations carefully to mitigate these disaster recovery risks.
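If you want to sanity-check that separation during planning, a quick great-circle calculation is enough. The sketch below is illustrative only – the 400 km minimum and the coordinates are hypothetical values you would replace with your own candidate sites and business-continuity policy.

```python
# Quick check that two candidate datacenter sites are far enough apart for
# disaster recovery. The 400 km minimum is a placeholder policy value; set it
# according to the disaster scenarios (hurricane, grid failure) you plan for.

from math import asin, cos, radians, sin, sqrt


def distance_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points using the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # Earth radius ~6371 km


MIN_SEPARATION_KM = 400  # hypothetical business-continuity policy

primary = (47.6, -122.3)    # example coordinates: Seattle area
secondary = (41.9, -87.6)   # example coordinates: Chicago area

d = distance_km(*primary, *secondary)
print(f"Separation: {d:.0f} km -> {'OK' if d >= MIN_SEPARATION_KM else 'too close'}")
```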

Step 2:  Select a partner (for private cloud, public cloud) that meets the criteria you identify in Step 1.

Choosing a partner is a topic I’ve talked a lot about in the past, and I really can’t overstate the importance of this selection. Also recognize that this process is different for every organization.

My advice is this: Look for a cloud partner that offers a comprehensive solution that can deliver consistency across clouds – this will ensure you avoid lock-in by enabling flexible workload mobility. If your partners can deliver cross-cloud consistency along with a unified approach to management, you will have future-proofed your datacenter transformation journey.

Cost Considerations

Within your Hybrid Cloud you have a lot of options for how and where your workloads can run. Determining the best setup for you is a matter of weighing how many of these workloads you want to keep on-premises vs. what can be moved externally, and what kind of performance you want from these workloads.

Often, when we talk about creating a Hybrid environment, companies already have a collection of resources with hybrid traits; what’s really needed is a way to expand capacity quickly and seamlessly when the need arises – to take advantage of more capacity, reduced costs, or geo-location (more on this below). Using the public cloud as a tier within your datacenter is a really smart choice for meeting a short-term need.

In context, a short-term need for increased public cloud capacity might come from a retailer planning ahead for the holiday season by moving entire applications and/or workloads to the cloud before their datacenter runs out of space. This kind of capacity planning with the public cloud underscores the fact that there’s simply no reason to make a big capex investment for hardware you’ll only need for a few months out of the year.
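A quick way to frame that decision is to compare the amortized cost of buying peak-only hardware against renting the same capacity for just the peak months. The sketch below uses purely hypothetical numbers as a planning aid.

```python
# Hypothetical holiday-peak sizing: compare buying extra servers you only need
# for part of the year against renting equivalent public cloud capacity for
# just those months. All numbers are placeholders for planning discussions.

EXTRA_SERVERS_NEEDED = 40
SERVER_PURCHASE_COST = 6_000.00      # per server, capex
SERVER_MONTHLY_OPEX = 120.00         # power/cooling/support per server-month
SERVER_LIFETIME_MONTHS = 36

CLOUD_VM_MONTHLY_RATE = 350.00       # assumed monthly rate for a comparable VM
PEAK_MONTHS_PER_YEAR = 3             # e.g., the holiday shopping season


def buy_hardware_cost_per_year() -> float:
    """Amortized purchase plus year-round operating cost of the extra servers."""
    amortized = EXTRA_SERVERS_NEEDED * SERVER_PURCHASE_COST / SERVER_LIFETIME_MONTHS * 12
    upkeep = EXTRA_SERVERS_NEEDED * SERVER_MONTHLY_OPEX * 12
    return amortized + upkeep


def burst_to_cloud_cost_per_year() -> float:
    """Rent the same capacity only during the peak months."""
    return EXTRA_SERVERS_NEEDED * CLOUD_VM_MONTHLY_RATE * PEAK_MONTHS_PER_YEAR


if __name__ == "__main__":
    print(f"Buy hardware:   ${buy_hardware_cost_per_year():,.0f}/year")
    print(f"Burst to cloud: ${burst_to_cloud_cost_per_year():,.0f}/year")
```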

These cost savings can be seen in the way companies maximize their performance, scalability, and infrastructure, or how they avoid the “heavy lifting” of setting up the complex infrastructure (vendors, procurement, capex, configuration, operations) that enterprise or service provider performance requires. A Hybrid Cloud also gives businesses the agility they need to quickly spin up dev/test environments without the time commitment or struggle associated with less nimble architectures. And, perhaps most obviously, by strategically using public and private resources, businesses pay for only the services they use.

Simply put, deploying a Microsoft Hybrid Cloud saves time and money – period. A couple great examples include the work we’ve done with enterprises like Lufthansa and Aston Martin, and with service providers like Hostway and Convergent Computing.

Of those four companies, Aston Martin is a great example of how easy it is to move to a Hybrid model, as well as some of the initial things that can be easily moved to the public cloud. In Aston’s case, they used Hyper-V Replica and Hyper-V Recovery Manager for backup and disaster recovery. Backup and DR are two of the easiest ways to maximize your Hybrid Cloud environment since they immediately reduce the cost of tape backup, and they do not affect any of your on-prem operations. These strategic moves by Aston represent the foundation of their new business continuity strategy – and they are things that every organization can benefit from implementing.

Another big cost savings comes from the time savings of a Hybrid environment – e.g. the way Windows Azure manages all of the underlying fabric and the complexities of the workloads you import.

In this scenario, the Microsoft Hybrid Cloud empowers you to extend your private cloud into the public space, and this means you have access to extra capacity without the accompanying requirement to manage the additional infrastructure tied to that capacity. Most importantly, Windows Azure runs the same virtualization platform – Hyper-V – that operates your on-premises Windows Server workloads. Hyper-V powers top-tier workloads like SharePoint and SQL Server, and now you can run them on Windows Azure (and, of course, you can move these workloads back on-premises in the future without any lock-in). In the event you need to migrate from elsewhere, don’t forget about the surprisingly simple Migration Automation Toolkit.

A great example of this is HDInsight. There are some enormous benefits that organizations are gaining from running HDInsight, but it is undeniably complex. With Azure, everything it needs to run is already set up, and you can hit the ground running without the otherwise requisite set-up time.

Another critical part of these cost considerations is whether or not you can get what you pay for. This thought may seem obvious, but when it comes to cloud partners, you’d be surprised by how often your expectations for service can go unfulfilled. With Microsoft, there is a monetary penalty in the event we miss an SLA – and this type of guarantee is simply not the case with other cloud providers. We put our money on the line to protect and support your organization.
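When you evaluate any provider’s SLA, it helps to translate downtime minutes into a monthly uptime percentage and a potential service credit. The sketch below shows the arithmetic; the 99.95% target and the credit tiers are illustrative placeholders, not a restatement of Microsoft’s actual SLA terms.

```python
# Simple monthly-uptime check against an SLA target. The 99.95% target and the
# credit tiers below are illustrative placeholders -- always read the actual
# SLA terms published by your provider.

SLA_TARGET_PERCENT = 99.95
MINUTES_IN_MONTH = 30 * 24 * 60

# Hypothetical credit schedule: (minimum uptime %, service credit %)
CREDIT_TIERS = [(99.95, 0), (99.0, 10), (0.0, 25)]


def monthly_uptime_percent(downtime_minutes: float) -> float:
    """Convert downtime minutes into a monthly uptime percentage."""
    return 100.0 * (MINUTES_IN_MONTH - downtime_minutes) / MINUTES_IN_MONTH


def service_credit(uptime_percent: float) -> int:
    """Return the credit percentage owed for a given monthly uptime figure."""
    for threshold, credit in CREDIT_TIERS:
        if uptime_percent >= threshold:
            return credit
    return CREDIT_TIERS[-1][1]


if __name__ == "__main__":
    downtime = 45  # minutes of outage this month
    uptime = monthly_uptime_percent(downtime)
    print(f"Uptime: {uptime:.3f}% (target {SLA_TARGET_PERCENT}%)")
    print(f"Service credit owed: {service_credit(uptime)}%")
```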

Non-cost Considerations

The items that fall into this category include compliance/regulation/governance issues (from governments or regulatory bodies), privacy, and security.

The Hybrid Cloud model is helpful with these things because it gives you a variety of cloud options. In industries or scenarios that insist upon on-premises deployments, the Microsoft Hybrid Cloud offers the flexibility to keep your business compliant where it needs to be, and publicly scalable everywhere else. The Microsoft Hybrid Cloud empowers you to build up a strong private cloud while getting the benefits of public cloud scale when you need it.

When it comes to compliance and security, the Microsoft Hybrid Cloud is noteworthy in some really critical areas. Azure has been regularly reviewed and audited by third parties to verify its security, and the reports of these audits are available to customers who want to see how our public cloud services comply with security, privacy, and compliance requirements. Azure also publishes detailed information about how it fulfills the CSA Cloud Controls Matrix in the CSA’s Security, Trust & Assurance Registry (STAR). You can read about all of this and more at the Windows Azure Trust Center.

I also recommend learning more about our extensible compliance framework. The Microsoft Compliance Framework for Online Services is a highly detailed map of controls to a broad range of regulatory frameworks. This framework enables us to design and build services for the long-term using a single set of controls to streamline compliance across a range of regulations.

On the topic of public cloud security, check out this list of Windows Azure certifications:

  • ISO 27001
    Windows Azure is certified for ISO 27001, a broad international information security standard, and undergoes annual audits to maintain compliance.
  • SSAE16 SOC
    Windows Azure is audited annually against the Service Organization Control (SOC) reporting framework for SOC 1 Type 2, attesting to the design and operating effectiveness of its controls, and SOC 2 Type 2, which includes a further examination of its controls related to security, availability, and confidentiality.
  • Cloud Security Alliance Cloud Controls Matrix
    Windows Azure has completed a third-party assessment against the Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) as part of its SOC 2 audit as a means of meeting the assurance and reporting needs of the majority of cloud services users worldwide.
  • FedRAMP
    Windows Azure has been granted a Provisional Authority to Operate (P-ATO) from the Federal Risk and Authorization Management Program (FedRAMP) Joint Authorization Board (JAB), which has verified that the service meets government security standards.
  • United Kingdom G-Cloud Impact Level 2 Accreditation
    Windows Azure has been awarded Impact Level 2 (IL2) accreditation, further enhancing Microsoft and its partner offerings on the current G-Cloud procurement framework and CloudStore for UK public sector organizations that require a 'protect' level of security for data processing, storage, and transmission.
  • HIPAA
    To help customers comply with HIPAA and HITECH Act security and privacy provisions, Microsoft offers a HIPAA Business Associate Agreement (BAA) to healthcare entities with access to Protected Health Information (PHI).

As noted above, you can read more about all of this at the Windows Azure Trust Center.

Relationships

Very few organizations build or operate a Hybrid Cloud by themselves. Any technical undertaking of this type will generally require more expertise than can currently be found in the building on any given day, and this is where a trusted vendor or service provider becomes an important part of this process. I encourage you to get input from your Microsoft reps and your peers about partners that can deliver the kind of viability, trust, and execution you’ll need in order for your Hybrid Cloud to be a success.

Throughout this planning process, and during each of your discussions considering each of these steps, keep a couple things in mind about how a Microsoft Hybrid Cloud will operate once it is up and running:

  • Regardless of how your app will be deployed within the Microsoft Hybrid Cloud, you can always use Visual Studio to write and debug applications deployed to Windows Azure or Windows Server. Java and other languages are also supported.
  • Because of the consistency within the Microsoft Hybrid Cloud, your developers only need to create one user-experience, no matter where the app will be used. This means they can spend their time learning a single UX.
  • All of the Microsoft clouds use the same web-friendly protocols – and all of these can be automated with System Center Orchestrator.
  • No matter where your workload is within the Hybrid environment, you can always monitor it and know if it’s healthy – from a single interface in System Center Operations Manager (a minimal illustration of this single-pane idea follows below).
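As a minimal illustration of that single-pane idea (and emphatically not how Operations Manager is implemented), the sketch below polls one health endpoint per tier, wherever it happens to be hosted, and summarizes the results in one place. The endpoint URLs are placeholders.

```python
# Minimal single-pane health view: poll one HTTP health endpoint per tier,
# regardless of which cloud hosts it, and summarize in one place. This only
# illustrates the concept; the endpoint URLs are placeholders.

import urllib.request

ENDPOINTS = {
    "web tier (Windows Azure)":    "https://example-app.cloudapp.example/health",
    "app tier (service provider)": "https://app.hoster.example/health",
    "database tier (on-premises)": "https://sql01.corp.example/health",
}


def check(url: str, timeout: float = 5.0) -> str:
    """Return a simple health verdict for a single endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "healthy" if resp.status == 200 else f"degraded (HTTP {resp.status})"
    except OSError as exc:  # covers URLError, timeouts, connection errors
        return f"unreachable ({exc})"


if __name__ == "__main__":
    for tier, url in ENDPOINTS.items():
        print(f"{tier:30} {check(url)}")
```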

Geo-Location

If your scenario does require specific geo-locating, the Hybrid Cloud model offers some real flexibility: If you need to keep your data nearby, you can host it on-premises with Windows Server and SQL Server, or in the public cloud in one of Azure’s globally distributed datacenters. If the public cloud isn’t an option, or if you’re in one of the rare areas where our datacenters don’t have a geo-located presence, it’s easy to partner with one of our trusted service provider partners.

Step 3:  Include the management tools you will need as a part of this planning process.

You need a unified management capability (and toolset) to manage your Hybrid environments in a controlled manner. I’ve seen many cloud deployment attempts get out of hand in a hurry due to the lack of a robust management strategy – but the good news is that the Microsoft Hybrid Cloud offers a proven, battle-tested monitoring capability with Operations Manager (as noted above).

With Operations Manager you can monitor all of your workloads and apps across all of the clouds you use – all from a single console. I discussed previously that Operations Manager has, historically, been too complex for anyone outside of the IT team to use consistently – but it is now much more user-friendly. Operations Manager is now widely used because of its ability to provide a customizable view of the data; it can show any set of metrics, KPIs, environments, or apps – all without needing to navigate between different tools. Its data visualization capabilities are also an important way to look at current and emerging data, rather than relying on reporting about what’s already happened.

With Operations Manager (and I recommend reading more about it here and here) you can view, manage, and control your public and private clouds holistically – thus, if one of these tiers goes down you can investigate the problem from one interface and see the app as a single element, rather than as multiple pieces across multiple tiers.

To extend this, Orchestrator and Operations Manager can combine to enable application elasticity through automation that spans across clouds. These tools even include functionality that allows you to easily pre-define application scale and performance thresholds, which, when exceeded, can automatically trigger provisioning of additional infrastructure in Windows Azure to support your application needs (e.g. in situations of peak demand at certain times of the year). Orchestrator also has Integration Packs that can automate workflows to deploy compute and storage instances on-premises or within Windows Azure.
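The core of that elasticity is simple threshold logic. The sketch below shows the kind of policy you would encode in your automation runbooks; the provisioning calls are stubs and the thresholds are assumptions, not the actual Orchestrator Integration Pack activities.

```python
# Sketch of the threshold logic behind "burst to the public cloud when demand
# exceeds on-premises capacity". In practice you would encode this in your
# automation tooling (e.g., runbooks); the provisioning calls here are stubs,
# and all thresholds are assumptions.

from dataclasses import dataclass


@dataclass
class ScalePolicy:
    cpu_high_percent: float = 80.0   # sustained CPU that triggers a scale-out
    cpu_low_percent: float = 30.0    # sustained CPU that allows a scale-in
    max_burst_instances: int = 10    # cap on public-cloud instances


def provision_public_cloud_instance() -> None:
    """Stub for the real provisioning step (runbook, API call, etc.)."""
    print("-> provisioning one additional instance in the public cloud")


def deprovision_public_cloud_instance() -> None:
    """Stub for returning a burst instance when demand subsides."""
    print("-> returning one public-cloud instance")


def evaluate(policy: ScalePolicy, avg_cpu: float, burst_instances: int) -> int:
    """Apply the policy to the latest metric and return the new burst count."""
    if avg_cpu >= policy.cpu_high_percent and burst_instances < policy.max_burst_instances:
        provision_public_cloud_instance()
        return burst_instances + 1
    if avg_cpu <= policy.cpu_low_percent and burst_instances > 0:
        deprovision_public_cloud_instance()
        return burst_instances - 1
    return burst_instances


if __name__ == "__main__":
    policy, burst = ScalePolicy(), 0
    for cpu in [55, 85, 90, 88, 60, 25, 20]:   # simulated hourly CPU averages
        burst = evaluate(policy, cpu, burst)
        print(f"avg CPU {cpu}% -> {burst} burst instance(s)")
```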

With regard to application and infrastructure provisioning, Microsoft has published Service Templates for System Center Virtual Machine Manager 2012 (see below for links) that will significantly reduce the hours you would otherwise have spent preparing many workloads for automated delivery. This documentation can serve as a guided reference for each of the published workloads as you shift from manual installations toward a managed, intelligent application platform that understands concepts like redundancy and high availability and how they impact service models for change management – and this is a core knowledge area that will continue to evolve.

Check out the Building Clouds blog for more information regarding Microsoft workload Service Templates.

Step 4:  An enterprise needs to identify how its IT team will operate like a cloud provider for the organization.

As you start to look at your Hybrid Cloud long term, you’ll quickly see a more sophisticated view of this environment – one that blurs the boundaries across all of your clouds (private, public, and partner). The objective of the Microsoft Hybrid Cloud is to make your datacenter as agile as possible when moving amongst these clouds via self-service provisioning (discussed in detail here).

You should also use this planning stage as an opportunity to identify how you can retool your datacenter to operate like a cloud services provider. The objective of organizing a datacenter in this way is to allow for self-service options for your teams – which, in turn, allows them to be more productive and agile with the services they need.

Another important element to consider (and which is also a bit of a shift) is to think of the IT department of your organization as a provider of things like on-demand provisioning, shared/pooled resources, broad network access, rapid and elastic resource allocation/de-allocation, and real-time resource metering.
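Metering is the piece that often feels least familiar to traditional IT teams, so here is a toy showback report of the kind an internal service provider would produce. The rates and usage records are hypothetical placeholders.

```python
# Toy resource-metering/showback report: the kind of per-team usage accounting
# an internal "IT as a service provider" model needs. Rates and usage records
# are hypothetical placeholders.

from collections import defaultdict

RATES = {"vm_hours": 0.12, "storage_gb_month": 0.05, "egress_gb": 0.08}

USAGE_RECORDS = [
    {"team": "Marketing", "metric": "vm_hours", "quantity": 1_200},
    {"team": "Marketing", "metric": "storage_gb_month", "quantity": 500},
    {"team": "Finance", "metric": "vm_hours", "quantity": 300},
    {"team": "Finance", "metric": "egress_gb", "quantity": 80},
]


def showback(records, rates):
    """Aggregate metered usage into a cost-per-team summary."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["team"]] += rec["quantity"] * rates[rec["metric"]]
    return dict(totals)


if __name__ == "__main__":
    for team, cost in showback(USAGE_RECORDS, RATES).items():
        print(f"{team}: ${cost:,.2f} this month")
```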

To get started on this aspect of your planning, Microsoft provides a wealth of architectural guidance to help you blueprint, design and deploy your Hybrid Cloud infrastructure.

Step 5:  Determine how your organization will continue to build and improve your private cloud while you continue to leverage public cloud resources.

Sometimes it seems like there is nothing in the world that ages quite as fast as enterprise technology, but the flexible nature of a Hybrid Cloud means that you can continually modify, adjust, and improve this environment over time. This means taking advantage of new processes and tools (elastic resource allocation, integrated monitoring and orchestration, self-service, metering) and staying abreast of what will have the most impact for your business.

Back during the What’s New in 2012 R2 series, I noted that the Windows Azure Pack demonstrates the commitment Microsoft has made to improving private clouds by leveraging the successes and power of our global public cloud. Specifically, we are taking what we learn from our innovations in Windows Azure and delivering it through Windows Server, System Center, and the Windows Azure Pack (WAP) for you to use in your datacenter. The functionality in WAP comes from the innovations we’ve developed in the public cloud, battle-hardened at scale, and then used effectively in the cloud services we offer.

I really can’t say enough good things about the Windows Azure Pack and the technologies it brings to your private cloud (like rich, self-service, multi-tenant services and experiences that are consistent with Microsoft’s public cloud offering).  These features have been especially valuable for service providers who can now offer new functionality to their customers – and do it more often.

WAP is generally available along with Windows Server 2012 R2 and System Center 2012 R2 at no additional charge.  All of this represents a delivery on our promise to enable consistency across clouds.

* * *

With these five steps for Hybrid Cloud planning in mind, don’t let this planning process overwhelm you. Many organizations already have most of the necessary components for a Hybrid Cloud – and the planning stage is an opportunity to take stock of these resources and find the best way to most efficiently leverage them. As you make your way through these five sequential steps, I think you’ll see some big opportunities for your organization, and you’ll see some clear areas where we can work together.

In the next post I’ll examine how you make the time and energy spent planning really pay off: I’ll look at the best practices for building a Hybrid Cloud.
