Replication with Hyper-V Replica - Part I

Replication Made Easy Step-By-Step

Overview: Disaster recovery, simple site-to-site replication, and Prod-to-Dev refresh scenarios are generally what drive IT administrators to look into virtual machine replication.  We want to build our environments so that if something happens in our primary data center, our critical machines and data will be up and running somewhere else.  Our developers may reside in a different location but still want to work with the most recent datasets available.  Each of these requirements raises its own set of questions.  Replication over wide area networks takes careful planning and consideration for any solution; in this article I focus on achieving results with Windows Server 2012 Hyper-V, but the methodology applies to almost any replication environment.

Important Questions: I was talking with a fellow IT Pro at one of our recent camps, and he asked me, "How do I know what kind of bandwidth I need to perform replication from my main data center to my secondary site?"  Great question, and one of many that I have received over my past seven years of virtualization consulting.  Many people build an infrastructure to support replication, identify the virtual machines they want to replicate, and then just give it a whirl.  More often than not they face long replication times, timeouts, and other logistical issues, if not immediately then a few weeks down the road.  It can be a discouraging process, I know, but I believe that with proper planning these scenarios are quite doable, and they may not require nearly as much budget as one would think.  Even once we have identified the virtual machines that need to be replicated, the very next thing we should establish is how much time can be lost in the event of an outage, and how quickly we can recover at the alternate location.  For those of you who have already defined your requirements and just want to get to the more advanced configurations, skip ahead to the Bandwidth Restrictions section in Part II of this series.  If you want to get started but still need a 180-day free trial of Server 2012, click here.

So let's take a peek at the entire process.

1)   Identify the critical workloads and any dependencies they may have (e.g., Active Directory must be available before a file server).

2)   Identify the current and requested recovery point objective (RPO) for each workload (i.e., how much data, measured in time, can I afford to lose?).

3)   Identify the current and requested recovery time objective (RTO) for each workload.

a)   How quickly can I bring this VM back online at its most recent recovery point?

b)   This value may be driven more by your infrastructure's capabilities than by the application owner's request.

4)   Determine the actual storage footprint of each workload.

5)   Determine the amount of change occurring inside each workload (its hourly or daily change rate).

6)   Review the requirements with the application owners.

a)   Hint: the application owner will always say they need 100% uptime, so we need to ask the right questions.

b)   More on this topic later.

7)   Determine how much spare bandwidth is available between the sites, and the times of day and week when the most bandwidth is free (a quick way to sanity-check this figure against your change rates is sketched just after this list).

8)   Test replication and bandwidth between site A and site B for performance and reliability.

9)   Document the steps necessary to fail over to the alternate site, then fail back to the production site per application.
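
Steps 4, 5, and 7 come together in a quick back-of-the-envelope check you can run before any replication traffic ever crosses the WAN. Below is a minimal sketch in Python; every number in it (footprint, change rate, link size) is a hypothetical placeholder rather than a recommendation, and the arithmetic is simply change rate versus link speed, not anything specific to Hyper-V Replica.

```python
# Back-of-the-envelope replication planning. All numbers below are
# hypothetical placeholders; substitute the figures you gathered in
# steps 4, 5, and 7 above.

footprint_gb = 200        # step 4: space actually used inside the VM's virtual disks
change_gb_per_hour = 2.0  # step 5: measured change rate for this workload
link_mbps = 50            # step 7: spare bandwidth between the two sites

# Sustained bandwidth needed just to keep pace with the change rate
# (GB/hour -> megabits/second, using 1 GB ~ 8 * 1024 megabits).
required_mbps = change_gb_per_hour * 8 * 1024 / 3600

# Rough duration of the initial full copy over the spare link.
initial_sync_hours = footprint_gb * 8 * 1024 / link_mbps / 3600

print(f"Required sustained bandwidth: {required_mbps:.1f} Mbps")
print(f"Initial replication time:     {initial_sync_hours:.1f} hours")
print("Link keeps up with changes" if link_mbps >= required_mbps
      else "Link cannot keep up with changes")
```

If the required sustained rate lands anywhere near your spare link capacity, throttling and replication windows (covered under Bandwidth Restrictions in Part II) become essential rather than optional.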

One of the most overlooked tasks in a project like this is working out how quickly you can fail back to your primary site when all is said and done. Windows Server 2012 takes this into consideration and allows for Reverse Replication automatically when a failback event occurs.

Now that we have a process to work from (and believe me, the process shown above can take many different turns and angles), we need to work with a set of tools. Since I work at Microsoft, the first tool that comes to mind is a spreadsheet! I just so happen to have said spreadsheet handy, and I will share it with you here.

[Image: replication planning spreadsheet]
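
If you prefer code to cells, the arithmetic behind a planning sheet like this is easy to reproduce. Here is a rough sketch of the same per-VM roll-up in Python; the VM names, sizes, change rates, and RPO targets are all invented examples, so substitute the figures you collected while walking through the nine steps above.

```python
# A rough, code-form stand-in for the planning spreadsheet.
# Every VM name and figure here is a made-up example; replace them with
# your own measurements.

workloads = [
    # (name, footprint GB, change GB/hour, requested RPO in minutes)
    ("DC01",    60, 0.2, 15),
    ("FILE01", 500, 3.0, 60),
    ("SQL01",  300, 5.0, 15),
]

link_mbps = 50  # spare bandwidth between the sites (hypothetical)

total_required = 0.0
for name, footprint_gb, change_gb_hr, rpo_min in workloads:
    # Sustained Mbps needed to ship each hour's worth of change.
    required_mbps = change_gb_hr * 8 * 1024 / 3600
    # Worst-case data exposure if replication stalls for one RPO window.
    exposure_gb = change_gb_hr * rpo_min / 60
    total_required += required_mbps
    print(f"{name:7s} needs ~{required_mbps:5.1f} Mbps, "
          f"~{exposure_gb:.2f} GB at risk per {rpo_min}-minute RPO")

print(f"Total sustained bandwidth: ~{total_required:.1f} Mbps "
      f"({'fits' if total_required <= link_mbps else 'exceeds'} the {link_mbps} Mbps link)")
```

The goal of the roll-up is the same as the spreadsheet's: to spot, before any replication traffic flows, which workloads or combinations of workloads your link simply cannot sustain.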

Please continue reading at: Replication with Hyper-V Replica - Part II

More Stories By Tommy Patterson

Tommy Patterson began his virtualization adventure during the initial release of VMware's ESX Server. At a time when most admins were only adopting virtualization as a lab-only solution, he pushed through the performance hurdles to quickly bring production applications into virtualization. Since the early 2000s, Tommy has spent most of his career in a consulting role providing assessments, engineering, planning, and implementation assistance to many members of the Fortune 500. Troubleshooting complicated scenarios and incorporating best practices into customers' production virtualization systems has been his passion for many years. Now he shares his knowledge of virtualization and cloud computing as a Technology Evangelist on the Microsoft US Developer and Platform Evangelism team.
