
Edge Virtualization and the MicroCloud

Benefits and Difference from Private and Public Clouds

The benefits of public and private clouds based on virtualization are varied and well known. In 2013, more than 40 percent of enterprises have adopted or are adopting virtualized private clouds in the data center, and another 40 percent are evaluating virtualization solutions. Yet less than 10 years ago, enterprises doing any kind of private cloud virtualization were almost nonexistent.

Some of the benefits driving this rapid adoption in the enterprise apply equally well to small-to-medium businesses (SMBs) and the edge. These benefits include:

  • Application compartmentalization - containment within the application's own O/S, processor, and I/O space (prevents a single application from consuming a platform's resources or from affecting other applications when it misbehaves)
  • Simplified security and quality of service (QoS) policies - administration across sites, applications, and networks
  • Automated application integration and orchestration - simplification of installation, upgrades, and migrations without platform reboots or network downtime
  • Better scaling and platform optimization - scaling by simple addition of platforms
  • Improved survivability and performance - treat multiple platforms as one system

For the purposes of this article, "edge virtualization" is described as the MicroCloud - to distinguish it from "public" and "private" clouds typically associated with the data center. The following are distinctive attributes of the edge MicroCloud (versus private and public clouds); a brief configuration sketch of these constraints follows the list.

  • It is located at the WAN interface of an SMB (typically the Internet) or a remote site in a larger enterprise (typically MPLS)
  • Network bandwidth is typically constrained
  • The south side of the edge (facing the LAN) typically serves fewer than 200 devices/users
  • Policy (security, QoS, NAC/Network Access Control) is typically required
  • Firewall, NAT and subnet functionality are required
  • The "edge" is typically price and operationally constrained
  • The edge typically applies not only to network functionality but to edge applications as well (e.g., session border control, Wi-Fi controller management, etc.)
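
As a rough illustration of how these constraints might be captured in machine-readable form, the sketch below models a MicroCloud site descriptor in Python. All names (EdgeSite, its fields, the example values) are hypothetical and not drawn from any particular product.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical descriptor of a MicroCloud edge site; field names are
# illustrative only and not drawn from any specific product.
@dataclass
class EdgeSite:
    name: str
    wan_type: str              # e.g. "internet" for an SMB, "mpls" for a remote enterprise site
    wan_bandwidth_mbps: int    # bandwidth is typically constrained at the edge
    lan_device_count: int      # south side is typically fewer than 200 devices/users
    policies: List[str] = field(default_factory=list)   # security, QoS, NAC
    services: List[str] = field(default_factory=list)   # firewall, NAT, SBC, Wi-Fi controller, ...

    def validate(self) -> List[str]:
        """Flag attributes that fall outside the typical edge profile."""
        issues = []
        if self.lan_device_count >= 200:
            issues.append("LAN side exceeds the typical 200-device edge profile")
        if not {"firewall", "nat"} <= set(self.services):
            issues.append("firewall and NAT functionality are expected at the edge")
        return issues

branch = EdgeSite(
    name="branch-17", wan_type="mpls", wan_bandwidth_mbps=50,
    lan_device_count=120,
    policies=["guest", "corporate"],
    services=["firewall", "nat", "sbc", "wifi-controller"],
)
print(branch.validate())   # [] -> site matches the expected edge profile
```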

The benefits listed above are the same compelling reasons behind the move to virtualization in the data center, and they apply equally to the SMB and enterprise edge. It is therefore expected that edge virtualization and software-defined networks (SDNs) will completely replace purpose-built appliances and integrated applications at the edge. When considering a transition to edge virtualization and SDN, look for a solution that provides both powerful networking and orchestration capabilities.

The sections below illustrate some of the benefits of virtualization at the edge, each with a feature overview and an example description.

Edge Virtualization Feature Example: "Application Compartmentalization"

Virtualization Feature Overview:
One of the advantages of running on a virtual platform, versus adding an application on top of an existing O/S, is that the application can run on the O/S it is optimized for, with resources dedicated to its use. This becomes especially important when the applications are complex and resource-intensive, such as a session border controller or a VoIP key system, particularly when these need to run on the same platform together or alongside another complex network application.

Example Description:
The following diagram illustrates one of the primary benefits of virtualization: the ability to allow an application to run in its own optimized O/S space with efficiently apportioned resources.

In this diagram, the "Orchestration and Network Manager VM" manages the configuration of the SBC VM as it relates to the disk, network, processor, and RAM. Any additional applications are then appropriately plumbed with proper resource management. This resource allocation is very difficult to do in the absence of virtualization, inasmuch as applications tend to compete with one another in the "user space" of the O/S.
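
A minimal sketch of the kind of bookkeeping such an orchestration manager might perform, assuming a hypothetical resource model: each application VM is allocated vCPUs, RAM, and disk, and a new VM is admitted only if the platform can still accommodate it. The class and function names are illustrative, not taken from any product.

```python
from dataclasses import dataclass
from typing import Dict

# Hypothetical resource model: the orchestration VM tracks what each
# application VM has been allocated and refuses allocations that would
# oversubscribe the edge platform.
@dataclass
class Allocation:
    vcpus: int
    ram_mb: int
    disk_gb: int

@dataclass
class Platform:
    vcpus: int
    ram_mb: int
    disk_gb: int
    vms: Dict[str, Allocation] = None

    def __post_init__(self):
        self.vms = self.vms or {}

    def can_host(self, alloc: Allocation) -> bool:
        used_cpu = sum(a.vcpus for a in self.vms.values())
        used_ram = sum(a.ram_mb for a in self.vms.values())
        used_disk = sum(a.disk_gb for a in self.vms.values())
        return (used_cpu + alloc.vcpus <= self.vcpus and
                used_ram + alloc.ram_mb <= self.ram_mb and
                used_disk + alloc.disk_gb <= self.disk_gb)

    def add_vm(self, name: str, alloc: Allocation) -> bool:
        if self.can_host(alloc):
            self.vms[name] = alloc
            return True
        return False

edge = Platform(vcpus=8, ram_mb=16384, disk_gb=256)
edge.add_vm("orchestrator", Allocation(1, 2048, 20))
edge.add_vm("sbc", Allocation(2, 4096, 40))               # session border controller VM
print(edge.add_vm("wifi-ctrl", Allocation(2, 4096, 40)))  # True: fits remaining capacity
```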

Virtualization allows for quick integration of applications within the same platform. With proper orchestration it is possible to balance application resource needs with platform capabilities. It is not necessary to fine-tune applications to a host O/S, as is done with traditional edge devices.

Edge Virtualization Feature Example: "Simplified Policy Management"

Virtualization Feature Overview:
Policy management is one of the most complex components of any networking application, and it becomes particularly complex at the edge when policy needs to be applied across platforms and geographies. Examples include "guest" and "corporate" policies, particularly for wireless access. Policy is typically used to define, limit, or grant access to particular resources, such as bandwidth or data, for users or devices. That complexity often makes policy prohibitive to deploy and maintain. Virtualization with proper orchestration greatly simplifies this required but very complex component.
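
To make the idea concrete, here is a minimal sketch of how centrally declared policies (such as "guest" and "corporate") could be expanded by an orchestrator into per-site rules, so each policy is defined once instead of being re-entered at every site. The policy fields and site names are assumptions for illustration only.

```python
# Hypothetical central policy definitions; one declaration per policy,
# expanded by the orchestrator into per-site firewall/QoS rules.
POLICIES = {
    "guest":     {"vlan": 20, "max_bw_mbps": 10, "allow": ["internet"]},
    "corporate": {"vlan": 10, "max_bw_mbps": 50, "allow": ["internet", "datacenter"]},
}

SITES = ["hq", "branch-1", "branch-2"]

def render_rules(sites, policies):
    """Expand each (site, policy) pair into a flat rule an edge VM could enforce."""
    rules = []
    for site in sites:
        for name, p in policies.items():
            rules.append({
                "site": site,
                "policy": name,
                "vlan": p["vlan"],
                "rate_limit_mbps": p["max_bw_mbps"],
                "destinations": p["allow"],
            })
    return rules

for rule in render_rules(SITES, POLICIES):
    print(rule)   # 3 sites x 2 policies = 6 rules, generated from 2 declarations
```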

Example Description:
The following diagram illustrates the simplification of policy management across sites. Superimposed upon a real site/policy map are guide blocks that emphasize sites (in columns) and policy (rows). The blue guide block emphasizes where policy (and routing) is set.

Policy management for security and QoS is typically complex and prone to error. Virtualization with proper orchestration greatly simplifies this critical component while improving upon the specific attributes of security and QoS.

Edge Virtualization Feature Example: "Automatic App Integration & Orchestration"

Virtualization Feature Overview:
Virtualization orchestration creates several important benefits. One of the most important of these is the ability to perform automatic integration of applications with respect to the network (automatic wiring) and its associated QoS and security policies. In a traditional implementation without the benefit of virtualization orchestration, integration tends to be fraught with errors, particularly when applied across geographies and between applications. Additionally, updates and changes in a virtual environment can usually be orchestrated as a simple switch from a running VM to the upgraded VM, whereas a traditional environment will typically require a platform reboot, causing all applications to lose connectivity for a period of time.
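
As a rough sketch of the "switch from a running VM to the upgraded VM" idea, the steps below bring up the upgraded VM, wait for it to pass a health check, and only then move its virtual wires over, so the platform never reboots and the old VM remains available for rollback. The orchestration primitives are hypothetical stand-ins, not a real API.

```python
import time

# Hypothetical orchestration primitives -- stand-ins for whatever the
# orchestrator actually exposes; used here only to show the sequence.
def start_vm(image): print(f"starting VM from {image}"); return image
def health_check(vm): return True          # e.g. poll the application until it is serving
def rewire(old_vm, new_vm): print(f"moving virtual wires {old_vm} -> {new_vm}")
def stop_vm(vm): print(f"stopping {vm}")

def upgrade(old_vm: str, new_image: str) -> str:
    """Swap a running application VM for an upgraded one with no platform reboot."""
    new_vm = start_vm(new_image)
    deadline = time.time() + 60
    while not health_check(new_vm):          # wait until the new VM is ready
        if time.time() > deadline:
            stop_vm(new_vm)                  # roll back: the old VM keeps running
            return old_vm
        time.sleep(1)
    rewire(old_vm, new_vm)                   # cut traffic over to the new VM
    stop_vm(old_vm)                          # retire the old VM after the switch
    return new_vm

current = upgrade("sbc-v1", "sbc-v2.qcow2")
```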

Example Description:
The following diagram illustrates the edge architecture that yields automatic app integration with virtual wiring.

Each of the colored lines represents a virtual wire (circled in red). Orchestration automatically connects these lines to the appropriate virtual switch, interface, or application.

Applications are, in turn, instantiated, configured, and plumbed by the same orchestration software. Each VM will run in its own operating system and be allocated appropriate resources. Additionally, the host hypervisor O/S and each of the VMs are isolated from each other and the WAN and LAN networks by the "network flow manager." This isolation provides both a level of security and an improvement of application upgrades/configurations.
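
A minimal sketch of the automatic wiring described above, assuming a hypothetical flow-manager model: each VM declares which virtual switches it attaches to, and the orchestrator plumbs the wires rather than an installer configuring them by hand.

```python
# Hypothetical virtual-wiring model: VMs declare their attachments, and the
# orchestrator builds the wires to the right virtual switch.  Names are
# illustrative, not taken from a real product.
TOPOLOGY = {
    "wan-vswitch": [],
    "lan-vswitch": [],
}

VM_SPECS = {
    "orchestrator": ["lan-vswitch"],
    "sbc":          ["wan-vswitch", "lan-vswitch"],   # needs both sides
    "wifi-ctrl":    ["lan-vswitch"],
}

def plumb(topology, specs):
    """Connect each VM to its declared switches and return the wire list."""
    wires = []
    for vm, switches in specs.items():
        for sw in switches:
            if sw not in topology:
                raise ValueError(f"unknown virtual switch: {sw}")
            topology[sw].append(vm)
            wires.append((vm, sw))
    return wires

for vm, sw in plumb(TOPOLOGY, VM_SPECS):
    print(f"virtual wire: {vm} <-> {sw}")
```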

Virtualization and orchestration eliminate many of the problems associated with traditional all-in-one appliances that attempt to run applications that must interact with each other and the network. Configuration mistakes are avoided, and upgrades happen with no downtime.

Edge Virtualization Feature Example: "Scalability and Optimization"

Virtualization Feature Overview:
Traditional methods of application integration usually require platform replacements in order to increase scale. Additionally, platform optimization tends to be dependent upon the most computing-intensive application, making it difficult to balance the size and number of applications. Virtualization, on the other hand, has demonstrated excellent scalability and optimization through the simple addition of platforms. In fact, the trend is to reduce the size and cost of the platform, allowing more linear growth and optimization.

Example Description:
The following diagram illustrates the evolution of a typical edge configuration towards smaller and less costly virtual platforms that can provide scalable and optimized application and network support.

In order to scale, once a single platform has maximized the number of applications that it runs, it is only necessary to add a second (or third, etc.) platform. This will hold true for most full-size applications, such as web services, databases, file systems, etc., that can inherently take advantage of multiple instances. Furthermore, it is possible to move VMs from one platform to the next in order to optimize the resources of a particular application on a particular platform.
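
A rough sketch of "scale by simple addition": a first-fit placement loop that adds another small platform whenever the existing ones cannot host the next VM, rather than replacing them with a larger box. This is an illustrative simplification, not a vendor placement algorithm.

```python
# Hypothetical first-fit scale-out: when no existing edge platform can
# host the next VM, add another small platform instead of replacing the
# existing ones with a bigger box.
PLATFORM_VCPUS = 8

def place(vm_demands):
    """vm_demands: list of (vm_name, vcpus). Returns one VM list per platform."""
    platforms = [[]]                      # start with a single platform
    used = [0]
    for name, vcpus in vm_demands:
        for i, load in enumerate(used):
            if load + vcpus <= PLATFORM_VCPUS:
                platforms[i].append(name)
                used[i] += vcpus
                break
        else:                             # nothing fits: add a platform
            platforms.append([name])
            used.append(vcpus)
    return platforms

demands = [("web", 4), ("db", 4), ("files", 2), ("sbc", 2), ("backup", 6)]
for i, vms in enumerate(place(demands), 1):
    print(f"platform {i}: {vms}")
```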

Virtualization in the data center has demonstrated real-world scalability and optimization for applications much more effectively than traditional dedicated platforms. These same attributes will also hold true for edge virtualization.

Edge Virtualization Feature Example: "Survivability and Performance"

Virtualization Feature Overview:
Virtualization not only yields a performance benefit, but also greatly simplifies and improves survivability and distribution (yielding further performance benefits). Survivability in a virtual environment means that when an application fails, the hypervisor operating system, the other virtual machines, and the other applications do not fail with it. Applications can be spun up in sub-second times when events cause an application, platform, or site failure. Additionally, because of network virtualization, these applications can be distributed across geographies for both survivability and performance.

Example Description:
From a performance perspective, traditional edge solutions have relied on proprietary, purpose-built hardware, resulting in high costs and underperformance. On the very low end of traditional edge solutions, most hardware is ARM-based, with minimal memory and storage. These solutions are typically purpose-built and rely on open-source applications with a small amount of software integration. Consequently, they are almost never capable of supporting the performance required by commercial or high-end applications. Additionally, because of their singular focus, they tend to be stand-alone devices incapable of surviving any type of failure. Two concrete examples running on the same platform are SDN-based networking and elastic cloud backup. The following figure represents these examples:

In the diagram, there are several points of survivability: 1) loss of connectivity to the data center, 2) platform loss, and 3) primary network loss. In each case the survivability components allow operations to continue, albeit at a reduced level (e.g., LTE speeds vs. Ethernet, routing with no updates, etc.).
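
As a hedged sketch of the three survivability points in this example, the checks below probe platform health, the primary WAN, and data-center reachability, and fall back to a reduced service level (for example LTE instead of Ethernet) rather than failing outright. All probes and actions are hypothetical stubs.

```python
# Hypothetical survivability checks for an edge platform.  Each probe is a
# stub standing in for a real check (hypervisor heartbeat, link state,
# reachability of the orchestration service); the fallback actions are
# illustrative only.
def platform_healthy():     return True    # e.g. hypervisor heartbeat
def primary_wan_up():       return False   # e.g. Ethernet link state
def datacenter_reachable(): return True    # e.g. probe the orchestration service

def survivability_action() -> str:
    if not platform_healthy():
        return "respawn application VMs on a peer platform"         # platform loss
    if not primary_wan_up():
        return "fail WAN traffic over to LTE at reduced bandwidth"  # primary network loss
    if not datacenter_reachable():
        return "keep routing locally and defer policy updates"      # loss of DC connectivity
    return "normal operation"

print(survivability_action())   # -> "fail WAN traffic over to LTE at reduced bandwidth"
```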

Virtualization (platform and network) yields multiple levels of survivability and performance that are difficult to attain with traditional dedicated platforms.

Edge virtualization or MicroClouds can provide enterprises and SMBs with efficiencies that legacy, purpose-built appliances cannot even begin to achieve. The better management of application resources, simpler policy administration, automated application integration and orchestration, and improved scalability, survivability, and performance all lead to significant and measurable cost savings.

Managed service providers and distributed enterprises would both benefit from deploying an edge virtualization strategy. In an example use case of 50 sites where MicroClouds were deployed, there was a 3:1 up-front CAPEX savings and a 5:1 average OPEX savings over three years.

Edge virtualization and SDN solutions are here today and ready for production deployments. Integrating them into today's enterprise data centers and SMB environments will establish a foundation for a more efficient, optimized and manageable network over the long term.

More Stories By Richard Platt

Richard Platt is CTO and vice president of engineering at Netsocket, where he is responsible for establishing the company’s technical vision and leading all aspects of its technology development. He has over 25 years of experience defining, developing, and commercializing emerging technologies in both start-up and Fortune 100 environments.
