Software Defined Networking – A Paradigm Shift

Now it's all about orchestrated service delivery

The networking industry has gone through several distinct waves over the last 30-plus years. In the '80s, the first wave was all about connecting and sharing: how to connect a computer to peripheral devices and to other computers. Many players developed technology and services to address that need, e.g., Novell, 3Com, Sun, IBM, DEC and Nortel. Across the industry, small islands of various protocols were created, with multiple gateways to bridge them.

In the '90s and '00s, Cisco dominated the industry and did a brilliant job of pushing it toward a common approach built on Ethernet. It built a hugely successful business and ecosystem, and even created new markets such as VoIP, on the proposition that networking should run on a common highway. We also saw the isolation of networks from the rest of the IT infrastructure, in the sense that software innovation continued in the server and storage environments independently of the network. The focus also remained on the individual components of the infrastructure rather than on the 'service' delivered by the combination of those components, i.e., server, storage and network.

Now, it is all about orchestrated service delivery, which requires a standards-based, open approach. According to Gartner's reports on Emerging Technology Analysis and Key Issues for Communications Strategies, (a) over 50% of workloads will be virtualized by the end of 2012, thanks to cloud computing, and (b) more than 80% of traffic will be server-to-server by 2014, due to federated applications and virtualization.

In this article, I attempt to highlight why we have reached the limits of current network technology, how Software Defined Networking will lead the next wave of innovation, and what its benefits are for the IT industry. Today, network elements such as switches and routers have resident software in each box. The software in the box provides intelligence, using distributed algorithms to decide how each packet should be handled. For the entire network to function properly, the software in each box must work in coordination with the other boxes. This approach has served us well so far.

However, the coordinated distributed algorithms make it difficult to introduce a change on the fly: we have to reconfigure the embedded software on all the network components (often called boxes) to implement any change. Meanwhile, the wave of virtualization demands flexible, adaptive and nimble networks, and it exposes the limitations of the current networking approach, which is inflexible and protocol-heavy. Because distributed algorithms are used, no single box has a global view of the network. This results in over-provisioning at design time and guesswork during troubleshooting. In large cloud deployments, compute and storage environments can be virtualized and consumed easily, but because of the limitations of networks, their full potential is not realized.

Typically, a network administrator spends a lot of time planning and then configuring the network components as business requirements change and network traffic varies. Network administrators learn a great deal by trial and error, and the resulting expertise, built on experience, remains limited to the experienced few.

OpenFlow History
Research students at Stanford, Berkeley and other universities found it hard to experiment with their networks: the software is embedded in each switch or router, and any change has to be coordinated among vendors to keep the distributed algorithms interoperable, before the students could get the functionality they needed for research and experimentation. It is with this simple objective that the idea of OpenFlow was born. The first step these researchers took was to develop the ability to program switches from a remote controller. The OpenFlow protocol was developed to support communication between a switch and a controller. It allows external control software to control the data path of a switch, bypassing traditional L2 and L3 protocols and their associated configurations. The OpenFlow protocol defines messages such as packet-received, send-packet-out, modify-forwarding-table and get-stats. The researchers added OpenFlow support to existing boxes and allowed an OpenFlow controller to program part of the flow-table entries for research and experimentation while the rest of the box worked as before. This gave them control over switches from a controller running on a remote, industry-standard server. This was the start of OpenFlow, which essentially separated the physical (data) layer from the control layer.
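
To make this separation concrete, below is a minimal sketch of the controller side, written against the open-source Ryu framework (my choice for illustration; the article names no specific controller). It exercises the message types listed above: a flow-table modification ("modify-forwarding-table") pushed when a switch connects, a packet-in handler ("packet-received") and a packet-out ("send-packet-out").

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class MinimalController(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_ready(self, ev):
            dp = ev.msg.datapath
            parser, ofp = dp.ofproto_parser, dp.ofproto
            # "modify-forwarding-table": install a table-miss entry so that
            # packets matching no other rule are sent to the controller.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                              ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def on_packet_in(self, ev):
            # "packet-received": the switch punted a packet it had no rule for.
            msg = ev.msg
            dp = msg.datapath
            parser, ofp = dp.ofproto_parser, dp.ofproto
            data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
            # "send-packet-out": flood the packet; a real application would
            # learn addresses and install specific flow entries instead.
            actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
            dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                            in_port=msg.match['in_port'],
                                            actions=actions, data=data))

Run with ryu-manager against any OpenFlow 1.3 switch: every packet that matches no flow entry is punted to the controller and flooded back out, which is exactly the controller-to-switch conversation the protocol messages above describe.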

ONF Background
OpenFlow and SDN became quite popular in the research community, and several service providers and some vendors started to see the value of this approach. Researchers from Stanford and Berkeley took the lead, but the Open Networking Foundation (ONF) was founded by leading providers (Google, Yahoo!, Microsoft, Facebook, Deutsche Telekom and Verizon). Some vendors, like HP, expressed their support from the beginning. The ONF is the body that defines, standardizes and enhances the OpenFlow protocol, but it has a bigger charter with SDN that goes beyond OpenFlow: it promotes SDN and may standardize other parts of it. As a matter of policy, vendors cannot join the ONF board, but they can become members and lead some working groups. Vendors thus have influence over the emerging standard, though they don't set the overall agenda and they don't make the final decisions on what is standardized and what is not.

Another interesting point is that the ONF wants to do as little standardization as possible, to encourage creativity. At first this sounded a bit contradictory, but the ONF looks at the software industry and tries to follow its best practices. The software industry has fewer standards than the network industry, yet it has created more innovation and more jobs. The network industry has too many protocols defined and standardized, resulting in more complexity and less innovation. Academics are influencing the ONF and helping ensure that we don't end up with another rigid, inflexible and protocol-heavy networking world. The ONF has 66 members today, and membership costs $30k per year. This is relatively high compared to similar bodies; the reason may be to ensure that only genuinely interested parties become members. We know that breakthrough innovations often come from small start-ups, some of which would find it difficult to spend that much on an annual membership. On the other hand, the ONF ensures that anything developed within the body is made available to all members free of charge and royalties; one could easily spend more than $30k in lawyers' fees just to sort out royalty arrangements.

Early Adopters
Google, Amazon, Rackspace and others have already implemented OpenFlow-based networks, using proprietary hardware and in-house developed software. We also see many new start-ups focused on this area, developing applications that leverage the virtualized network. Most cloud providers manage huge data centers. "Every day Amazon Web Services (AWS) adds enough new capacity to support all of Amazon.com's global infrastructure through the company's first 5 years, when it was a $2.76 billion annual revenue enterprise," according to James Hamilton, the company's VP and Distinguished Engineer.

Google embraced OpenFlow very early on. Google's inter-datacenter production network, the largest in the world by traffic, runs on OpenFlow and SDN, and Google proved that OpenFlow-based networks can scale and deliver on the promise. The biggest use case for central controllers, according to Google, is re-routing in anticipation of an event: if we know we are introducing a new service that will increase traffic load, we can pre-provision the network in the way that best optimizes infrastructure resources. If a small business, say a flower shop, expects more traffic and needs more compute power on Valentine's Day, it is easy to make compute and storage capacity available with the standard virtualization technology of today; making network resources available on demand, however, is challenging. This is where an OpenFlow controller, controlling the switches, can easily provide the necessary bandwidth and then tear it down or redirect the network resources to other requests. The Google example is impressive, but one could ask how many enterprise customers could afford, or would dare, to do what Google does. Moreover, just because the business case worked for Google does not mean it will work for everyone. Each customer will have to evaluate its network, future growth requirements and so on, to see whether there is a positive business case.
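
To make the pre-provisioning idea concrete, the sketch below picks the path with the largest bottleneck capacity between two sites ahead of an anticipated load. It is a plain-Python illustration with an invented three-site topology, not anything from Google's implementation; in a real deployment, the OpenFlow controller would push flow entries along the chosen path before the event and remove them afterwards.

    import heapq

    def widest_path(topology, src, dst):
        """Return (bottleneck_gbps, path) maximizing the minimum link
        capacity from src to dst. topology: {node: {neighbor: gbps}}."""
        best = {src: float('inf')}  # best bottleneck capacity found per node
        prev = {}
        heap = [(-float('inf'), src)]
        done = set()
        while heap:
            cap, node = heapq.heappop(heap)
            cap = -cap
            if node in done:
                continue
            done.add(node)
            for nbr, link in topology.get(node, {}).items():
                width = min(cap, link)  # bottleneck if we go via this node
                if width > best.get(nbr, 0):
                    best[nbr] = width
                    prev[nbr] = node
                    heapq.heappush(heap, (-width, nbr))
        if dst not in best:
            return 0, []
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return best[dst], path[::-1]

    # Invented topology: three sites, link capacities in Gbps.
    topology = {
        'dc-east':    {'dc-central': 40, 'dc-west': 10},
        'dc-central': {'dc-east': 40, 'dc-west': 40},
        'dc-west':    {'dc-east': 10, 'dc-central': 40},
    }
    print(widest_path(topology, 'dc-east', 'dc-west'))
    # -> (40, ['dc-east', 'dc-central', 'dc-west']): the controller would
    # reserve the 40 Gbps path via dc-central, not the direct 10 Gbps link.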

Flexibility Galore
Software Defined Networking (SDN) can help you make the network ready for cloud-bursting as and when required. SDN opens up many possibilities, for example:

  1. Packet flow redirection: A lot of video traffic comes from sources we trust, and some applications don't require security services on that traffic. Because security services are extremely infrastructure-hungry and CPU-intensive, passing all data through them leads to a sprawl of security devices (many IDS/IPS and DPI appliances) just to monitor traffic. With OpenFlow we can easily steer trusted traffic away from these costly resources (see the sketch after this list).
  2. Policy management: Because you now have a global view of the network and can control it with software running on the OpenFlow controller, defining and implementing business policies becomes easier, e.g., better bandwidth management: when unanticipated excess traffic arrives, the controller can program the network so that higher-priority business traffic gets more resources than lower-priority traffic.
  3. Virtual application networks: The OpenFlow controller lets us create virtual networks for different applications on one physical network, such that each application can have different bandwidth and QoS based on its requirements, with auditable network isolation between applications and simpler compliance (a requirement in the financial industry). One can also give each customer a separate virtual domain to manage.
  4. Network security: OpenFlow can be used to make networks more secure and agile. The OpenFlow controller allows us to monitor and manage network security and to:
    - Dynamically insert security services at any point in the network (an on-demand firewall or IDS/IPS, for example)
    - Monitor traffic and redirect suspect flows for full inspection
    - Combine per-flow QoS control with network management systems to leverage traffic and end-user identity information
    - Dynamically detect and mitigate attacks from infected PCs, using a signature/reputation database to create rules that address specific attacks
  5. Proprietary appliances: It is very common today to deploy appliances in the network to deliver specific functions. These proprietary appliances can be replaced with an OpenFlow controller plus a software application that delivers the specific functionality. Communication service providers have a significant number of network services that can take advantage of virtualization and industry-standard servers. Many application-specific appliances that run on custom ASICs (WAN optimization, firewalls, DPI, spam/mail appliances, IDS, etc.) are good candidates for the SDN approach.
  6. Traffic intelligence: As SDN matures, a couple of years down the road, a more futuristic use case is to monitor traffic patterns, generate intelligence from them, and then use that intelligence to anticipate traffic and optimize the available resources. This kind of intelligence can also reduce power consumption: for example, if we know that network usage is low at night and in the early morning, we can shut down parts of the network in such a way that we keep full connectivity without keeping the entire network up.
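
Returning to the packet-flow redirection use case in item 1, here is a sketch of what such a bypass could look like, again assuming a Ryu-based OpenFlow 1.3 controller; the trusted subnet and the port numbers are illustrative assumptions, not values from the article.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    TRUSTED_SRC = ('203.0.113.0', '255.255.255.0')  # hypothetical trusted video source
    IDS_PORT = 3     # switch port facing the IDS/IPS appliance (assumed)
    EGRESS_PORT = 1  # switch port toward the destination network (assumed)

    class TrustedBypass(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_ready(self, ev):
            dp = ev.msg.datapath
            parser = dp.ofproto_parser
            # High-priority rule: traffic from the trusted subnet skips the
            # IDS and is forwarded straight to the egress port.
            match = parser.OFPMatch(eth_type=0x0800, ipv4_src=TRUSTED_SRC)
            self._add_flow(dp, priority=100, match=match, out_port=EGRESS_PORT)
            # Default rule: everything else is steered through the IDS first.
            self._add_flow(dp, priority=1, match=parser.OFPMatch(),
                           out_port=IDS_PORT)

        def _add_flow(self, dp, priority, match, out_port):
            parser, ofp = dp.ofproto_parser, dp.ofproto
            actions = [parser.OFPActionOutput(out_port)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                          match=match, instructions=inst))

Because both rules live in the controller, widening the trusted subnet, or suspending the bypass entirely during an incident, is a single policy change pushed from a central point rather than a box-by-box reconfiguration.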

My Take
The list of use cases is growing daily and will grow even faster as the pace of innovation increases, and the number of new start-ups in this area is rising rapidly. Finally, the networking field, which has been quite dull from the perspective of new innovation, is going to become more vibrant and exciting, with new possibilities. Moreover, if the ONF is successful in maintaining open standards, SDN will allow plug-and-play of multivendor products, empowering IT and network operators to be more cost-effective and adaptive to the agility requirements of the business. With SDN, the network industry will mirror the innovation and development we have seen in the server and storage fields.

Some vendors want well-defined APIs for applications to leverage OpenFlow controllers, or want more protocols supported. It is prudent of the ONF not to define and standardize too much, and to let the market decide what an acceptable standard is. It is important to keep the OpenFlow protocol unrestricted by standardizing no more than what is absolutely required. This will fuel innovation.

The OpenFlow protocol is in its infancy, but it has generated tremendous interest from customers, researchers and vendors alike. One can argue that it is not fully mature or ready for prime time, but most agree that it will change the network industry fundamentally, making it more flexible and nimble and driving more innovation. This train has left the station, even while some debate whether its destination is well defined or its ETA is known. Hardware vendors will have to accept that networking hardware will be commoditized, just as servers and storage were. OpenFlow/SDN certainly opens up opportunities for network-based applications, and this is where current vendors will have to focus to continue playing a major role in the future. Network administrators will no longer spend hours reconfiguring switches and routers; they will have to learn how to control, manage, test and implement changes from a central controller.

Although the OpenFlow protocol is defined, not many vendors in the market yet support its latest version, 1.3. Moreover, there is a lack of tools to test, monitor and manage this new environment. HP and other major vendors have openly embraced OpenFlow and are investing in it. HP was one of the first major network vendors to invest in this area, with 60+ deployments and 16 different switch models supporting OpenFlow. HP is also leading one of the ONF task forces evolving the OpenFlow protocol. With its traditional strength in IT performance and operations management (test, monitor and manage) and telecom OSS, HP is well positioned to deliver a complete, future-proof infrastructure solution (consisting of server, storage, networking, software, security and analytics) for enterprise IT as well as telecom service providers.

About the Author

Kapil Raval is an experienced technology solutions consultant with nearly 20 years in the telecom industry. He thinks in terms of 'the business' and focuses on linking business challenges to technology solutions. He currently works for HP, where he drives strategic solutions in the telecom vertical.
