Unlocking Opportunity in Software-Defined Services with ESP

As an industry, our visions of progress in SP offerings are filled with virtualized functions, elastic workloads, orchestration, APIs, scalable infrastructures and almost everything software-defined.  None of this is flawed, and we regularly glimpse the potential of such heavily virtualized models across a range of domains.  Streaming video services present us with intriguing offers mid-session based on our individual profiles and histories.  Hybrid cloud IT auto-scales for cost-effective performance using the provider's service management API.  And multi-layer network services have proven to deliver measurable cost and performance benefits when each layer is managed as part of a unified, abstract service design applied to each underlying layer.

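The auto-scaling example above boils down to a simple control loop. Below is a minimal sketch in Python against a hypothetical service management API; the base URL, endpoint paths and thresholds are invented for illustration, since every provider's actual API differs.

```python
# Minimal auto-scaling sketch. Endpoints, resource names, and thresholds
# are hypothetical; real provider APIs (AWS, OpenStack, etc.) differ,
# but the control loop has the same shape.
import time
import requests

API = "https://provider.example.com/v1"   # hypothetical service management API
TOKEN = {"Authorization": "Bearer <token>"}

SCALE_UP_AT = 0.80    # add capacity above 80% average CPU
SCALE_DOWN_AT = 0.30  # release capacity below 30%

def current_load(pool_id: str) -> float:
    """Fetch average utilization for a worker pool (hypothetical endpoint)."""
    r = requests.get(f"{API}/pools/{pool_id}/metrics", headers=TOKEN, timeout=10)
    r.raise_for_status()
    return r.json()["avg_cpu"]

def resize(pool_id: str, delta: int) -> None:
    """Grow or shrink the pool by `delta` instances (hypothetical endpoint)."""
    requests.post(f"{API}/pools/{pool_id}/resize",
                  headers=TOKEN, json={"delta": delta}, timeout=10)

def control_loop(pool_id: str, interval_s: int = 60) -> None:
    while True:
        load = current_load(pool_id)
        if load > SCALE_UP_AT:
            resize(pool_id, +1)
        elif load < SCALE_DOWN_AT:
            resize(pool_id, -1)
        time.sleep(interval_s)
```
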
Even with this progress toward a more agile SP market, we have not yet achieved complete automation of service composition and deployment across the full array of applications operators target.  We are on the cusp of complete frameworks and deployment kits that put that goal within reach, but important hurdles remain in making the vision real.

Let’s look at where improvements in integration remain to be made.

If we treat this as a ‘start-to-finish’ checklist of requirements for an elastic, software-driven environment, we can start by noting that strong solutions for automating the development and deployment of applications into cloud-based services in real time have dramatically shortened time to delivery for new application functions.  The pace at which new features appear in Google, Netflix and other ‘web-scale’ application systems is ample demonstration of the method’s value.  Extending DevOps capabilities to new functional areas, such as inserting new functions into programmable networks, is key to expanding the value of services.  Similarly dramatic progress has been made in creating and managing virtual compute, storage and network infrastructures in many XaaS environments, and intriguing service offerings and bundles are emerging ‘one by one’ as operators overcome their individual service delivery barriers.  But can we blend these individual ‘buckets’ of virtualized service delivery solutions into more complete end-to-end service creation frameworks?

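To make the 'inserting new functions into programmable networks' step concrete, here is a hedged sketch of what a DevOps-style push might look like against an orchestrator's API. The endpoint paths and payload fields are hypothetical; a real orchestrator exposes its own interface.

```python
# Hypothetical sketch: upload a packaged network function, then splice it
# into an existing service chain. Paths and fields are invented for
# illustration, not taken from any real orchestrator's API.
import requests

ORCH = "https://nfvo.example.net/api"      # hypothetical orchestrator
HEADERS = {"Authorization": "Bearer <token>"}

def deploy_function(package_path: str, chain_id: str, position: int) -> str:
    # 1. Upload the packaged function (image + descriptor) to the catalog.
    with open(package_path, "rb") as pkg:
        r = requests.post(f"{ORCH}/packages", headers=HEADERS,
                          files={"package": pkg}, timeout=30)
    r.raise_for_status()
    function_id = r.json()["id"]

    # 2. Instantiate it and insert it at the requested hop of the chain.
    r = requests.post(f"{ORCH}/chains/{chain_id}/functions",
                      headers=HEADERS,
                      json={"function_id": function_id, "position": position},
                      timeout=30)
    r.raise_for_status()
    return function_id

# e.g. deploy_function("content-inspector.pkg", chain_id="gold-internet", position=2)
```
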
As we make progress in individual domains, a remaining hurdle is integrating these toolsets into a suite of offerings that fits a range of SP situations and lets each operator generate services in a way that matches its adoption profile, at the pace that works for it.  Adding urgency to this point, the ETSI Network Functions Virtualization (NFV) working groups recently identified as a critical success factor the development of a suitably strong management and orchestration (MANO) framework, one that enables virtualized network functions to be deployed effectively alongside the other elements of service providers’ multi-faceted infrastructures.

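For a feel of what a MANO framework has to reason about, here is an illustrative Python model of a network service composed of VNFs wired together by virtual links. This is not the ETSI information model, just a minimal sketch of the relationships involved.

```python
# Illustrative (not the ETSI schema) model of a network service:
# VNFs, the virtual links connecting them, and a basic consistency check.
from dataclasses import dataclass, field

@dataclass
class VNF:
    name: str            # e.g. "vFirewall"
    image: str           # VM/container image reference
    vcpus: int
    memory_gb: int

@dataclass
class VirtualLink:
    name: str
    endpoints: tuple[str, str]   # names of the VNFs it connects

@dataclass
class NetworkService:
    name: str
    vnfs: list[VNF] = field(default_factory=list)
    links: list[VirtualLink] = field(default_factory=list)

    def validate(self) -> None:
        """Every link must reference VNFs that exist in the service."""
        known = {v.name for v in self.vnfs}
        for link in self.links:
            missing = set(link.endpoints) - known
            if missing:
                raise ValueError(f"link {link.name} references unknown VNFs: {missing}")

svc = NetworkService(
    name="managed-internet-access",
    vnfs=[VNF("vFirewall", "fw:1.2", vcpus=2, memory_gb=4),
          VNF("vRouter", "vr:3.0", vcpus=4, memory_gb=8)],
    links=[VirtualLink("inside", ("vFirewall", "vRouter"))],
)
svc.validate()
```
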
Helping overcome these barriers is a task for suppliers with sufficient range of vision, offerings and resources to support SPs’ virtualized solution deliveries productively.  Let’s break this down into two dimensions where packaging, functionality and go-to-market leadership can be provided:  having the right functionality, and having the right consumption models for SPs to choose from.

On functionality, if a supplier starts by profiling the business parameters of an SP’s offering and populating them into the charging, cataloguing, provisioning and other OSS/BSS systems needed to commercialize the service, a key element of the resulting automation flow is accomplished.  If, from there, the supplier can offer a portfolio of functional elements, virtual or physical, that supports a wide range of operator services and integrates with the orchestration and management required for operation, a second key step in deployment simplification has been taken.  It’s easy to see how this portfolio breadth matters for infrastructure functions like firewalling, switching, policy enforcement and computing platforms, and equally for application-oriented software offerings like video content libraries, collaboration system modules, and vertical industry applications (smart grid, health care, transportation, etc.).  If the supplier can also offer templates and policies that let an operator jump-start implementation in the target virtual infrastructure, we begin to have the level of integrated portfolio operators have been hungry for throughout the virtualized and software-defined infrastructure transformations.

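The 'profile once, populate many' idea in the paragraph above can be sketched as follows: a single commercial profile for a service drives the records pushed into the charging, cataloguing and provisioning systems. The system names, record formats and field names here are invented for illustration.

```python
# Hypothetical sketch: one service profile expands into the records that
# each OSS/BSS system needs. In practice, each record would be submitted
# to its own system's API.
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    name: str
    monthly_price: float
    bandwidth_mbps: int
    sla_tier: str          # e.g. "gold", "silver"

def to_charging_record(p: ServiceProfile) -> dict:
    return {"product": p.name, "recurring_fee": p.monthly_price,
            "billing_cycle": "monthly"}

def to_catalogue_entry(p: ServiceProfile) -> dict:
    return {"offer": p.name, "speed": f"{p.bandwidth_mbps} Mbps",
            "sla": p.sla_tier, "orderable": True}

def to_provisioning_order(p: ServiceProfile) -> dict:
    return {"service": p.name,
            "params": {"bandwidth": p.bandwidth_mbps, "qos_class": p.sla_tier}}

profile = ServiceProfile("Business Internet 100", 79.0, 100, "gold")
for record in (to_charging_record(profile),
               to_catalogue_entry(profile),
               to_provisioning_order(profile)):
    print(record)
```
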
Going beyond this, and recognizing that operators’ own portfolios are developing at different paces, it is equally meaningful if suppliers can deliver their solutions at varying levels of integration to ease uptake by SPs. An individual function such as content inspection or SP wireless LAN may interest one operator a la carte, ready for integration into a service delivery framework as the operator prefers.  A different operator may be more interested in those functions as part of a pre-integrated, certified solution including orchestration, templates and the functional components themselves, ready to deploy as a pre-tested module. And another operator, seeking lower risk in trialing and deployment, may want to obtain these capabilities ‘as a service’ from the supplier to minimize the training, deployment and go-to-market overhead of determining whether the service will ‘take’ with its subscriber base.  Offering such a range of consumption models helps the SP community accelerate deployments at whatever pace each SP deems best.

At this early stage of the developing market for virtualized infrastructure solutions, coming to the table with that many areas of requirement addressed, while remaining open to working with an open, standards-based ecosystem of suppliers, is a crucial part of accelerating progress.  We see Cisco’s introduction of the Evolved Services Platform (ESP) as a leading example of the top-to-bottom portfolio breadth, and the flexibility in how its elements can be adopted by SPs, that will foster faster development of innovative, virtualized services.  ESP provides a unifying framework for integrating capabilities productively while leaving the door open for innovation in all key components of the value chain.  It should prove effective in stimulating uptake of powerful, versatile service delivery platforms across a wide range of application categories, helping operators overcome many of the adoption barriers they have faced when weighing innovations at different points in the virtualized service delivery landscape.  By overcoming those barriers, it becomes an ingredient for real progress and benefits in the broad transformation of SP infrastructures into fully software-driven infrastructures.

For more information, read and download the ACG Research paper: Business Case for Cisco Evolved Services Platform and NFV


More Stories By Deborah Strickland

The articles presented here are blog posts from members of our Service Provider Mobility community. Deborah Strickland is a Web and Social Media Program Manager at Cisco. Follow us on Twitter @CiscoSPMobility.
