2010 is the Year of Federated Cloud Computing

Also known as the hybrid cloud, federation has been around since cloud computing began

In this first post of 2010, I’d like to look at one of the most important cloud issues that enterprises want to tackle: federation in the cloud — across clouds and between the cloud and the data center. Also known as the hybrid cloud, federation has been around since cloud computing began, but as a long-term vision rather than a working solution. This year that gap is going to close.

What Is Cloud Federation?

Federation brings together different cloud flavors and internal resources so companies can select a computing environment on demand that makes sense for a particular workload. It opens the door to a range of useful scenarios that take advantage of cloud capabilities:

  • Using multiple clouds for different applications to match business needs. For example, Amazon AWS or Rackspace could be used for applications that need large horizontal scale, and Savvis or Terremark for applications that need stronger SLAs and higher security. An internal cloud is another federation option for applications that need to live behind the corporate firewall.
  • Allocating different elements of an application to different environments, whether internal or external. For example, an application could run in a cloud while accessing data stored internally as a security precaution. (We call this concept “application stretching.”)
  • Moving an application to meet requirements at different stages in its lifecycle, whether between public clouds or back to the data center. For example, Amazon or Terremark's vCloud Express could be used for development, and when the application is ready for production it could move to Terremark's Enterprise Cloud or a similar cloud. This is also important as applications move towards the end of their lifecycle, when they can be moved to lower-cost cloud infrastructure as their importance and duty-cycle patterns diminish (a simple placement sketch follows this list).
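As a rough illustration of how these placement decisions might be expressed, here is a minimal sketch that assumes invented cloud profiles and requirement flags; nothing in it is a real provider API or a CloudSwitch interface. The policy filters candidate clouds by a workload's constraints and lets development or end-of-life workloads drift toward cheaper capacity.

```python
# Illustrative only: a toy placement policy that picks a cloud for a workload
# based on its lifecycle stage and requirements. All cloud names and attribute
# fields are hypothetical placeholders, not a CloudSwitch API.

from dataclasses import dataclass

@dataclass
class CloudProfile:
    name: str
    horizontal_scale: bool   # elastic capacity for large scale-out
    strong_sla: bool         # enterprise-grade SLA and security
    cost_tier: int           # 1 = cheapest, 3 = most expensive
    behind_firewall: bool    # internal cloud inside the corporate network

CLOUDS = [
    CloudProfile("public-scale-out", horizontal_scale=True,  strong_sla=False, cost_tier=1, behind_firewall=False),
    CloudProfile("enterprise-cloud", horizontal_scale=False, strong_sla=True,  cost_tier=3, behind_firewall=False),
    CloudProfile("internal-cloud",   horizontal_scale=False, strong_sla=True,  cost_tier=2, behind_firewall=True),
]

def place(stage: str, needs_scale: bool, needs_sla: bool, must_stay_internal: bool) -> CloudProfile:
    """Return the first cloud whose profile satisfies the workload's constraints."""
    candidates = [c for c in CLOUDS if not must_stay_internal or c.behind_firewall]
    if needs_sla:
        candidates = [c for c in candidates if c.strong_sla]
    if needs_scale:
        candidates = [c for c in candidates if c.horizontal_scale] or candidates
    # Late-lifecycle and development workloads drift toward the cheapest option.
    if stage in ("development", "end-of-life"):
        candidates.sort(key=lambda c: c.cost_tier)
    if not candidates:
        raise ValueError("no cloud satisfies these constraints")
    return candidates[0]

print(place(stage="production", needs_scale=False, needs_sla=True, must_stay_internal=False).name)
print(place(stage="end-of-life", needs_scale=False, needs_sla=False, must_stay_internal=False).name)
```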

Enterprise users don’t typically talk about federation per se; they speak in terms of application-specific and general business requirements. While some applications will always belong in the data center, enterprises may have others (possibly hundreds) that could run more cost-effectively in the right cloud. Our customers and prospects tell us that they would love to take advantage of different clouds to get the computing performance they need, along with the desired service levels, scalability, security and price points. And since their cloud requirements aren’t yet clear, and cloud services are still in their early stages and will continue to evolve, they want the ability to pick up their applications if necessary and move them to other clouds or back to the data center with minimal effort.

The problem is that the cloud is not a homogeneous entity, but covers a broad landscape of computing environments, with no consistency between any of them or with the enterprise data center. Federation is the missing link, providing a structure that bridges these disparate environments so enterprise cloud computing can become as seamless and straightforward as it needs to be. Let’s examine some of the key issues and see what CloudSwitch is doing to make federation work.

Bridging the Differences

An application should be able to run “as is” in any cloud with the resources to support it. But each cloud has its own server platforms, operating system versions, APIs, network settings, storage options—a whole landscape of varied characteristics. Without federation, each cloud deployment becomes a custom “one-off” exercise to meet the requirements of a particular cloud environment. That’s not acceptable internally, and companies are now demanding the ability to leverage different clouds without the underlying engineering efforts required to make it happen.

A unique capability of CloudSwitch is its ability to integrate at the infrastructure level between the data center and different clouds. We sit in the middle of all of these resources and automatically bridge the differences, regardless of variations in virtualization platforms, operating systems, APIs, storage infrastructures or other characteristics of the different clouds. Both internal and cloud resources appear as if they’re running locally, using a common interface spanning multiple clouds and the local environment.
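One common way to picture this kind of bridging is an adapter layer: every environment implements the same small interface, and everything above it stays unaware of which cloud, or which data center platform, sits underneath. The sketch below is purely illustrative; the driver classes and method names are invented for the example and do not describe CloudSwitch's internals.

```python
# Illustrative adapter sketch (not CloudSwitch's implementation): a single
# interface hides provider-specific APIs so the rest of the tooling can treat
# every cloud, and the local data center, the same way. Provider classes and
# method names here are invented for the example.

from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Common operations every environment must expose."""

    @abstractmethod
    def launch_server(self, image: str, size: str) -> str: ...

    @abstractmethod
    def attach_storage(self, server_id: str, gigabytes: int) -> None: ...

class PublicCloudDriver(CloudDriver):
    def launch_server(self, image: str, size: str) -> str:
        # Would call the provider's REST API here.
        return f"public:{image}:{size}"

    def attach_storage(self, server_id: str, gigabytes: int) -> None:
        print(f"attached {gigabytes} GB of block storage to {server_id}")

class DataCenterDriver(CloudDriver):
    def launch_server(self, image: str, size: str) -> str:
        # Would talk to the local virtualization platform here.
        return f"local:{image}:{size}"

    def attach_storage(self, server_id: str, gigabytes: int) -> None:
        print(f"carved {gigabytes} GB from the local SAN for {server_id}")

def deploy(driver: CloudDriver) -> None:
    """The caller never knows (or cares) which environment is underneath."""
    server = driver.launch_server(image="base-image", size="medium")
    driver.attach_storage(server, gigabytes=100)

deploy(PublicCloudDriver())
deploy(DataCenterDriver())
```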

Setting Consistent Rules

Rules and permissions about what employees can do in the cloud must be consistent with those in the data center. Role-based controls are required, for example, to enable a particular individual or group to create servers but not to delete or modify them. However, in these early days of cloud computing, the standard procedure is to allow cloud users access to the cloud credentials; essentially every user has full control and access to the cloud resources. This not only causes control issues but makes auditing and problem resolution difficult since it is unclear who is responsible for any particular action.

CloudSwitch solves this problem by holding the credentials for external clouds and serving as the gateway to cloud accounts. Rather than users accessing their accounts directly, they interact with the cloud through the gateway, which consolidates permissions for all users and multiple clouds for management by an administrator. The approach provides consistent policies governing user and management roles, whether internal or external.  
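To make the gateway idea concrete, here is a minimal sketch of role-based mediation in front of a cloud account. The roles, actions and credential store are hypothetical placeholders, not CloudSwitch's actual model; the point is simply that users never touch the provider credentials directly, and every action is both checked and attributable.

```python
# Hypothetical sketch of a credential-holding gateway: it keeps the cloud
# credentials to itself and checks a user's role before performing an action
# on their behalf. Role names, actions and the credential store are invented.

ROLE_PERMISSIONS = {
    "developer": {"create_server"},
    "operator":  {"create_server", "modify_server"},
    "admin":     {"create_server", "modify_server", "delete_server"},
}

class CloudGateway:
    def __init__(self, cloud_credentials: dict):
        # Users never see these; only the gateway talks to the cloud account.
        self._credentials = cloud_credentials

    def perform(self, user: str, role: str, action: str, server: str) -> str:
        if action not in ROLE_PERMISSIONS.get(role, set()):
            raise PermissionError(f"{user} ({role}) may not {action}")
        # With the check passed, the gateway signs the request with its own
        # credentials and records who asked for it, giving a clean audit trail.
        return f"{action} on {server} executed for {user} (audited)"

gateway = CloudGateway(cloud_credentials={"api_key": "held-by-gateway-only"})
print(gateway.perform("alice", "developer", "create_server", "web-01"))
try:
    gateway.perform("alice", "developer", "delete_server", "web-01")
except PermissionError as err:
    print(err)
```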

Streamlining Cloud Management

Federation also means that administrators should be able to manage applications running in one or more clouds as if they were running locally, using their familiar tools and processes for application lifecycle management, monitoring, compliance management, etc. But cloud computing involves a wide assortment of isolated environments to keep track of and manage. Adding to the complexity, cloud providers often have their own management tools that users or administrators need to learn, all different from each other and from what enterprises have internally.

CloudSwitch keeps things simple for the enterprise by replicating the existing IT infrastructure and mapping it to the target cloud. The approach allows current management tools to work seamlessly in the cloud, just as if an application were running locally. Using consistent tools and policies, applications and resources can be managed with the same flexibility, security and control regardless of their location.
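As an illustration of the mapping idea, a federation layer might translate a locally defined server into the closest equivalent the target cloud offers, so the original definition, and the tooling built around it, stays unchanged. The instance types and sizes below are made up for the sketch and are not CloudSwitch's actual catalog.

```python
# Illustrative only: mapping a server definition from the local environment to
# the closest equivalent in a target cloud. Instance names, sizes and the
# local spec are invented for the example.

LOCAL_SPEC = {"name": "erp-app-01", "vcpus": 4, "memory_gb": 8, "disk_gb": 200}

# Hypothetical catalog of what the target cloud offers, ordered small to large.
CLOUD_CATALOG = [
    {"type": "small",  "vcpus": 1, "memory_gb": 2},
    {"type": "medium", "vcpus": 2, "memory_gb": 4},
    {"type": "large",  "vcpus": 4, "memory_gb": 8},
    {"type": "xlarge", "vcpus": 8, "memory_gb": 16},
]

def map_to_cloud(spec: dict) -> dict:
    """Pick the smallest instance type that covers the local server's resources."""
    for offer in CLOUD_CATALOG:
        if offer["vcpus"] >= spec["vcpus"] and offer["memory_gb"] >= spec["memory_gb"]:
            return {
                "name": spec["name"],              # keep the local identity
                "instance_type": offer["type"],
                "extra_disk_gb": spec["disk_gb"],  # attached as block storage
            }
    raise ValueError("no instance type is large enough")

print(map_to_cloud(LOCAL_SPEC))
```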

Bringing the Vision to Life

Federation is required for cloud computing to be successful, particularly as computing needs continue to expand. Enterprise users want to take advantage of all the capabilities available in the cloud, but without the complexity or risk. The ability to federate this heterogeneous ecosystem—to create a uniform environment spanning external and internal clouds—is going to allow IT organizations to meet user and corporate needs with an agility and economy not previously possible. CloudSwitch is part of an emerging ecosystem that’s making federated cloud a reality.


More Stories By Ellen Rubin

Ellen Rubin is the CEO and co-founder of ClearSky Data, an enterprise storage company that recently raised $27 million in a Series B investment round. She is an experienced entrepreneur with a record in leading strategy, market positioning and go-to-market efforts for fast-growing companies. Most recently, she was co-founder of CloudSwitch, a cloud enablement software company, acquired by Verizon in 2011. Prior to founding CloudSwitch, Ellen was the vice president of marketing at Netezza, where as a member of the early management team, she helped grow the company to more than $130 million in revenues and a successful IPO in 2007. Ellen holds an MBA from Harvard Business School and an undergraduate degree magna cum laude from Harvard University.
