VCE: Driving the Velocity of Change in Cloud Computing

VCE's new specialized systems are key to de-risking mission-critical application deployments

When you think Cloud, whether Private or Public, one of the key advantages that comes to mind is speed of deployment. All businesses crave the ability to simply go to a service portal, define their infrastructure requirements and immediately have a platform ready for their new application. Coupled with that, you instantly have service level agreements that generally center on uptime and availability. So, for example, instead of being a law firm that spends most of its budget on an in-house IT department and datacenter, the Cloud gives businesses a compelling opportunity to procure infrastructure as a service and focus instead on delivering their key applications. But while the industry's understanding of Cloud Computing and its benefits has matured, so too has the recognition that what's currently being offered still isn't good enough for mission-critical applications. The reality is that there is still a need for a more focused and refined understanding of what the service level agreements should be, and ultimately a more concerted approach towards the applications themselves. So while buzzwords such as speed, agility and flexibility remain synonymous with Cloud Computing, its success and maturity ultimately depend upon a new focal point, namely velocity.

Velocity differs from speed in that it's not just a measure of how fast an object travels but also of the direction in which it moves. For example, in a Public Cloud, whether that be Amazon, Azure or Google, no one can dispute the speed. With only a few clicks you have a ready-made server that can immediately be used for testing and development purposes. But while it may be quick to deploy, how optimised is it for your particular environment, business or application requirements? With only generic order forms on offer, customisation to a particular workload or business requirement is never achieved, as optimisation is sacrificed for the sake of speed. Service levels based on uptime and availability are not an adequate measure or guarantee for the successful deployment of an application. It would be considered ludicrous, for example, to purchase a laptop from a provider that merely guarantees it will remain powered on, even if it performs atrociously.

In the Private Cloud or traditional IT example, while the speed of deployment is not as quick as that of a Public Cloud, there are other scenarios where speed is evident yet fails to produce the results a maturing Cloud market requires. Multiple infrastructure silos can constantly be seen hurrying around, busily firefighting and sustaining a "keeping the lights on" culture, all at rapid speed. Yet while the focus should be on the applications that need to be delivered, the quagmire of the underlying infrastructure persistently takes precedence, with IT admins having to constantly deal with interoperability issues, firmware upgrades, patches and the multiple management panes of numerous components. Moreover, service offerings such as Gold, Silver, Bronze or Platinum are more often than not centered on infrastructure metrics such as number of vCPUs, storage RAID type, memory and so on, instead of application response times that are predictable and scalable to the end user's stipulated demands.
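To make that contrast concrete, here is a minimal sketch in Python; the tier names, metrics and thresholds are purely illustrative assumptions rather than any provider's actual catalogue. It shows the difference between a tier defined by infrastructure quantities and one defined by the application behaviour the end user actually experiences.

from dataclasses import dataclass

@dataclass
class InfraTier:
    """A tier defined purely by infrastructure metrics - it says nothing about the application."""
    name: str
    vcpus: int
    memory_gb: int
    raid_level: str

@dataclass
class AppTier:
    """A tier defined by the behaviour the end user actually experiences."""
    name: str
    max_response_ms: float    # target application response time
    min_throughput_tps: int   # transactions per second the application must sustain

def meets_app_tier(tier: AppTier, measured_response_ms: float, measured_tps: int) -> bool:
    """Check measured application behaviour against the application-level service target."""
    return (measured_response_ms <= tier.max_response_ms
            and measured_tps >= tier.min_throughput_tps)

# "Gold" as most catalogues define it today: resource quantities only (illustrative values).
gold_infra = InfraTier(name="Gold", vcpus=8, memory_gb=64, raid_level="RAID 10")

# "Gold" as argued for above: a predictable application response commitment (illustrative values).
gold_app = AppTier(name="Gold", max_response_ms=200.0, min_throughput_tps=500)

print(meets_app_tier(gold_app, measured_response_ms=180.0, measured_tps=650))  # True

The point of the sketch is simply that the second definition can be verified against what the application actually delivers, which is what a velocity-oriented service level would have to target.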

For Cloud to embrace the concept of velocity, the consequence would be a focused and rigorous approach aimed solely at the successful deployment of applications that, in turn, enable the business to generate revenue quickly. Every piece of the jigsaw that goes into attaining that quick and focused approach would require the mentality of velocity to be adopted comprehensively by each silo of the infrastructure team, while working in cohesion with the application team to deliver value to the business. This approach would also entail a focused methodology for application optimisation and, consequently, a service level that measures and targets success based on application performance as opposed to just uptime and availability.

While some Cloud and service providers may claim that they already work in unison with a focus on applications, behind the scenes this is rarely the case, as they too are caught in the challenge of traditional build-it-yourself IT. Indeed, it's well known that some Cloud hosting providers dupe their end users with pseudo service portals that give only the impression of an automated procedure for deploying infrastructure. Much closer to the truth are service portals that merely populate a PDF of the requirements, which is then printed out and sent to an offshore admin who in turn provisions the VM as quickly as possible. Additionally, it's more than likely that your Private Cloud or service provider runs a multi-tenant infrastructure with mixed workloads, sitting behind the scenes as logical pools ready to be carved up for your future requirements. While this works for the majority of workloads and SMB applications, businesses looking to place more critical and demanding applications into their Private Cloud to attain the benefits of chargeback and the like need an assurance of application response time that is almost impossible to guarantee on a mixed-workload infrastructure. As the Cloud market matures and the expectations around application delivery and performance grow with it, such procedures and practices will only be suitable for certain markets and workloads.

So for velocity to take precedence within the Private Cloud, Cloud or even Infrastructure as a Service model, and to fill this Cloud maturity void, infrastructure needs to be delivered with applications as its focal point. That means a pre-integrated, pre-validated, pre-installed and application-certified appliance that is standardized as a product and optimised to meet scalable demands and performance requirements. This is why the industry will soon start to see the emergence of specialized systems specifically designed and built from inception for performance optimisation of specific application workloads. With applications pre-installed, certified and configured, and with the application and infrastructure vendors working in cohesion, the ability for Private Cloud or service providers to predict, meet and propose application-performance-based service levels becomes far more feasible. Such an approach would also be ideal for end users who simply need a critical application rolled out immediately in house with minimum fuss and risk.

While a number of such appliances or specialized systems will emerge in the market for applications such as SAP HANA or Cisco Unified Communications, the key is to ensure that they're standardized as well as optimised. This entails a converged infrastructure that ships as a single product and consequently has a single upgrade matrix covering all of its component patches and firmware upgrades, which in turn corresponds with the application. It also encompasses a single support model that covers not only the infrastructure but also the application. This not only eliminates vendor finger-pointing and prolonged troubleshooting but also acts as an assurance that responsibility for the application's performance is paramount, regardless of the potential cause of the problem.

The demand for key applications to be monitored, optimised and rolled out with speed and velocity will be faced not only by service providers and Private Cloud deployments but also by internal IT departments struggling with their day-to-day firefighting exercises. To ensure success, IT admins will need a new breed of infrastructure, or specialized systems, that enables them to focus on delivering, optimising and managing the application without needing to worry about the infrastructure that supports it. This is where the new Vblock specialized systems being offered by VCE come into play. Unlike other companies with huge portfolios of products, VCE have a single focal point, namely Vblocks. By adopting the same approach of velocity that was instilled for the production of standardized Vblock models, end users can now reap the same rewards with new specialized systems that are application-specific. Herein lies the key to Cloud maturity and ultimately the successful deployment of mission-critical applications.

More Stories By Archie Hendryx

SAN, NAS, Backup/Recovery & Virtualisation Specialist.
