
A Rose by Any Other Name - Appliances Are More Than Systems

Once you virtualize an appliance, you have two things – a virtualized appliance AND a virtual computer

One of the majors that Lori's and my oldest son is pursuing is philosophy. I've never been a huge fan of philosophy, but as he and Lori talked, I decided to find out more, and picked up one of The Great Courses on The Philosophy of Science to try to understand where philosophy split off from the hard sciences and became irrelevant or an impediment. I wasn't disappointed, for at some point in the fifties, a philosopher posed the "If you're a chicken, you assume when the farmer comes that he will bring food, so the day he comes with an axe, you are surprised" question. Philosophers know this tale, and to them it disproves everything, for by this argument all empirical data is suspect, and all of our data is empirical at one level or another. At that point, science continued forward, and philosophy got completely lost. The instructor for the class updated the example to "what if the next batch of copper pulled out of the ground doesn't conduct electricity?"

This is where it shows that either (a) I'm a hard scientist, or (b) I'm too slow-witted to hang with the philosophers, because my immediate answer (and the one I still hold today) was "Duh. It wouldn't be called copper." For the Shakespearean lament "that which we call a rose by any other name would smell as sweet" has a corollary: "Any other thing, when called a rose, would not smell as sweet." And that's the truth. If we pulled a metal out of the ground, and it looked like copper but didn't share this property or that property, while philosophers were slapping each other on the back and seeing vindication for years of arguments, scientists would simply declare it a new material and give it a name. Nothing in the world would change.

This is true of appliances too. Once you virtualize an appliance, you have two things – a virtualized appliance AND a virtual computer. This is significant, because while people have learned how many virtual machines can be run on server X given their average and peak loads, the same doesn't yet appear to be true of virtual appliances. I've talked to some IT shops, and seen email exchanges from others, that are throwing virtual appliances – be they a virtualized ADC like BIG-IP LTM VE from F5 or a virtualized Cloud Storage Gateway from someone like Nasuni – onto servers without considering their very special needs as a "computer". In general, you can't treat them as "applications" or "servers", as their resource utilization is certainly very different from your average app server VM. These appliances are built for a special purpose, and both of the ones I just used for reference will use a lot more networking resources than your average server, just being what they are.


When deploying virtualized appliances, think about what the appliance is designed to do, and start with it on a dedicated server. This is non-intuitive, and kind of defeats the purpose, but it is a temporary situation. Note that I said "start with". My reasoning is that the process of virtualizing the appliance changed it, and when it was a physical appliance, you didn't care about its performance profile as long as it did the job. By running it on dedicated hardware, you can evaluate what resources it uses in a pristine environment. Then, when you move it onto a server with multiple virtual machines running, you know what the "best case" is, so you'll know just how much your other VMs are impacting it, and you'll have a head start troubleshooting problems – the resource it used the most on dedicated hardware is the most likely to be your problem in a shared environment.
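The baselining step above can be sketched in a few lines. This is a minimal illustration with made-up utilization figures – in practice the numbers would come from your hypervisor's monitoring tools – but it shows the comparison being described: record per-resource usage on dedicated hardware, compare against the shared-host readings, and treat the appliance's heaviest baseline resource as the prime suspect.

```python
# Hypothetical utilization figures (percent of host capacity) for a
# virtualized appliance, first on dedicated hardware, then on a host
# shared with other VMs. Real values would come from your monitoring.
baseline = {"cpu": 35, "memory": 40, "disk_io": 20, "network": 85}
shared = {"cpu": 45, "memory": 50, "disk_io": 30, "network": 99}

# The resource the appliance leaned on hardest in isolation is the
# first place to look when performance degrades on shared hardware.
prime_suspect = max(baseline, key=baseline.get)

for resource in baseline:
    delta = shared[resource] - baseline[resource]
    flag = "  <-- likely bottleneck" if resource == prime_suspect else ""
    print(f"{resource:>8}: baseline {baseline[resource]}%, "
          f"shared {shared[resource]}% (+{delta}){flag}")
```

For a network-heavy appliance like an ADC, the network line will almost always be the one flagged, which matches the "special purpose" point above.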

Appliances are generally more susceptible to certain resource-sharing scenarios than a general-purpose server is. These devices were designed to perform a specific job and have been optimized to do that job. Putting one on hardware with other VMs – even other instances of the same appliance – can cause its performance to degrade, because the very thing it is optimized for is the resource it needs the most, be it memory, disk, or networking. Even CPU, depending upon what the appliance does, can be a point of high contention between the appliance and whatever other VM is running.

In the end, yes, they are just computers. But you bought them because they were highly specialized computers, and when virtualized, that doesn't change. Give them a chance to strut their stuff on hardware you know, without interference, and only after you've taken their measure on your production network (or a truly equivalent test network, which is rare), start running them on machines with select VMs. Even then, check with your vendor. Plenty of vendors don't recommend running a virtualized appliance that was originally designed for high performance on shared hardware at all. Since doing so against your vendor's advice can create support issues, check with them first, and if you don't like the answer, pressure them either for details of why, or to change their advice. Yes, that includes F5. I don't know the details of our support policy, but both LTM VE and ARX VE are virtualized versions of high-performance systems, so it wouldn't surprise me if our support staff said "first, shut down all other VMs on the hardware..." – but since we have multi-processing on VIPRION, it wouldn't surprise me if they didn't either.

It is no different than any other scenario when it comes down to it: know what you have, and unlike the philosophers, expect it to behave tomorrow the way it does today. Anything else is an error of some kind.


More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
