Virtualization for Deeply Embedded Applications

Virtualization has penetrated far into the enterprise; now it has begun its march into portable electronics:

In networking applications, which primarily use multi-core devices, virtualization offers considerable advantages. For example, it allows far more efficient load balancing, since virtual machines, and their hosted processes, can be moved from core to core dynamically as conditions change. The same mechanism can drive power savings: processing can be consolidated onto fewer cores during low-traffic periods and the unused cores shut down. Higher uptime is also possible, because updated firmware can be downloaded in the background, the new image validated, and processes migrated to it, all without taking the system offline. In systems that must support many different firmware versions, this capability is enormously compelling.
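
To make the consolidation idea concrete, here is a minimal sketch in C. The hypervisor calls (hv_vm_migrate, hv_core_offline) are hypothetical stand-ins modeled as simple stubs over an in-memory table; this is not any vendor's actual API, just an illustration of the logic.

```c
/* Minimal, self-contained sketch of VM consolidation for power savings.
 * The "hypervisor" here is just an in-memory model; hv_vm_migrate() and
 * hv_core_offline() are illustrative stand-ins, not a real API. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_CORES 4
#define NUM_VMS   6

static int  vm_core[NUM_VMS] = { 0, 1, 1, 2, 3, 3 };     /* which core hosts each VM */
static bool core_online[NUM_CORES] = { true, true, true, true };

static void hv_vm_migrate(int vm, int dst_core) { vm_core[vm] = dst_core; }
static void hv_core_offline(int core)           { core_online[core] = false; }

/* During a low-traffic period, drain cores 1..N-1 onto core 0 and
 * power the emptied cores down. */
static void consolidate_for_low_traffic(void)
{
    for (int core = 1; core < NUM_CORES; core++) {
        for (int vm = 0; vm < NUM_VMS; vm++)
            if (vm_core[vm] == core)
                hv_vm_migrate(vm, 0);
        hv_core_offline(core);
    }
}

int main(void)
{
    consolidate_for_low_traffic();
    for (int vm = 0; vm < NUM_VMS; vm++)
        printf("VM %d -> core %d\n", vm, vm_core[vm]);
    for (int core = 0; core < NUM_CORES; core++)
        printf("core %d is %s\n", core, core_online[core] ? "online" : "offline");
    return 0;
}
```

Running the sketch shows every VM ending up on core 0 with cores 1-3 powered off, which is the consolidation a real hypervisor would perform during a quiet period and then reverse as traffic returns.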
 
In highly secure environments it is now possible to add a secure processing element to an SOC without requiring a separate security processor. The Payment Card Industry PIN Entry Device (PCI-PED) certification imposes an extremely rigorous set of requirements on manufacturers, particularly around separating the user interface from the PIN entry device. With virtualization, what previously required two devices can now be accomplished with a single physical device: a hypervisor hosts multiple secure execution environments, one for the user interface and one for PIN entry.
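
As a rough sketch of what that single-device split could look like, the partition table below describes two isolated guests sharing one SOC. The vm_config_t structure and its field names are purely illustrative, not the PCI-PED requirements or any shipping hypervisor's configuration format.

```c
/* Illustrative two-domain partition table: one rich UI guest and one
 * minimal, secure PIN-entry guest on a single SOC. All names, addresses
 * and fields are hypothetical. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *name;
    uint32_t    ram_base;   /* physical RAM window owned by the guest     */
    uint32_t    ram_size;
    uint32_t    irq_mask;   /* interrupts the guest is allowed to receive */
    int         secure;     /* 1 = belongs to the secure PIN-entry domain */
} vm_config_t;

static const vm_config_t partitions[] = {
    { "ui",        0x80000000u, 0x04000000u, 0x0000FFF0u, 0 },
    { "pin_entry", 0x84000000u, 0x00400000u, 0x0000000Fu, 1 },
};

int main(void)
{
    for (unsigned i = 0; i < sizeof(partitions) / sizeof(partitions[0]); i++)
        printf("%-10s base=0x%08x size=0x%08x secure=%d\n",
               partitions[i].name,
               (unsigned)partitions[i].ram_base,
               (unsigned)partitions[i].ram_size,
               partitions[i].secure);
    return 0;
}
```

The intent of the split is that the PIN-entry guest owns only its own RAM window and its few interrupts (keypad, crypto), so the much larger UI stack has no path into it even though both run on the same physical device.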
 
In applications where there is concern about how best to preserve proprietary IP while still getting the benefit of open source code released under the GPL, virtualization provides a way of isolating those two domains. Integrate GPL code directly with your proprietary IP and, under the terms of the license, you have to release the full source. With virtualization it is now possible to compartmentalize the GPL code and control how much proprietary code must be released to the public. (http://www.trango-vp.com/dynamic/front_downloadFile.php?fileName=TGO-TEC-0340-TRANGO_GPL.pdf, registration required)
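
The sketch below illustrates that compartmentalization as a narrow, well-defined boundary between a GPL-licensed guest and a proprietary guest. The message format and the hv_channel_* calls are hypothetical stand-ins for a hypervisor-provided shared-memory channel, not an actual interface.

```c
/* Illustrative sketch of the narrow boundary between a GPL guest and a
 * proprietary guest: only the message format and channel primitive cross
 * the line. hv_channel_send/recv are hypothetical stand-ins. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

typedef struct {
    uint32_t opcode;       /* e.g. 1 = DECODE_FRAME (illustrative)        */
    uint32_t payload_len;
    uint8_t  payload[256];
} ipc_msg_t;

/* Stand-in for a hypervisor-provided mailbox; in a real system this would
 * be a hypercall or a mapped shared buffer, not a local copy. */
static ipc_msg_t mailbox;

static void hv_channel_send(const ipc_msg_t *msg) { mailbox = *msg; }
static void hv_channel_recv(ipc_msg_t *msg)       { *msg = mailbox; }

int main(void)
{
    /* GPL-licensed guest side: build a request and hand it across. */
    ipc_msg_t req = { .opcode = 1, .payload_len = 5 };
    memcpy(req.payload, "hello", 5);
    hv_channel_send(&req);

    /* Proprietary guest side: receive the request and act on it privately. */
    ipc_msg_t in;
    hv_channel_recv(&in);
    printf("proprietary guest received opcode %u, %u bytes\n",
           (unsigned)in.opcode, (unsigned)in.payload_len);
    return 0;
}
```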
 
Key Criteria in Selecting a Hypervisor
There are numerous ways of creating virtual machines for embedded applications. Assigning a name to a particular approach does little by itself to illuminate the critical issues, but it is important to understand the foundation on which a product design is undertaken, as it quite often has a substantial impact on the design's final character.
 
We've labeled the most typical approaches to virtualization that we run across in our day-to-day work as microscheduler, microkernel, and 'nanokernel' (I'll explain the quotes later). After a quick once-over of each approach I'll try to focus on key attributes that customers should be aware of.
 
In a microkernel, an OS kernel is stripped down to its bare essence by removing any service that is not strictly required for the microkernel itself to run. This leaves thread management, interprocess communication, scheduling, and address-space management. Hooks and catches are then put in place that allow designers to add the removed services back at user level. What this means in practice is that the user mode/kernel mode separation is maintained, so a high level of security and robustness is achieved. But, due to the nature of the originating kernel architecture, there are architectural preferences in the choice of hosted OS; in other words, a Linux-derived microkernel will have an affinity for hosting Linux as a guest OS.
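
As a rough illustration of that split, the sketch below models a user-mode service built on nothing but a kernel-supplied IPC primitive; k_ipc_recv and k_ipc_reply are hypothetical stand-ins, not any particular microkernel's API.

```c
/* Illustrative microkernel boundary: IPC stays in the kernel, and a
 * service such as a disk driver is an ordinary user-mode process in a
 * receive/handle/reply loop. The k_ipc_* calls are hypothetical stubs. */
#include <stdio.h>

typedef struct { int sender; char data[64]; } msg_t;

static msg_t pending = { 1, "read sector 42" };   /* a queued client request */

static int k_ipc_recv(msg_t *m)                { *m = pending; return 0; }
static int k_ipc_reply(int to, const char *s)  { printf("reply to %d: %s\n", to, s); return 0; }

/* One iteration of the user-mode driver's receive/handle/reply loop. */
int main(void)
{
    msg_t m;
    if (k_ipc_recv(&m) == 0) {
        printf("user-mode driver handling: %s\n", m.data);
        k_ipc_reply(m.sender, "done");
    }
    return 0;
}
```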
 
A microscheduler is closely related to a microkernel: as with a microkernel, the scheduling portion itself runs in kernel mode, the highest privilege level of the system, but guest operating systems are also allowed to run at that same extremely high privilege level. What this means in practice is that a guest operating system must be well behaved from both a performance and a security perspective. This partially eliminates one of the key strengths of virtualization: security. Robustness is also compromised, since a crash in a privileged guest OS or application can still do extensive damage; it is effectively running on "bare metal" and able to bypass the protections available in a fully virtualized processing environment.
 
Another approach to creating a hypervisor is to build a hardware abstraction layer, or HAL, and add services such as time management, memory management, and interprocess communication to turn it into a useful hypervisor. "Nanokernel" is a term I use with some fear and trepidation, as the word seems to have been coined more to separate modern, streamlined microkernel implementations from first-generation implementations such as Mach. While the term may be imprecise, it will have to do until a better way of describing this approach comes along; "HAL-like" really doesn't do it justice, and, full disclosure, this is the approach that Trango subscribes to. The key practical difference between this approach and that of typical microkernels is this: because the HAL is derived from the underlying SOC, rather than from an OS port that just happened to target that SOC, the hypervisor is typically thinner and lighter, and less "picky" about the specific details of a hosted OS. In other words, it tends to be more OS agnostic and a better reflection of the underlying hardware.
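
To illustrate the layering, here is a minimal sketch of what such an SOC-derived HAL might expose, with one hypervisor service built on top of it. The soc_hal_t interface and its functions are hypothetical and simplified; they are not Trango's actual design.

```c
/* Hypothetical, simplified HAL for one SOC port: only what the hardware
 * needs is here, and hypervisor services (scheduling, memory management,
 * IPC) are layered above it. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    void (*context_switch)(int from_vm, int to_vm);
    void (*map_guest_page)(int vm, uint32_t guest_pa, uint32_t host_pa);
    void (*route_irq)(int irq, int vm);
} soc_hal_t;

/* Stub implementations standing in for one particular SOC port. */
static void my_soc_context_switch(int from_vm, int to_vm)
{ printf("switch VM %d -> VM %d\n", from_vm, to_vm); }
static void my_soc_map_guest_page(int vm, uint32_t gpa, uint32_t hpa)
{ printf("VM %d: map 0x%08x -> 0x%08x\n", vm, (unsigned)gpa, (unsigned)hpa); }
static void my_soc_route_irq(int irq, int vm)
{ printf("IRQ %d routed to VM %d\n", irq, vm); }

static const soc_hal_t hal = {
    my_soc_context_switch, my_soc_map_guest_page, my_soc_route_irq
};

/* A hypervisor service layered on the HAL: a trivial round-robin tick. */
static void scheduler_tick(int num_vms)
{
    static int current = 0;
    int next = (current + 1) % num_vms;
    hal.context_switch(current, next);
    current = next;
}

int main(void)
{
    hal.route_irq(10, 0);
    hal.map_guest_page(0, 0x00100000u, 0x80100000u);
    scheduler_tick(2);
    scheduler_tick(2);
    return 0;
}
```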
 
The good news is that there are lots of good choices out there, and the technology has enormous capabilities. It's all a matter of looking at the CPU as one of many virtual devices rather than as unitary and fixed, and of keeping an eye out for applications of embedded device programming's newest tool.
 

More Stories By Frank Altschuler

Frank Altschuler is in charge of marketing for Trango Virtual Processors, a leading provider of embedded virtualization IP. He recently joined Trango from Newisys, where he was in charge of marketing for their x86 scaling solutions. He has previously held marketing positions at Starcore LLC, a DSP intellectual property firm, and Cirrus Logic, a fabless semiconductor company. Before moving into marketing, Altschuler spent 15 years in engineering design and development in areas such as communications and electro-optics.
He earned a bachelor's degree in electrical engineering from North Carolina State University. For more information on Trango Virtual Processors, please visit http://www.trango-vp.com or email [email protected].
