Wasting Cache

Top 10 mistakes to avoid with virtual desktops

It almost sounds like I'm talking about personal finances. You better plan your cache appropriately or you will run out. I'm not talking about money; I'm talking about system memory (although if you plan poorly we will quickly be talking about money).

It comes down to this… system cache is a powerful feature that lets a server service requests extremely fast because blocks of data are retrieved from RAM instead of from disk. Provisioning services relies on fast access to the blocks within the disk image (vDisk) to stream to the target devices; the faster the requests are serviced, the faster the targets receive their data. Allocating the largest possible system cache should allow Provisioning services to keep more of the vDisk in RAM instead of going to the physical disk.

Not planning system cache appropriately is the eighth mistake covered in this countdown of the top 10 mistakes made when deploying virtual desktops. The earlier installments:

10. Not calculating user bandwidth requirements
9. Not considering the user profile
8. Lack of an application virtualization strategy
7. Improper resource allocation
6. Protection from anti-virus
5. Managing the incoming storm
4. Not optimizing the desktop image

Unfortunately, many environments are not configured optimally. Simply adding RAM to a Provisioning services server is not enough; the system must be configured appropriately.

Operating System: The operating system plays a large role in how large the system cache can become.
  • Windows Server 2003/2008 x86 (32-bit): 860 MB
  • Windows Server 2003/2008/2008 R2 x64: 1 TB

Because a 64-bit operating system can have a much larger system cache, a larger portion of the vDisk can be stored in RAM, which is why a 64-bit operating system is recommended.

Windows Server 2008 is also recommended over Windows Server 2003 because of improvements in its memory manager subsystem. These figures come from the Microsoft Memory Limits for Windows Releases page.

RAM: 8-32 GB

The more RAM allocated to the server, the larger the system cache can become, and a larger cache means vDisk reads will be faster. The more vDisks you host, the more RAM you will need. A quick estimate is to plan for 2 GB of RAM/cache for each vDisk you will host. If you want more details, I recommend the great article Advanced Memory and Storage Considerations for Provisioning Services by Dan Allen (Sr. Architect at Citrix), which goes into the details of how Windows deals with cache.
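To make the rule of thumb concrete, here is a minimal sizing sketch in Python. It uses the 2 GB-per-vDisk estimate and the per-OS cache ceilings listed above; the 2 GB operating system reservation and the function name are illustrative assumptions, not figures from Citrix.

# Rough Provisioning services server RAM estimate from the rule of thumb above.
# Assumption (not from Citrix): reserve ~2 GB for the OS and PVS services.

CACHE_CEILING_GB = {
    "x86": 0.86,    # Windows Server 2003/2008 32-bit: ~860 MB of system cache
    "x64": 1024.0,  # Windows Server 2003/2008/2008 R2 64-bit: up to 1 TB
}

def estimate_pvs_ram_gb(vdisk_count, arch="x64", os_reserve_gb=2.0):
    """Estimate server RAM: ~2 GB of cache per hosted vDisk plus OS overhead."""
    cache_wanted = 2.0 * vdisk_count
    # A 32-bit OS cannot cache beyond its ceiling; RAM above it is wasted cache.
    usable_cache = min(cache_wanted, CACHE_CEILING_GB[arch])
    return usable_cache + os_reserve_gb

# Example: a server streaming 6 vDisks on 64-bit Windows needs roughly 14 GB.
print(f"Plan for about {estimate_pvs_ram_gb(6):.0f} GB of RAM")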

vDisk Storage: The vDisk can be stored on just about any type of storage (iSCSI, Fibre Channel, local disk, NFS, CIFS, etc.). However, there are a few instances where the storage selected has an impact on how the Provisioning services server's operating system caches the vDisk blocks (see the sketch after this list):
  1. Network drive: If the Provisioning services server sees the vDisk drive as a network drive via a UNC path, the server will not cache the file.
  2. CIFS share: If the storage infrastructure is a network CIFS share, Provisioning services can still cache the vDisk in memory, provided the appropriate settings are configured.
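As a quick sanity check on case 1, you can test whether a vDisk store path is a UNC path. This is a minimal sketch; the helper name and sample paths are hypothetical, and the test is simply the standard leading double-backslash convention for UNC paths, not a Citrix API.

def is_unc_path(path):
    r"""Return True if the path is a UNC network path (\\server\share\...)."""
    return path.startswith("\\\\")

# Hypothetical stores: a local drive (cacheable) and a UNC path (not cached).
for store in [r"D:\vDisks\win7gold.vhd", r"\\filer01\vdisks\win7gold.vhd"]:
    verdict = "UNC path - Windows will not cache it" if is_unc_path(store) \
        else "local/mapped drive - eligible for the system cache"
    print(f"{store}: {verdict}")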
Optimizations: In Windows Server 2003, the large system cache must be enabled by configuring the server's performance options.

In Windows Server 2008, this setting is not required because of enhancements in the memory allocation system. Windows Server 2008 uses dynamic kernel memory assignment, reallocating portions of memory on the fly, whereas previous versions fixed these values at startup. As Windows Server 2008 requires more system cache, the operating system allocates it dynamically.
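On Windows Server 2003, the performance option above corresponds to the well-known LargeSystemCache value under the Memory Management registry key. Below is a minimal sketch for checking it from Python on the server itself; the script and its output wording are illustrative. Changing the value (as opposed to reading it) requires administrative rights and, as an assumption worth verifying, a reboot to take effect.

import winreg  # Windows-only standard library module

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

def large_system_cache_enabled():
    """Return True if LargeSystemCache is 1 (favor the system cache over apps)."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _value_type = winreg.QueryValueEx(key, "LargeSystemCache")
        return value == 1

if __name__ == "__main__":
    state = "enabled" if large_system_cache_enabled() else "disabled"
    print(f"Large system cache is {state} on this server")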

More Stories By Daniel Feller

Daniel Feller, Lead Architect of Worldwide Consulting Solutions for Citrix, is responsible for providing enterprise-level architectures and recommendations for those interested in desktop virtualization and VDI. He is charged with helping organizations architect the next-generation desktop, including all flavors of desktop virtualization (hosted shared desktops, hosted VM-based desktops, hosted blade PC desktops, local streamed desktops, and local VM-based desktops). Many of these desktop virtualization architecture decisions also focus on client hypervisors and application virtualization.

In his role, Daniel has provided insights and recommendations to many of the world's largest organizations.

In addition to private, customer-related work, Daniel's public initiatives include the creation of best practices, design recommendations, reference architectures, and training initiatives focused on core desktop virtualization concepts. You can reach or follow Daniel via Twitter and on the Virtualize My Desktop site.
