Reducing TCO Through Mainframe Resource Optimization

And meet the demands of customers and the business

Ordering additional mainframe hardware was once a regular, accepted part of the budget cycle. This process made capacity planning a far less challenging task than it is today. Performance problems, regardless of the cause, were easily addressed by adding more hardware. Performance analysts and capacity planners were able to deal with performance issues with little concern about the cost.

Economic uncertainty and recent world events have changed this paradigm. Now that every dollar in the information systems budget must yield a maximum return on investment, hardware upgrades are delayed as long as possible. In today's world, simply adding new hardware is not the most efficient or cost-effective way to manage performance problems. At the same time, reductions in staff from downsizing and the retirement of experienced mainframe technicians are causing mainframe technical expertise to diminish. These issues make performance management that much more of a challenge.

The demand for continuous systems availability and reliability is increasing exponentially. What was once a reasonably controlled user population has expanded to everyone with an Internet connection. Web-enabled legacy applications are causing transaction volumes to explode, putting a greater strain on IT resources.

"Do more with less," is the mantra, but what is the best way to accomplish this while providing the required service and performance? While hardware costs are dropping, software and people costs are increasing. As the total cost of ownership (TCO) rises, each business transaction becomes more costly. One of your many challenges is to control costs while meeting service level objectives. In today's world, simply adding new hardware is not the most efficient or cost-effective way to manage performance problems. Contrastingly, TCO can be best reduced by optimizing your existing resources, improving application performance, and deferring costly CPU upgrades.

The Old Ways Aren't Enough
The traditional methods of dealing with performance issues are seldom adequate in today's environment. Many system programmers and performance specialists have learned to work around performance issues.

Not too long ago, well-defined batch and online processing windows made it possible to change processing times in order to take advantage of well-known periods of low activity (valleys), where resources were more plentiful. Today, while batch is still a key workload, online processing occurs 24/7, turning the picture of yesterday's peaks and valleys into a plateau of near-constant demand. Online applications are the priority workloads day and night. Deferring work is not an option, and moving it can be a risky proposition without a way to test the impact.

Because of this, many companies looked to migrate work to distributed systems (DS) such as UNIX and Windows, but the costs of rewriting applications often proved prohibitive. In addition, three-tier environments were heralded as the "next new thing," but many companies found they lacked the cross-system expertise to manage such an enterprise.

Adding to the challenge, hardware upgrades and tweaking system parameters often resulted in smaller performance improvements than expected, considering the outlay of time and money. In many shops, more than half of performance problems originate from inefficient application design; under pressing business deadlines, programmers are forced to make it work rather than make it work well, leaving inefficiencies and errors behind.

If optimization and tuning opportunities are ignored during the development cycle, you will pay for it later - in time, people, dollars, or an application's inability to scale. No matter how much CPU is added or system tuning is done, inefficient applications place additional demand on the system. Industry analysts have shown that it is 10 times more costly to resolve a performance problem in production than during development and testing. Time and again, these performance problems translate into lost business opportunities.

The New Ways Exist
The mainframe environment is dynamic, with daily changes for maintenance and new development. The ability to tune a complex batch window or to manage a high-demand CICS system is rapidly becoming a lost art. The right tools are essential to manage these dynamic and complex environments. The old manual tuning and optimization processes that worked so well in the past are simply not adequate to meet the demand and data volumes that exist on today's systems.

To address the changing environment, companies must leverage performance and capacity management solutions that get the best results from existing resources. These automated solutions should:

  • Model performance and plan for growth
  • Manage application quality
  • Optimize batch processing
  • Optimize CICS processing

Model Performance and Plan for Growth
To reduce costs and process data efficiently, identify targets - workloads that consume large amounts of costly resources - without putting excessive artificial load on the system. To do so, companies should implement a performance management solution that allows IT managers to track work down to the individual address space and drill down to find candidates for resource optimization. Such a solution should also let users analyze CICS, IMS, DB2, and MQ transactions.
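
As a concrete illustration of the kind of drill-down this enables, the sketch below ranks address spaces by CPU consumed over a measurement interval. It is a minimal sketch, not any vendor's product: the sample records and address-space names are invented, and real input would come from SMF data or a performance monitor's export.

    # Illustrative sketch: rank address spaces by CPU consumption to find
    # optimization targets. The record layout and names are hypothetical;
    # real data would come from SMF records or a monitor's export.
    from collections import defaultdict

    # (address_space, cpu_seconds) samples over a measurement interval
    samples = [
        ("CICSPRD1", 4210.0),
        ("DB2PROD", 3875.5),
        ("BATCHJ07", 2990.2),
        ("CICSPRD1", 3980.7),
        ("MQSERIES", 610.3),
    ]

    totals = defaultdict(float)
    for address_space, cpu_seconds in samples:
        totals[address_space] += cpu_seconds

    # The top few consumers are the candidates worth drilling into.
    for name, cpu in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:3]:
        print(f"{name:10s} {cpu:10.1f} CPU-seconds")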

After you have identified candidates for resource optimization, it is important to test tuning options, workload moves (whole or partial), and changes in the transaction mix to ensure that production response time and turnaround remain within agreed limits.

With these tools, it is easy to test a myriad of solutions and select the best price/performer. If the hardware costs of disaster recovery (DR) are of concern, DR strategies can be tested to ensure that acceptable performance can be achieved in a variety of situations. Though the CPU impact can generally be assessed with a spreadsheet, the impact on throughput and response times requires an understanding of queuing theory (which is the core of analytic modeling).
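
To see why, consider the classic single-server (M/M/1) result, response time R = S / (1 - utilization): response time grows nonlinearly as utilization approaches 100%. The sketch below is a minimal illustration of that effect, with an assumed per-transaction service time; real analytic models account for multiple servers, priorities, and workload classes.

    # A minimal single-server (M/M/1) queuing sketch: response time grows
    # nonlinearly as utilization approaches 100%, which is why linear
    # spreadsheet arithmetic misses the throughput and response-time impact.
    def response_time(service_time: float, utilization: float) -> float:
        """R = S / (1 - utilization) for an M/M/1 queue."""
        if not 0.0 <= utilization < 1.0:
            raise ValueError("utilization must be in [0, 1)")
        return service_time / (1.0 - utilization)

    service = 0.02  # assumed CPU seconds per transaction
    for util in (0.50, 0.70, 0.85, 0.95):
        print(f"utilization {util:.0%}: "
              f"response {response_time(service, util) * 1000:.1f} ms")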

A common question that is posed to capacity planners is: "How much will this new application cost when everyone is using it?" Users can answer this question by modeling volume changes down to the individual address space. This process demonstrates the cost of maintaining acceptable performance at the new volumes. Knowing exactly how much hardware is needed, and when it's needed, simplifies the budget process.
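
As a back-of-the-envelope illustration of that budgeting question, the sketch below projects when utilization crosses a planning threshold under an assumed compound growth rate. The numbers and the 85% threshold are hypothetical; a real model would work from measured volumes down to the address space.

    # Hypothetical growth projection: given current utilization and a monthly
    # transaction growth rate, estimate when the processor crosses a planning
    # threshold - i.e., when the upgrade money must actually be spent.
    def months_until_threshold(current_util: float, monthly_growth: float,
                               threshold: float = 0.85) -> int:
        util, months = current_util, 0
        while util < threshold:
            util *= 1.0 + monthly_growth
            months += 1
        return months

    print(months_until_threshold(0.60, 0.03))  # ~12 months at 3% growth/month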

Application Tuning
Applications and systems are more complex now than ever before, and specialization has become the norm. Application developers and systems programmers can each have an impact on performance, but neither focuses explicitly on application performance. Without an automated way to manage application quality, a shop might never recognize a poorly designed application - one that consumes excessive resources yet still meets service levels - as a performance improvement opportunity.

The demand for quick time to market forces developers to push code into production too quickly. Time is limited for adequate design analysis and testing, and other factors, such as high-level languages, further complicate the environment in which applications run. With so little attention paid to application tuning, it is often difficult for an application programmer to know which code structures will result in less efficient processing.
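
One classic example of such a structure is issuing a query per item inside a loop instead of a single set-based statement - on the mainframe this often appears as SQL inside a COBOL loop. The sketch below illustrates the pattern with sqlite3 standing in for DB2; it is illustrative only, not production code.

    # Illustrative only: a classic inefficient structure - one query per item
    # in a loop - versus a single set-based query doing the same work.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
    con.executemany("INSERT INTO accounts VALUES (?, ?)",
                    [(i, i * 10.0) for i in range(1000)])
    wanted = list(range(0, 1000, 7))

    # Inefficient: one parse/optimize/execute round trip per id.
    total = 0.0
    for acct_id in wanted:
        row = con.execute("SELECT balance FROM accounts WHERE id = ?",
                          (acct_id,)).fetchone()
        total += row[0]

    # Efficient: a single set-based statement retrieves the same answer.
    placeholders = ",".join("?" * len(wanted))
    total2 = con.execute(
        f"SELECT SUM(balance) FROM accounts WHERE id IN ({placeholders})",
        wanted).fetchone()[0]
    assert total == total2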

Until recently, improving performance through application tuning was not considered critical; new features and increased functionality were the goals. In reality, many application-tuning opportunities relate to problems that were introduced when the application was coded. Application tuning is therefore a significant opportunity for large cost savings.

Application quality management (AQM) is a methodology for proactively optimizing mainframe application performance throughout the application life cycle. It automatically identifies candidates for performance analysis and prioritizes them. The AQM process provides automated application measurement, targeted performance diagnosis, and prioritized performance analysis, resulting in significant IT savings through deferred upgrades and resource optimization.
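
As a rough illustration of how such prioritization might work - this is not BMC's actual AQM algorithm - the sketch below scores hypothetical applications by total daily CPU (per-run cost times frequency), so that a cheap-looking transaction run a quarter-million times a day surfaces ahead of a heavyweight nightly job.

    # A hypothetical prioritization pass, not BMC's actual AQM algorithm:
    # score each measured application by resource consumed weighted by run
    # frequency, so analyst time goes to the biggest savings first.
    candidates = [
        # (application, cpu_seconds_per_run, runs_per_day) - invented numbers
        ("NIGHTLY-BILLING", 5400.0, 1),
        ("ACCT-INQUIRY", 0.8, 250_000),
        ("RPT-EXTRACT", 900.0, 4),
    ]

    ranked = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)
    for app, cpu, runs in ranked:
        print(f"{app:16s} daily CPU: {cpu * runs:>12,.0f} s")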

Manual tuning procedures are time consuming and inefficient, and few organizations have the luxury to operate this way anymore. By applying AQM during the development cycle and automating the application tuning process, you can avoid performance disasters.

John Albee is director of mainframe solutions at BMC Software.
