
Predictive Modeling Is Critical For Capacity Planning

By David Wagner
Jun 26, 2005

Today, management is faced with a series of mutually conflicting imperatives: assure appropriate service levels to all constituencies; align IT expenditures and initiatives with business imperatives; and do all this while driving significant cost out of the equation. Making matters worse, continual technology changes (outside of business control) add unpredictable variables and can upset even the best-laid plans and management processes.

The discipline of IT Service Management offers guidance on best practice approaches to attacking these challenges by dividing them into Service Support and Service Delivery.

Capacity planning, a process/methodology within Service Delivery, is primarily associated with determining, over time, the optimal size of IT resources needed to ensure appropriate performance and throughput. When CPU was the most expensive component of the IT infrastructure, capacity planning focused on optimizing that spend. As other aspects of the infrastructure assumed fiscal prominence, optimizing those resources (e.g., storage and networks) was added as well.

Capacity planning has always focused on resource utilization rates (e.g., %CPU) or resource capacity levels (e.g., amount of free disk space), measured over time. Methods such as trending, linear regression, simulation, and analytic modeling are used to "predict" the optimum level of resources required against response time and throughput requirements.
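As a rough illustration of the simplest of these techniques, the sketch below fits a linear trend to weekly peak %CPU measurements and extrapolates when a planning threshold would be crossed. The sample data, the 80% threshold, and all names are illustrative assumptions, not figures from the article.

    # Illustrative only: linear trending of weekly peak %CPU measurements
    # to estimate when an assumed 80% planning threshold is reached.
    import numpy as np

    weeks = np.arange(12)  # 12 weeks of history
    peak_cpu = np.array([41, 43, 44, 47, 49, 50, 53, 55, 56, 59, 61, 63])  # % busy

    # Least-squares linear fit: peak_cpu ~= slope * week + intercept
    slope, intercept = np.polyfit(weeks, peak_cpu, 1)

    threshold = 80.0  # assumed planning threshold (% CPU)
    weeks_to_threshold = (threshold - intercept) / slope

    print(f"Trend: {slope:.2f}% per week, currently {peak_cpu[-1]}% busy")
    print(f"Projected to reach {threshold:.0f}% around week {weeks_to_threshold:.1f}")

A trend like this says when a resource fills up, but, as the next paragraph notes, it says nothing about what happens to response time along the way.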

Each offered a distinct set of strengths and weaknesses: linear trending is simple but grossly inaccurate at predicting response time and throughput; simulation modeling is highly accurate but difficult and time consuming; analytic modeling is accurate but requires measured performance data. In every case, accuracy translates directly into risk and cost reduction.
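To show why an analytic model can predict response time where a utilization trend cannot, here is a minimal sketch using the classic single-queue (M/M/1) approximation, R = S / (1 - U), where S is service time and U is utilization. The service time and utilization values are assumed for illustration only.

    # Illustrative only: M/M/1 analytic approximation of response time.
    def predicted_response_time(service_time_s: float, utilization: float) -> float:
        # R = S / (1 - U); response time grows sharply as utilization nears 100%
        if not 0 <= utilization < 1:
            raise ValueError("utilization must be in [0, 1)")
        return service_time_s / (1.0 - utilization)

    service_time = 0.05  # assumed seconds of service per transaction
    for u in (0.5, 0.7, 0.8, 0.9, 0.95):
        r_ms = predicted_response_time(service_time, u) * 1000
        print(f"CPU {u:4.0%} busy -> ~{r_ms:.0f} ms response time")

Even this toy model captures the nonlinear "knee" in response time that a straight-line utilization trend misses, which is why analytic and simulation models matter when service levels, not just resource headroom, are at stake.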

Used correctly as part of an overall best-practice implementation of Service Management, capacity planning has driven significant cost out of IT budgets while minimizing the business risks associated with potential service outages and slowdowns.

Occasionally, revolutionary changes occur that completely rewrite the landscape of best-practice management. Such a change is occurring in today's distributed computing environments. Initial server consolidation efforts concentrated on fewer, larger (typically Unix) servers from HP, IBM, Sun and others. Cost savings were realized primarily by eliminating underutilized servers, which had sprawled during the '90s boom years.

David Wagner is Director of Product Marketing and Management for BMC Software's Enterprise Performance Assurance solutions. In this role, he is responsible for driving overall product strategy, pricing, requirements and positioning of BMC Software's family of proactive performance analysis and predictive modeling solutions across all leading enterprise platforms (Unix, Windows, Mainframe, Linux, AS/400 and OpenVMS) and their associated applications.



