Predictive Modeling Is Critical For Capacity Planning

By David Wagner

Occasionally, revolutionary changes occur that completely rewrite the landscape of best-practices management. Such a change is occurring in today's distributed computing environments.
Today, management is faced with a series of mutually conflicting imperatives: assure appropriate service levels to all constituencies; align IT expenditures and initiatives with business imperatives; and do all this while driving significant cost out of the equation. Making matters worse, continual dynamic technology changes (outside of business control) add unpredictable variables and can upset even the best-laid plans and management processes.
Capacity planning, considered a process/methodology within Service Delivery, is primarily concerned with determining, over time, the optimally sized IT resources needed to ensure appropriate performance and throughput. When the CPU was the most expensive component of the IT infrastructure, capacity planning focused on optimizing that spend. As other aspects of the infrastructure assumed fiscal prominence (e.g., storage, networks), optimizing those resources was added to the discipline.
Capacity planning has always focused on resource utilization rates (e.g., %CPU) or resource capacity rates (e.g., amount of free disk space), measured over time. Methods such as trending, linear regression, simulation, and analytic modeling are used to predict the optimum amount of resource required versus response-time and throughput requirements.
Each method offers its own strengths and weaknesses: linear trending is simple but grossly inaccurate at predicting response time and throughput; simulation modeling is highly accurate but difficult and time-consuming; analytic modeling is accurate but requires measured performance data. And accuracy translates directly into risk and cost reduction.
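The contrast between these methods can be sketched in a few lines of code. The example below is illustrative only: the utilization numbers are invented, the trend fit is ordinary least squares, and the analytic model is the textbook M/M/1 queueing formula, which shows why a straight-line trend badly underestimates response time as utilization approaches saturation.

```python
# Illustrative sketch: linear trending vs. a simple analytic (M/M/1) model.
# All numbers are hypothetical, not measured performance data.

def linear_trend(samples):
    """Ordinary least-squares fit over equally spaced samples.
    Returns (slope, intercept)."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def weeks_until(samples, threshold):
    """Weeks until the fitted trend line crosses `threshold` %CPU."""
    slope, intercept = linear_trend(samples)
    if slope <= 0:
        return None  # utilization flat or falling; threshold never reached
    return (threshold - intercept) / slope

def mm1_response_time(service_time, utilization):
    """M/M/1 queueing estimate: R = S / (1 - rho).
    Response time grows nonlinearly as utilization nears 100%,
    which a straight-line trend cannot capture."""
    return service_time / (1.0 - utilization)

# Hypothetical weekly average %CPU for one server:
cpu = [42.0, 44.5, 46.0, 49.5, 51.0, 53.5]
print(round(weeks_until(cpu, 80.0), 1))          # ~16.5 weeks to 80% busy
print(round(mm1_response_time(0.02, 0.50), 2))   # 0.04 s at 50% utilization
print(round(mm1_response_time(0.02, 0.95), 2))   # 0.4 s at 95%: 10x worse
```

The last two lines illustrate the article's point: trending answers "when do we run out of headroom?", while an analytic model is needed to answer "what happens to response time before we do?".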
Used correctly as part of overall best practice implementation of Service Management, capacity planning has driven significant cost out of IT budgets while simultaneously minimizing the business risks associated with potential service outages and slowdowns.
Initial server consolidation efforts concentrated on fewer, larger (typically Unix) servers from HP, IBM, Sun and others. Cost savings were primarily realized by eliminating underutilized servers, which had sprawled during the '90s boom years.
David Wagner is Director of Product Marketing and Management for BMC Software's Enterprise Performance Assurance solutions. In this role, he is responsible for driving overall product strategy, pricing, requirements and positioning of BMC Software's family of proactive performance analysis and predictive modeling solutions across all leading enterprise platforms (Unix, Windows, Mainframe, Linux, AS/400 and OpenVMS) and their associated applications.