
Predictive Modeling Is Critical For Capacity Planning


Jun 26, 2005

By David Wagner

With the introduction of virtualization, blade servers, and new processor technologies from Intel and others, the industry is moving toward leveraging what we'll call "Commodity Computing" - the use of multiple, inexpensive CPUs in highly dense physical structures, installed, managed and operated as a single logical entity.

These technologies enable the next step in cost-effectively optimizing the resource utilization required to deliver acceptable response time and throughput. The benefits include enhanced automation for repeatable deployment of incremental processing capacity, simplified management, and a significant reduction in server sprawl, while still allowing the physical separation of computing environments that business constituencies desire - the very desire that, ironically, led to server sprawl in the first place.

As with any technology change, there are important new realities (in this case imposed by the laws of physics) that make predictive capacity planning even more important.

While Moore's Law continues to deliver ever-greater computing power at lower cost, it also increases transistor density in the latest generations of integrated circuits. New process technologies (90nm versus 130nm) and the packaging of those technologies (e.g., 6U blade servers) mean that where previously there might have been 1 to 4 physical CPUs per square foot of floor space, there can now be as many as 50 to 100.

And each of those CPUs draws substantially more electrical power than before. Worse, electrical power does not scale linearly with the added processing power, nor does power consumed per square foot of floor space. As a result, implementations of these newer technologies have run into inviolable physical and practical limits: there is only so much electricity available to a given data center, and the additional heat generated must be cooled.
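
To make the floor-space arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The CPU densities (4 versus 100 CPUs per square foot) are the figures cited above; the per-CPU wattage and cooling overhead are purely illustrative assumptions, not measured values.

    # Back-of-the-envelope power-density comparison. CPU densities come from
    # the figures cited in the article; wattages and cooling overhead are
    # illustrative assumptions only.

    legacy_cpus_per_sqft = 4      # traditional rack-mount deployment
    dense_cpus_per_sqft = 100     # high-density blade deployment

    legacy_watts_per_cpu = 50     # assumed: older, lower-power processor
    dense_watts_per_cpu = 90      # assumed: newer, hotter processor

    cooling_overhead = 0.7        # assumed: watts of cooling per watt of IT load

    def watts_per_sqft(cpus, watts_per_cpu, overhead=cooling_overhead):
        """IT load plus cooling load per square foot of floor space."""
        it_load = cpus * watts_per_cpu
        return it_load * (1 + overhead)

    legacy = watts_per_sqft(legacy_cpus_per_sqft, legacy_watts_per_cpu)
    dense = watts_per_sqft(dense_cpus_per_sqft, dense_watts_per_cpu)

    print(f"Legacy floor: {legacy:,.0f} W per square foot")
    print(f"Dense floor:  {dense:,.0f} W per square foot ({dense / legacy:.0f}x)")

Under these assumptions, a 25x jump in CPU density becomes roughly a 45x jump in electrical and cooling load per square foot - the kind of step change that outruns a data center's power and cooling budget.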

Basically, an entirely new set of non-linear phenomena is beginning to supplant the more traditional obstacles imposed by the law which states "response time varies non-linearly with respect to transaction rates and resource utilizations".
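
For readers who want that traditional law in concrete terms, the following minimal Python sketch uses the classic single-server queueing approximation R = S / (1 - U), where S is the average service time and U is the resource utilization; the 10 ms service time is an illustrative assumption.

    # Minimal sketch of the classical non-linearity: mean response time for a
    # single-server queue (M/M/1 approximation) is R = S / (1 - U).
    # The service time is an illustrative assumption.

    SERVICE_TIME_MS = 10.0  # assumed average service time per transaction

    def response_time_ms(utilization, service_time=SERVICE_TIME_MS):
        """Mean response time; grows without bound as utilization nears 1.0."""
        if not 0.0 <= utilization < 1.0:
            raise ValueError("utilization must be in [0, 1)")
        return service_time / (1.0 - utilization)

    for u in (0.50, 0.70, 0.80, 0.90, 0.95, 0.99):
        print(f"utilization {u:.0%}: response time {response_time_ms(u):7.1f} ms")

Response time roughly doubles between 50% and 75% utilization, doubles again by 87.5%, and climbs toward infinity as the resource saturates, which is why capacity decisions cannot be made by linear extrapolation.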

For many data centers, the largest budget items are now related to floor space and power. In some cases it is physically impossible to get any more electricity! In others, there is no more space available on the floor. Capacity plans that require more power or floor space have significantly longer lead times than simply "turning on another idle processor".

The new law can be paraphrased: "power, cooling and floor space increase non-linearly with respect to processing power". Capacity planning must consider these new constraints as fundamental architectural decisions are made, or one risks adopting technology that simply trades one set of well-understood costs (servers, software) for another set of less well-known (and potentially riskier) costs!

Business forecasts are planned and calculated, but externally driven changes, including technology advancement, competition, disaster recovery, mergers and acquisitions, and macroeconomic factors, are less easily predicted.

Because of the non-linearity of the response-time/resource-utilization relationship, evaluations of capacity sufficiency must take into account peak business demand during all expected operating conditions. Critical questions in this phase include (a rough sizing sketch follows the list):

  • What and when are normal and peak demand cycles?
  • How do various workloads track to such cycles?
  • What allowances should be made for external change?
  • Which workloads are the most critical?
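
One way to pull these questions together is a rough sizing pass based on the utilization law (CPU demand = arrival rate x service time). The Python sketch below is a hedged illustration only: the workload names, peak rates, service times, growth allowance, and utilization target are all hypothetical, and the calculation stands in for, rather than reproduces, any particular modeling tool.

    # Hedged peak-demand sizing sketch. All workload figures are hypothetical.
    # Uses the utilization law (U = X * S): total CPU-seconds demanded per
    # second, padded for external change, divided by a target utilization
    # that protects response time.

    import math

    # workload -> (peak transactions per second, CPU-seconds per transaction)
    peak_workloads = {
        "order_entry":   (400, 0.012),
        "reporting":     (50,  0.150),
        "batch_overlap": (20,  0.300),
    }

    growth_allowance = 0.25    # assumed headroom for external change (growth, M&A)
    target_utilization = 0.70  # keep CPUs below ~70% busy at peak

    cpu_seconds_per_second = sum(rate * svc for rate, svc in peak_workloads.values())
    required_cpus = math.ceil(cpu_seconds_per_second * (1 + growth_allowance)
                              / target_utilization)

    print(f"Peak CPU demand: {cpu_seconds_per_second:.1f} CPU-seconds/second")
    print(f"CPUs required at {target_utilization:.0%} utilization with "
          f"{growth_allowance:.0%} growth allowance: {required_cpus}")

The same demand figure, multiplied by an assumed watts-per-CPU and divided by an achievable watts-per-square-foot, extends the sizing to the power and floor-space questions raised above.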

As always, proper capacity planning must factor in all the variables needed to "right-size" (and "right-choose") computing technologies so they deliver the required performance, at the right time, in a cost-effective fashion to enable business services. The best-practice capacity planning process will now add power, floor space and cooling considerations to those more traditionally evaluated.

The best technology choices for capacity planning remain those best able to account for the non-linear relationship between resource utilization and response time, because the magnitude of the downside risk from inaccuracy has only grown with the latest technologies.

David Wagner is Director of Product Marketing and Management for BMC Software's Enterprise Performance Assurance solutions. In this role, he is responsible for driving overall product strategy, pricing, requirements and positioning of BMC Software's family of proactive performance analysis and predictive modeling solutions across all leading enterprise platforms (Unix, Windows, Mainframe, Linux, AS/400 and OpenVMS) and their associated applications.



