
On Demand Data and the CMDB

Are we overdoing our CMDB reporting requirements? asks ITSM Watch columnist Rob England.
Oct 31, 2008
By Rob England

Consider the option of on-demand operational data, and a specialist team to provide it. I'll discuss the idea in the context of Configuration Management and the CMDB, but it applies just as much to any data requirement. For the audience reading this, that would also include areas like service level reporting, capacity trending, support responsiveness, and many others.

I’ll start with CMDB. Instead of chasing the rainbow of building a consolidated, federated, integrated, reconciled CMDB, we could assemble the configuration data on demand in response to a requirement. Let me say from the outset that this is an idea rather than proven practice, but then ITIL v3 includes relatively untested ideas like the SKMS so I am in good company.

The idea of on-demand data first came to me in the satirical book Introduction to Real ITSM:

“Known elsewhere as “assets” or “configuration”, real ITSM manages stuff. Records should be kept of all stuff except where records are not kept … Management or auditors will periodically demand lists of stuff. Given that their requirements are entirely random and unpredictable, the most efficient way to deal with this is to respond on an ad-hoc basis by reallocating staff to collate the data. This is known as on-demand processing. The technology of choice is MS-Excel.”

That proposition was made tongue-in-cheek, but let us consider it seriously.

Does this scenario sound familiar?

The Service Desk and Support staff just will not record incidents in the correct category. Incidents get left in the first category chosen even when it emerges they were actually something quite different. There are multiple categories being used for the same thing depending on who you ask. The taxonomy is so complex that new staff use “Other” for weeks while they come to grips with the job before they are pressured into learning the proper categorisations. Level 2 insist on using their own subset of categories. And the external service providers never specify a category at all. The Incident Manager spends hours every week trying to keep it clean. Every few months there is a housekeeping sweep to fix it all up. It comes up at every service desk team meeting, over and over, but the message never seems to get through.

Or this one?

The Security Manager wants to introduce an authentication “dongle” that plugs into the USB port. He wants to know how many desktops have a USB port accessible on the front. But there is no record of which desktops have a spare USB port at all, let alone on the front. This identifies a deficiency in the asset database, so a new field is added and a project launched to go capture the information from thousands of desktops and laptops across the organisation.

Here is a hypothetical future scenario for the real geeks reading this:

The grid computing network reconfigures itself under load, but because the network is in mid-reconfiguration when the updates are sent to the CMDB, and because the updates are not two-phase commit (there is no multi-phase commit in the CMDB architecture), the updates keep getting lost, and the vendors seem incapable of adding store-and-forward to the update mechanism. So we never really know in real time which servers are running which services.

In all three of those scenarios, consider if we created the configuration data when we needed it in response to some particular situation instead of trying to maintain it all the time in a CMDB.

Formalising What We Do Anyway

This is nothing new—it is what we do now. We create data ad-hoc anyway when we have to. If the data is not there or not right and management wants the report, we gather it up and clean it up and present it just in time, trying not to look hot and bothered in the process.

How much better would it be if we had a team, expert in producing on-demand configuration information? They would have formal written procedures for accessing, compiling, cleaning and verifying data, which they would practise and test. They would have tools at the ready and be trained in using them. Most of all they would “have the CMDB in their heads”, i.e., they would know where to go and who to ask to find the answers. They would have prior experience in how to do that and what to watch out for.

So instead of ad-hoc amateurs responding to a crisis, experts would assemble on-demand data as a business-as-usual process. When management wants a report on the distribution of categories of incidents, they would sample a few hundred incidents, categorise them properly according to what the requirements are this time (after all how often does an existing taxonomy meet the needs of a new management query anyway?) and respond accordingly.
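To make that concrete, here is a minimal sketch of the kind of throwaway script such a team might knock up, assuming incidents can be exported to a CSV file with id and description columns (the file name, column names and keyword rules below are hypothetical, not any particular tool's format):

import csv
import random
from collections import Counter

SAMPLE_SIZE = 300  # "a few hundred incidents"

# Hypothetical ad-hoc taxonomy, built for this particular management query.
RULES = [
    ("password/access", ("password", "locked out", "login")),
    ("email", ("outlook", "mailbox", "smtp")),
    ("printing", ("printer", "print queue", "toner")),
]

def categorise(description):
    """Assign a category by simple keyword matching; 'other' if nothing fits."""
    text = description.lower()
    for category, keywords in RULES:
        if any(word in text for word in keywords):
            return category
    return "other"

with open("incident_export.csv", newline="") as f:
    incidents = list(csv.DictReader(f))

sample = random.sample(incidents, min(SAMPLE_SIZE, len(incidents)))
counts = Counter(categorise(row["description"]) for row in sample)

for category, count in counts.most_common():
    print(f"{category}: {count} ({count / len(sample):.0%} of sample)")

The point is not the particular keywords but that the taxonomy is built to answer the question actually being asked, used once, and thrown away.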

They would be an on-call team, responsive to emergency queries. “The grid computing system has died and the following servers are not dynamically reconfiguring. Which services are impacted and which business owners do we call on a Saturday?” They may not know the answers off the top of their heads but they will know, better than just about anyone, where and how to look—and, just as importantly, how long that is going to take.
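In the same spirit, that Saturday-night question is largely a join across data the team already knows how to lay hands on. A minimal sketch, assuming exports of server-to-service mappings and business-owner contacts are available as CSV files (all file and column names here are hypothetical):

import csv

def load_csv(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Hypothetical exports the on-demand team knows how and where to obtain.
server_services = load_csv("server_to_service.csv")  # columns: server, service
service_owners = {row["service"]: row
                  for row in load_csv("service_owners.csv")}  # columns: service, business_owner, weekend_phone

# Servers reported as failing to reconfigure (from the monitoring alert).
failed_servers = {"grid-node-07", "grid-node-12", "grid-node-15"}

impacted = {row["service"] for row in server_services
            if row["server"] in failed_servers}

for service in sorted(impacted):
    owner = service_owners.get(service, {})
    print(service,
          owner.get("business_owner", "owner unknown"),
          owner.get("weekend_phone", "no weekend contact on record"))

The value the team adds is not the script itself but knowing which exports exist, how fresh they are, and how long the answer will take.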

 

