On Demand Data and the CMDB
Certainly we would need some basic CMDB data kept continually: the stuff we already discover auto-magically, such as procurement-driven asset databases, discovered network topologies and desktop inventories, or the transactional information captured by the service desk. Add to that the stuff we already document on paper (or ought to): the service catalogue, phone lists, contracts and so on.
The savings in not trying to go beyond that base CMDB data would be great. The price paid for those savings is that on-demand does not mean instantaneous: it might mean hours, days or even weeks to respond to the demand. So a business analysis needs to be done to find out how current the data really needs to be (as compared to what the technical perfectionists say). In some organisations the criticality demands instant data, and they need to trudge off down the CMDB path. But for the majority of organisations this just isn't so.
The original premise was in jest, but I believe it raises a serious idea worth considering. As a vocal critic of CMDB I am sometimes asked, "Well, what then?", and the idea of on-demand configuration data goes some way towards answering that.
More Than Configuration
Nor is this idea limited to CMDB, of course. As we mentioned at the start, service levels could be reported more efficiently using sampling techniques and manual collation than by implementing some huge automated service level reporting system. Any such system needs to be dynamic; the business's demands for service level reporting change often. An expert ad-hoc reporter is more dynamic than service-level-tracking software.
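To make the sampling idea concrete, here is a minimal sketch of what an ad-hoc reporter might do: estimate SLA compliance from a random sample of tickets rather than instrumenting every one. All names and numbers are illustrative assumptions, not any real service desk's data or tooling.

```python
import math
import random

def estimate_sla_compliance(resolution_hours, sla_hours, sample_size, seed=0):
    """Estimate the fraction of tickets resolved within the SLA target
    from a random sample, instead of tracking every ticket continuously."""
    random.seed(seed)  # fixed seed so a report can be reproduced on demand
    sample = random.sample(resolution_hours, sample_size)
    p = sum(1 for h in sample if h <= sla_hours) / sample_size
    # Rough 95% margin of error for a sampled proportion.
    margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Hypothetical month of tickets: most resolved quickly, a few slowly.
tickets = [1.0] * 90 + [10.0] * 10
p, margin = estimate_sla_compliance(tickets, sla_hours=4.0, sample_size=50)
print(f"Estimated compliance: {p:.0%} (+/- {margin:.0%})")
```

A sample of a few dozen tickets gives a usable answer with a stated margin of error, at a fraction of the cost of a permanent reporting system.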
Likewise much of the operational data we track, such as performance and capacity, could be on-demand. Rather than implement a big historical database, get the on-demand team to dip as needed into the streams of out-of-the-box data coming from storage managers, operating systems, databases and event monitors. The IT industry is prone to fads, especially anything that involves a revolutionary new way to do something, and doubly so if it involves a technological solution to a people or process problem.
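As an illustration of dipping into out-of-the-box data, a capacity question could be answered with a one-off snapshot straight from the operating system, rather than from a continuously populated historical database. This is a hypothetical sketch; the function name and report layout are my own invention.

```python
import shutil

def disk_capacity_snapshot(paths):
    """Take an on-demand capacity reading from the operating system itself,
    instead of maintaining a continuous historical capacity database."""
    report = {}
    for path in paths:
        usage = shutil.disk_usage(path)
        report[path] = {
            "total_gb": round(usage.total / 1e9, 1),
            "used_pct": round(100 * usage.used / usage.total, 1),
        }
    return report

# Answer "how full is storage right now?" only when someone asks.
print(disk_capacity_snapshot(["/"]))
```

The point is not the ten lines of code but the approach: the raw data is already being emitted by the platform, so it can be collected when a question arises instead of warehoused in advance.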
So much effort and expense would be saved, and disruption avoided, if we looked first to formalise and optimise the way we do things now before launching off into the wild blue yonder, usually following some vendor's promise of a better place. Before we build a technology solution, we should look at whether existing process can be tightened up and improved to achieve the same result for less.
The challenge will be overcoming two things: excessive technical fastidiousness (the geek's compulsion to do everything "right") and that "be prepared" ethic. Remember the fable of the grasshopper and the ant? The ant slaves away all summer storing food while the grasshopper plays; then come winter the ant is well fed while the grasshopper starves. It is obviously not true. The world is full of grasshoppers. They are more resourceful than that. They work something out.
There are many organisations and societies who understand that effectively and efficiently dealing with the things that do happen, and not wasting time and money preparing for the things that don't, is in fact a net saving in effort. In this case, instead of building infrastructure to continually record data we might one day need, we can quickly gather only the data we do need in reaction to a requirement. There is some linkage here with the concepts of Lean IT. Something to consider, especially when you look at that expensive CMDB project.
Rob England is an IT industry commentator and consultant, and nascent internet entrepreneur, best known for his blog The IT Skeptic.