Last week, during the AIIM Conference, I heard a lot of speakers talk about failure. I heard speakers suggest that we should “avoid impossible projects” in order to ensure that failure won’t be an option. I heard speakers suggest that “if we never fail, we aren’t reaching far enough,” and I heard speakers suggest that “an imperfect solution isn’t a failure!” Oh, wait, I was the guy who made that last comment – but I liked it.
I no longer see success vs. failure as being binary in nature. Too many things today are moving too fast to lock in on a point in time, a set of attributes and a series of results and say “we have a success!” What would that even mean? Creating information management solutions isn’t like launching a rocket; it’s more like the Mercury-Gemini-Apollo program continuum. If you think back to the ’60s, we watched NASA execute a long series of successful rocket launches along with some spectacular failures, as they raced toward putting a couple of guys on the moon. If failure wasn’t an option, as NASA officials stated, then how did they define success? I think the space program has been a tremendous success, even though it has been marred by loss-of-life failures, major setbacks, altered schedules and massive cost overruns. I think making progress toward a goal, implementing interim solutions, reaching plateaus and learning lessons are all forms of success, and I think that same logic applies to information management.
I admit that for some projects, success is binary. For example, I listened to a speaker talk about a Documentum upgrade; clearly that was a project that fell into the Done/Not Done category of measurement. I also listened to many speakers sharing their efforts to be more agile, more mobile, or to provide a better user experience – efforts that can’t be easily measured. If we are trying to build a system that is easier to use, how do we know when we are done, and do we really ever want to say that we’re done?
The comment that I made during my presentation was in the context of decision making. There was a lot of talk at the AIIM Conference last year about how we are on “the second half of the chessboard” and how, as Moore’s Law continues to drive innovation and capabilities, things are changing at an ever-increasing pace, and the number of things that are changing is also increasing. In response to that situation, I think that we have to get comfortable making decisions faster, and with less certainty. Thornton May will tell you that Big Data (well, analytics) will save the day, but today isn’t that day, and if you’re trying to digest “all available” information before making a decision, you probably aren’t willing to decide. That brings me to the other thing I pointed out in my presentation: that there is a cost associated with not deciding – the cost of doing nothing.
One final point I made is that the cost of making the wrong decision is decreasing, and may actually be lower than the cost of not deciding. For example, 14 months ago, I had to decide whether to buy my boss an iPad or wait and buy him a Surface when they became available. We are a Microsoft shop, so, in theory, the Surface would be a better fit. I bought him the iPad. He liked it, and he has been using it for 14 months. If in fact the Surface is a better idea, I can get him one when the time comes to upgrade his iPad, so if I’ve lost anything, it was a 6-8 month period where he could have had a slightly better tablet – and that’s a huge if.
The New Yorker recently published an article suggesting that “hard choices never matter.” (I’ll let you Google that, because some of the hits I got looked like The New Yorker wants them taken down.) The rationale for that conclusion is that “the reason the choice is hard is because the products or services are close to being equally good” – in other words, neither choice is likely to be a failure.

#failure #analytics #AIIMConference #BigData