Whether you rely on the workflow you get with an ECM suite or on the heavier functionality of a BPM product, I recommend these 7 best practices. Some of them may be new to you – like how you should plan to fail, avoid razzle-dazzle, and apply the 3-Year Rule.
1. Always design to optimize partial failure. Almost everyone – 100% of vendors and 99% of organizations – assumes that the new process and technology will work as advertised. They then build that assumption into their business cases, operational planning, and so on. But systems never work exactly as planned, and implementations never follow the roadmap exactly as designed. The failure may be acceptable or even happy, in that what happens is okay or better than anticipated – but your final production process always differs from how it was originally designed, and the impact is usually bad. So be sure to model failure and “half-baked” scenarios – scenarios where you have to stop at various points in your roadmap. Make sure you can optimize a completely uneventful, successful implementation. But have lots of “Plan Bs.”
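The “stop anywhere” discipline above can be sketched as a simple roadmap check. This is a hypothetical illustration only – the `Milestone` fields and the `weak_points` helper are assumptions made for the sketch, not a real planning tool:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of "plan to fail": every milestone should leave
# you in an acceptable state (with a "Plan B") if the rollout halts
# there. All names here are illustrative assumptions.

@dataclass
class Milestone:
    name: str
    standalone_value: bool        # is stopping here an acceptable outcome?
    plan_b: Optional[str] = None  # fallback if this step stalls mid-flight

def weak_points(roadmap):
    """Return the milestones where a halt would strand the project."""
    return [m.name for m in roadmap
            if not (m.standalone_value and m.plan_b)]
```

Running `weak_points` over a draft roadmap before kickoff flags the steps where you have no acceptable stopping point – exactly the “half-baked” scenarios worth modeling.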
2. Focus on blocking and tackling first – and don’t be seduced by razzle-dazzle. What’s most important is the blocking and tackling: the proven ability to manage processes in production without faltering as requirements grow in scale and complexity. Everything else is secondary. I’ll repeat this: focus on what will lead to successful execution in production, more than on attractive but unproven features. Attractive secondary capabilities include analytics and reporting, “advanced case management”, “social workflow”, and integration with social media, as well as other high-profile features marketed in the process management market today. It’s fine to pursue these secondary benefits on top of the foundational blocking and tackling – but make sure you don’t fail in production first. Many BPM products fail in one of two ways: either they provide the foundation but lack differentiators, or they provide the sexy differentiators without the foundation. The latter is far worse.
3. Profile your candidate processes and then fit to the right kind of BPM tool. This is critical as an early step. How do you profile? There are good and bad ways to do this, but here’s a simple and effective way that we use. You need to look at more than just “scalability,” but your analysis has to be simple and fast. So classify your processes as to where they fit in the following dimensions: 1) pervasiveness (scalability in terms of numbers of users), 2) level of capabilities (basic to advanced; specific to general), and 3) types of users and applications (knowledge or process). Then use the right tool for that fit. Every BPM tool has a different profile regarding where it fits great, where it fits okay (and can be used if you already have it in your portfolio), and where it would be a disaster.
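The three-dimension profile above can be sketched as a simple lookup. The dimension values, the tool name, and the fit table below are all hypothetical placeholders – every vendor’s actual fit profile is different:

```python
from dataclasses import dataclass

# Hypothetical sketch of the profiling step described above. The
# dimension values, tool name, and fit table are illustrative
# assumptions only.

@dataclass(frozen=True)  # frozen makes profiles hashable, usable in sets
class ProcessProfile:
    pervasiveness: str  # e.g. "departmental" or "enterprise"
    capabilities: str   # e.g. "basic" or "advanced"
    user_type: str      # e.g. "knowledge" or "process"

# Each tool declares where it fits great and where it fits okay;
# everything else is treated as a poor fit (the "disaster" case).
TOOL_FIT = {
    "ToolA": {
        "great": {ProcessProfile("departmental", "basic", "process")},
        "okay":  {ProcessProfile("enterprise", "basic", "process")},
    },
}

def fit_rating(tool, profile):
    """Return 'great', 'okay', or 'avoid' for a tool/profile pair."""
    fits = TOOL_FIT.get(tool, {})
    if profile in fits.get("great", set()):
        return "great"
    if profile in fits.get("okay", set()):
        return "okay"
    return "avoid"
```

The point of the sketch is the shape of the exercise: profile each candidate process once, then rate every tool in (or being added to) your portfolio against that profile before committing.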
4. If you do BPM in the cloud, address the five primary issues. The five primary issues for cloud-based BPM today are:
Security, segmentation, and confidentiality
Business continuity (i.e. disaster recovery) and availability
Accountability (whose neck is on the line when there’s a problem)
Flexibility and customization
Vendor, product, and project risk
“Vendor risk” means your initiative fails because the vendor dies or gets bought. “Product risk” means your initiative fails because the offering (the “product”) proves inadequate. “Project risk” means your initiative fails because the vendor doesn’t provide adequate service and support.
5. Apply the “3-Year Rule”. This is a simple, effective way to help ensure you’re controlling vendor, product, and project risk – and following tips #2 and #4 above (i.e. blocking and tackling versus razzle-dazzle, and cloud versus non-cloud). Ask yourself: Has any BPM offering from this vendor been successful in production for 3 years? Most BPM vendors fail this test – and therefore are just trying a different approach (“advanced case management!”). The safest approach is to go only with products (including cloud services) that have been in successful production for 3 years. An aggressive but still risk-controlling approach is to go with a product (or cloud service) that has not passed the 3-year test, but which comes from a vendor that has a track record of at least one 3-year stretch in production. Stay away from vendors who can’t even do this much.
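The rule above reduces to a three-way screen. The sketch below is a hypothetical encoding of it – the function name and record format are assumptions made for illustration:

```python
from datetime import date
from typing import Optional

# Hypothetical sketch of the 3-Year Rule as a screening function.
# Inputs: when THIS offering entered successful production (None if
# never), and the vendor's longest production stretch for ANY offering.

def three_year_rule(offering_in_production_since: Optional[date],
                    vendor_longest_production_years: float,
                    today: date) -> str:
    """Classify a candidate offering per the 3-Year Rule."""
    if offering_in_production_since is not None:
        years = (today - offering_in_production_since).days / 365.25
        if years >= 3:
            return "safest"      # this offering passes the 3-year test
    if vendor_longest_production_years >= 3:
        return "aggressive"      # offering unproven, but vendor has a track record
    return "avoid"               # neither offering nor vendor passes
```

Used as a first-pass filter on a vendor shortlist, it separates the safest choices from the aggressive-but-defensible ones and the ones to walk away from.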
6. In most cases, it’s best to do BPM in two steps. This rule has definite exceptions, but it’s often best to digitize the paper-laden process first, and then live with the new electronic environment for 6 months or so before redesigning and automating the processes. This has three benefits:
It avoids the cliché of “automating the paper process” – i.e. badly designing the new process because you haven’t properly understood the impact that electronic documents will have.
It’s often necessary for change management. Let the workers adjust to the new electronic environment before they take the next dramatic step of automation. This is an application of the best practice rule that’s been learned after considerable expenditure of blood and tears: In order to optimize the two requirements of full worker participation (everyone plays the game) and high work quality, you should focus on participation first, and then incrementally ratchet up the expectations and responsibilities of high quality. If you try to optimize both too early, you will fail.
You should proceed in your BPM implementation in relatively small steps (“baby” steps are best), each of which lets you stop and declare victory. Half-completed workflow projects are not “halfway there” – they are failures: they broke the old process without replacing it with an adequate new one. Digitizing first and then pausing is a good way to control this risk.
7. Design information lifecycle management and records management (RM) into the workflow. The processes worth automating will have high-value, high-risk content and documents, so you’re going to have to address their risk and regulatory requirements at some point anyway. RM is most successful when the documents to be managed are already under the control of a structured process and a managed document system. That process and system context allows you to automate most of what RM requires, so users don’t have to waste time being file clerks and records managers.
#BusinessProcessManagement #cloud #requirements #BPM #implementationplanning #Collaboration #workflow #bestpractices #failure