There are two basic math problems on the chalkboards of our earthly cosmos:
A. Quantification: How much of something there is.
B. Qualification: How to set up an equation that helps resolve A.
The worlds of finance and budgeting revolve around A. Even the turgid arts of economic law and theory fly or crash through the lens of quantification. The sciences and engineering have a stake in A, but they play prominently in the analytics governing B.
The sheer act of counting stuff is banal. What metric to use and which comparison to make is the qualification tail wagging the quantification dog – the less the dog knows, the more accurate the count.
Don’t get me wrong. Counting stuff correctly is critical to the success of any system. But it is as clarifying in its simplicity as it is marginalizing to the value of the counter. By definition, commodifying means never having to add value – just digits.
On the other hand, prioritizing, extrapolating, inferring, and hypothesizing require multiple inputs to process, rationalize, and ultimately resolve. This is the heart of qualification. This is items 1-5 on the to-do list of the multitasking analyst, although this province is ceding ground to machine logic.
In a piece in the current Atlantic Monthly on the under-the-bussing of the American middle class, Don Peck writes that “Computer software can now do boilerplate legal work, for instance, and make a first pass at reading X-rays and other medical scans.”
Deeper insights, complex modeling, and more expansive thinking are the higher ground for analysts living in higher-cost countries. Asking how many high-end dealmakers American universities can crank out is an absurd premise. It’s not because the B-school rudiments don’t exist to foster these skills. It’s because our plutocratic food-chain principles favor a winner-take-all share of the spoils – a fight that would parch Darwin’s thirst for victory. Market cannibalization is the B-school term for this.
As Peck might ask: what do we do as technologists to complement the work of people who do complex research and sophisticated analysis?
For starters, we don’t count stuff without adding up the larger questions around the counting table.
We create facets around search results – not because the results themselves are important (to anyone outside of Google and our SEO buddies) but because the patterns in the facets help our users steer decisions and influence outcomes.
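To make the faceting point concrete, here is a minimal sketch of how facet counts fall out of a result set. The result records and field names (“department”, “doc_type”) are invented for illustration – the pattern, not the data, is the point: the counts reveal the shape of the collection, which is more useful than the ranked list itself.

```python
from collections import Counter

# Hypothetical search hits, each carrying metadata fields.
# Field names and values here are illustrative assumptions.
results = [
    {"title": "Q3 budget", "department": "Finance", "doc_type": "spreadsheet"},
    {"title": "Audit memo", "department": "Finance", "doc_type": "memo"},
    {"title": "Hiring plan", "department": "HR", "doc_type": "memo"},
]

def facet_counts(hits, field):
    """Count how often each value of a metadata field appears in the hits."""
    return Counter(hit[field] for hit in hits)

print(facet_counts(results, "department"))  # Counter({'Finance': 2, 'HR': 1})
```

Real search platforms compute these aggregations server-side, but the idea is the same: the facet panel is a summary of patterns, not a list of answers.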
We render single information units in multiple places. No two users have the same take on the same information, just as no single document serves only one form of usage.
We allow users to name stuff and resist the idea that it’s theirs versus ours. We’re kicking users off our systems the moment that access and territorial rights determine the information’s value and undermine its usefulness.
We quality-check our groupings so that our users know their sources and the sourcing. That means context, right down to the motivation for letting the wider community in on the knowledge delivery.
All those deeper, richer, gooier analytics are formed in the interdependencies of the data we’re delivering. For our users, connecting the pain-point dots is a mishmash of process flows and cross-organizational designs (and all those “single” login codes to prove one’s access-worthiness). The goose chase for “who-to-go-to” is the newest age-old business case for B-world qualifications for the A-world number counters, a.k.a. social media.
Place this challenge in the vendor community and you get white papers that sound something like this: Search engine getting you down? Let our recommendation engine reduce the anxiety of making an incorrect connection.
So what are the practical choices for ECM managers?
Stress out the go-to people? Nope.
Creating relationships in the data that foster collaborations among the go-to and the go-from people is more like it. That’s an answer that stands up to qualification.

#ScanningandCapture #analysis #searchfacets #economy #binary #complexity #decisionsupport #GoogleAdwords #SharePoint
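A sketch of what “relationships in the data” can mean in practice: link people through the documents they’ve both touched, so the go-to and the go-from people find each other. The activity log and names below are invented for illustration; any real ECM audit trail could stand in for them.

```python
from collections import defaultdict

# Hypothetical activity log of (person, document) pairs.
# People and document IDs are invented for this sketch.
activity = [
    ("ana", "contract-17"), ("ben", "contract-17"),
    ("ben", "audit-3"), ("cruz", "audit-3"),
]

def build_collaboration_graph(pairs):
    """Connect people who worked on the same document."""
    by_doc = defaultdict(set)
    for person, doc in pairs:
        by_doc[doc].add(person)
    graph = defaultdict(set)
    for people in by_doc.values():
        for p in people:
            graph[p] |= people - {p}
    return graph

graph = build_collaboration_graph(activity)
# ben ends up connected to both ana and cruz through shared documents,
# making the "who-to-go-to" goose chase a one-hop lookup.
```

The graph answers the qualification question – who should collaborate with whom – rather than merely counting documents.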