“How are we doing relative to our peer group or competitors?” We hear this question a lot from customers and partners. They want to know how they are tracking against their industry in terms of read rates, throughput, and accuracy, among other factors. Are they underperforming, and if so, why? Although there are unique cases, we regularly discover a handful of factors that, alone or in concert, prevent companies from getting as much as they can from their automated forms process.
Scanner hardware is often a root culprit. We have seen some companies try to cut corners when designing their automated forms process by sourcing scanners that aren’t production-grade (lacking, for example, the ability to pre-process and correct captured images before they are sent on to the recognition stage). Others use legacy equipment because it was cheaper or more convenient, rather than looking for what would give them the best performance.
Form design is another regular stumbling block, and a good example of how simple factors can impact the outcome. We often engage in critical conversations that center around questions like: Is the form well thought-out in terms of the type of information being requested? Has the person filling it out been given adequate instruction on when to print words, for example? Is the form too tight, creating the possibility for machine and hand print fields to spill into each other? Addressing problems like these at the form design stage lessens drag on your reading and recognition process.
Pre-recognition cleanup is a standard function without which the process isn’t complete: can your technology’s pre-recognition software clean up and improve captured images and catch imperfections that didn’t get caught at the hardware level? This encompasses capabilities such as stripping out combs and boxes, and recognizing patterns associated with a particular field and then dropping out extraneous noise. While hardware can deskew and despeckle images, pre-recognition capabilities add a further layer of quality assurance.
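To make the idea of "dropping out extraneous noise" concrete, here is a minimal despeckle sketch in Python. It is purely illustrative (production engines use far more sophisticated morphological filtering): it treats the image as a binary raster and removes any ink pixel with no inked neighbors.

```python
# Illustrative despeckle pass on a binary raster: 1 = ink, 0 = background.
# This is a toy sketch of one pre-recognition cleanup step, not any
# vendor's actual algorithm.

def despeckle(image):
    """Remove isolated ink pixels that have no inked 8-neighbors."""
    rows, cols = len(image), len(image[0])
    cleaned = [row[:] for row in image]
    for r in range(rows):
        for c in range(cols):
            if image[r][c] != 1:
                continue
            has_neighbor = any(
                image[nr][nc] == 1
                for nr in range(max(0, r - 1), min(rows, r + 2))
                for nc in range(max(0, c - 1), min(cols, c + 2))
                if (nr, nc) != (r, c)
            )
            if not has_neighbor:
                cleaned[r][c] = 0  # lone speck: drop it
    return cleaned
```

A real pipeline would run passes like this (plus deskew, comb and box removal, and contrast correction) on the scanned bitmap before it ever reaches the recognition engine.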
Adequate Contextual Recognition
Being able to read the context of a field improves recognition accuracy and overall system throughput. If your process is underperforming, this is likely to be one of the factors. Contextual recognition capabilities can tell the image recognition engine that, because of its position in a field, a character is most likely part of a date, so the engine won’t waste time sorting through other recognition formats or engines (alphanumeric to handprint, for example). Our second factor above, smart form design, also has a bearing on contextual performance: defining field parameters during form definition can help drive better contextual recognition, an extra payoff.
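The mechanism can be sketched in a few lines of Python. In this hypothetical example (the field names, patterns, and candidate format are assumptions, not any vendor's actual API), the engine produces several (text, confidence) hypotheses for a field, and the field type declared in the form definition is used to filter them before falling back to the raw top guess.

```python
import re

# Hypothetical context-aware candidate selection. The form definition
# declares each field's expected format; recognition hypotheses that
# violate it are skipped. Patterns below are illustrative only.
FIELD_PATTERNS = {
    "date": re.compile(r"^\d{2}/\d{2}/\d{4}$"),
    "zip": re.compile(r"^\d{5}(-\d{4})?$"),
    "amount": re.compile(r"^\d+\.\d{2}$"),
}

def pick_candidate(field_type, candidates):
    """Return the highest-confidence hypothesis that fits the field's
    expected format; fall back to the best raw guess if none fit."""
    pattern = FIELD_PATTERNS.get(field_type)
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    if pattern:
        for text, conf in ranked:
            if pattern.match(text):
                return text
    return ranked[0][0]  # no contextual match: fall back to top guess
```

For a date field, a hypothesis like "O1/O2/2024" (letter O misread for zero) is rejected in favor of a lower-confidence candidate that actually fits the date pattern, which is exactly the kind of wasted-effort pruning the paragraph above describes.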
Jim Franklin is with Parascript, online at www.Parascript.com.

#Recognition #formsprocessing #ElectronicRecordsManagement #characterrecognition #ICR #OCR #ScanningandCapture