Work in progress, but most of the core functionality is implemented.
To be able to build this, follow this Gist to set up the environment correctly: https://gist.github.com/skvark/49a2f1904192b6db311a
In short:
Add my repositories containing Tesseract OCR and Leptonica to the build machine targets.
Tesseract OCR is just a plain recognition engine, so Leptonica is used to preprocess the image.
Currently the following steps are performed before the image is passed to the engine for recognition:
- The image is first opened with QImage, its DPI is set to 300, it is rotated according to the device orientation, and the result is saved in JPEG format.
- The JPEG image is loaded with Leptonica and the 32 bpp image is converted to an 8 bpp grayscale image.
- Unsharp masking is applied.
- Local background normalization is performed, followed by binarization with Otsu's algorithm.
- The skew angle is detected and the image is rotated accordingly (Leptonica decides whether rotation is needed).
After these steps the image is passed to Tesseract.
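The binarization step above relies on Otsu's method, which picks the threshold that best separates the foreground and background classes of the grayscale histogram by maximizing the between-class variance. The app uses Leptonica's implementation; the following is only a minimal pure-Python sketch of the idea (a global threshold, whereas Leptonica's local variant works on tiles):

```python
def otsu_threshold(pixels):
    """Pick the 8-bit threshold that maximizes between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)

    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0        # weighted sum of the background class
    weight_bg = 0       # pixel count in the background class
    best_t, best_var = 0, -1.0

    for t in range(256):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        # between-class variance (class weights times squared mean gap)
        var = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal image: dark text pixels (30) on a light page (200).
pixels = [30] * 100 + [200] * 100
print(otsu_threshold(pixels))  # → 30
```

On a clean bimodal histogram like this, any threshold between the two modes separates the classes equally well; the scan returns the first one that attains the maximum variance.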
The results are filtered based on the per-word confidence value, a number between 0 and 100: 0 means Tesseract was not at all sure about the detected word, and 100 means Tesseract is completely confident in it.
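The filtering itself can be sketched as follows, assuming the (word, confidence) pairs have already been read out of Tesseract's result iterator. The threshold of 60 and the sample data are purely illustrative, not the app's actual defaults:

```python
def filter_words(words, min_confidence=60):
    """Keep only words whose confidence meets the threshold.

    `words` is a list of (text, confidence) pairs, with confidence
    in 0-100 as reported by Tesseract per recognized word.
    """
    return [text for text, conf in words if conf >= min_confidence]

# Example: drop low-confidence noise from a recognition result.
result = [("DRINK", 96), ("COFFEE", 91), ("L", 34), ("Do", 88)]
print(filter_words(result, min_confidence=60))  # → ['DRINK', 'COFFEE', 'Do']
```

A higher threshold trades recall for precision: stray marks and misread fragments are dropped, at the cost of occasionally discarding correctly recognized but low-confidence words.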
At some point I will add an informative page that explains the parameters.
Original:
Preprocessed:
Extracted text:
This is a lot of 12 point text to test the
ocr code and see if it works on all types
of file format.
The quick brown dog jumped over the
lazy fox. The quick brown dog jumped
over the lazy fox. The quick brown dog
jumped over the lazy fox. The quick
brown dog jumped over the lazy fox.
D R I N K COFFEE
L Do Stupid Faster
With More Energy