@Bleach It depends on what manipulations you are performing on the image data. My guess is that the letters end up closer together in the input you hand to Tesseract after opening/manipulating the image with OpenCV. I don't know whether this has an impact, but reading the image with OpenCV strips all of its metadata, including the DPI. Without DPI information, Tesseract may interpret the image at the wrong resolution when parsing it for text, which could be why the letters look even closer together.
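If the missing DPI does turn out to matter, one workaround is to read the DPI from the original file and pass it to Tesseract explicitly via its `--dpi` option. A minimal sketch, assuming Python with pytesseract and Pillow installed (the file name and the 300 DPI fallback are placeholders):

```python
import cv2
import pytesseract
from PIL import Image

# cv2.imread() returns raw pixel data only; any DPI metadata in the file is lost.
img = cv2.imread("scan.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for Tesseract

# Recover the DPI from the file's metadata with Pillow (fall back to 300 if absent)
# and hand it to Tesseract so it doesn't have to guess the resolution.
dpi = Image.open("scan.png").info.get("dpi", (300, 300))[0]

text = pytesseract.image_to_string(img, config=f"--dpi {int(dpi)}")
print(text)
```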