2019-11-20 13:25:45 -0600 | received badge | ● Famous Question (source) |
2018-07-13 10:48:34 -0600 | received badge | ● Notable Question (source) |
2017-12-13 00:47:18 -0600 | received badge | ● Popular Question (source) |
2016-01-28 01:17:07 -0600 | received badge | ● Scholar (source) |
2016-01-27 20:01:31 -0600 | received badge | ● Editor (source) |
2016-01-27 19:11:30 -0600 | received badge | ● Supporter (source) |
2016-01-27 19:10:34 -0600 | commented answer | OpenCV Image DPI I see, so why are the regions of the colored boxes so far off from the intended areas when I use the coordinates (in pixels, judging by their 3-digit values) from the other OCR engine as-is? |
2016-01-27 06:12:13 -0600 | received badge | ● Student (source) |
2016-01-27 04:50:24 -0600 | asked a question | OpenCV Image DPI Hello, I have a small OpenCV Python script that covers a rectangle in an image with a solid color, basically covering it up (trying to censor out a photo and some personal details from an ID, to be specific), using coordinates from another OCR engine. The problem is that when I use the coordinates from the other OCR engine's recognition, the white boxes (the solid color used to cover things up) aren't in the correct places. All I know is that the coordinates from the other OCR engine came from a 300 DPI image, but I don't know what DPI OpenCV assumes for its images when dealing with coordinates. Any help? |
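For context on the question above: OpenCV has no notion of DPI at all; it addresses images purely by pixel coordinates. A mismatch like the one described usually means the image loaded into OpenCV is a different resolution than the 300 DPI image the OCR engine processed, so the OCR coordinates must be rescaled by the ratio of the two resolutions. A minimal sketch of that idea (the `scale_box` helper, the DPI values, and the box coordinates are all illustrative, not from the original post; plain NumPy slicing is used here, though `cv2.rectangle(img, pt1, pt2, (255, 255, 255), -1)` would do the same fill on an OpenCV image):

```python
import numpy as np

def scale_box(box, src_dpi, dst_dpi):
    """Scale an (x1, y1, x2, y2) pixel box from src_dpi to dst_dpi."""
    s = dst_dpi / src_dpi
    return tuple(int(round(v * s)) for v in box)

# Hypothetical example: a box reported on a 300 DPI scan,
# applied to the same page rendered at 150 DPI.
box_300 = (120, 240, 360, 300)
box_150 = scale_box(box_300, 300, 150)  # (60, 120, 180, 150)

# Cover the rescaled region with solid white.
img = np.zeros((400, 600, 3), dtype=np.uint8)
x1, y1, x2, y2 = box_150
img[y1:y2, x1:x2] = 255
```

Equivalently, if the two DPI values are unknown, the scale factor can be computed from the pixel dimensions of the two images (e.g. `s = opencv_img_width / ocr_img_width`).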