Detect screen zone with text?

asked 2018-12-21 04:14:54 -0600


Hello OpenCV community!
I'm trying to build a new tool (probably a Compiz plugin) to help visually impaired people. My goal is to improve text readability by optimizing fg/bg contrast. The usual filters are not working well because they are static and will not handle mixed text styles/colors.

My proposal is to implement a pipeline that will:
1 - Detect onscreen text zones with similar foreground/background colors.
2 - Detect the foreground/background colors in each text zone.
3 - Apply in each zone a dedicated color transformation that maps the original fg/bg to the user-defined most readable colors (eg: white on black).

I need your input on how to perform each of these three steps.
Here are some thoughts:
1 - This step is simpler than the general "text segmentation" problem. Text is assumed to be produced by the usual suspects (browsers, IDEs, pdf readers, ...). No need for things like EAST that would probably solve the problem, but at a CPU cost too high for real-time processing of the screen buffer.

2 - I assume that a simple histogram may work (two main peaks for the fg/bg colors). But if you have more robust or efficient algorithms, please let me know.

3 - The naive way (replace original fg/bg with user fg/bg) will probably work, but I fear some nasty side effects with antialiased pixels. Any clever idea?

Thank you for reading me so far. I feel this kind of image processing could help a lot of people, and I'm ready to put in a lot of work to make it happen. But as I'm quite new to OpenCV and image processing, I need help from more experienced coders to do it "the right way"(tm).

Olivier
