# How can I determine the location of an LCD/LED display in an image with OpenCV and Perl?


Hi all, I am very new to OpenCV and indeed to this forum, so my apologies in advance for any unobserved rules, and also for asking a question that may seem lame to most. With that said, my question is: how can I determine the location of an LCD/LED display in an image with OpenCV and Perl? I am trying to dynamically locate the LCD/LED display in an image. My main goal is to read the characters in the image and later convert them to text. I would like to do this in Perl or C++ with OpenCV.

Please find attached the image I would like to read.

Many thanks in advance, Gonxintel


Please give more information: does your LCD/LED display always look similar (maybe post an example image), or would you like to detect every possible LCD/LED display?

(2013-04-08 04:48:54 -0500)

Hi Guanta, thank you so much for your reply. I have edited my initial post to attach the images I would like to read. They are not LCDs per se, but identifying the region where the value is will require identifying the LCD-like region first and then reading the individual segments of the digits. This is where I would like some help: how to identify them dynamically, as I have to process a number of these images.

(2013-04-08 09:46:21 -0500)


Hi all

Thank you for your input/suggestions on my question of how to determine the LCD/LED display in an image.

Figure 1 below is a scratchcard on which we are interested in locating/segmenting the yellowish part, which I refer to as the LCD/LED display. After segmenting the LCD/LED area based on the yellowish color, we can extract the digits present.

                 Figure 1 Scratchcard Image


For this task we will use OpenCV and C++.

OpenCV stores images in BGR (Blue, Green, Red) order, not RGB (Red, Green, Blue) as one might expect. Each pixel of a captured image is 3 bytes (24 bits) of data, split into three channels — Blue, Green and Red — of 8 bits (1 byte) each. One byte can store a value from 0 to 255, which means there are 256 variations each of Blue, Green and Red. These primary colors can be mixed in different proportions to get the desired color, in this case the yellowish LCD/LED display color. Figure 2 shows the RGB color space.
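To make the byte layout concrete, here is a minimal sketch in plain C++ (no OpenCV; the `BgrImage` struct is a hypothetical stand-in for `cv::Mat`) of a 24-bit image stored 3 bytes per pixel in Blue, Green, Red order:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// A sketch of how a 24-bit BGR image sits in memory: 3 bytes per pixel,
// stored in Blue, Green, Red order (the order OpenCV uses), row by row.
struct BgrImage {
    int rows, cols;
    std::vector<std::uint8_t> data;  // rows * cols * 3 bytes

    // channel: 0 = Blue, 1 = Green, 2 = Red; each byte holds 0..255
    std::uint8_t& at(int r, int c, int channel) {
        return data[(r * cols + c) * 3 + channel];
    }
};
```

Setting a pixel to full Green and full Red with zero Blue produces exactly the kind of yellow we are after.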

Figure 2 RGB color space

In this task what is needed is to isolate the yellowish color in order to determine the LCD/LED display. This is referred to as color-based segmentation, also known as thresholding. However, while OpenCV images are captured in BGR format, the BGR format falls short for color-based segmentation tasks. The HSV color space, shown in Figure 3, is more suitable.

Figure 3 HSV

HSV stands for Hue, Saturation, and Value. Hue defines the color component; Saturation defines how strong the color is, in other words how far it is from white; and Value defines the brightness of the color, or how far it is from black. Therefore, unlike RGB, HSV separates the brightness of an image from its color information. This is very useful for the task at hand. It also gives us the advantage of a single number for the color of interest, despite multiple shades of that color.

The HSV value ranges in OpenCV differ from those of other applications such as GIMP. In GIMP, Hue ranges from 0 to 360, Saturation from 0 to 100, and Value from 0 to 100, while in OpenCV, Hue ranges from 0 to 180, Saturation from 0 to 255, and Value from 0 to 255.
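The OpenCV convention can be illustrated with a small helper (a hypothetical function, not part of OpenCV, which does this internally in `cv::cvtColor`) that converts one RGB pixel to HSV using the 8-bit ranges above:

```cpp
#include <algorithm>
#include <cassert>

// Convert one RGB pixel (each channel 0..255) to HSV using OpenCV's
// 8-bit conventions: H in [0,180), S and V in [0,255].
void rgbToHsvOpenCV(int r, int g, int b, int& h, int& s, int& v) {
    int mx = std::max({r, g, b});
    int mn = std::min({r, g, b});
    v = mx;                                    // Value = brightest channel
    s = (mx == 0) ? 0 : 255 * (mx - mn) / mx;  // Saturation scaled to 0..255
    if (mx == mn) { h = 0; return; }           // achromatic (gray)
    double hue;                                // hue in degrees, 0..360
    if (mx == r)      hue = 60.0 * (g - b) / (mx - mn);
    else if (mx == g) hue = 120.0 + 60.0 * (b - r) / (mx - mn);
    else              hue = 240.0 + 60.0 * (r - g) / (mx - mn);
    if (hue < 0) hue += 360.0;
    h = static_cast<int>(hue / 2.0);           // OpenCV halves hue to fit a byte
}
```

For pure yellow (R=255, G=255, B=0) this yields H=30, which falls inside the OpenCV yellow range (22-38) quoted below.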

GIMP Hue values for the colors are: Orange 0-44, Yellow 44-76, Green 76-150, Blue 150-260, Violet 260-320, Red 320-360.

OpenCV Hue values for the colors are: Orange 0-22, Yellow 22-38, Green 38-75, Blue 75-130, Violet 130-160, Red 160-180.

For this task, after experimenting with different values to isolate the yellowish part, the suitable HSV ranges are Hue from 20 to 70 ...
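Such a range test can be written as a simple predicate. Note that only the hue range (20-70) comes from the text above; the saturation and value minimums here are illustrative assumptions, since the original post does not state them:

```cpp
#include <cassert>

// Classify an OpenCV-convention HSV pixel as "yellowish".
// Hue 20..70 is from the text; the S/V minimums (80) are assumed values
// used only to reject washed-out and near-black pixels in this sketch.
bool isYellowish(int h, int s, int v) {
    return h >= 20 && h <= 70 && s >= 80 && v >= 80;
}
```

In OpenCV itself the equivalent operation would be a `cv::inRange()` call with the same lower and upper bounds.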


Thank you for providing some example images. If they all look similar to those, you can identify your LCD region by color, since it is the only region inside a yellow area.

Steps I would do:

1. clean or denoise the source image (e.g. using cv::fastNlMeansDenoising()) or blur it with a Gaussian kernel: cv::GaussianBlur()

2. optional: normalize contrast --> cv::equalizeHist()

3. iterate through your image and identify yellow pixels (a simple if-condition here)

4. get the bounding box of all these yellow pixels, i.e. the min and max x- and y-locations of your yellow points (you could also do this in the for-loop of step 3)

5. pass the region of interest from step 4 to the open-source OCR software tesseract (http://code.google.com/p/tesseract-ocr/) to obtain an OCR result
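Steps 3 and 4 above can be sketched in plain C++, operating on a binary "yellow mask" (1 where a pixel passed the yellow test) instead of a `cv::Mat`:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Scan a binary mask (1 = yellow pixel) and track the min/max x and y
// of all set pixels, giving the bounding box of the yellow region.
struct Box { int minX, minY, maxX, maxY; bool found; };

Box boundingBox(const std::vector<std::vector<int>>& mask) {
    Box b{0, 0, 0, 0, false};
    for (int y = 0; y < (int)mask.size(); ++y)
        for (int x = 0; x < (int)mask[y].size(); ++x)
            if (mask[y][x]) {
                if (!b.found) { b = {x, y, x, y, true}; }
                else {
                    b.minX = std::min(b.minX, x); b.maxX = std::max(b.maxX, x);
                    b.minY = std::min(b.minY, y); b.maxY = std::max(b.maxY, y);
                }
            }
    return b;
}
```

The resulting rectangle is the region of interest you would crop and hand to tesseract in step 5.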

If your LCD does not always look yellowish, then you have several options; here are some ideas:

1. identify the LCD by context: maybe you know that the LCD line is always the second one in your image; then you could give tesseract the whole image, get an OCR of everything, and take the second line as your result

2. if you know it is the largest bounding box, apply the first two steps from above, find connected components (cv::findContours()), then pass the largest bounding box among the connected components to tesseract

3. more advanced: if you know it has the thickest strokes, you could estimate the stroke thickness by computing the distances between the gradients of the strokes

4. also advanced: if this was really just an example image and all your others look completely different, then you need a learning approach; e.g. you could try out the cascade classifier of OpenCV
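Option 2 can be sketched without OpenCV: a 4-connected flood fill (a simple stand-in for `cv::findContours()`) labels the components of a binary mask and keeps the largest, whose bounding box would then go to tesseract:

```cpp
#include <algorithm>
#include <cassert>
#include <queue>
#include <utility>
#include <vector>

// Find 4-connected components of a binary mask via BFS flood fill and
// return the pixel area of the largest one. The mask is taken by value
// because pixels are cleared as they are visited.
int largestComponentArea(std::vector<std::vector<int>> mask) {
    int best = 0;
    int h = (int)mask.size(), w = h ? (int)mask[0].size() : 0;
    for (int sy = 0; sy < h; ++sy)
        for (int sx = 0; sx < w; ++sx) {
            if (!mask[sy][sx]) continue;
            int area = 0;
            std::queue<std::pair<int,int>> q;
            q.push({sx, sy});
            mask[sy][sx] = 0;
            while (!q.empty()) {
                auto [x, y] = q.front(); q.pop();
                ++area;
                const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
                for (int d = 0; d < 4; ++d) {
                    int nx = x + dx[d], ny = y + dy[d];
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h && mask[ny][nx]) {
                        mask[ny][nx] = 0;
                        q.push({nx, ny});
                    }
                }
            }
            best = std::max(best, area);
        }
    return best;
}
```

Tracking min/max coordinates per component (as in the bounding-box sketch above) instead of just the area would give the crop rectangle directly.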


Hi Guanta, thank you so much for your elaborate steps; I will execute them and report back on my success. The steps seem feasible for identifying the area of interest dynamically. As an experiment I cut out the area of interest, "scratchcard_lcd_image_to_read.jpg", attached in my initial post, and passed it to tesseract; unfortunately it didn't yield any results. Is this because I have to denoise the source image, or train tesseract with the font in the image? Once again, thank you so much for your insight; I really appreciate it.

(2013-04-09 01:41:09 -0500)

Well, unfortunately I haven't yet worked with low-level computer vision in OpenCV (I did that in MATLAB), so I can't give you specific advice on functions, but here is a link about the morphological operations: LINK

But let's explain a little more:

In order to use the morphological (or blob) operations, we need to convert your color image to black and white (not grayscale). To do this you should choose a threshold (it could be more than one, depending on the needs).

The point is that, after analyzing the histogram, you could search for a "mode" corresponding to the yellow rectangle (or even to the numbers, if you take only the upper part of the image). Check this: LINK

Then you could make use of the blob operations.
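The histogram idea above can be sketched in plain C++: build a 256-bin histogram of gray values and report the most frequent intensity (the "mode"). A strong peak near the display's brightness is a hint for where to place the black/white threshold:

```cpp
#include <array>
#include <cassert>
#include <vector>

// Build a 256-bin intensity histogram of 8-bit gray values and return
// the mode (the most frequent intensity). In a real pipeline you would
// look for peaks and place the threshold between them.
int histogramMode(const std::vector<int>& grayPixels) {
    std::array<int, 256> hist{};
    for (int p : grayPixels) ++hist[p];
    int mode = 0;
    for (int v = 1; v < 256; ++v)
        if (hist[v] > hist[mode]) mode = v;
    return mode;
}
```

In OpenCV the histogram itself would come from `cv::calcHist()`, and `cv::threshold()` with `THRESH_OTSU` automates the choice of threshold between two modes.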

Hope it helps.


Supposing you always use the same kind of card, you could probably simplify the work by bounding/constraining the problem.

For example, you can arrange for the card's position and size (in the image) to be more or less constant, so that you can take, for example, just the upper part of the image.

Then you should segment the display.

This can be done by using some morphological operations (dilation or closing) to get a mask and then applying this mask to cut out the important part. If you scratch the whole card (so there aren't any near-black points close to the numbers), this could work very well.
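The closing operation mentioned above (a dilation followed by an erosion) can be sketched on a plain binary grid instead of a `cv::Mat`; closing fills small holes in the mask before it is used to cut out the display region:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

using Grid = std::vector<std::vector<int>>;

// One morphological pass with a 3x3 square element: dilation takes the
// max over the neighbourhood, erosion the min. Out-of-bounds neighbours
// are treated as 0, so erosion shrinks the mask at the image border.
Grid morph(const Grid& g, bool dilate) {
    int h = (int)g.size(), w = (int)g[0].size();
    Grid out(h, std::vector<int>(w, 0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int v = dilate ? 0 : 1;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int ny = y + dy, nx = x + dx;
                    int p = (ny >= 0 && ny < h && nx >= 0 && nx < w) ? g[ny][nx] : 0;
                    v = dilate ? std::max(v, p) : std::min(v, p);
                }
            out[y][x] = v;
        }
    return out;
}

// Closing = dilation then erosion: fills pin-holes smaller than the element.
Grid close3x3(const Grid& g) { return morph(morph(g, true), false); }
```

With OpenCV this whole sketch collapses to a single `cv::morphologyEx()` call with `cv::MORPH_CLOSE`.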

Another approach could be segmentation using the histogram of the picture.

Hope this helps.


Hi Geomod

Thank you so much for your reply. I believe you are right: "If you scratch all the card (so there aren't any near black points close to the number) this could work very well" will help tesseract to easily read the individual digits. Kindly explain further the segmentation using the "histogram of the picture" approach. One of the issues I ran into: as an experiment I cut out the area of interest, "scratchcard_lcd_image_to_read.jpg", attached in my initial post, and passed it to tesseract; unfortunately it didn't yield any results. I suppose this is because some of the digits in the images were not clear enough, but surely some of them should have been easily read. Many thanks, Geomod; this gives me a different approach.

(2013-04-09 01:49:09 -0500)



## Stats

Asked: 2013-04-08 04:26:24 -0500

Seen: 6,905 times

Last updated: May 27 '13