David_86's profile - activity

2018-11-13 17:09:09 -0500 commented question What quality can I expect to reach with shape measurement?

Why are you thinking about a stitching error? Stitching images can be very precise (near to pixel accuracy) if you manag

2018-11-12 16:20:42 -0500 commented question What quality can I expect to reach with shape measurement?

As far as I know you can calibrate a camera for a fixed focal length, so once done you can't zoom (zooming means changin

2018-11-06 06:10:11 -0500 commented question What quality can I expect to reach with shape measurement?

The most critical part is clearly your camera resolution. Assuming you're fine with a 0.5 mm precision (what do you mean

2018-11-06 06:07:43 -0500 received badge  Commentator
2018-05-31 08:17:04 -0500 commented question what is the unit of light intensity values in a grayscale image ?

What do you mean by "different lightning conditions"? You'll be measuring the amount of light reflected from the target

2018-05-31 02:32:52 -0500 commented question what is the unit of light intensity values in a grayscale image ?

Doesn't look correct to me since that value is the result of 2 processes: first conversion of the amount of radiation hi

2018-05-21 06:04:33 -0500 commented question How to find pixel per meter

Where does that 0.39 factor come from? You can get a rough idea of the pixel/mm ratio dividing your resolution by the effe

2018-05-14 02:37:35 -0500 commented question Count real size of object in photo

Your formula is never going to work, depth information is lost when 3D real world points get mapped to the 2D image plan

2018-04-03 08:41:59 -0500 commented question Camera correction

@Radu Then forget what I said if your measurements need to be performed across all the FoV. By the way I suggest changin

2018-03-30 07:33:57 -0500 commented question Camera correction

It looks to me as a very difficult path to go by, because you would need to warp the image along concentric spheres with

2018-03-30 02:04:26 -0500 commented question Camera correction

What you see is actually the optical distortion introduced by the lens and it can be corrected through the camera calibr

2018-03-28 09:14:33 -0500 commented question Precision Measurement with Opencv python

Borders in your image are well defined, but still you have 3-4 pixels transition with different grey-values (as far as I

2018-01-02 05:51:24 -0500 commented question I want to find Focal Length of my camera, Should I use Camera Calibration ?

Focal length is an input to the calibration procedure, so you can't use it; otherwise you'd have 1 equation with 2

2017-10-10 22:25:22 -0500 received badge  Notable Question (source)
2017-06-26 07:04:23 -0500 commented question Real World Coordinate of Rotary table in Stereo Calibration

By "position of the rotary table" what do you mean? Its center? If so, why don't you simply calculate the 3D position of a reference point on the chessboard?

2017-03-09 08:01:52 -0500 commented question Image Stitching

You can quite easily stitch together consecutive frames, actually you could even down-sample your frame rate if the camera isn't moving too fast (two consecutive frames will represent almost the same image). This is gonna do your job unless the camera is moving inside the borehole AND rotating at the same time: in this case you'll end up with a "stepped" final image and not a clear rectangle.

2017-02-23 10:47:53 -0500 commented question algorithm to computer vision obstacle detection ??

With a single camera you won't get information on depth AT ALL. Seems like your strategy is gonna have a hard time if you find a small obstacle very close and a bigger one, let's say, a few meters away... how are you gonna handle that situation? I'd rather move to a stereo setup for your purpose...

2017-02-03 01:28:48 -0500 received badge  Popular Question (source)
2016-12-05 06:13:56 -0500 commented question calculate the bottles in fridge

The green and grey bottle caps in the back are barely noticeable in this image; I don't see any chance to count them with images like this one. You could only make a guess. Even if you lift the camera (and you can't lift it too much there) I'm quite sure you'll still have problems with proper illumination, because the back will always be dark, so the cap contours will not be clear.

I think the first critical issue here is to find a proper acquisition setup; then you can think of a strategy to identify the caps, taking perspective distortion into account.

2016-10-20 01:50:22 -0500 commented question How to find shelves on image?

Have you already tried a Hough Line transform, filtering the results along the horizontal direction? Horizontal books will introduce some results too, but you can filter them by the length of the associated contour, since the shelf is the longest horizontal line in the image.

2016-09-02 01:41:02 -0500 commented question I need to get a camera for my project 'development Image processing based sorting conveyor

I've never worked with web cameras but I guess they can somehow be set through their interface software

2016-08-31 01:54:08 -0500 commented question I need to get a camera for my project 'development Image processing based sorting conveyor

If you don't need to work with a video stream, don't worry about fps; to get a sharp image of a moving object just reduce the exposure time. I'd look for a camera where you can control iris and focus, so you can be sure every image will look the same and won't be influenced by any auto-adjusting acquisition function of the camera.

2016-08-02 06:16:28 -0500 commented question Measure shot distance for basketball and golf?

@Crashalot yes, simulating human eyes is what you need to retrieve metric information from the scene. I don't know how that app works, maybe using multiple frames. No matter what strategy you use, from a single picture the information about dimension/depth of objects is lost: think about a coin just in front of the camera and an elephant 50 m away, they'll look almost the same size. With a couple of cameras, knowing the distance between them, you can triangulate every common point in the images and calculate its depth from the disparity. The accuracy of the measurement depends on this: the farther you go from the camera, the smaller the disparity becomes, leading to a less accurate triangulation (this is extremely simplified, just to convey the idea).
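The idea can be put in numbers with the ideal rectified-stereo relation Z = f·B/d (focal length in pixels, baseline, disparity in pixels); the figures below are made up purely for illustration:

```python
# Ideal rectified stereo rig: depth Z = f * B / d
# f: focal length in pixels, B: baseline between the cameras, d: disparity in pixels.
# All numbers are illustrative, not from any real setup.

f_px = 1000.0       # focal length (pixels)
baseline_m = 0.10   # 10 cm between the two cameras

def depth(disparity_px):
    return f_px * baseline_m / disparity_px

near = depth(50.0)  # large disparity -> close point: 2.0 m
far = depth(2.0)    # small disparity -> far point: 50.0 m

# The same 1-pixel disparity error hurts far more at distance:
err_near = depth(50.0) - depth(51.0)  # ~0.04 m
err_far = depth(2.0) - depth(3.0)     # ~16.7 m
```

This is why triangulation accuracy degrades with distance: at 50 m a single pixel of disparity error swallows a third of the measured depth.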

2016-08-01 06:16:58 -0500 commented question Measure shot distance for basketball and golf?

With a single camera and without any known reference objects at fixed distances from the camera, you can't retrieve depth information about the scene, because it is lost when the image is taken.

What do you mean by "shot tracking"? Video analysis or working with a single picture?

2015-10-05 07:55:56 -0500 commented question removing small blob from image?

Check this example: you can set the parameters of the BlobDetector to filter out smaller blobs. Maybe circularity could also be useful in your case; I can't tell without the original image.

2015-10-05 07:00:44 -0500 commented question removing small blob from image?

You can also draw the blobs after filtering by size; you'll remove all the small blobs standing alone.

2015-10-05 06:10:32 -0500 commented question Identifying the left and right camera from 2 images captured at different time intervals

@Sriram Kumar if you can correctly rectify the images you are back to an ideal stereo-rig setup where both cameras are parallel to each other, so a correspondence will lie on the same horizontal (epipolar) line, with the bigger X coordinate in the left camera and the smaller in the right camera.

2015-10-05 02:03:00 -0500 answered a question Shape detection, why try several threshold levels?

From my experience, it helps you avoid those situations where you can't get an isolated blob of points belonging to the shape you'd like to detect.

An example: the first image comes from a threshold of 75, the second one from a threshold of 95. These are sections of two close squares, but in the first case you won't find any result because the borders are still connected at some points.

[image: binarization at threshold 75] [image: binarization at threshold 95]

You can solve this by trying more threshold levels (well, not as many as in the squares.cpp example, because the last 3 iterations remove most of the points in the resulting binary image) or by performing an erode to remove the points between the borders (if they're small enough).

2015-10-05 01:44:01 -0500 commented question Curve detection

@sturkmen not yet, but to be honest I did not spend enough time on that because I've been working on other tasks, so this one went in "stand-by". I will let you know if I get any result.

2015-10-02 09:03:15 -0500 commented question Remove all background and show only words for OCR

If your letters are always black and it's the background that changes colour, you can convert the image to grayscale and perform a binary threshold: everything above the threshold level turns white.

2015-10-01 07:52:38 -0500 commented question Identifying the left and right camera from 2 images captured at different time intervals

Something is missing... the cameras aren't fixed in a reciprocal position, are they? The first 2 images are fine, they look as if a stereo rig has been set up... but in the second couple of images at least one camera has been rotated, because the cup appears in almost the same position while the background changes... Did you try rectifying the images?

2015-09-28 16:41:00 -0500 commented answer eye blinks detection from video frames

Can you post the video or the frame where you can't track it? I think you're blurring the image too much; Size(3,3) will be fine... If you are missing some part of the pupil (because of the reflected LED lights, I suppose) you can try to erode and then dilate the image after thresholding, or adjust the radius range... that should fix the problem, since the pupil is always clearly visible in a close-up image like that.

2015-09-28 09:48:32 -0500 commented answer eye blinks detection from video frames

Edited the answer... in the documentation you will find easy examples of how to use those functions; added links too ;-)

2015-09-28 09:47:02 -0500 edited answer eye blinks detection from video frames

Just a quick try.. RESULT

When the circle cannot be detected in a frame, the eye is blinking.

What I've done in this test:

  1. Use GaussianBlur to smooth the image
  2. Threshold the image to keep only the darkest part; with THRESH_BINARY you can define a range of grey values to use as the filter level
  3. Use HoughCircles to fit a circle in the image. Since the pupil is the only circle-like part of the image, you can define a min/max radius and get a unique circle
  4. Draw the result