
rueynshard's profile - activity

2017-09-30 04:43:18 -0600 received badge  Student (source)
2017-03-30 02:01:55 -0600 answered a question Fisheye lens calibration using OpenCV returning zero valued distortion matrix

Try the omnidir:: namespace instead: http://docs.opencv.org/trunk/db/dd2/n..., and pass omnidir::RECTIFY_PERSPECTIVE as the flag when you undistort/rectify.
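A rough sketch of what I mean, assuming the opencv_contrib ccalib module is built (the wrapper name calibrateOmni is just mine, and objectPoints/imagePoints come from your own chessboard detections):

    #include <opencv2/core.hpp>
    #include <opencv2/ccalib/omnidir.hpp>
    #include <vector>

    // objectPoints / imagePoints are the same chessboard correspondences
    // you would feed to cv::calibrateCamera().
    double calibrateOmni(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                         const std::vector<std::vector<cv::Point2f>>& imagePoints,
                         cv::Size imageSize,
                         cv::Mat& K, cv::Mat& xi, cv::Mat& D)
    {
        std::vector<cv::Mat> rvecs, tvecs;
        cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS,
                                  200, 1e-8);

        // Returns the RMS reprojection error; xi is the extra Mei-model parameter
        return cv::omnidir::calibrate(objectPoints, imagePoints, imageSize,
                                      K, xi, D, rvecs, tvecs, 0, criteria);
    }

    // Later, to get a normal perspective view back:
    //   cv::omnidir::undistortImage(distorted, undistorted, K, D, xi,
    //                               cv::omnidir::RECTIFY_PERSPECTIVE);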

2017-03-20 19:33:46 -0600 answered a question CV CAMERA CALIB

You can try some of the following:

a) Take more chessboard images near the edges of the camera's field of view, and at more extreme angles

b) Use the omnidirectional camera methods - http://docs.opencv.org/trunk/dd/d12/t..., and select RECTIFY_PERSPECTIVE for the rectification type.

c) Use the cv::fisheye camera methods (rough sketch after this list)

d) Perhaps the camera's output has already been slightly undistorted/warped in-camera. If that's the case, see if you can get a raw/unmanipulated image instead.
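For c), a rough sketch of the cv::fisheye call (the wrapper name and flag choice are just an example):

    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <vector>

    // objectPoints / imagePoints are assumed to come from
    // findChessboardCorners() on your own images.
    double calibrateFisheye(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                            const std::vector<std::vector<cv::Point2f>>& imagePoints,
                            cv::Size imageSize,
                            cv::Mat& K, cv::Mat& D)
    {
        std::vector<cv::Mat> rvecs, tvecs;
        int flags = cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC |
                    cv::fisheye::CALIB_FIX_SKEW;

        // Returns the RMS reprojection error; K is 3x3, D holds k1..k4
        return cv::fisheye::calibrate(objectPoints, imagePoints, imageSize,
                                      K, D, rvecs, tvecs, flags,
                                      cv::TermCriteria(cv::TermCriteria::COUNT +
                                                       cv::TermCriteria::EPS,
                                                       100, 1e-6));
    }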

2017-03-20 19:29:10 -0600 received badge  Supporter (source)
2017-03-02 21:04:39 -0600 asked a question What images to use for negative training data for SVM?

I'm currently using HOG (Histogram of Oriented Gradients) to generate feature vectors for each of my training images, and then using this data to train an SVM classifier. Since the SVM classifies the incoming data into one of two classes, I understand I need to provide training images for both classes.

I hope to use this to detect pedestrians, and I have a set of a few hundred 128x64 images of different pedestrians. However, I'm not sure what images to use for the other class, i.e. the negative one. Should I just use images of ambient scenes with no pedestrians in them?
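For context, a rough sketch of my training setup (the file lists, label values, and output file name are just placeholders):

    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/objdetect.hpp>
    #include <opencv2/ml.hpp>
    #include <string>
    #include <vector>

    int main()
    {
        // Default HOGDescriptor window is 64x128, matching the pedestrian crops
        cv::HOGDescriptor hog;

        cv::Mat samples;             // one HOG vector per row (CV_32F)
        std::vector<int> labels;     // +1 = pedestrian, -1 = "other" class

        // Placeholder lists of image paths
        std::vector<std::string> pedestrianFiles, negativeFiles;

        auto addSamples = [&](const std::vector<std::string>& files, int label) {
            for (const auto& f : files) {
                cv::Mat img = cv::imread(f, cv::IMREAD_GRAYSCALE);
                if (img.empty()) continue;
                std::vector<float> descriptor;
                hog.compute(img, descriptor);
                samples.push_back(cv::Mat(descriptor).reshape(1, 1));
                labels.push_back(label);
            }
        };
        addSamples(pedestrianFiles, +1);
        addSamples(negativeFiles, -1);   // the class I'm unsure how to populate

        auto svm = cv::ml::SVM::create();
        svm->setType(cv::ml::SVM::C_SVC);
        svm->setKernel(cv::ml::SVM::LINEAR);
        svm->train(samples, cv::ml::ROW_SAMPLE, cv::Mat(labels));
        svm->save("pedestrian_svm.yml");
        return 0;
    }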

2017-02-21 20:30:28 -0600 commented answer Difference between undistortPoints() and projectPoints() in OpenCV

Thank you! This was really helpful

2017-02-21 20:30:04 -0600 received badge  Scholar (source)
2017-02-21 08:49:24 -0600 asked a question Difference between undistortPoints() and projectPoints() in OpenCV

From my understanding, undistortPoints() takes a set of points on a distorted image, and calculates where their coordinates would be on an undistorted version of the same image. projectPoints() maps a set of object coordinates to their corresponding image coordinates.

However, I am unsure whether projectPoints() maps the object coordinates to image points on the distorted image (i.e. the original image) or onto an undistorted one (where straight lines stay straight)?

Furthermore, the OpenCV documentation for undistortPoints states that 'the function performs a reverse transformation to projectPoints()'. Could you please explain how this is so?
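To make the question concrete, here is the round-trip I'm trying to reason about (the intrinsics and distortion values are made up purely for illustration, and the comments reflect my current understanding):

    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        // Made-up intrinsics and distortion coefficients
        cv::Mat K = (cv::Mat_<double>(3, 3) << 600, 0, 320,
                                               0, 600, 240,
                                               0,   0,   1);
        cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.3, 0.1, 0, 0, 0);

        // One 3D point expressed in the camera frame (identity pose)
        std::vector<cv::Point3f> objectPoints = { {0.1f, 0.2f, 1.0f} };
        cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
        cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);

        // As I understand it, projectPoints() applies the distortion model,
        // so these should be pixel coordinates on the distorted (original) image
        std::vector<cv::Point2f> imagePoints;
        cv::projectPoints(objectPoints, rvec, tvec, K, dist, imagePoints);

        // undistortPoints() with no P matrix returns normalized coordinates
        // (x/z, y/z), which is presumably why it is called the reverse
        // transformation of projectPoints()
        std::vector<cv::Point2f> normalized;
        cv::undistortPoints(imagePoints, normalized, K, dist);

        std::cout << "distorted pixel: " << imagePoints[0] << "\n"
                  << "normalized (expecting ~[0.1, 0.2]): " << normalized[0] << "\n";
        return 0;
    }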