
Nabeel's profile - activity

2018-04-02 15:29:47 -0500 commented question compute confidence value using Hough Transform

My mistake. The question has been edited.

2018-04-02 15:29:32 -0500 edited question compute confidence value using Hough Transform

compute confidence value using Hough Transform Hi All, I am using Hough Transform to detect lines on an image. I want t

2018-04-02 15:11:03 -0500 commented answer compute confidence value using Hough Transform

Sorry, my mistake. I saw it in some discussion on a forum ... So, it means there is no way to associate a confidence val

2018-04-02 05:13:30 -0500 asked a question compute confidence value using Hough Transform

compute confidence value using Hough Transform Hi All, I am using Hough Transform to detect lines on an image. I want t

2018-03-25 04:52:51 -0500 received badge  Enthusiast
2018-03-24 15:43:46 -0500 commented answer detect lines on xray image

Thanks. By the way, I have also added more details to my question.

2018-03-24 15:43:20 -0500 edited question detect lines on xray image

detect lines on xray image Hi, I am looking to detect lines on an x-ray image using openCV. Please see the attached sam

2018-03-24 05:14:00 -0500 asked a question detect lines on xray image

detect lines on xray image Hi, I am looking to detect lines on an x-ray image using openCV. I am aware of Hough transfo

2017-10-14 20:28:21 -0500 received badge  Popular Question (source)
2017-10-02 23:46:15 -0500 received badge  Notable Question (source)
2017-03-20 19:06:19 -0500 commented answer detect point of interest on boundary of an object

But the contour will return one set of points, as the points are connected. How can I infer the point of interest from that?

2017-03-20 16:26:52 -0500 asked a question detect point of interest on boundary of an object


I have boundaries of semicircle- or ellipse-shaped objects. Example images:

[example images]

The boundaries are not smooth and often slightly jagged (when you zoom in). I am looking to detect a point of interest (an x and y location) on these boundaries where there is a definite change in the shape, such as:

[example images]

There can be two outputs:

  1. No point of interest as we cannot find a definite change
  2. Point of interest with x and y location

Currently, I am using Python and OpenCV. I cannot think of an effective and efficient way to solve this problem.

Any guidance in this regard will be really appreciated.
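For what it's worth, one simple way to look for a "definite change" on a jagged boundary (a sketch under the stated assumptions, not a known-good solution; names are illustrative) is to measure the turning angle between the directions entering and leaving each point, taken over a stride of k points to smooth out the jaggedness:

```python
import numpy as np

def point_of_interest(boundary, k=5, angle_thresh=0.8):
    """boundary: sequence of ordered (x, y) points along the curve.
    Returns the (x, y) of the sharpest turn, or None when the turning
    angle everywhere stays below angle_thresh (radians)."""
    pts = np.asarray(boundary, dtype=float)
    n = len(pts)
    turn = np.zeros(n)
    for i in range(k, n - k):
        v1 = pts[i] - pts[i - k]      # incoming direction
        v2 = pts[i + k] - pts[i]      # outgoing direction
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        turn[i] = np.arccos(np.clip(cosang, -1.0, 1.0))
    i_best = int(np.argmax(turn))
    if turn[i_best] < angle_thresh:
        return None                   # output 1: no definite change found
    return tuple(pts[i_best])         # output 2: x, y of the point of interest
```

This mirrors the two outputs described above: None when no definite change is found, otherwise the x and y location of the sharpest change.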

2016-09-27 15:06:02 -0500 asked a question memory deallocation

OpenCV matrices can be declared in various ways. I want to declare OpenCV matrices in a main class and then populate them in member functions.

My question: I believe that OpenCV matrices will be deallocated automatically once the main class object goes out of scope, which means that I do not have to call the release function manually.

Please correct me if I am wrong.


2016-09-18 10:25:48 -0500 received badge  Popular Question (source)
2016-04-26 00:00:31 -0500 asked a question broken image edges with canny operator

I am using the Canny edge detector to detect edges in the input image.

In every input image there can be two objects (a main object and another object inside it), as shown in the sample image. Therefore, I expect to detect two edges in such cases.

[sample image]


I determine the upper and lower thresholds automatically from the input image (using the median and a sigma value). Most of the time Canny works well, but when the image contrast is not very good, edge detection fails, as shown in the following examples. (Note: the outer edge is always detected correctly; the problem occurs with the inner edge.)

[example images]

Canny detects the edge of the outer boundary but fails for the inner object. At the moment, I am using OpenCV with Python. Is there any way I can improve the results of Canny edge detection?

Any help will be really appreciated.
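The median/sigma threshold rule mentioned above is usually written like this (a sketch; the exact formula the poster used is not shown in the question):

```python
import numpy as np

def auto_canny_thresholds(gray, sigma=0.33):
    """Derive lower/upper Canny thresholds from the image median,
    the common median +/- sigma heuristic the question refers to."""
    v = float(np.median(gray))
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return lower, upper
```

When the inner edge disappears at low contrast, it can also help to stretch the contrast first (for example with histogram equalisation or CLAHE) before running Canny.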

2014-07-28 18:47:46 -0500 commented answer use SIFT detector with SURF algorithm for image matching

Both SIFT and SURF require an orientation to compute the descriptor. The SIFT algorithm also provides that information, so what is the problem then?

2014-07-28 18:25:01 -0500 asked a question use SIFT detector with SURF algorithm for image matching


I am trying to match a few training images with query images using features.

I used the SIFT detector to find keypoints in the images. Then I passed those keypoints to the SURF algorithm to compute descriptors. But the matching results are very poor.

On the other hand, if I extract both the keypoints and the descriptors using SURF, then image matching works fine.

Can anyone tell me what I am doing wrong? Any help will be really appreciated.

2014-07-21 21:45:39 -0500 asked a question cannot compile opencv program

Hi Everyone,

I had OpenCV 2.4 on Ubuntu 12.04, and my C++ code was working fine.

Today I installed OpenCV 2.4.9 on the same computer, and my code now gives errors.

For example, I run the following basic command in C++:

Mat image1 = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);

And I get the following errors:

src/main.cpp:25:37: error: ‘CV_LOAD_IMAGE_GRAYSCALE’ was not declared in this scope
src/main.cpp:25:60: error: ‘imread’ was not declared in this scope

But I can create OpenCV Mat objects without any problem.

I also checked the output of the pkg-config opencv --libs and pkg-config opencv --cflags commands. The output is:

-I/usr/local/include/opencv -I/usr/local/include /usr/local/lib/lib /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ .......

I don't know what's going wrong; everything was fine before.

I suspect it has something to do with the opencv2 module, which cannot be loaded.

Please help me out.


2013-12-17 14:56:28 -0500 asked a question Gradient Location and Orientation Histogram


I am comparing the performance of different features. I am wondering if anyone knows of freely available source code for GLOH, in OpenCV, C++, or MATLAB.

Any help will be really appreciated.

2013-11-14 20:53:07 -0500 asked a question knn matcher fails

I am extracting ORB features from 1500 images, i.e. 200 features from every training image. I store the features in YML files and load them into one matrix. When I apply knn search during image matching, I get an "Assertion Error". When I use a small number of training images, it works perfectly fine. This shows that knn search fails with large feature sets (here 1500 × 200 = 300,000 descriptors). How can I resolve this issue?
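One common workaround for memory-related failures on very large descriptor sets (a sketch, not from the original post; Euclidean distance is used for simplicity, whereas ORB's binary descriptors would normally be matched with Hamming distance) is to run the knn search in query batches, so the full distance matrix never has to fit in memory at once:

```python
import numpy as np

def knn_batched(train, queries, k=2, batch=256):
    """Brute-force k-nearest-neighbour search, processing queries in
    batches so only a (batch, N_train) distance matrix exists at a time."""
    idxs = []
    for start in range(0, len(queries), batch):
        q = queries[start:start + batch]
        # squared Euclidean distances, shape (batch, N_train)
        d2 = ((q[:, None, :] - train[None, :, :]) ** 2).sum(-1)
        idxs.append(np.argsort(d2, axis=1)[:, :k])
    return np.vstack(idxs)
```

The same idea applies with OpenCV's matchers: keep the trained index, but call the knn match per batch of query descriptors and merge the results.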

2012-10-01 00:43:24 -0500 commented answer pose estimation using RANSAC

Hi, one thing I am wondering: can I use 0 for both cx and cy, or should it be the centre of the image?

2012-09-22 14:59:41 -0500 received badge  Editor (source)
2012-09-22 02:45:24 -0500 asked a question camera calibration opencv error

I am doing camera calibration using OpenCV, with the same code given in the OpenCV cookbook.

I take pictures of a chessboard with my smartphone and then run the OpenCV calibration program. The program worked for only one set of images, where I had a very large chessboard (size 30 x 30). It does not work for other sets of images; I get the runtime error "Assertion failed <ncorners> =0 . . .".

I don't know what is going wrong in my code. The code is as follows:

class CameraCalibrator {
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    // square length
    float squareLength;
    // output matrices
    cv::Mat cameraMatrix; // intrinsic
    cv::Mat distCoeffs;
    // flag to specify how calibration is done
    int flag;
    // used in image undistortion
    cv::Mat map1, map2;
    bool mustInitUndistort;

public:
    CameraCalibrator() : flag(0), squareLength(36.0f), mustInitUndistort(true) {}

    int addChessboardPoints(const std::vector<std::string>& filelist, cv::Size& boardSize) {
        std::vector<cv::Point2f> imageCorners;
        std::vector<cv::Point3f> objectCorners;

        // initialize the chessboard corners in the chessboard reference frame
        // (the 3D scene points, z = 0)
        for (int i = 0; i < boardSize.height; i++) {
            for (int j = 0; j < boardSize.width; j++) {
                objectCorners.push_back(cv::Point3f(i * squareLength, j * squareLength, 0.0f));
            }
        }

        // 2D image points:
        cv::Mat image; // to contain a chessboard image
        int successes = 0;

        for (std::vector<std::string>::const_iterator itImg = filelist.begin();
             itImg != filelist.end(); ++itImg) {
            image = cv::imread(*itImg, CV_LOAD_IMAGE_GRAYSCALE);

            bool found = cv::findChessboardCorners(image, boardSize, imageCorners);

            cv::drawChessboardCorners(image, boardSize, imageCorners, found);
            cv::cornerSubPix(image, imageCorners, cv::Size(5, 5), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS,
                                              30, 0.1));

            // if we have a good board, add it to our data
            if (imageCorners.size() == (size_t)boardSize.area()) {
                addPoints(imageCorners, objectCorners);
                successes++;
            }
        }
        return successes;
    }

    void addPoints(const std::vector<cv::Point2f>& imageCorners,
                   const std::vector<cv::Point3f>& objectCorners) {
        imagePoints.push_back(imageCorners);   // 2D image points from one view
        objectPoints.push_back(objectCorners); // corresponding 3D scene points
    }

    double calibrate(cv::Size& imageSize) {
        mustInitUndistort = true;
        std::vector<cv::Mat> rvecs, tvecs;
        return cv::calibrateCamera(objectPoints, // the 3D points
                                   imagePoints,  // the image points
                                   imageSize,
                                   cameraMatrix, // output camera matrix
                                   distCoeffs,   // output distortion coefficients
                                   rvecs, tvecs, // rotations and translations
                                   flag);
    }

    void remap(const cv::Mat& image, cv::Mat& undistorted) {
        std::cout << cameraMatrix;
        if (mustInitUndistort) { // called once per calibration
            cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                                        cv::Mat(), image.size(), CV_32FC1, map1, map2);
            mustInitUndistort = false;
        }
        // apply mapping functions
        cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR);
    }
};

int main() {
    CameraCalibrator calibrateCam;
    std::vector<std::string> filelist;
    char buff[100];

    for (int i = 0; i < 21; i++) {
        // (file-name generation elided in the original post)
    }

    cv::Size boardSize(4, 3);
    double calibrateError;
    int success;
    success = calibrateCam.addChessboardPoints(filelist, boardSize);
    return 0;
}
In the CameraCalibrator class, it fails on the findChessboardCorners line.

I think it cannot find the corners. Please help me with this. One sample chessboard image is as follows; it fails on this image: [sample image]
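A frequent cause of findChessboardCorners failing is passing the number of squares instead of the number of inner corners in boardSize. The 3D object points for the inner-corner grid can be generated like this (a NumPy sketch; the function name is illustrative):

```python
import numpy as np

def chessboard_object_points(board_cols, board_rows, square_length):
    """3D points of the inner chessboard corners in the board frame (z = 0).
    board_cols x board_rows must be the INNER corner count, e.g. a board
    with 5 x 4 squares has 4 x 3 inner corners."""
    pts = np.zeros((board_rows * board_cols, 3), dtype=np.float32)
    # (x, y) grid of inner corners, scaled by the physical square length
    grid = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2)
    pts[:, :2] = grid * square_length
    return pts
```

With the 4 x 3 boardSize used above, the detector will only succeed if the photographed board actually has 5 x 4 squares; a mismatch makes every image fail, which matches the symptom described.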

2012-09-22 02:43:21 -0500 received badge  Scholar (source)
2012-09-19 23:24:26 -0500 received badge  Student (source)
2012-09-19 19:17:44 -0500 asked a question pose estimation using RANSAC

Hi Everyone,

As I posted before, I had a problem with solvePnPRansac pose estimation. I solved that issue; I was doing something wrong in the mapping of my data.

I generate vectors of 2D points and corresponding 3D points (the top 20 matches). Then I build a camera matrix [fx 0 cx; 0 fy cy; 0 0 1] and assume the distortion coefficients are zero. Then I apply solvePnPRansac to estimate the pose and get inliers. I use an error threshold of 10 in the RANSAC function and run it for 200 iterations.

From the pose, I reproject my points back. Some reprojected points land very far from the actual image points. Please see the attached figure.

So I am wondering if there is any step needed before calling RANSAC to ensure good results.

Waiting for replies. [figure]
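A useful sanity check before blaming RANSAC is to compute the reprojection error directly from the estimated pose (a sketch; K, R, t here denote the intrinsic matrix, rotation matrix, and translation vector, not the poster's variable names):

```python
import numpy as np

def reprojection_errors(object_pts, image_pts, K, R, t):
    """Project 3D points with the pinhole model x = K [R|t] X and
    return the pixel distance to the measured image points."""
    X = np.asarray(object_pts, float)          # (N, 3)
    cam = R @ X.T + t.reshape(3, 1)            # points in the camera frame
    proj = K @ cam
    proj = (proj[:2] / proj[2]).T              # (N, 2) pixel coordinates
    return np.linalg.norm(proj - np.asarray(image_pts, float), axis=1)
```

Points whose error exceeds the RANSAC threshold (10 pixels here) should not be counted as inliers; if they are, the pose or the camera matrix being passed in is suspect.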

2012-09-05 21:30:26 -0500 asked a question problem in my solvePnPRANSAC code

Hi Everyone,

I am having a problem with solvePnPRansac. I have 3D data obtained from Bundler. I match 2D points (from query images) with the 3D points. Once I have the 2D-to-3D correspondences, I pick the top 50 matches and estimate the pose using the OpenCV function.

Sometimes my 2D points match wrongly, i.e. against a different 3D model. In such cases, ideally I should not get any inliers, but I do. Even with correct matches, very few inliers are returned.

I don't know what I am doing wrong. The code snippet is as follows:

    Mat op = Mat(modelPoints); // 3D model points
    Mat ip = Mat(imagePoints); // 2D image points

    // defining the camera matrix
    double _cm[9] = {FOCAL_LENGTH, 0, 1,
                     0, FOCAL_LENGTH, 1,
                     0, 0, 1};
    camMatrix = Mat(3, 3, CV_64FC1, _cm);

    rvec = Mat(rv);
    tvec = Mat(tv);

    double _dc[] = {0, 0, 0, 0};
    solvePnPRansac(op, ip, camMatrix, Mat(1, 4, CV_64FC1, _dc), rvec, tvec);

Please guide me on what I am doing wrong. The focal length is kept at 2960 in my experiments.
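As raised in the earlier comment about cx and cy, the principal point should normally sit at the image centre rather than at 0 or 1 as in the camera matrix above. A minimal sketch of building the intrinsic matrix that way (illustrative, not the poster's code):

```python
import numpy as np

def camera_matrix(focal_length, image_width, image_height):
    """Pinhole intrinsic matrix with the principal point (cx, cy)
    placed at the image centre."""
    cx = (image_width - 1) / 2.0
    cy = (image_height - 1) / 2.0
    return np.array([[focal_length, 0.0, cx],
                     [0.0, focal_length, cy],
                     [0.0, 0.0, 1.0]])
```

With cx = cy = 1 as in the snippet, every reprojection is shifted by roughly half the image size, which by itself can explain both the spurious inliers and the low inlier counts.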