2018-04-02 15:29:47 -0600 | commented question | compute confidence value using Hough Transform My mistake. Question is edited |
2018-04-02 15:29:32 -0600 | edited question | compute confidence value using Hough Transform compute confidence value using Hough Transform Hi All, I am using Hough Transform to detect lines on an image. I want t |
2018-04-02 15:11:03 -0600 | commented answer | compute confidence value using Hough Transform Sorry, my mistake; I saw it in a discussion on a forum. So it means there is no way to associate a confidence val |
2018-04-02 05:13:30 -0600 | asked a question | compute confidence value using Hough Transform compute confidence value using Hough Transform Hi All, I am using Hough Transform to detect lines on an image. I want t |
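For the truncated question above: most OpenCV Python builds do not expose the Hough accumulator votes, but a confidence proxy can be computed as the fraction of edge pixels that support each detected segment. A minimal numpy-only sketch; the edge map, segment endpoints, and the helper name line_confidence are all illustrative assumptions, not part of the original question:

```python
import numpy as np

def line_confidence(edge_map, p1, p2, n_samples=100):
    # Fraction of points sampled along the segment p1 -> p2 that land on
    # edge pixels; acts as a [0, 1] confidence score for a Hough line.
    xs = np.linspace(p1[0], p2[0], n_samples).round().astype(int)
    ys = np.linspace(p1[1], p2[1], n_samples).round().astype(int)
    h, w = edge_map.shape
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    if not inside.any():
        return 0.0
    return float((edge_map[ys[inside], xs[inside]] > 0).mean())

# Synthetic edge map: one horizontal edge at row 50, columns 10..89
edges = np.zeros((100, 100), dtype=np.uint8)
edges[50, 10:90] = 255

print(line_confidence(edges, (10, 50), (89, 50)))  # fully supported -> 1.0
print(line_confidence(edges, (10, 40), (89, 60)))  # diagonal, weak support
```

In practice the edge map would be the same Canny output fed to HoughLinesP, so segments that trace real edges score near 1.0 and spurious votes score low.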
2018-03-25 04:52:51 -0600 | received badge | ● Enthusiast |
2018-03-24 15:43:46 -0600 | commented answer | detect lines on xray image Thanks. By the way, I have also added more details to my question |
2018-03-24 15:43:20 -0600 | edited question | detect lines on xray image detect lines on xray image Hi, I am looking to detect lines on an x-ray image using openCV. Please see the attached sam |
2018-03-24 05:14:00 -0600 | asked a question | detect lines on xray image detect lines on xray image Hi, I am looking to detect lines on an x-ray image using openCV. I am aware of Hough transfo |
2017-10-14 20:28:21 -0600 | received badge | ● Popular Question (source) |
2017-10-02 23:46:15 -0600 | received badge | ● Notable Question (source) |
2017-03-20 19:06:19 -0600 | commented answer | detect point of interest on boundary of an object But the contour will return one set of points, since the points are connected. How do I infer the point of interest from that? |
2017-03-20 16:26:52 -0600 | asked a question | detect point of interest on boundary of an object Hi, I have boundaries of semi-circle or ellipse-shaped objects. Example images are
The boundaries are not smooth and are often slightly jagged when you zoom in. I am looking to detect a point of interest (location x and y) on these boundaries where there is a definite change in the shape, such as
There can be two outputs:
Currently, I am using Python and OpenCV. I cannot think of an effective and efficient way to solve this problem. Any guidance in this regard will be really appreciated |
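One numpy-only approach to the boundary question above is to measure the turning angle at each boundary point using neighbors k steps away, which suppresses pixel-level jaggedness; the shape-change point is then a peak in that angle. A sketch on a synthetic L-shaped boundary; the function name, k value, and test boundary are illustrative assumptions:

```python
import numpy as np

def turning_angles(points, k=5):
    # Angle between the incoming and outgoing directions at each interior
    # point, using neighbors k steps away to smooth out jagged boundaries.
    pts = np.asarray(points, dtype=float)
    v1 = pts[k:-k] - pts[:-2 * k]   # incoming direction at point i + k
    v2 = pts[2 * k:] - pts[k:-k]    # outgoing direction at point i + k
    ang = np.arctan2(v2[:, 1], v2[:, 0]) - np.arctan2(v1[:, 1], v1[:, 0])
    return np.abs((ang + np.pi) % (2 * np.pi) - np.pi)  # wrap to [0, pi]

# Synthetic boundary: two straight runs meeting at a right-angle corner
xs = np.concatenate([np.arange(0, 50), np.full(50, 49)])
ys = np.concatenate([np.zeros(50), np.arange(1, 51)])
boundary = np.stack([xs, ys], axis=1)

angles = turning_angles(boundary, k=5)
corner = int(np.argmax(angles)) + 5     # offset back to boundary indexing
print(boundary[corner])                 # the corner point (49, 0)
```

On a real contour from findContours, a single argmax gives one point of interest; thresholding the angle array instead (e.g. everything above some radian cutoff) yields the second kind of output, a list of candidate points.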
2016-09-27 15:06:02 -0600 | asked a question | memory deallocation OpenCV matrices can be declared in various ways. I am looking to declare OpenCV matrices in a main class and then populate them in member functions. My question: I believe that OpenCV matrices will be deallocated automatically once the main class object goes out of scope, which means that I do not have to manually call the release function on OpenCV matrices. Please correct me if I am wrong. Thanks |
2016-09-18 10:25:48 -0600 | received badge | ● Popular Question (source) |
2016-04-26 00:00:31 -0600 | asked a question | broken image edges with canny operator I am using the Canny edge detector to detect edges in the input image. In every input image there can be two objects (a main object and another object inside it), as shown in the sample image, so I am supposed to detect two edges in such scenarios. I determine the upper and lower thresholds automatically from the input image (using median and sigma). Most of the time Canny works well, but sometimes, when the contrast of the image is not very good, edge detection fails, as shown in the following examples (NOTE: the outer edge is always detected correctly; the problem occurs with the inner edge)
Canny detected the edge for the outer boundary but failed for the inner object. At the moment I am using OpenCV with Python. Is there any way I can improve the results of Canny edge detection? Any help will be really appreciated |
2014-07-28 18:47:46 -0600 | commented answer | use SIFT detector with SURF algorithm for image matching Both SIFT and SURF require an orientation to compute a descriptor, and the SIFT algorithm also provides that information, so what is the problem then? |
2014-07-28 18:25:01 -0600 | asked a question | use SIFT detector with SURF algorithm for image matching Hi, I am trying to match a few training images with query images using features. I have used the SIFT detector to find keypoints in the images. Then I passed that keypoint structure to the SURF algorithm to compute descriptors, but the matching results are very poor. On the other hand, if I extract both keypoints and descriptors using the SURF algorithm, then image matching works fine. Can anyone tell me what I am doing wrong? Any help will be really appreciated |
2014-07-21 21:45:39 -0600 | asked a question | cannot compile OpenCV program Hi Everyone, I had OpenCV 2.4 on Ubuntu 12.04 and my C++ code was working fine. Today I installed OpenCV 2.4.9 on the computer, and now my code gives me errors. For example, I run the following basic command in C++:
And I get following errors:
But I can create OpenCV Mat objects without any problem. I also checked the outputs of the pkg-config opencv --libs and pkg-config opencv --cflags commands. The outputs are:
I don't know what is going wrong; everything was fine before. I suspect it is something to do with the opencv2 module, which cannot be loaded. Please help me out. Thanks |
2013-12-17 14:56:28 -0600 | asked a question | Gradient Location and Orientation Histogram Hi, I am comparing the performance of different features. I am wondering if anyone knows of free source code for GLOH, available in OpenCV, C++, or MATLAB. Any help will be really appreciated. |
2013-11-14 20:53:07 -0600 | asked a question | knn matcher fails I am extracting ORB features from 1500 images, i.e. 200 features from every training image. I store the features in YML files and load them into one matrix. When I apply kNN search during image matching, I get an "Assertion Error". When I use a small number of training images, it works perfectly fine. This shows that kNN is failing with large feature sets such as 2 M. How can I resolve this issue? |
2012-10-01 00:43:24 -0600 | commented answer | pose estimation using RANSAC Hi, one thing I am wondering: can I use 0 for both cx and cy, or should it be the centre of the image? |
2012-09-22 14:59:41 -0600 | received badge | ● Editor (source) |
2012-09-22 02:45:24 -0600 | asked a question | camera calibration opencv error I am doing camera calibration using OpenCV, with the same code given in the "Cook book programming". I take pictures of a chessboard with my smartphone and then use the OpenCV program to do the camera calibration. The program worked for only one set of images, where I had a very large chessboard (size 30 x 30). It does not work for other sets of images, and I get the run-time error "Assertion failed <ncorners> =0 . . .". I don't know what is going wrong in my code. The code is as follows: In the camera calibration class it fails on the findChessboardCorners line; I think it cannot find the corners. Please help me with that. One sample chessboard image is as follows; it fails on this image . . . |
2012-09-22 02:43:21 -0600 | received badge | ● Scholar (source) |
2012-09-19 23:24:26 -0600 | received badge | ● Student (source) |
2012-09-19 19:17:44 -0600 | asked a question | pose estimation using RANSAC Hi Everyone, As I posted before, I had a problem with solvePnPRansac pose estimation. I solved that issue; I was doing something wrong in the mapping of my data. I generate vectors of 2D points and corresponding 3D points (top 20 matches). Then I build a camera matrix = [fx 1 cx; 1 fy cy; 0 0 1] and assume the distortion coefficients are zero. Then I apply solvePnPRansac to estimate the pose, and I get inliers. I am using an error threshold of 10 in the RANSAC function and run it for 200 iterations. From the pose, I reproject my points back. Some reprojected points come out very far from the actual image points; please see the attached figure. So I am wondering if there is any step needed before calling RANSAC to ensure good results. Waiting for replies . . . |
2012-09-05 21:30:26 -0600 | asked a question | problem in my solvePnPRANSAC code Hi Everyone, I am having a problem with solvePnPRansac. I have 3D data obtained from Bundler from images. I match 2D points (query images) with the 3D points. Once I have the corresponding 2D-to-3D points, I pick the top 50 matches and estimate the pose using the OpenCV function. Sometimes my 2D points match wrongly, i.e. against a different 3D model. In such cases, ideally I should not get any inliers, but I do get inliers. In the case of correct matches, very few inliers come back as well. Mat op = Mat(modelPoints); //3d model points Mat ip = Mat(imagePoints); //2d image points //defining the camera matrix double _cm[9] = {FOCAL_LENGTH, 0, 1, 0, FOCAL_LENGTH, 1, 0, 0, 1 }; camMatrix = Mat(3,3,CV_64FC1,_cm); rvec = Mat(rv); tvec = Mat(tv); double _dc[] = {0,0,0,0}; Please guide me on what I am doing wrong. The focal length is kept at 2960 in my experiments. |