pistorinoj's profile - activity

2020-10-02 01:15:25 -0600 received badge  Notable Question (source)
2019-12-10 07:05:51 -0600 received badge  Famous Question (source)
2019-07-11 06:37:17 -0600 received badge  Notable Question (source)
2019-04-17 11:23:29 -0600 received badge  Notable Question (source)
2017-12-22 04:20:47 -0600 received badge  Notable Question (source)
2017-11-23 03:51:50 -0600 received badge  Famous Question (source)
2017-06-30 11:15:28 -0600 received badge  Popular Question (source)
2017-02-10 06:15:20 -0600 received badge  Popular Question (source)
2017-01-07 09:11:33 -0600 received badge  Popular Question (source)
2017-01-03 07:13:03 -0600 received badge  Notable Question (source)
2016-06-04 20:49:34 -0600 received badge  Notable Question (source)
2016-02-22 12:47:46 -0600 received badge  Popular Question (source)
2016-01-27 09:26:58 -0600 received badge  Taxonomist
2016-01-23 01:49:16 -0600 received badge  Popular Question (source)
2015-11-25 11:37:59 -0600 received badge  Popular Question (source)
2015-04-21 08:06:02 -0600 received badge  Self-Learner (source)
2015-04-21 08:03:35 -0600 answered a question cornerSubPix Exception

I figured this out and it was a silly mistake. I was storing the points found by findChessboardCorners in one vector (corners) but passing a different, empty vector (pointBuf) to the cornerSubPix call. cornerSubPix apparently does not accept an empty vector. I am still not sure why my try/catch block did not report this.
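
For what it is worth, a minimal sketch of the corrected flow, passing the same vector that findChessboardCorners filled on to cornerSubPix (variable names follow the code in the question below, and OpenCV 2.4.x constants are assumed):

    std::vector<cv::Point2f> corners;
    bool found = cv::findChessboardCorners(tImage, board_sz, corners,
        CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_NORMALIZE_IMAGE | CV_CALIB_CB_FAST_CHECK);

    if (found)
    {
        cv::cvtColor(tImage, grayImage, CV_BGR2GRAY);
        // Refine the corners that were just found -- not a separate, empty buffer.
        cv::cornerSubPix(grayImage, corners, cv::Size(11, 11), cv::Size(-1, -1),
            cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
        imagePoints.push_back(corners);
    }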

2015-04-12 10:40:57 -0600 commented question cornerSubPix Exception

"External component has thrown an exception."

2015-04-12 10:20:15 -0600 asked a question cornerSubPix Exception

I am attempting to calibrate a Logitech c930 camera and keep getting an exception when I follow what I think is the sample calibration code.

I am using OpenCV 2.4.9 with VS2012 c++/cli on a Win 8.1 machine. I have attempted to follow the tutorial code at: C:\opencv\sources\samples\cpp\tutorial_code\calib3d\camera_calibration\camera_calibration.cpp.

My code looks like:

    while(successes<CALIBRATE_NUMBER_OF_BOARDS_TO_MATCH && totalframes<CALIBRATE_TOTAL_FRAMES)
    {
        // check for new image
        if(CameraVI->isFrameNew(CameraNumber))
        {
            vector<Point2f> pointBuf;
            TermCriteria criteria = TermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 30, 0.1 );

            totalframes++;

            // load the image from the buffer
            CameraVI->getPixels(CameraNumber, LdataBuffer, false, true);

            // convert it to a mat
            cv::Mat tImage(Images->Height, Images->Width, CV_8UC3, LdataBuffer, Mat::AUTO_STEP);

            bool found = findChessboardCorners(tImage, board_sz, corners, CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FAST_CHECK | CV_CALIB_CB_NORMALIZE_IMAGE);

            if(found)
            {
                successes++;
                cvtColor(tImage,grayImage,CV_BGR2GRAY);
                cornerSubPix( grayImage, pointBuf, Size(11,11), Size(-1,-1), criteria);
                imagePoints.push_back(pointBuf);
            }

            drawChessboardCorners( tImage, board_sz, Mat(pointBuf), found );
        }
        Sleep(35);
    }

When I run this, I consistently get an exception on the cornerSubPix call. I know that the incoming image is correct (it is 1920 x 1080) since I can display it on screen. Also, pointBuf appears to be correctly defined and I can access it before the cornerSubPix call. One thing I do notice is that some examples use CV_BGR2GRAY for the color conversion while others use CV_RGB2GRAY; I have tried both and still get the exception. Some examples also use a 5x5 window instead of an 11x11 one; I have tried both of those as well and still get the exception.

Any help greatly appreciated.

2015-01-01 06:39:02 -0600 received badge  Enthusiast
2014-12-28 18:29:14 -0600 commented answer Camera Calibration Logitech c930

Thanks. I will check out CV_CALIB_FIX_PRINCIPAL_POINT. Looking at the docs, it seemed like it might make sense to set that. Help me understand your comment about the obj-Vector: I am actually not sure where in the code the second point is being set to (0.1,1.0). Is it the cornerSubPix line? If so, that line is copied directly from the tutorial link above. In any event, how should I change it? Finally, I do not know how many OpenCV programmers there are in Burma, but my guess is not many. Thanks

2014-12-27 12:31:07 -0600 asked a question Camera Calibration Logitech c930

I am using OpenCV 2.4.9 with VS2012 c++/cli on a Win8.1 machine.

I am attempting to calibrate a Logitech c930 HD webcam and am getting some perplexing results.

In particular:

1) the resulting intrinsic matrix appears off; and

2) findChessboardCorners does not recognize boards placed relatively close (within 2-3 squares) to the horizontal edges of the image;

I am using findChessboardCorners looking for 20 or more matches with a 10 X 7 board where each square is 1". The chessboard is placed about 18 inches from the camera and I have measured the squares which are printed on standard paper and placed flat. The auto-zoom and focus are both turned off before capturing the images and the input image is 1920 x 1080.

The resulting intrinsic matrix is (e.g.):

9028.981    0            0
0           14583.126    0
959.5       539.5        1

I get an RMS of 0.6523 and an AVG of 0.6523.

As I understand it, the matrix should not have the values at 0,2 and 1,2 locations (i.e., the 959 and 539 values). In addition the values at 2,0 and 2,1 should be the principal point of the image rather than 0,0.

My code has attempted to follow the example at: http://docs.opencv.org/doc/tutorials/...

Here is a code sample.

int totalframes = 0;
int numCornersHor = CALIBRATE_CORNERS_HORIZONTAL;
int numCornersVer = CALIBRATE_CORNERS_VERTICAL;
int numSquares = numCornersHor * numCornersVer;
Size board_sz = Size(numCornersHor,numCornersVer);
vector<vector<Point3f>> object_points;
vector<vector<Point2f>> image_points;
vector<Point2f> corners;
int successes = 0;
vector<Point3f> obj;
Mat tImage,tImage2;
Mat grayImage;
vector<Mat> rvecs;
vector<Mat> tvecs;
vector<float> reprojErrs;
double totalAvgErr;

for(int j=0;j<numSquares;j++)
    obj.push_back(Point3f((1.0*j)/(1.0*numCornersHor),j%numCornersHor,0.0f));

while(successes<CALIBRATE_NUMBER_OF_BOARDS_TO_MATCH && totalframes<CALIBRATE_TOTAL_FRAMES)
{
    // check for new image
    if(CameraVI->isFrameNew(CameraNumber))
    {
        totalframes++;

        // load the image from the buffer
        CameraVI->getPixels(CameraNumber, LdataBuffer, false, true);

        // convert it to a mat
        cv::Mat tImage(Images->Height, Images->Width, CV_8UC3, LdataBuffer, Mat::AUTO_STEP);
        cvtColor(tImage,grayImage,CV_RGB2GRAY);
        bool found = findChessboardCorners(grayImage, board_sz, corners, CV_CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE | CALIB_CB_FAST_CHECK);

        if(found)
        {
            cornerSubPix(grayImage, corners, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1));
            image_points.push_back(corners);
            object_points.push_back(obj);
            successes++;
        }
        Sleep(500);
    }
    double rms = calibrateCamera(object_points, image_points, grayImage.size(), intrinsic, distCoeffs, rvecs, tvecs,CV_CALIB_FIX_PRINCIPAL_POINT);
    totalAvgErr = computeReprojectionErrors(object_points, image_points, rvecs, tvecs, intrinsic, distCoeffs, reprojErrs);
}

Any idea on what I am doing wrong? Thanks for any help.
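
For comparison with the obj loop above, the conventional tutorial-style object-point grid is generated in row/column order and scaled by the physical square size; a minimal sketch (the 1.0 inch square size is taken from the setup described above, variable names match the code):

    std::vector<cv::Point3f> obj;
    const float squareSize = 1.0f;  // squares are 1" on a side, per the setup described above

    // One 3-D point per inner corner, row by row; Z = 0 because the board is planar.
    for (int row = 0; row < numCornersVer; row++)
        for (int col = 0; col < numCornersHor; col++)
            obj.push_back(cv::Point3f(col * squareSize, row * squareSize, 0.0f));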

2014-12-09 14:08:09 -0600 marked best answer FileStorage::READ runtime exception

I am using OpenCV 2.4.6 with VS2012 c++/cli in Windows 8 and trying to simply read a matrix from a file. At runtime, I am crashing when I try and open the file and I am getting an error saying "System.Runtime.InteropServices.SEHException" - "External component has thrown an exception".

I have two questions: 1) what is causing this error; and 2) why is try/catch not handling this error?

My code looks like:

private: System::Void buttonLoadStyle_Click(System::Object^  sender, System::EventArgs^  e) {
    FileStorage fs;
    try
    {
        fs.open("SVStyle0.xml",FileStorage::READ);
    }
    catch(cv::Exception& e)
    {
        const char * err_msg = e.what();
    }
    catch(...)
    {
        System::Windows::Forms::MessageBox::Show("Load Style Exception","Rune Time Error!",MessageBoxButtons::OK,MessageBoxIcon::Error);
    }

    if(fs.isOpened())
    {
        // do something
    }
};

I get the exception when the fs.open line tries to open the file. Further, the exception is not caught by my try/catch blocks.

Edit: When I remove the ".xml" extension, I get past the fs.open line but, of course, the file is not found. It seems like it might be something to do with parsing the filename.

Any help greatly appreciated. Thanks
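
For reference, once the file does open, reading a matrix back is just a keyed extraction; a minimal sketch (the node name "intrinsic" is illustrative and assumes the file was written with that key):

    cv::FileStorage fs("SVStyle0.xml", cv::FileStorage::READ);
    if (fs.isOpened())
    {
        cv::Mat intrinsic;
        fs["intrinsic"] >> intrinsic;   // read the matrix stored under this key
        fs.release();
    }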

2014-12-09 14:04:14 -0600 marked best answer Bug in line function in 2.4.8?

I am running VS2012 c++/cli on Win 8.1 64 bit.

I recently updated my libraries from version 2.4.6 to 2.4.8. Previously, I had a series of functions that called the line function and those were working perfectly (or at least I never saw this problem).

Now, I updated to 2.4.8 and what I thought were simple functions are crashing at run time.

In particular, the following code is causing a crash:

cv::Mat Frame;
Frame = cv::imread("Testfile.png");
cv::line(Frame,cv::Point(0,375),cv::Point(999,375),CV_RGB(200,0,0),1,8,0);

Frame is a Mat that has 1000 columns and 750 rows.

When I run this, I get an error saying "Run Time Check Failure #2 - Stack around the variable 'pt' was corrupted."

It is pointing me to the last line of the function "ThickLine" in the file drawing.cpp.

2014-12-09 13:59:03 -0600 marked best answer How to Draw Crosshairs/Marked Axes

Is there some OpenCV routine that will draw crosshairs and/or mark the axes of an image in a Mat with tick marks? I have looked a bit and see no mention of one, but it seems like something that a lot of people probably want.

If there is no routine, any sample code would be appreciated. Also, is it faster to draw that for each image or make a template image and use that as a mask?

Thanks!
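
A crosshair with tick marks can be composed from a handful of cv::line calls; a minimal sketch (tick spacing, length, thickness, and color are arbitrary choices, not an OpenCV convention):

    // Draw a centered crosshair plus tick marks along both axes.
    void DrawCrosshairs(cv::Mat& img, int tickSpacing = 50, int tickLength = 6)
    {
        const cv::Scalar color = CV_RGB(200, 0, 0);
        const int cx = img.cols / 2;
        const int cy = img.rows / 2;

        // Central crosshair lines.
        cv::line(img, cv::Point(0, cy), cv::Point(img.cols - 1, cy), color, 1, 8, 0);
        cv::line(img, cv::Point(cx, 0), cv::Point(cx, img.rows - 1), color, 1, 8, 0);

        // Tick marks along the horizontal and vertical center lines.
        for (int x = 0; x < img.cols; x += tickSpacing)
            cv::line(img, cv::Point(x, cy - tickLength), cv::Point(x, cy + tickLength), color, 1, 8, 0);
        for (int y = 0; y < img.rows; y += tickSpacing)
            cv::line(img, cv::Point(cx - tickLength, y), cv::Point(cx + tickLength, y), color, 1, 8, 0);
    }

As for redrawing versus masking with a template: for a handful of line primitives per frame the direct draw is typically negligible, though either approach works.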

2014-12-09 13:55:19 -0600 marked best answer Matching Unique Cameras to Calibration Values

I am using OpenCV 2.4.6 on a Windows 8 64 bit machine using VS2012 c++/cli with two MSFT LifeCam 3000HD web cams. I am able to calibrate the cameras using the calibrateCamera function.

Because you do not want to have to perform calibration every time (and the values should not change), it makes sense to store the computed values for each camera, which should be unique to that camera (i.e., two cameras of the exact same make and model will likely have different distortion values).

I am wondering how, if at all, people are uniquely identifying cameras so that they can load calibration values specific to that camera between sessions. For example, suppose two cameras are calibrated and then, before the next session, one is disconnected. In the next session, how would you know which calibration values apply to the camera that remains connected?
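
One approach, sketched below with hypothetical helper names, is to key the stored calibration on an identifier you assign to each physical camera (a label on the housing, a serial number if the driver exposes one, etc.) and persist the camera matrix and distortion coefficients in a per-camera FileStorage file; the identification itself still has to come from outside OpenCV:

    // Hypothetical helpers: one calibration file per camera identifier.
    void SaveCalibration(const std::string& cameraId,
                         const cv::Mat& intrinsic, const cv::Mat& distCoeffs)
    {
        cv::FileStorage fs("calib_" + cameraId + ".xml", cv::FileStorage::WRITE);
        fs << "camera_matrix" << intrinsic;
        fs << "distortion_coefficients" << distCoeffs;
        fs.release();
    }

    bool LoadCalibration(const std::string& cameraId,
                         cv::Mat& intrinsic, cv::Mat& distCoeffs)
    {
        cv::FileStorage fs("calib_" + cameraId + ".xml", cv::FileStorage::READ);
        if (!fs.isOpened())
            return false;
        fs["camera_matrix"] >> intrinsic;
        fs["distortion_coefficients"] >> distCoeffs;
        fs.release();
        return true;
    }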

2014-12-09 13:53:56 -0600 marked best answer calibrateCamera - MSFT LifeCam 3000 HD - Sensor Technical Details

I am trying to use the calibrateCamera function to calibrate my MSFT LifeCam 3000 HD webcam.

In order to do that, I understand that I need to know fx and fy (the focal lengths along the x and y axes).

However, I am having a tough time finding that information out.

The only technical data from the spec sheet appears to be:

1) "fixed focus from 0.3 to 1.5mm";

2) "Field of View - 68.5 deg diagonal field of view".

The camera does have a 16:9 aspect ratio.

There is no information about the size of the sensor, manufacturer, etc.

I even tore one of these cameras down but the sensor is tiny and there are no markings on the chip itself.

I am following this example for the calibration process, which I have seen recommended.

Am I doing this right? Any idea on how to get the information?

Thanks for any help.
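
Note that calibrateCamera estimates fx and fy itself; an explicit initial guess is only needed when CV_CALIB_USE_INTRINSIC_GUESS is set. If a starting value is wanted anyway, a rough one can be derived from the published field of view and the image size; a sketch (it assumes square pixels and converts the diagonal FOV purely by geometry):

    #include <cmath>
    #include <opencv2/core/core.hpp>

    // Rough focal-length estimate (in pixels) from a diagonal field of view.
    cv::Mat InitialCameraMatrix(int width, int height, double diagFovDegrees)
    {
        const double diagPixels = std::sqrt(double(width) * width + double(height) * height);
        const double f = (diagPixels / 2.0) / std::tan(diagFovDegrees * CV_PI / 360.0);

        cv::Mat K = cv::Mat::eye(3, 3, CV_64F);
        K.at<double>(0, 0) = f;                   // fx
        K.at<double>(1, 1) = f;                   // fy (square pixels assumed)
        K.at<double>(0, 2) = (width - 1) / 2.0;   // cx at the image center
        K.at<double>(1, 2) = (height - 1) / 2.0;  // cy at the image center
        return K;
    }

For example, 1280 x 720 at the quoted 68.5 degree diagonal field of view gives a focal length of roughly 1080 pixels.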

2014-12-09 13:53:54 -0600 marked best answer Displaying Grayscale in PictureBox/PixelFormat

I am using OpenCV 2.4.6 on a Windows 8 64 bit machine with VS2012 C++/cli and trying to make a WinForms application that displays frames captured from a webcam in a picturebox. So far, this has been working for color images, but when I try to display an image converted to grayscale, the displayed image seems positionally correct but kaleidoscopic in color rather than grayscale.

void DrawCVImageDetect(System::Windows::Forms::PictureBox^ PBox, cv::Mat& colorImage)
{
    System::Drawing::Graphics^ graphics = PBox->CreateGraphics();
    System::IntPtr ptr(colorImage.ptr());
    System::Drawing::Bitmap^ b;
    switch(colorImage.type())
    {
        case CV_8UC3: // non-grayscale images are correctly displayed here
            b  = gcnew System::Drawing::Bitmap(colorImage.cols,colorImage.rows,colorImage.step,
                System::Drawing::Imaging::PixelFormat::Format24bppRgb,ptr);
            break;
        case CV_8UC1: // grayscale images are incorrectly displayed here 
            b  = gcnew System::Drawing::Bitmap(colorImage.cols,colorImage.rows,colorImage.step,
                System::Drawing::Imaging::PixelFormat::Format8bppIndexed,ptr);
            break;
        default:
            // error message
            break;
    }
    System::Drawing::RectangleF rect(0,0,(float)PBox->Width,(float)PBox->Height);
    graphics->DrawImage(b,rect);
}

When I call this with a regular Mat captured from the webcam, it works fine. When I convert that same Mat to grayscale, I get the weird colors. I am converting to grayscale using cvtColor(OriginalMat,OriginalMat,RGB2GRAY). The output from this does not appear to be the same channel type as the input (i.e., a CV_8UC3 going in appears to come out as a CV_8UC1). I have also forced the output to 3 channels using cvtColor(OriginalMat,OriginalMat,RGB2GRAY,3). The fact that just the colors are off makes me think that there is something with the color indexing/premultiplication, but I have tried many of the different PixelFormat types and nothing seems to work. Thanks in advance for any help.
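
One likely cause of the scrambled colors is that Format8bppIndexed bitmaps are palette-based and GDI+ assigns them a default palette that is not a gray ramp, so the 8-bit pixel values get mapped to arbitrary colors. A sketch of the usual fix, assigning an explicit grayscale palette after constructing the bitmap (offered as an idea to try, not a confirmed fix for this exact code):

    // After constructing the Format8bppIndexed bitmap, give it a grayscale palette
    // so that pixel value i is displayed as gray level i.
    static void ApplyGrayscalePalette(System::Drawing::Bitmap^ b)
    {
        System::Drawing::Imaging::ColorPalette^ pal = b->Palette;  // getter returns a copy
        for (int i = 0; i < 256; i++)
            pal->Entries[i] = System::Drawing::Color::FromArgb(i, i, i);
        b->Palette = pal;  // assigning the modified copy back applies it
    }

Calling ApplyGrayscalePalette(b) in the CV_8UC1 case before DrawImage would then display the grayscale data with the intended gray levels.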

2014-12-09 13:50:58 -0600 marked best answer OpenCV and VideoInput library/MSFT Media Foundation

I am hoping that this question will be of use to others as how to handle input from a webcam into OpenCV seems to be a recurring issue. It would seem that many who are working in this area would also target Windows so I am hoping that my questions will have broader applicability. Sorry if my questions reflect my newbie status.

I am using the videoinput library with OpenCV 2.4.3 with a multithreaded approach for a WinForm using VS2010SE C++/cli. I am using two MSFT Lifecam 3000s as the video sources.

Currently, one thread runs a loop that polls the open VI cameras by calling isFrameNew to determine if there is a new frame to process. If a new frame is found, it is processed by running the OpenCV Canny function and then displaying the result in a WinForm picturebox on the UI thread. With two cameras at 640x480 resolution and color, I am getting ~4 frames/sec without trying a GPU approach on a Win7 machine with an Intel i3 processor.

While this works, it does not appear to be the best way of doing this in terms of polling the cameras. It would be better to have an event driven approach.

All of that is the lead up to my questions:

1) does the VideoInput library have some event driven capability that could avoid the need for polling or is there some other way of determining when new frames are available other than polling?

2) should I even be using the VideoInput library? I saw recent references that led me to the library and I understand that it is better than what is internal to OpenCV. I got it working with no problem and it is easy to use. However, it is not supported and seems to be several years old. Does it make sense to try and build something entirely new using MSFT Media Foundation? Again, this seems like an issue that huge numbers of people must face.

Thanks for any insight.

2014-12-09 13:50:48 -0600 marked best answer Do the precompiled downloads include CUDA?

Title says it all.

I downloaded the precompiled OpenCV 2.4.5 for Windows and I am using VS2010 C++ with an NVIDIA GeForce 610M. My project compiles fine and CUDA 5.0 appears in the Project Configurations.

All the non-CUDA aspects of OpenCV appear to be working fine. However, getCudaEnabledDeviceCount always returns zero and if I actually try and load a GpuMat with a Mat, I crash.

I see several discussions about settings to compile OpenCV with CUDA. Before I go down that route, I wonder if anyone knows if CUDA support is already compiled in the downloads or if there is something else I am supposed to be setting?

Thanks for any help.
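
For a quick check, something like the snippet below (assuming a 2.4.x build that ships the gpu module headers) prints the detected device count and the build information, which includes a "Use CUDA" line stating whether CUDA was compiled into the binaries being linked:

    #include <iostream>
    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        // 0 can mean either no CUDA-capable device or a build compiled without CUDA.
        std::cout << "CUDA devices: " << cv::gpu::getCudaEnabledDeviceCount() << std::endl;

        // The build information reports how the linked OpenCV binaries were configured.
        std::cout << cv::getBuildInformation() << std::endl;
        return 0;
    }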

2014-12-09 13:50:46 -0600 marked best answer FileStorage Documentation Wrong/Failing

I am trying to use FileStorage. The sample code in the documentation crashes when it reaches the first "[" bracket. Also, the sample code specifies a "yml" file type, but the text in the documentation states that the sample code will output an "xml" file.

I have tried several variations on the sample code (specifying xml instead of yml, putting a ":" after the bracket, etc.) but still no luck.

        FileStorage fs(tFileName,FileStorage::WRITE);
        std::list<cvImages>::iterator itl;

        if(fs.isOpened())
        {
            fs << "style name";
            fs << "images" << "[";

            for(itl = MatchInstance->begin(); itl != MatchInstance->end(); ++itl)
            {
                fs << "{:";
                fs << "ImageID" << (*itl).ImageID;
                fs << "Height" << (*itl).ImageID;
                fs << "Width" << (*itl).ImageID;
                fs << "AnalysisFrameType" << (*itl).ImageID;
                fs << "ShowImageType" << (*itl).ImageID;
                fs << "ImageSize" << (*itl).ImageID;
                fs << "}";
            }
            fs << "]";
        }
        fs.release();

Any help greatly appreciated.
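
For comparison, a write sequence along the following lines works with the 2.4.x FileStorage API: every top-level value needs a key, keys cannot contain spaces, and "[" ... "]" delimit a sequence whose elements can be inline maps written as "{:" ... "}". This is a sketch only; the cvImages fields other than ImageID are placeholders:

    cv::FileStorage fs(tFileName, cv::FileStorage::WRITE);
    std::list<cvImages>::iterator itl;

    if (fs.isOpened())
    {
        fs << "style_name" << "default";   // key/value pair; keys may not contain spaces
        fs << "images" << "[";             // start a sequence

        for (itl = MatchInstance->begin(); itl != MatchInstance->end(); ++itl)
        {
            fs << "{:";                    // one inline map per image
            fs << "ImageID" << (*itl).ImageID;
            fs << "Height"  << (*itl).Height;   // placeholder field names
            fs << "Width"   << (*itl).Width;
            fs << "}";
        }
        fs << "]";                         // close the sequence
    }
    fs.release();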

2014-12-09 13:47:32 -0600 marked best answer cvtColor output unexpected channel count

I am using OpenCV 2.4.6 on a Windows 8 64 bit machine using VS2012 c++/cli. I want to take a CV_8UC3 color image, convert it to grayscale and display it.

cvtColor is not outputting what I would expect. Based on the documentation, where OriginalMat is a CV_8UC3 image, both of the following should output a CV_8UC3 image:

cvtColor(OriginalMat,NewMat,CV_RGB2GRAY);
cvtColor(OriginalMat,NewMat,CV_RGB2GRAY,3);

Instead, both output a CV_8UC1 image.

Perhaps the documentation should be changed to reflect the fact that when working with grayscale, the output will always be single channel and any hard coded channel count will be ignored.
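
That matches the conversion behavior in practice: for any *2GRAY code the destination is forced to a single channel, so the dstCn argument has no effect there. If a 3-channel image is needed afterwards (for display through a CV_8UC3 path, say), the usual step is to expand the gray result back out:

    cv::Mat gray, gray3;

    cv::cvtColor(OriginalMat, gray, CV_RGB2GRAY);   // always CV_8UC1
    cv::cvtColor(gray, gray3, CV_GRAY2RGB);         // CV_8UC3 again, all channels equal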

2014-12-09 13:16:29 -0600 marked best answer Building OpenCV 2.4.6 with CUDA 5.5RC in VS2012

Has anybody succeeded in building OpenCV 2.4.6 with CUDA 5.5RC in VS2012 c++/cli?

I am running the above in Windows 8 64 bit. I am building in VS2012 with the Debug, x64 configuration. While most of OpenCV builds, five projects will not, and I am getting errors like the following:

6>  opencv_core.dir\Debug\tables.obj
6>     Creating library C:/opencv/build/lib/Debug/opencv_core246d.lib and object C:/opencv/build/lib/Debug/opencv_core246d.exp
6>gpumat.obj : error LNK2019: unresolved external symbol "void __cdecl cv::gpu::device::copyToWithMask_gpu(struct cv::gpu::PtrStepSz<unsigned char>,struct cv::gpu::PtrStepSz<unsigned char>,unsigned __int64,int,struct cv::gpu::PtrStepSz<unsigned char>,bool,struct CUstream_st *)" (?copyToWithMask_gpu@device@gpu@cv@@YAXU?$PtrStepSz@E@23@0_KH0_NPEAUCUstream_st@@@Z) referenced in function "void __cdecl cv::gpu::copyWithMask(class cv::gpu::GpuMat const &,class cv::gpu::GpuMat &,class cv::gpu::GpuMat const &,struct CUstream_st *)" (?copyWithMask@gpu@cv@@YAXAEBVGpuMat@12@AEAV312@0PEAUCUstream_st@@@Z)
6>gpumat.obj : error LNK2019: unresolved external symbol "void __cdecl cv::gpu::device::convert_gpu(struct cv::gpu::PtrStepSz<unsigned char>,int,struct cv::gpu::PtrStepSz<unsigned char>,int,double,double,struct CUstream_st *)" (?convert_gpu@device@gpu@cv@@YAXU?$PtrStepSz@E@23@H0HNNPEAUCUstream_st@@@Z) referenced in function "void __cdecl cv::gpu::convertTo(class cv::gpu::GpuMat const &,class cv::gpu::GpuMat &)" (?convertTo@gpu@cv@@YAXAEBVGpuMat@12@AEAV312@@Z)
6>gpumat.obj : error LNK2019: unresolved external symbol "void __cdecl cv::gpu::device::set_to_gpu<unsigned char>(struct cv::gpu::PtrStepSz<unsigned char>,unsigned char const *,int,struct CUstream_st *)" (??$set_to_gpu@E@device@gpu@cv@@YAXU?$PtrStepSz@E@12@PEBEHPEAUCUstream_st@@@Z) referenced in function "void __cdecl `anonymous namespace'::kernelSetCaller<unsigned char>(class cv::gpu::GpuMat &,class cv::Scalar_<double>,struct CUstream_st *)" (??$kernelSetCaller@E@?A0xd269b65d@@YAXAEAVGpuMat@gpu@cv@@V?$Scalar_@N@3@PEAUCUstream_st@@@Z)
6>gpumat.obj : error LNK2019: unresolved external symbol "void __cdecl cv::gpu::device::set_to_gpu<signed char>(struct cv::gpu::PtrStepSz<unsigned char>,signed char const *,int,struct CUstream_st *)" (??$set_to_gpu@C@device@gpu@cv@@YAXU?$PtrStepSz@E@12@PEBCHPEAUCUstream_st@@@Z) referenced in function "void __cdecl `anonymous namespace'::kernelSetCaller<signed char>(class cv::gpu::GpuMat &,class cv::Scalar_<double>,struct CUstream_st *)" (??$kernelSetCaller@C@?A0xd269b65d@@YAXAEAVGpuMat@gpu@cv@@V?$Scalar_@N@3@PEAUCUstream_st@@@Z)
6>gpumat.obj : error LNK2019: unresolved external symbol "void __cdecl cv::gpu::device::set_to_gpu<unsigned short>(struct cv::gpu::PtrStepSz<unsigned char>,unsigned short const *,int,struct CUstream_st *)" (??$set_to_gpu@G@device@gpu@cv@@YAXU?$PtrStepSz@E@12@PEBGHPEAUCUstream_st@@@Z) referenced in function "void __cdecl `anonymous namespace'::kernelSetCaller<unsigned short>(class cv::gpu::GpuMat &,class cv::Scalar_<double>,struct CUstream_st *)" (??$kernelSetCaller@G@?A0xd269b65d@@YAXAEAVGpuMat@gpu@cv@@V?$Scalar_@N@3@PEAUCUstream_st@@@Z)
6>gpumat.obj : error LNK2019: unresolved external symbol "void __cdecl cv::gpu::device::set_to_gpu<short>(struct cv::gpu::PtrStepSz<unsigned char>,short const *,int,struct CUstream_st *)" (??$set_to_gpu@F@device@gpu@cv@@YAXU?$PtrStepSz@E@12@PEBFHPEAUCUstream_st@@@Z) referenced in function "void __cdecl `anonymous namespace'::kernelSetCaller<short>(class cv::gpu::GpuMat &,class cv::Scalar_<double>,struct CUstream_st *)" (??$kernelSetCaller@F@?A0xd269b65d@@YAXAEAVGpuMat@gpu ...
(more)
2014-12-09 13:10:53 -0600 marked best answer Something funky in the drawing functions? 2.4.8 VS2012

I am wondering if other folks are experiencing something funky in the drawing functions in 2.4.8 using VS2012 c++/cli.

Previously, in 2.4.6, I was using the line function in spots with no problems. Then, I switched to 2.4.8, and the exact same line function started crashing for no apparent reason. I never figured it out.

Now, I am trying to use drawContours and, depending on the line width, the code crashes.

The following line works.

for(unsigned int i = 0; i < ContoursList.size(); i++ )
{
    drawContours( AnalysisFrame, ContoursList , i, color, CV_FILLED, 8);
}

However, if I replace CV_FILLED with an integer like 1 or 2 (for a line width), it crashes. Thus,

 drawContours( AnalysisFrame, ContoursList , i, color, 2, 8);

crashes.

Anyone else experiencing these problems with the drawing functions?

Thanks for any help.

2014-11-13 13:51:01 -0600 marked best answer Understanding findHomography Mask

I apologize in advance if this is dumb but I am new to findHomography and trying to understand how to do object detection using it.

As I understand it, after using a keypoint detector and a descriptor extractor, and filtering for points that are within some distance (e.g., 2-3 * the minimum distance), we only have a list of interesting points that are probably good individually but that may or may not match as a pattern. That is where findHomography comes in. findHomography is supposed to output a mask of both inliers (good points for a pattern) and outliers (bad points).

However, I never see anything saying how to interpret the output Mat from findHomography. It appears to be two dimensional, but I do not know which column is the inliers and which the outliers. Also, I do not see anything that indicates how to conclude whether there was a match or not (i.e., does an absolute number of inliers count as a match, or is it some measure of the ratio of inliers to outliers, etc.).

I have put some of my code below to give a sense of where I am.

Thanks for any help.

FlannBasedMatcher matcher;
std::vector< DMatch > matches, good_matches;
double max_dist;
double min_dist;
double dist;
Mat H;
std::vector<Point2f> obj;
std::vector<Point2f> scene;
Mat mask;

min_dist = 100;
max_dist = 0;

matcher.match(ObjectSurfDescriptors, SceneSurfDescriptors, matches);

for( int i = 0; i < ObjectSurfDescriptors.rows; i++ )
{
    dist = matches[i].distance;

    if( dist < min_dist ) 
        min_dist = dist;
    if( dist > max_dist ) 
        max_dist = dist;
}

for( int i = 0; i < ObjectSurfDescriptors.rows; i++ )
{ 
    if( matches[i].distance <= max(3*min_dist, 0.02) )
        good_matches.push_back( matches[i]); 
}

if(good_matches.size() >= MIN_GOOD_MATCHES)
{
    for( int i = 0; i < good_matches.size(); i++ )
    {
        //-- Get the keypoints from the good matches
        obj.push_back( ObjectSurfKeypoints[ good_matches[i].queryIdx ].pt );
        scene.push_back( SceneSurfKeypoints[ good_matches[i].trainIdx ].pt );
    }

    H = findHomography( obj, scene, CV_RANSAC, 3.0, mask );
}
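
For reference on the mask itself: with CV_RANSAC the mask comes back as an N x 1 single-channel (8-bit) matrix with one row per input correspondence, in the same order as obj/scene, where a non-zero entry marks an inlier. There is no built-in match/no-match decision; a minimal sketch of one common acceptance rule (the thresholds are arbitrary, application-dependent choices):

    // mask.at<uchar>(i) != 0  =>  correspondence i is an inlier of the homography.
    int inliers = cv::countNonZero(mask);
    int total   = (int)obj.size();

    bool matched = !H.empty()
                && inliers >= 10                    // enough absolute support
                && inliers >= (int)(0.5 * total);   // and a reasonable inlier ratio
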
2014-11-13 13:50:41 -0600 commented answer Understanding findHomography Mask

Thanks. This sounds right so let me play around with it.

2014-09-09 20:02:05 -0600 asked a question Converting from 2.4.8 to 3.0alpha/open_contrib/includes

I recently updated from 2.4.8 to 3.0alpha and am having a hard time understanding the how to re-setup the include files. I am running on Win 8.1 64 bit with VS2012 c++/cli and CMake 2.8.11.2 I have CUDA 6.0 and an NVidia card. My code is using SIFT/SURF and Cuda. I re-compiled the code with the OPENCV_EXTRA_MODULES_PATH set and the opencv code builds fine.

Now I have to modify the includes in my own source and that is where I am having a problem. My include section used to work fine and say:

#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/stitching.hpp>
#include <opencv2/gpu.hpp>
#include <opencv2/gpumat.hpp>

Now, it cannot find the gpu.hpp or gpumat.hpp files. I tried to follow this question but it is not clear that anything actually happened. It seems like all the header files, etc. need to be copied from various subfolders into a standard include directory.

What is the easiest way to reset the include files?

Thanks for any help!
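
For what it is worth, in 3.0 the old gpu module was split into several cuda* modules and cv::gpu became cv::cuda, so opencv2/gpu.hpp and opencv2/gpumat.hpp no longer exist under those names. A sketch of roughly equivalent 3.0-era includes (which cuda* modules are needed depends on what the code actually calls, and SIFT/SURF require the opencv_contrib build):

    #include <opencv2/opencv.hpp>
    #include <opencv2/core.hpp>
    #include <opencv2/core/cuda.hpp>        // cv::cuda::GpuMat (replaces gpu.hpp / gpumat.hpp)
    #include <opencv2/features2d.hpp>
    #include <opencv2/xfeatures2d.hpp>      // SIFT/SURF now live in opencv_contrib
    #include <opencv2/highgui.hpp>
    #include <opencv2/stitching.hpp>

Building the INSTALL project in Visual Studio then collects the headers and libraries from the module subfolders into a single install directory (by default under the build tree), which gives one include path to point the project at.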

2014-09-07 16:45:28 -0600 received badge  Self-Learner (source)
2014-09-07 13:09:37 -0600 commented answer setting include files with opencv 3.0

Sorry to be so dumb. In my opencv.sln, the projects listed in order are: 3rdparty; applications; CMake Targets; extra; modules; tests accuracy; tests performance; ALL BUILD. Within CMake Targets, the projects are: INSTALL; PACKAGE; uninstall; ZERO_CHECK. If I just build the INSTALL project here, as far as I can tell, nothing happens.

2014-09-07 00:42:33 -0600 commented answer setting include files with opencv 3.0

Can you elaborate on "run the INSTALL project"? How exactly do you do that?

2014-09-05 23:29:50 -0600 answered a question Error using CMake for 3.0alpha VS2012

I solved at least this issue. I had downloaded OpenCV 3.0 into the same directories that I had previously loaded 2.4.8 into. For whatever reason, that causes a CMake problem. I deleted the prior version of OpenCV entirely, reinstalled it fresh, and it worked.

2014-09-05 11:23:04 -0600 edited question Error using CMake for 3.0alpha VS2012

I recently downloaded the 3.0alpha and am attempting to compile it on a Win 8.1 machine using VS2012 64 bit and CMake 2.8.11.2. I also have CUDA 5.5 with an NVidia card.

When I try and configure the code using CMake, I get:

Checking for Windows Platform SDK
Checking for Visual Studio 2012
CUDA detected: 5.5
CUDA NVCC target flags: -gencode;arch=compute_11,code=sm_11;-gencode; arch=compute_12,code=sm_12;-gencode;arch=compute_13,code=sm_13;-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_20,code=sm_21;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_30,code=compute_30
Could NOT find PythonInterp: Found unsuitable version "1.4", but required is at least "2.7" (found C:/Python27/python.exe)
Could NOT find PythonInterp: Found unsuitable version "1.4", but required is at least "2.6" (found C:/Python27/python.exe)
Could NOT find PythonInterp: Found unsuitable version "1.4", but required is at least "3.4" (found C:/Python27/python.exe)
Could NOT find PythonInterp: Found unsuitable version "1.4", but required is at least "3.2" (found C:/Python27/python.exe)
Could NOT find JNI (missing:  JAVA_AWT_LIBRARY JAVA_JVM_LIBRARY JAVA_INCLUDE_PATH JAVA_INCLUDE_PATH2 JAVA_AWT_INCLUDE_PATH) 
Could NOT find Matlab (missing:  MATLAB_MEX_SCRIPT MATLAB_INCLUDE_DIRS MATLAB_ROOT_DIR MATLAB_LIBRARIES MATLAB_LIBRARY_DIRS MATLAB_MEXEXT MATLAB_ARCH MATLAB_BIN) 
Found VTK ver. 6.1.0 (usefile: C:/VTK/VTK-6.1.0/CMake/UseVTK.cmake)
CMake Error at modules/gpu/CMakeLists.txt:88 (ocv_add_precompiled_headers):
  Unknown CMake command "ocv_add_precompiled_headers".

Configuring incomplete, errors occurred!

I do have CMake set to the proper platform (Visual Studio 11 Win64) so I am wondering what the issue is. I also have Python3.4 installed in a c:\Python34 directory and do not have Python27 installed any longer. I also have VTK installed in a VTK directory.

I am a CMake newbie so it may be something stupid.

The last version I downloaded was 2.4.8, so I do not know whether this would have been an issue in 2.4.9.

Any help appreciated.

Thanks, James

2014-09-05 10:45:21 -0600 commented question Error using CMake for 3.0alpha VS2012

Steve, I am not sure what you mean. I edited my question to put in some more background. If you are asking about all of the CMake settings, I believe that I left them all at default and, given how many of them there are, I do not know an easy way to post them all. Thanks again for any help.