
acajic's profile - activity

2017-01-07 09:51:30 -0600 received badge  Scholar (source)
2017-01-07 07:07:47 -0600 asked a question Matrix multiplication without memory allocation

Is it possible to speed up the overloaded matrix multiplication operator (*) in OpenCV by using a preallocated cv::Mat instance with the correct dimensions as the destination that the result is written into?

Something like the existing function:

CV_EXPORTS_W void gemm(InputArray src1, InputArray src2, double alpha,
                       InputArray src3, double beta, OutputArray dst, int flags = 0);

only simpler. I would like to have something like this:

CV_EXPORTS_W void matmul(InputArray src1, InputArray src2, OutputArray dst);

My concern is performance. Is it possible that

res = m1 * m2;

is as fast as the hypothetical function:

matmul(m1, m2, res)

?
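
For illustration, the closest I could get with the existing API is calling gemm directly with an empty src3 and beta = 0; this is only a sketch of what I have in mind, assuming the destination buffer is reused when it already has the matching size and type:

#include <opencv2/core.hpp>

int main()
{
    cv::Mat m1 = cv::Mat::ones(512, 512, CV_32F);
    cv::Mat m2 = cv::Mat::ones(512, 512, CV_32F);
    cv::Mat res(512, 512, CV_32F);                  // preallocated destination

    // Same result as res = m1 * m2, but written into the existing buffer:
    // with no src3 and beta = 0, gemm computes 1.0 * m1 * m2.
    cv::gemm(m1, m2, 1.0, cv::noArray(), 0.0, res);

    return 0;
}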

2016-12-23 04:25:51 -0600 asked a question VideoCapture Inappropriate ioctl for device

On the line

cv::VideoCapture videoCapture(<path>);

the following error message shows up:

Failed to query video capabilities: Inappropriate ioctl for device
libv4l2: error getting capabilities: Inappropriate ioctl for device
VIDEOIO ERROR: V4L: device /home/comp/B6.avi: Unable to query number of channels
(VideoTest:2891): GStreamer-CRITICAL **: gst_element_link_pads_full: assertion 'GST_IS_ELEMENT (src)' failed
OpenCV Error: Unspecified error (GStreamer: cannot link color -> sink
) in cvCaptureFromCAM_GStreamer, file /home/comp/opencv/modules/videoio/src/cap_gstreamer.cpp, line 792
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

/home/comp/opencv/modules/videoio/src/cap_gstreamer.cpp:792: error: (-2) GStreamer: cannot link color -> sink
 in function cvCaptureFromCAM_GStreamer

Platform is Arch Linux.

cv::getBuildInformation()

reports the following Video I/O section:

  Video I/O:
    DC1394 1.x:                  NO
    DC1394 2.x:                  NO
    FFMPEG:                      NO
      avcodec:                   NO
      avformat:                  NO
      avutil:                    NO
      swscale:                   NO
      avresample:                NO
    GStreamer:                   
      base:                      YES (ver 1.10.0)
      video:                     YES (ver 1.10.0)
      app:                       YES (ver 1.10.0)
      riff:                      YES (ver 1.10.0)
      pbutils:                   YES (ver 1.10.0)
    OpenNI:                      NO
    OpenNI PrimeSensor Modules:  NO
    OpenNI2:                     NO
    PvAPI:                       NO
    GigEVisionSDK:               NO
    Aravis SDK:                  NO
    UniCap:                      NO
    UniCap ucil:                 NO
    V4L/V4L2:                    Using libv4l1 (ver 1.10.1) / libv4l2 (ver 1.10.1)
    XIMEA:                       NO
    Xine:                        NO
    gPhoto2:                     NO
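
For reference, the call is made from a minimal program along these lines (a sketch, not the original code; the path is the one from the error output):

#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture videoCapture("/home/comp/B6.avi");
    if (!videoCapture.isOpened())
    {
        std::cerr << "Could not open the video file" << std::endl;
        return 1;
    }

    cv::Mat frame;
    videoCapture >> frame;                  // read the first frame
    std::cout << "Frame size: " << frame.size() << std::endl;
    return 0;
}
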
2016-05-20 10:06:30 -0600 commented question segmentation fault on cv::imshow("windowName", cvImagePtr->image)

Any news on this issue?

2016-04-25 10:05:49 -0600 asked a question findEssentialMat for coplanar points

[this is a copy of a [question I just posted on StackOverflow](http://stackoverflow.com/questions/36844139/opencv-findessentialmat)]

I have come to the conclusion that OpenCV's findEssentialMat does not work properly for coplanar points. The documentation states that it uses Nistér's five-point algorithm, and the corresponding paper claims that the algorithm works fine for coplanar points.

#include <opencv2/opencv.hpp>   // Mat, projectPoints, findEssentialMat, decomposeEssentialMat, Rodrigues
#include <algorithm>
#include <ctime>
#include <fstream>
#include <random>
#include <vector>

using namespace cv;
using namespace std;

vector<Point3f> generatePlanarPoints(int N);   // defined below

int main() {
    ofstream log;
    log.open("errorLog.txt");

    srand((time(NULL) % RAND_MAX) * RAND_MAX);

    /******* camera properties *******/
    Mat camMat = Mat::eye(3, 3, CV_64F);
    Mat distCoeffs = Mat::zeros(4, 1, CV_64F);

    /******* pose 1 *******/
    Mat rVec1 = (Mat_<double>(3, 1) << 0, 0, 0);
    Mat tVec1 = (Mat_<double>(3, 1) << 0, 0, 1);
    /******* pose 2 *******/
    Mat rVec2 = (Mat_<double>(3, 1) << 0.0, 0.0, 0);
    Mat tVec2 = (Mat_<double>(3, 1) << 0.2, 0, 1); // 2nd camera pose is just pose 1 translated by 0.2 along the X axis

    int iterCount = 50;
    int N = 40;
    for (int j = 0; j < iterCount; j++)
    {
        /******* generate 3D points *******/
        vector<Point3f> points3d = generatePlanarPoints(N);

        /******* project 3D points from pose 1 *******/
        vector<Point2f> points2d1;
        projectPoints(points3d, rVec1, tVec1, camMat, distCoeffs, points2d1);
        /******* project 3D points from pose 2 *******/
        vector<Point2f> points2d2;
        projectPoints(points3d, rVec2, tVec2, camMat, distCoeffs, points2d2);

        /******* add noise to 2D points *******/
        std::default_random_engine generator;   // default-seeded on every iteration
        double noise = 1.0 / 640;

        if (noise > 0.0) {
            std::normal_distribution<double> distribution(0.0, noise);
            for (int i = 0; i < N; i++)
            {
                points2d1[i].x += distribution(generator);
                points2d1[i].y += distribution(generator);
                points2d2[i].x += distribution(generator);
                points2d2[i].y += distribution(generator);
            }
        }

        /******* find transformation from 2D - 2D correspondences *******/
        double threshold = 2.0 / 640;
        Mat essentialMat = findEssentialMat(points2d1, points2d2, 1.0, Point(0, 0), RANSAC, 0.999, threshold);
        Mat estimatedRMat1, estimatedRMat2, estimatedTVec;
        decomposeEssentialMat(essentialMat, estimatedRMat1, estimatedRMat2, estimatedTVec);
        Mat estimatedRVec1, estimatedRVec2;
        Rodrigues(estimatedRMat1, estimatedRVec1);
        Rodrigues(estimatedRMat2, estimatedRVec2);
        double minError = min(norm(estimatedRVec1 - rVec2), norm(estimatedRVec2 - rVec2));

        log << minError << endl; // logging errors

    }

    log.flush();
    log.close();
    return 0;
}

The points are generated like this:

vector<Point3f> generatePlanarPoints(int N) {
    float span = 5.0;

    vector<Point3f> points3d;
    for (int i = 0; i < N; i++)
    {
        float x = ((float)rand() / RAND_MAX - 0.5) * span;
        float y = ((float)rand() / RAND_MAX - 0.5) * span;
        float z = 0;
        Point3f point3d(x,y,z);
        points3d.push_back(point3d);
    }
    return points3d;
}

Excerpt from errorLog.txt file:

0
0.199337
0.199337
0.199337
0.199338
0
0.199337
0
0
0.199337
0.199337

This shows that the algorithm sometimes performs well (error == 0) and sometimes something weird happens (error == 0.199337). Is there any other explanation for this?

Obviously the algorithm is deterministic, and the error 0.199337 appears for a specific configuration of points; I was not able to figure out what that configuration is.

I also experimented with different prob and threshold parameters for findEssentialMat, and I tried using more/fewer points and different camera poses... the same thing keeps happening.

2016-03-24 10:09:31 -0600 answered a question index matrix

findNonZero is a good solution if your first concern is concise code. Otherwise, if you need performance, I would stick to your original approach and change

matrix.at<uchar>(index,0) = col;

matrix.at<uchar>(index,1) = row;

into

matrix.ptr<uchar>(index)[0] = col;

matrix.ptr<uchar>(index)[1] = row;
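
In context, the whole loop would look roughly like this (a sketch only; I am assuming an 8-bit single-channel mask, and I use a CV_32S index matrix here so the coordinates are not clipped to 255):

#include <opencv2/core.hpp>

// Collect the (col, row) coordinates of all non-zero pixels of an 8-bit image.
cv::Mat nonZeroIndices(const cv::Mat& img)
{
    CV_Assert(img.type() == CV_8UC1);

    cv::Mat matrix(img.rows * img.cols, 2, CV_32S);
    int index = 0;
    for (int row = 0; row < img.rows; ++row)
    {
        const uchar* src = img.ptr<uchar>(row);
        for (int col = 0; col < img.cols; ++col)
        {
            if (src[col] == 0)
                continue;
            int* dst = matrix.ptr<int>(index);   // compute the row address once, write both columns
            dst[0] = col;
            dst[1] = row;
            ++index;
        }
    }
    return matrix.rowRange(0, index);            // keep only the rows that were filled
}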

2016-03-24 09:54:17 -0600 asked a question Viz3d removeWidget RtlValidateHeap

An error occurs when trying to remove a widget from a Viz3d window. The error only occurs with a Release configuration in Visual Studio; with a Debug configuration, everything works fine.

HEAP[ArUco.exe]: Invalid address specified to RtlValidateHeap( 00000000003B0000, 0000000014BC8BC0 )

The call stack at the moment of exception looks like this:

[image: screenshot of the call stack at the moment of the exception]

At the bottom, we can see the removeWidget method being called; it then goes through a long chain of VTK calls. Among other calls I see a "garbage collector", so I assume something is being deallocated when it should not be. (?)

In an altered version of the code, if I perform

viz::Widget widg = measurementsWindow->getWidget(id);

before calling

removeWidget(id)

then the error is delayed until execution leaves the current scope, i.e. the scope in which the widg object lives. When widg is destroyed, the same error occurs.
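
The usage pattern is roughly the following (a reduced sketch; the widget id and geometry are placeholders, not the real ones):

#include <opencv2/viz.hpp>

int main()
{
    cv::viz::Viz3d measurementsWindow("measurements");

    // Show a widget under a known id, render once, then remove it again.
    measurementsWindow.showWidget("marker", cv::viz::WSphere(cv::Point3d(0, 0, 0), 0.05));
    measurementsWindow.spinOnce(1, true);

    measurementsWindow.removeWidget("marker");   // the removal that triggers the error in my larger program
    measurementsWindow.spin();
    return 0;
}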

2015-10-01 07:50:59 -0600 answered a question triangulatePoints() function

To the best of my knowledge, projPoints1 and projPoints2 need to be undistorted before being passed to triangulatePoints().
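
A minimal sketch of what I mean, with all names illustrative (K1/K2 are the camera matrices, dist1/dist2 the distortion coefficients). Note that when undistortPoints is called without a new projection matrix it returns normalized coordinates, so P1 and P2 should then be the plain 3x4 [R|t] matrices without intrinsics:

#include <opencv2/calib3d.hpp>
#include <vector>

void triangulateUndistorted(const std::vector<cv::Point2f>& projPoints1,
                            const std::vector<cv::Point2f>& projPoints2,
                            const cv::Mat& K1, const cv::Mat& dist1,
                            const cv::Mat& K2, const cv::Mat& dist2,
                            const cv::Mat& P1, const cv::Mat& P2,   // 3x4 [R|t]
                            cv::Mat& points4D)
{
    std::vector<cv::Point2f> undist1, undist2;
    cv::undistortPoints(projPoints1, undist1, K1, dist1);   // output in normalized coordinates
    cv::undistortPoints(projPoints2, undist2, K2, dist2);

    // 4xN homogeneous output; divide by the last row to get Euclidean points.
    cv::triangulatePoints(P1, P2, undist1, undist2, points4D);
}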

2015-10-01 07:22:14 -0600 commented question Given a pair of stereo-calibrated cameras and a set of 2D point correspondences, what would be a proper way to obtain 3D coordinates of those points through triangulation?

Any update on this?

I wonder what your PL and PR look like.

2015-10-01 06:39:37 -0600 commented question triangulate 3d points from stereo images?

Any update on this?

Can you tell me why you chose to insert a negative value for the translation in your projection matrix (-3.682632)?

2015-09-30 11:32:12 -0600 commented question Coordinate axis with triangulatePoints

Is the question unclear in some way? I would consider this to be basic stuff... I just cannot find a definite answer.

Specifically, I use triangulatePoints to get 3D point coordinates, and immediately afterwards I pass these 3D points to solvePnP. What I expect to get is a camera pose with roughly zero translation and zero rotation.

Instead I get some wild values, indicating that my camera moved about 5 meters in some random direction.
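
The check I have in mind looks roughly like this (a sketch; PL, PR, pointsLeft, pointsRight, cameraMatrix and distCoeffs stand in for my actual data):

#include <opencv2/calib3d.hpp>
#include <vector>

// Triangulate with the left camera as the reference, then ask solvePnP for that
// same camera's pose; I would expect rvec and tvec to come out near zero.
void checkRoundTrip(const cv::Mat& PL, const cv::Mat& PR,
                    const std::vector<cv::Point2f>& pointsLeft,
                    const std::vector<cv::Point2f>& pointsRight,
                    const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                    cv::Mat& rvec, cv::Mat& tvec)
{
    cv::Mat points4D;
    cv::triangulatePoints(PL, PR, pointsLeft, pointsRight, points4D);

    // Convert the 4xN homogeneous result to 3D points.
    cv::Mat points3D;
    cv::convertPointsFromHomogeneous(points4D.t(), points3D);

    cv::solvePnP(points3D, pointsLeft, cameraMatrix, distCoeffs, rvec, tvec);
    // Expectation: rvec ~ 0, tvec ~ 0; what I actually get is roughly 5 meters off.
}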

2015-09-25 12:59:36 -0600 received badge  Student (source)
2015-09-25 04:12:15 -0600 received badge  Editor (source)
2015-09-25 04:11:22 -0600 asked a question Coordinate axis with triangulatePoints

So, I have the projection matrix of the left camera:

[image: projection matrix of the left camera]

and the projection matrix of my right camera:

[image: projection matrix of the right camera]

When I perform triangulatePoints on the two vectors of corresponding points, I get a collection of points in 3D space, and all of them have a negative Z coordinate. I assume that the initial orientation of each camera points along the positive Z axis.

My assumption was that OpenCV uses a right-handed coordinate system, like this:

[images: right-handed coordinate system]

So, with my cameras positioned by these projection matrices, the complete picture would look like this:

[image: camera layout under the right-handed assumption]

But my experiment leads me to believe that OpenCV uses a left-handed coordinate system:

[image: left-handed coordinate system]

And that my projection matrices have effectively swapped the notions of left and right:

[image: camera layout with left and right swapped]

Is everything I've said correct? Is the latter coordinate system really the one that OpenCV uses?

If I assume that it is, everything seems to work fine. But when I want to visualize things using the viz module, its WCoordinateSystem widget is right-handed.
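
One way to test the convention empirically, independent of my projection matrices (this is just a sanity check, all numbers are arbitrary): with an identity pose, a point straight ahead should project to the principal point, and a point offset along +X should land to its right if the camera frame is X right, Y down, Z forward.

#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Arbitrary intrinsics, identity rotation, zero translation.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 500, 0, 320,
                                             0, 500, 240,
                                             0,   0,   1);
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);

    std::vector<cv::Point3f> pts = { cv::Point3f(0, 0, 1), cv::Point3f(0.5f, 0, 1) };
    std::vector<cv::Point2f> img;
    cv::projectPoints(pts, rvec, tvec, K, cv::noArray(), img);

    // Expected for X right, Y down, Z forward: (320, 240) and (570, 240).
    std::cout << img[0] << " " << img[1] << std::endl;
    return 0;
}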

2015-09-15 07:45:41 -0600 received badge  Enthusiast
2015-09-02 03:46:56 -0600 commented answer calculate distance using disparity map

How come pixel size isn't included in the formula?

Or is the pixel size implicitly included if we state the disparity in the same measurement unit as the focal length?
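
For context, I am assuming the usual relation depth = focalLength * baseline / disparity. A quick numeric check of the units I have in mind (all values made up): with both focal length and disparity expressed in pixels, the pixel size cancels out.

// depth = f * B / d, with f and d in pixels and B in meters, so depth comes out in meters.
double focalLengthPx = 700.0;   // focal length, in pixels
double baselineM     = 0.12;    // distance between the two cameras, in meters
double disparityPx   = 35.0;    // disparity for a given pixel, in pixels
double depthM        = focalLengthPx * baselineM / disparityPx;   // = 2.4 m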

2015-08-04 05:34:24 -0600 received badge  Supporter (source)