Ask Your Question

stillNovice's profile - activity

2020-09-24 01:22:27 -0500 marked best answer Mat structure for depth map?

I am reading an image in to a cv::Mat.

cv::Mat depth_map = cv::imread("patio_depth.jpg", -1);

The Mat is then passed to this function (which gets xyz coordinates from a depth map):

inline cv::Point3d getPoint3D(int x, int y) const
{
    cv::Point3d point;
    point.x =<cv::Vec3f>(y, x)[0];
    point.y =<cv::Vec3f>(y, x)[1];
    point.z =<cv::Vec3f>(y, x)[2];
    return point;
}

Which causes an assertion error.

Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1 * DataType<_Tp>::channels()) < (unsigned)(size.p[1] * channels())) in cv::Mat::at, file \core\mat.hpp

I assume that this means the Mat I am using is incompatible with the function, but what Mat structure WILL work? Or do I need to convert it somehow before I pass it through?

2020-07-10 11:35:42 -0500 received badge  Popular Question (source)
2020-05-07 12:46:47 -0500 received badge  Popular Question (source)
2019-08-30 04:47:04 -0500 received badge  Popular Question (source)
2019-04-04 02:41:56 -0500 received badge  Famous Question (source)
2017-11-21 16:46:44 -0500 received badge  Popular Question (source)
2017-10-04 14:00:22 -0500 edited question 'Mapping' Aruco markers?

'Mapping' Aruco markers? I have a calibrated camera, and an application that tracks Aruco markers using opencv 3.2. Wha

2017-10-04 13:43:11 -0500 asked a question 'Mapping' Aruco markers?

'Mapping' Aruco markers? I have a calibrated camera, and an application that tracks Aruco markers using opencv 3.2. Wha

2017-09-22 07:45:24 -0500 asked a question Aruco markers with openCv, get the 3d corner coordinates?

Aruco markers with openCv, get the 3d corner coordinates? I am detecting a printed Aruco marker using opencv 3.2: aruco

2017-09-17 06:26:59 -0500 commented answer aruco giving strange rotations.

Hi, yup I did. I ended up using the built in opencv function 'cv2eigen', works great. Convert 'rvec' to a Mat, then use

2017-08-20 11:21:04 -0500 received badge  Organizer (source)
2017-08-20 11:17:16 -0500 asked a question Call a cv::Mat from a c++ dll to a C# picturebox?

As above. I have this function in a c++ dll:

__declspec(dllexport) unsigned char* test()
{
    cv::Mat OriginalImg = cv::imread("D:/testFrame.jpg", 1);

    std::cout << OriginalImg.type() << std::endl;

    return;
}
If I show the image there, it is correct. The type returns as '16'.

In a C# application, I have:

[DllImport("cvTests.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern IntPtr test();

and the function to load the images is:

private void buttonImg_Click(object sender, EventArgs e)
{
    IntPtr ptr = test();
    pictureBoxFrame.Image = new Bitmap(640, 360, 3 * 360, PixelFormat.Format24bppRgb, ptr);
}

This loads the image to the box, but it looks like this:

broken frame

What am I missing? The image size is 640w * 360h, I have tried a couple of other PixelFormats, but see an equally broken image.


2017-06-25 14:27:27 -0500 asked a question Manually set up stereo projection matrices?

I have a stereo camera system, and need to get my projection matrices in order to triangulate points. As I already have the intrinsics, extrinsics, and distortion values, I would like to just plug these in manually to get the P1 and P2 matrices.

My images are rectified, so the rotation matrix is the identity, and the translation is just the 12cm baseline. I have the following code:

int main()
{
    float fx = 333.0208275693896;
    float fy = 333.0208275693896;
    float cx = 318.51652340332424;
    float cy = 171.93557987330718;

    float data[9] = { fx,0,cx,0,fy,cy,0,0,1 };

    cv::Mat K = cv::Mat(3, 3, CV_32F, data);

    float k1 = 0.031989690067704447;
    float k2 = -0.11373776380163705;
    float p1 = 0.0;
    float p2 = 0.0;
    float k3 = 0.11337076724792615;
    float dist[5] = { k1,k2,p1,p2,k3 };

    cv::Mat D = cv::Mat(5, 1, CV_32F, dist);

    float tx = 0.12;
    float ty = 0;
    float tz = 0;

    //rotation (images are rectified, so this is the identity)
    float rots[9] = { 1,0,0,0,1,0,0,0,1 };
    cv::Mat R = cv::Mat(3, 3, CV_32F, rots);

    //translation (stereo camera, rectified images, 12 cm baseline)
    float trans[3] = { tx,ty,tz };
    cv::Mat t = cv::Mat(3, 1, CV_32F, trans);

    // Camera 1 Projection Matrix K[I|0]
    cv::Mat P1(3, 4, CV_32F, cv::Scalar(0));
    K.copyTo(P1.rowRange(0, 3).colRange(0, 3));

    std::cout << "matrix P1" << std::endl;
    std::cout << P1 << std::endl;

    // Camera 2 Projection Matrix K[R|t]
    cv::Mat P2(3, 4, CV_32F);
    R.copyTo(P2.rowRange(0, 3).colRange(0, 3));
    t.copyTo(P2.rowRange(0, 3).col(3));
    P2 = K * P2;

    std::cout << "matrix P2" << std::endl;
    std::cout << P2 << std::endl;

    return 0;
}


which gives me the following:

matrix P1
[333.02081, 0, 318.51651, 0;
 0, 333.02081, 171.93558, 0;
 0, 0, 1, 0]
matrix P2
[333.02081, 0, 318.51651, 39.962498;
 0, 333.02081, 171.93558, 0;
 0, 0, 1, 0]

Before I go any further I want to check: am I doing this correctly? Do the returned matrices look correct?

thank you!

2017-05-25 10:44:29 -0500 asked a question Error using cv::cuda::StreamAccessor::wrapStream

I am trying to build some third party code that uses the CUDA modules. I am down to one last unresolved external symbol error. Yay!

This one, though, I am stuck with. All the CUDA code is in a lib, which builds fine. But when I try to compile a sample function that calls the lib, I get:

Severity    Code    Description Project File    Line    Suppression State
Error   LNK2001 unresolved external symbol "public: static class cv::cuda::Stream __cdecl cv::cuda::StreamAccessor::wrapStream(struct CUstream_st *)" (?wrapStream@StreamAccessor@cuda@cv@@SA?AVStream@23@PEAUCUstream_st@@@Z)  StereoTest  (   1

The constructor is:

GpuFast::GpuFast(int highThreshold, int lowThreshold, int maxKeypoints)
    : highThreshold(highThreshold), lowThreshold(lowThreshold), maxKeypoints(maxKeypoints)
{
    checkCudaErrors( cudaStreamCreate(&stream) );
    cvStream = cv::cuda::StreamAccessor::wrapStream(stream);
    checkCudaErrors( cudaMallocManaged(&kpLoc, sizeof(short2) * maxKeypoints) );
    checkCudaErrors( cudaMallocManaged(&kpScore, sizeof(float) * maxKeypoints) );
    checkCudaErrors( cudaStreamAttachMemAsync(stream, kpLoc) );
    checkCudaErrors( cudaStreamAttachMemAsync(stream, kpScore) );
    checkCudaErrors( cudaMalloc(&counter_ptr, sizeof(unsigned int)) );
}

It seems like it cannot find cv::cuda::StreamAccessor::wrapStream.

This symbol seems to live in a library which I have linked to. What else could I be missing? My full libs list is:

-LIBPATH:C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/lib/x64
C:\Program Files\NVIDIA Corporation\NvToolsExt\lib\x64\nvToolsExt64_1.lib
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\lib\x64\cudart_static.lib
2017-05-21 15:16:24 -0500 commented answer How does resizing an image affect the intrinsics?

Explained perfectly. thank you!

2017-05-21 14:37:11 -0500 asked a question How does resizing an image affect the intrinsics?

Hi there, I have calibrated a camera using images at 1280 * 720. The intrinsic data is:

<cameraResolution>1280 720</cameraResolution>
<camera_matrix type_id="opencv-matrix">
  <data>
    6.5700301862152855e+02 0. 6.4669706479436286e+02
    0. 6.5700301862152855e+02 3.4711751174913059e+02
    0. 0. 1.</data></camera_matrix>

So the focal length is:

fx = fy = 657.00302

and the center pixels are:

cx = 646.69706
cy = 347.11751

Now, to speed up my algorithm, I have resized the images by half, to give a resolution of:

640 * 360

My question is: how do I need to adjust my intrinsic matrix to match? I assume I need to halve the cx and cy values, but the focal length stays the same? Is this correct?

Thank you.

2017-05-09 14:24:45 -0500 asked a question Aruco module, does it have 'Markermap'?

Hi, I was using the original aruco code, from here:

But am now using the opencv contrib module version, which works great. In the original code though, there is this concept of a 'markermap' which means you can place markers all over a room, film them, then run the video through aruco to calculate the positions. Once you have this map, you can use the 3d marker positions to localize.

My question is, does this exist in the same way in the opencv version?


2017-05-04 15:24:58 -0500 received badge  Notable Question (source)
2017-05-04 14:52:12 -0500 asked a question Aruco module, estimatePoseSingleMarkers looks great, estimatePoseBoard does not.

Hi, as the title says... I have a calibrated camera; the calibration is tested by undistorting a frame, and it looks good. When I run the aruco module and detect a single marker, estimating the pose and drawing the axis, it looks great.

When I do the same thing but using a charuco board, it finds all the markers, but the axis, and therefore the pose, jumps around almost every frame. It is very, very noisy.

I am basically using the straight sample code. Why would the single markers track well and the board not? What can I look at to improve the stability of the estimatePoseBoard function?

Thank you!

2017-04-26 15:18:52 -0500 asked a question interactive charuco application fails...

Hi, I have built the sample app opencv_interactive-calibration.exe, and it runs OK on a webcam (although every frame is rejected; I assume the webcam resolution is too low).

I have a higher resolution video that I am trying to run, but when I run:

opencv_interactive-calibration.exe -v

in a cmd prompt, I get:

Hot keys:
esc - exit application
s - save current data to .xml file
r - delete last frame
u - enable/disable applying undistortion
d - delete all frames
v - switch visualization
Unable to open video source

The mov is in the same folder as the exe. I have tried converting to an avi, and I get the same issue. Can anyone help me out here?


2017-04-02 14:03:12 -0500 asked a question cv::Matx, copy section to another Matx

I am porting some code from one project to another. One uses cv::Mat objects, and the other uses cv::Matx.

With Mat, I have:

cvImuMatrix.copyTo(imuPoseOut.rowRange(0, 3).colRange(0, 3));

cvImuMatrix and imuPoseOut are both:

cv::Mat imuPoseOut = cv::Mat::eye(4, 4, CV_32F);

I have not used Matx much, so my questions are:

What do these two lines need to be when using cv::Matx44d?

cv::Mat imuPoseOut = cv::Mat::eye(4, 4, CV_32F); //needs to be cv::Matx44d.
cvImuMatrix.copyTo(imuPoseOut.rowRange(0, 3).colRange(0, 3));

(Neither copyTo, nor rowrange seem to be included in Matx)

Thank you!

2017-02-17 13:39:52 -0500 commented answer realsense R200, difference between saved images and live stream?

Thank you! The conversion to 8bit first solved it for me.

2017-02-17 10:29:16 -0500 commented answer realsense R200, difference between saved images and live stream?

Thank you for taking the time to look at this. It was as simple as converting to 8bit. .convertTo(infrared_8U, CV_8U, 1/256.0); Gotta love an easy fix. :) Thanks again.

2017-02-17 04:57:16 -0500 commented question realsense R200, difference between saved images and live stream?

Hi, Thanks for your response. I am just wondering if there is anything obvious that it could be. Is there any obvious difference in the Mat types from the example code to the code I am using? Is there any other 16 bit Mat type I could try? Thanks again!

2017-02-16 14:07:38 -0500 received badge  Commentator
2017-02-16 14:07:38 -0500 commented question StereoRectify of non-parallel cameras?

Thank you for your response. i do need rectification, but the overlap turned out to be too small for what I need anyway. I have moved to using a different camera setup. Thanks again.

2017-02-16 14:06:27 -0500 asked a question realsense R200, difference between saved images and live stream?

Hi, I have a realsense r200 camera, and am using it with Stereo Odometry.

The images are converted to cv::mat like this:

Mat lFrame(480, 640, CV_16U, (uchar *)dev->get_frame_data(rs::stream::infrared));

which looks fine, and is from the docs.

When I save images out using:

cv::imwrite(name, img);

and run them through the third party Odometry library by loading the images, like this:

cv::Mat left_img = cv::imread(left_img_file_name.c_str(), 0); // load as black and white image
uint8_t* left_img_data =;

Everything works great. BUT, when I use a live stream of images, like this:

Mat lFrame(480, 640, CV_16U, (uchar *)dev->get_frame_data(rs::stream::infrared));
uint8_t* left_img_data =;

It does not work at all.

I suspect this is due to a mismatch in Mat types, but I am lost.

I have tried:

cv::cvtColor(lFrame, leftImg, CV_GRAY2RGB);

cv::cvtColor(lFrame, leftImg, CV_GRAY2BGR);

But I see no difference. Does anyone have any thoughts on what might be missing here?

If it helps: I have found an example by the author of the library. They used:

IplImage *I = cvCreateImageHeader(cvSize(dc_frame->size[0],dc_frame->size[1]),IPL_DEPTH_8U,1);

Thank you!

2017-02-13 15:18:49 -0500 asked a question StereoRectify of non-parallel cameras?

Hi, I have a factory-calibrated camera whose images I am trying to rectify.

The cameras are not parallel to each other; they are at a 72 degree angle, but there is still around a 20% overlap between the frames. Is this why it is failing?

I fill the matrices manually from the calibration data, then run stereoRectify as follows:

cv::Mat R1, P1, Q, map1x, map1y, R2, P2, map2x, map2y, imgU1, imgU2;
Mat CM1;
Mat CM2;
Mat D1, D2;
Mat R, T, E, F;

  //load images
cv::Mat img1 = cv::imread("/cam0/img391.png");
cv::Mat img2 = cv::imread("/cam1/img391.png");

//fill Matrices
//cam1  cx 372.61 cy 219.701 , fx 488.642, fy 491.335   
CM1 = (Mat_<double>(3, 3, CV_64FC1) << 488.642, 0, 372.61,0, 491.335, 219.701,0, 0, 1);

//cam2  cx 367.565 cy 188.369 fx 486.37 fy 489.142
CM2 = (Mat_<double>(3, 3, CV_64FC1) << 486.37, 0, 367.565,0, 489.142, 188.369,0, 0, 1);

D1 = (Mat_<double>(5, 1, CV_64FC1) << -0.331212, 0.146775, 0.000437399, 0.000434205, -0.0380578);
D2 = (Mat_<double>(5, 1, CV_64FC1) << -0.329043, 0.141312, -0.00020626, 0.000265379, -0.0337069);

R = (Mat_<double>(3,3, CV_64FC1) << 0.308658, -0.0013405, -0.951172,
    -0.00543986, 0.99998, -0.00317455,
    0.951157, 0.00615403, 0.308645);

T = (Mat_<double>(3, 1, CV_64FC1) << 0.0533983, 0.0223115, -0.238858);

stereoRectify(CM1, D1, CM2, D2, img1.size(), R, T, R1, R2, P1, P2, Q);  

cv::initUndistortRectifyMap(CM1, D1, R1, P1, img1.size(), CV_32FC1, map1x, map1y);
cv::initUndistortRectifyMap(CM2, D2, R2, P2, img2.size(), CV_32FC1, map2x, map2y);

printf("Undistort complete\n");     

remap(img1, imgU1, map1x, map1y, INTER_LINEAR, BORDER_CONSTANT, Scalar());
remap(img2, imgU2, map2x, map2y, INTER_LINEAR, BORDER_CONSTANT, Scalar());

imshow("image1", imgU1);
imshow("image2", imgU2);

The results are shown in the attached screenshot.

What can I look at to get a better result?

Thank you!

2017-02-13 14:28:45 -0500 asked a question StereoRectify, what type of Mats are needed?

Hi, I am trying to run stereo rectification. I have a factory-calibrated camera, and so am using the values provided. I fill the Mats manually, but stereoRectify crashes with the error:

OpenCV Error: Assertion failed (src.size == dst.size && src.channels() == dst.channels()) in cvConvertScale, file C:\opencv-master\modules\core\src\convert.cpp, line 5474

My code is:

Mat R1, P1, Q, map1x, map1y, R2, P2, map2x, map2y, imgU1, imgU2;
Mat CM1;
Mat CM2;
Mat D1, D2;
Mat R, T, E, F;

cv::Mat img1 = cv::imread("D:/VanishingPoint/OmniTrack_repo/calibrationImages/omni/1-10.jpg");
cv::Mat img2 = cv::imread("D:/VanishingPoint/OmniTrack_repo/calibrationImages/omni/2-10.jpg");

//cam1  cx 372.61 cy 219.701 , fx 488.642, fy 491.335   
CM1 = (Mat_<double>(3, 3, CV_64FC1) << 488.642, 0, 372.61, 0, 491.335, 219.701, 0, 0, 1);

//cam2  cx 367.565 cy 188.369 fx 486.37 fy 489.142
CM2 = (Mat_<double>(3, 3, CV_64FC1) << 486.37, 0, 367.565, 0, 489.142, 188.369, 0, 0, 1);

D1 = (Mat_<double>(5, 1, CV_64FC1) << -0.331212, 0.146775, 0.000437399, 0.000434205, -0.0380578);
D2 = (Mat_<double>(5, 1, CV_64FC1) << -0.329043, 0.141312, -0.00020626, 0.000265379, -0.0337069);

stereoRectify(CM1, D1, CM2, D2, img1.size(), R, T, R1, R2, P1, P2, Q);  //crashes.

I obviously have an incorrectly formatted Mat, but cannot figure out which one.

Thank you!

2017-02-13 08:47:03 -0500 commented answer aruCo module, world space coordinates?

Thank you. Exactly what i need to know.

2017-02-12 13:29:56 -0500 asked a question aruCo module, world space coordinates?

Hi, I need to move a camera and know exactly how far it has traveled. I need to do this with a mono camera; otherwise it would be simple using stereo odometry.

I am looking into the aRuCo module in openCv, which returns the camera pose.

My question is:

Since the size of the board / marker is known, is the camera translation returned in accurate world-space coordinates, or not?

eg, if I move a meter, will the tvec value reflect that in a repeatable way?


2017-01-24 14:38:41 -0500 marked best answer extract the data from a cv::Matx44d?

Hi, I am new to the Matx format, and am trying to extract data into a std::vector and a cv::Mat.

I have :

cv::Matx44d resultMat;

which is a 4 x 4 transformation matrix. I wish to extract the translation part of this (the last col) into a std::vector, and the rotation part (the top left 3x3 matrix) into a new Mat object.

How can I go about this?