
Gunter's profile - activity

2014-05-20 10:54:27 -0500 asked a question Discontinuous Mat (Using = address instead of copyTo)

Hello, I receive image sections frame-by-frame from moving images on a conveyor belt.

Using ROIs I piece the frame-by-frame images together to make one image. This works fine.

Except I have to use copyTo instead of just moving pointers. This makes sense, as the original Mat would become discontinuous. But since Mat supports discontinuous data, I'm wondering if there is a better way.

Here's what I do now:

In constructor:

m_InData = Mat(m_h, m_w, CV_8UC4);
m_ScrollingBuffer = Mat(m_h*m_NumBufPerScrollingBuf, m_w, CV_8UC4);

for (int i=0; i<m_NumBufPerScrollingBuf;i++)
    m_ScrollingBufferROI.push_back(Mat(m_ScrollingBuffer, Range(i*m_h, (i+1)*m_h), Range(0,m_w)));

And at Run-Time, after I get a frame:

GetImageAddress(&address);
m_InData.data = address;

for (int i=1; i<=m_NumBufPerScrollingBuf-1; i++)
    m_ScrollingBufferROI[i].copyTo(m_ScrollingBufferROI[i-1]);
m_InData.copyTo(m_ScrollingBufferROI[m_NumBufPerScrollingBuf-1]);

But I'd like to do something more elegant, like:

for (int i=1; i<=m_NumBufPerScrollingBuf-1; i++)
    m_ScrollingBufferROI[i-1].data = m_ScrollingBufferROI[i].data;
m_ScrollingBufferROI[m_NumBufPerScrollingBuf-1].data = address;

But the above doesn't work (I suspect because of the non-continuous nature.)
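For what it's worth, reassigning `.data` on a ROI only changes where that one header points; the pixels in the parent buffer never move, and the ROI's step/size bookkeeping no longer matches. An alternative that avoids most copying is to stop moving blocks at all and rotate an index instead. A minimal sketch in plain C++ (no OpenCV; `ScrollingBuffer` and all names here are hypothetical):

```cpp
#include <cstddef>
#include <vector>

// Sketch (plain C++, no OpenCV): instead of copying each block "up" every
// frame, keep the blocks in place and rotate a logical start index.
// Logical block k of the scrolling image lives in physical slot
// (start + k) % numBlocks.
struct ScrollingBuffer {
    int blockRows, cols, numBlocks;
    int start = 0;                      // physical slot of the logical oldest block
    std::vector<unsigned char> data;    // numBlocks * blockRows * cols bytes

    ScrollingBuffer(int h, int w, int n)
        : blockRows(h), cols(w), numBlocks(n),
          data(static_cast<size_t>(n) * h * w) {}

    // Pointer to the physical storage of logical block k (0 = oldest).
    unsigned char* block(int k) {
        int phys = (start + k) % numBlocks;
        return data.data() + static_cast<size_t>(phys) * blockRows * cols;
    }

    // "Scroll": the oldest slot is recycled to hold the newest frame.
    unsigned char* advance() {
        unsigned char* dst = block(0);  // oldest slot, about to be overwritten
        start = (start + 1) % numBlocks;
        return dst;                     // caller copies the new frame here once
    }
};
```

In OpenCV terms the same idea would keep the pre-built ROI headers and treat logical block k as header (start + k) % n, so only the newest frame is ever copied, at the cost of the full image no longer being one contiguous top-to-bottom buffer.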


Thanks and sorry if a similar/duplicate question shows up.


2014-05-14 07:55:44 -0500 received badge  Self-Learner (source)
2014-05-14 07:54:47 -0500 answered a question Cropping Calibration Correction maps

With fresh eyes it was easy to find the simple approach: call, instead:

 initUndistortRectifyMap(cameraMatrix, distCoeffs, R1, cameraMatrix, 
                         Size(cameraSensorX, cameraSensorY), CV_32FC1, 
                         distCorrectMapX, distCorrectMapY);

Then you get all the X coordinates in mapX and all the Y coordinates in mapY. Crop each to the ROI, then just subtract the corresponding offset from each.

2014-05-12 07:43:56 -0500 asked a question Cropping Calibration Correction maps

Hello, we have a system with a number of cameras.

At run-time, one of those cameras has a very small ROI in the y-direction: 6-20 lines.

This is (much) too small to use a grid to calibrate the camera; but I can do that using a full field of view and just crop.

But: if I use initUndistortRectifyMap to get mapx & mapy, how do I crop them?

Mapx looks pretty obvious (I think!): I crop to the ROI and then subtract the offset, i.e.:

 initUndistortRectifyMap(cameraMatrix, distCoeffs, R1, cameraMatrix, 
                         Size(cameraSensorX, cameraSensorY), CV_16SC2, 
                         distCorrectMapXY, distCorrectMapY);

 Mat distCorrectMapXYROI_orig, distCorrectMapXYROI_offset, distCorrectMapYROI;

 distCorrectMapXYROI_orig = distCorrectMapXY(
                             Range(cameraYOffset, cameraYOffset + cameraYROI), 
                             Range(cameraXOffset, cameraXOffset + cameraXROI));

  vector <Mat> distCorrectMapsXYROI_split;
  split(distCorrectMapXYROI_orig, distCorrectMapsXYROI_split);

  distCorrectMapsXYROI_split[0] -= cameraXOffset;
  distCorrectMapsXYROI_split[1] -= cameraYOffset;

  merge(distCorrectMapsXYROI_split, distCorrectMapXYROI_offset);

Are my assumptions with mapx correct?

Next, what do I do with mapy? It's not so obvious. Right now I'm just cropping, and it seems OK?

 distCorrectMapYROI = distCorrectMapY(
                      Range(cameraYOffset, cameraYOffset + cameraYROI),
                      Range(cameraXOffset, cameraXOffset + cameraXROI));

Thanks for everyone's help.


2014-04-29 10:21:31 -0500 asked a question Initialize Mat with color data

Hello, I'm trying to initialize a Mat with BGR data, and I'm getting strange results.

I do this all the time:

Mat image(Size(cols, rows), CV_8UC1, (void*) inputData, Mat::AUTO_STEP);

But when I do it with a color image, the output Mat is not correct (the BGR data is offset in both the vertical and horizontal dimensions).

Even if I trim the code to:

Mat image2 = Mat(Size(imageIn.cols, imageIn.rows), imageIn.type(), (void*) imageIn.data, Mat::AUTO_STEP);

image2 is not imageIn.

imageIn.type() is CV_8UC3.

What am I doing wrong?

Thanks for your help,
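One plausible culprit (an assumption, since the source of inputData isn't shown): Mat::AUTO_STEP assumes each row occupies exactly cols * channels bytes, and a 3-channel buffer whose rows are padded to an alignment boundary will come out sheared in exactly this way. A plain-buffer sketch of the indexing involved (`readPixel` is hypothetical):

```cpp
#include <cstddef>
#include <vector>

// Sketch: why a wrong row step "shears" a 3-channel image. With AUTO_STEP a
// Mat-style wrapper assumes each row is exactly cols * 3 bytes; if the source
// buffer pads rows to, say, a 4-byte boundary, every row starts later than
// the wrapper expects and the picture drifts sideways as it goes down.
// readPixel shows the indexing such a wrapper performs for a given step.
unsigned char readPixel(const std::vector<unsigned char>& buf,
                        size_t step, int row, int col, int channel) {
    return buf[row * step + static_cast<size_t>(col) * 3 + channel];
}
```

If the source rows really are padded, passing the buffer's actual stride, e.g. Mat(rows, cols, CV_8UC3, (void*) inputData, actualStepInBytes), instead of Mat::AUTO_STEP should line the image up.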