Pellaeon's profile - activity

2016-12-07 04:16:46 -0600 received badge  Teacher
2015-11-02 03:19:25 -0600 asked a question Copy cropped image

Hi,

I undistort an image with OpenCV. Afterwards I want to remove the black regions at the periphery that are caused by the undistortion, so I define a border to crop away.

    // crop a symmetric border, keeping the original aspect ratio
    const int border_width  = 10;
    const int croppedWidth  = resolution.first - 2 * border_width;
    const int croppedHeight = croppedWidth  * resolution.second / resolution.first;
    const int border_height = (resolution.second - croppedHeight) / 2;

    // wrap the raw buffer (no copy), take the ROI, and deep-copy it
    cv::Mat srcImg(resolution.second, resolution.first, cvFormat, const_cast<unsigned char*>(pImg));
    auto imgRoi = srcImg(cv::Rect(border_width, border_height, croppedWidth, croppedHeight));
    imgRoi.copyTo(*m_croppedRenderImg);

    // hand the contiguous copy back to the caller
    pImg       = reinterpret_cast<unsigned char*>(m_croppedRenderImg->data);
    resolution = std::make_pair(croppedWidth, croppedHeight);

Because I need the image as a raw pointer for a texture in Ogre, the pixel data must lie in one contiguous memory block. Therefore I can't use imgRoi directly and tried to obtain a contiguous copy via copyTo. My problem is that the resulting image is wrong; it looks shifted.
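
To illustrate what I expect copyTo to do, here is a minimal standalone sketch (the sizes and names are made up, not my real data); it shows that the ROI view is non-contiguous while the copy is:

    // Hypothetical example: crop a ROI and copy it into contiguous memory.
    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        cv::Mat src(480, 640, CV_8UC4, cv::Scalar(0, 0, 0, 255)); // dummy BGRA image
        cv::Mat roi = src(cv::Rect(10, 10, 620, 460));            // a view, shares src's memory

        std::cout << roi.isContinuous() << '\n';     // 0: rows of the view have gaps between them

        cv::Mat cropped;
        roi.copyTo(cropped);                         // deep copy into freshly allocated memory
        std::cout << cropped.isContinuous() << '\n'; // 1: cropped.data is one contiguous block
        return 0;
    }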

So, what is wrong with my code?

Best regards

Pellaeon

2015-06-17 08:46:47 -0600 asked a question undistortion of a camera image with remap

Hi,

I need an undistorted camera image for an AR application. cv::undistort is too slow for my purpose, so I want to try initUndistortRectifyMap and remap, to do the initialization only once and save computation time. Here is my first test:

    //create source matrix
    cv::Mat srcImg(res.first, res.second, cvFormat, const_cast<char*>(pImg));

    // camera (intrinsic) matrix
    cv::Mat cam(3, 3, cv::DataType<float>::type);
    cam.at<float>(0, 0) = 528.53618582196384f;
    cam.at<float>(0, 1) = 0.0f;
    cam.at<float>(0, 2) = 314.01736116032430f;

    cam.at<float>(1, 0) = 0.0f;
    cam.at<float>(1, 1) = 532.01912214324500f;
    cam.at<float>(1, 2) = 231.43930864205211f;

    cam.at<float>(2, 0) = 0.0f;
    cam.at<float>(2, 1) = 0.0f;
    cam.at<float>(2, 2) = 1.0f;

    // distortion coefficients (k1, k2, p1, p2, k3)
    cv::Mat dist(5, 1, cv::DataType<float>::type);
    dist.at<float>(0, 0) = -0.11839989180635836f;
    dist.at<float>(1, 0) = 0.25425420873955445f;
    dist.at<float>(2, 0) = 0.0013269901775205413f;
    dist.at<float>(3, 0) = 0.0015787467748277866f;
    dist.at<float>(4, 0) = -0.11567938093172066f;

    // build the undistortion maps once ...
    cv::Mat map1, map2;
    cv::initUndistortRectifyMap(cam, dist, cv::Mat(), cam, cv::Size(res.second, res.first), CV_32FC1, map1, map2);

    // ... then apply them per frame
    cv::remap(srcImg, *m_undistImg, map1, map2, cv::INTER_CUBIC);

First, I create an OpenCV matrix from my image (the format is BGRA), then I create the camera and distortion matrices. After this, I call initUndistortRectifyMap and then remap. As you can see in screen.jpg, the resulting camera image is wrong. I have no idea what the problem is. Any suggestions? What's wrong in my code?
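
For context, this is the pattern I am aiming for as I understand it: pay for the map computation once, then only pay for remap per frame. A sketch with placeholder names, not my real setup:

    // Sketch: one-time initialization vs. per-frame remap (names are placeholders).
    #include <opencv2/opencv.hpp>

    class Undistorter
    {
    public:
        Undistorter(const cv::Mat& camMatrix, const cv::Mat& distCoeffs, cv::Size imageSize)
        {
            // expensive part, done once: precompute the pixel lookup maps
            cv::initUndistortRectifyMap(camMatrix, distCoeffs, cv::Mat(), camMatrix,
                                        imageSize, CV_32FC1, m_map1, m_map2);
        }

        void apply(const cv::Mat& frame, cv::Mat& undistorted) const
        {
            // cheap part, done per frame: just the lookup and interpolation
            cv::remap(frame, undistorted, m_map1, m_map2, cv::INTER_LINEAR);
        }

    private:
        cv::Mat m_map1, m_map2;
    };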

Best regards

Pellaeon

2015-05-11 08:12:30 -0600 commented question how to use undistort proper

push to top

2015-05-07 09:12:14 -0600 received badge  Editor (source)
2015-05-07 08:56:17 -0600 asked a question how to use undistort proper

I want to show an undistorted camera image in my application and found the OpenCV function "undistort". I created a matrix and filled it with the intrinsic parameters, but when I start my application the image is wrong. It looks as if the color channels are shifted. Also, what is the proper matrix size for the distortion coefficients? E.g. a (4,1) matrix or a (1,4) matrix?

    auto cvFormat = getOpenCVFormat(format);
    if (cvFormat != -1)
    {
        //create source matrix
        cv::Mat srcImg(res.first, res.second, cvFormat, const_cast<char*>(pImg));

        // camera (intrinsic) matrix
        cv::Mat cam(3, 3, cv::DataType<float>::type);
        cam.at<float>(0, 0) = 1.9191881071077680e+003f;
        cam.at<float>(0, 1) = 0.0f;
        cam.at<float>(0, 2) = 9.9180029393125994e+002f;

        cam.at<float>(1, 0) = 0.0f;
        cam.at<float>(1, 1) = 1.9199004896008303e+003f;
        cam.at<float>(1, 2) = 4.8762694385566999e+002f;

        cam.at<float>(2, 0) = 0.0f;
        cam.at<float>(2, 1) = 0.0f;
        cam.at<float>(2, 2) = 1.0f;

        // distortion coefficients (k1, k2, p1, p2, k3)
        cv::Mat dist(5, 1, cv::DataType<float>::type);
        dist.at<float>(0, 0) = -6.0966622082132604e-001f;
        dist.at<float>(1, 0) = 1.1527185014286182e+001f;
        dist.at<float>(2, 0) = -5.2601963039546193e-003f;
        dist.at<float>(3, 0) = 1.8731645453405020e-003f;
        dist.at<float>(4, 0) = -8.6013254063968844e+001f;

        cv::undistort(srcImg, *m_undistImg, cam, dist);

        m_pImgRender = reinterpret_cast<char*>(m_undistImg->data);
    }
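
Regarding the coefficient layout: as far as I can tell from the documentation, the distortion coefficients are accepted as either a row or a column vector (of 4, 5, or 8 elements), so both layouts below should mean the same thing. A hypothetical sketch using my values:

    // Hypothetical example: the same five coefficients (k1, k2, p1, p2, k3)
    // as a 5x1 column vector and as a 1x5 row vector.
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat distCol = (cv::Mat_<float>(5, 1) <<
            -6.0966622082132604e-001f, 1.1527185014286182e+001f,
            -5.2601963039546193e-003f, 1.8731645453405020e-003f,
            -8.6013254063968844e+001f);

        cv::Mat distRow = distCol.reshape(0, 1); // 1 x 5 header over the same data

        return 0;
    }
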
2015-01-30 02:35:41 -0600 asked a question Play a video file with the right timing

Hi,

I use VideoCapture to read an AVI file and copy the frames to a video texture within my Ogre3D application. I want the video to play with correct timing (not too slow, not too fast). My first approach was to use the elapsed time to calculate the number of frames passed (fps * elapsedTime [in seconds]) and, if the passed frame count is > 1, to do a videoCapture.read().

    auto elapsedTime = m_timer.elapsed().wall / 1000 / 1000.0;

    if (m_fps * elapsedTime < 0.9) return false;

    m_videoCapture.read(m_imageBGR);

Unfortunately, the video runs too slowly with this code.

Therefore, I use the following commands to set the video stream to the right position:

    m_videoCapture.set(CV_CAP_PROP_POS_MSEC, elapsedTime);
    m_videoCapture.read(m_imageBGR);

Now my question about this solution: is there a performance drawback? Presumably there is some buffering mechanism in the background so that the read call can grab the next frame efficiently. But when I set the position every time, perhaps that buffering is disturbed?
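
For comparison, here is a standalone sketch of the frame-counting variant without seeking: keep decoding until the frame index catches up with the wall-clock time. The file name and the loop are made up for illustration; in my application this check would run once per rendered frame:

    // Sketch: decode until the frame that is 'due' at the current wall-clock
    // time is reached, instead of seeking via CV_CAP_PROP_POS_MSEC.
    #include <opencv2/opencv.hpp>
    #include <chrono>

    int main()
    {
        cv::VideoCapture cap("video.avi"); // placeholder file name
        if (!cap.isOpened()) return 1;

        const double fps   = cap.get(CV_CAP_PROP_FPS);
        const auto   start = std::chrono::steady_clock::now();

        cv::Mat frame;
        int decoded = 0;
        for (;;)
        {
            const std::chrono::duration<double> elapsed =
                std::chrono::steady_clock::now() - start;
            const int target = static_cast<int>(fps * elapsed.count()); // frames due so far

            while (decoded < target)
            {
                if (!cap.read(frame)) return 0; // end of stream
                ++decoded;
            }
            // ... upload 'frame' to the video texture here ...
        }
    }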

Best regards

Pellaeon