
seaxgast's profile - activity

2020-03-23 15:43:41 -0600 received badge Popular Question
2019-06-07 07:58:31 -0600 asked a question Image stitching - need of seam finding/blending

Image stitching - need of seam finding/blending Hello! I am investigating the detailed stitching pipeline (as presented…

2019-06-04 11:59:46 -0600 answered a question Which algorithm is used in DpSeamFinder?

Does anyone have an idea how this seam finder works? I also need an explanation.
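For context, cv::detail::DpSeamFinder searches for seams with dynamic programming, minimising a colour (COLOR) or colour-plus-gradient (COLOR_GRAD) cost across the overlap between warped images. A minimal usage sketch, assuming the warped images, corners, and masks come from the standard detail pipeline:

    #include <opencv2/stitching/detail/seam_finders.hpp>
    #include <vector>

    // Assumed to exist from earlier pipeline stages:
    //   warpedImages : std::vector<cv::UMat> of warped images (converted
    //                  to CV_32F, as in the stitching_detailed sample)
    //   corners      : std::vector<cv::Point>, top-left corner of each image
    //   masks        : std::vector<cv::UMat>, warped masks, updated in place
    cv::detail::DpSeamFinder seamFinder(cv::detail::DpSeamFinder::COLOR_GRAD);
    seamFinder.find(warpedImages, corners, masks);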

2017-07-28 15:52:56 -0600 asked a question Proper way of rotating 3D points around axis

Hello!

I have a problem with applying a rotation to a set of 3D points. I use a depth map, which stores the Z coordinate of each point, and the inverse of the camera intrinsic matrix to obtain the X and Y coordinates. I need to rotate those 3D points around the Y axis and compute the depth map after rotation. The code I use is here:

for (int a = 0; a < depthValues.rows; ++a)
{
    for (int b = 0; b < depthValues.cols; ++b)
    {
        float oldDepth = depthValues.at<cv::Vec3f>(a, b)[0];

        if (oldDepth > EPSILON)
        {
            // Back-project the pixel through the inverse intrinsics and
            // scale by depth to recover the 3D point.
            cv::Mat pointInWorldSpace = cameraMatrix.inv() * cv::Mat(cv::Vec3f(a, b, 1), false);
            pointInWorldSpace *= oldDepth;

            // Rotate the 3D point.
            cv::Mat rotatedPointInWorldSpace = rotation * pointInWorldSpace;

            float newDepth = rotatedPointInWorldSpace.at<cv::Vec3f>(0, 0)[2];

            // Project back into image space and dehomogenize.
            cv::Mat rotatedPointInImageSpace = cameraMatrix * rotatedPointInWorldSpace;

            int x = rotatedPointInImageSpace.at<cv::Vec3f>(0, 0)[0] / newDepth;
            int y = rotatedPointInImageSpace.at<cv::Vec3f>(0, 0)[1] / newDepth;

            // Clamp the projected coordinates to the image bounds.
            x = x < 0 ? 0 : x;
            y = y < 0 ? 0 : y;
            x = x > depthValues.rows - 1 ? depthValues.rows - 1 : x;
            y = y > depthValues.cols - 1 ? depthValues.cols - 1 : y;

            depthValuesAfterConversion.at<cv::Vec3f>(x, y) = cv::Vec3f(newDepth, newDepth, newDepth);
        }
    }
}
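For reference, here is a minimal sketch of the back-projection step, assuming the usual cv::Mat convention that at(a, b) addresses row a and column b, so the homogeneous pixel fed to the inverse intrinsics would be (x, y, 1) = (b, a, 1). The backProject helper is hypothetical:

    #include <opencv2/core.hpp>

    // Hypothetical helper: back-project pixel (row, col) with depth z into
    // 3D space. at(row, col) == at(y, x), so the homogeneous pixel vector
    // for the pinhole model is (x, y, 1) = (col, row, 1).
    static cv::Vec3f backProject(const cv::Matx33f& Kinv, int row, int col, float z)
    {
        cv::Vec3f pixel(static_cast<float>(col), static_cast<float>(row), 1.0f);
        return z * (Kinv * pixel);
    }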

Here's how I compute the rotation matrix:

// 15 degrees in radians
float angle = (15.0f * 3.14159265f) / 180.0f;

// Hand-written rotation around the Y axis
float rotateYaxis[3][3] =
{
    { cos(angle), 0, -sin(angle) },
    {     0,      1,       0     },
    { sin(angle), 0,  cos(angle) }
};

// Note: this Mat wraps the stack array without copying it.
cv::Mat rotation(3, 3, CV_32FC1, rotateYaxis);

Unfortunately, after applying this rotation, my depth map looks as if it had been rotated around the X axis. I discovered that when I build the rotation matrix as if it were a rotation around the X axis, my code works as expected.

My question is: could you point out where the mistake in my code is? Using the matrix I've described, I expected my depth map to be rotated around the Y axis, not the X axis.

Thank you for your help!
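For comparison, a rotation about the Y axis can also be generated with cv::Rodrigues instead of a hand-written matrix (a minimal sketch; the axis-angle vector (0, angle, 0) is assumed to encode the intended rotation):

    #include <opencv2/calib3d.hpp>

    // Cross-check: build a Y-axis rotation from an axis-angle vector.
    // cv::Rodrigues of (0, angle, 0) yields
    //   [  cos(angle)  0  sin(angle) ]
    //   [      0       1      0      ]
    //   [ -sin(angle)  0  cos(angle) ]
    // i.e. the transpose of the hand-written matrix above. Both are
    // rotations about Y (just with opposite sign conventions), so the
    // sign alone would not turn a Y rotation into an apparent X rotation.
    float angle = (15.0f * 3.14159265f) / 180.0f;
    cv::Mat rotation;
    cv::Rodrigues(cv::Vec3f(0.0f, angle, 0.0f), rotation);  // 3x3, CV_32F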

2017-05-04 11:42:28 -0600 received badge Enthusiast
2017-04-29 12:12:56 -0600 asked a question How to determine which pixel from which image was used during panorama creation / how to create panorama from four-channel images?

Dear OpenCV users,

I'm playing around with the image stitching pipeline and panorama creation using OpenCV 3.2. I'm using the Kinect RGB camera as the image source; additionally, I retrieve depth values from the Kinect's sensor. My code is very similar to the example code that can be found in the OpenCV documentation. I convert the RGB images captured from the Kinect to matrices in BGR format, and I can easily create a panorama from a set of BGR images.

However, my goal is to create panoramas from four-channel images (BGR plus one extra channel, which I want to use for storing a depth value for each pixel). The panorama should be created using the BGR values, and in the fourth channel I'd like to store the depth values coming from the source images. Unfortunately, OpenCV does not allow me to use four-channel images here. Is there any way to bypass this limitation? If not, I would be glad to know how to determine, for each panorama pixel, which pixel from which image was used during panorama creation?

Thank you!
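One possible workaround (a sketch under assumptions, not a confirmed solution): keep the depth out of the stitcher's input by splitting each four-channel frame into a three-channel BGR image for cv::Stitcher and a separate depth map carried alongside it. The bgrdFrames container and the CV_8UC4 packing below are illustrative assumptions:

    #include <opencv2/core.hpp>
    #include <opencv2/stitching.hpp>
    #include <vector>

    std::vector<cv::Mat> bgrdFrames;  // assumed: CV_8UC4 frames packed as B, G, R, depth
    std::vector<cv::Mat> bgrImages, depthMaps;

    for (const cv::Mat& bgrd : bgrdFrames)
    {
        std::vector<cv::Mat> channels;
        cv::split(bgrd, channels);  // B, G, R, depth

        // Re-merge only the first three channels for stitching.
        cv::Mat bgr;
        cv::merge(std::vector<cv::Mat>(channels.begin(), channels.begin() + 3), bgr);
        bgrImages.push_back(bgr);
        depthMaps.push_back(channels[3]);
    }

    cv::Mat pano;
    cv::Stitcher stitcher = cv::Stitcher::createDefault(false);
    if (stitcher.stitch(bgrImages, pano) != cv::Stitcher::OK)
    {
        // handle stitching failure
    }

As for the per-pixel provenance question: the detail pipeline works with one warped mask per input image (cv::detail::SeamFinder updates these masks and the blender consumes them), so that is where "which pixel came from which image" would be recoverable.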