
Ice_T02's profile - activity

2020-11-18 03:01:07 -0600 received badge  Popular Question (source)
2020-03-01 17:03:44 -0600 marked best answer Compute fundamental matrix from camera calibration

Hello,

I am trying to compute the fundamental matrix given the following camera calibration parameters:

  • camera matrix 1/2 (camMat1, camMat2)
  • rotation vector 1/2 (rotVec1, rotVec2)
  • translation vector 1/2 (transVec1, transVec2)

According to the following formula, the fundamental matrix F is computed as:

F = inverse(transpose(camMat1)) * R * S * inverse(camMat2)

Anyway, I am quite lost on how to compute R and S. I know that R is the rotation matrix which brings image 1 into image 2. I also know that S is built from the translation vector that transforms image 1 into image 2. My plan would be:

1) Apply cv::Rodrigues to both rotation vectors and subtract rotation matrix 1 from rotation matrix 2

cv::Mat rotMat1, rotMat2;
cv::Rodrigues(rotVec1[0], rotMat1);
cv::Rodrigues(rotVec2[0], rotMat2);
cv::Mat R = rotMat2 - rotMat1;

2) Subtract translation vector 1 from translation vector 2

T = transVec2 - transVec1

3) Compose S

S = [0,-T[3], T[2]; T[3], 0, -T[1]; -T[2], T[1], 0];

Would this be correct? Any help on this topic would be appreciated. I hope this is not off-topic, since it is not directly related to OpenCV.

Edit: I worked in the solution you provided and cross-checked it against cv::stereoCalibrate(). Unfortunately, the matrices do not match. Any suggestions as to what I did wrong?

cv::Mat computeFundMat(cv::Mat camMat1, cv::Mat camMat2, vector<cv::Mat> rotVec1, 
    vector<cv::Mat> rotVec2, vector<cv::Mat> transVec1, vector<cv::Mat> transVec2)
{
    cv::Mat rotMat1(3, 3, CV_64F), rotMat2(3, 3, CV_64F);
    cv::Mat transVec2toWorld(3, 1, CV_64F);
    cv::Mat R(3, 3, CV_64F), T(3, 1, CV_64F), S(cv::Mat::zeros(3, 3, CV_64F));
    cv::Mat F(3, 3, CV_64F);
    //Convert rotation vector into rotation matrix 
    cv::Rodrigues(rotVec1.at(0), rotMat1);
    cv::Rodrigues(rotVec2.at(0), rotMat2);
    //Transform parameters of camera 2 into the world frame
    rotMat2 = rotMat2.t();
    transVec2toWorld = (-rotMat2 * transVec2.at(0));
    //Compute R, T to rotate/translate image b into image a
    R = rotMat2 * rotMat1; //or --> rotMat1 * rotMat2 ???
    T = transVec1.at(0) + transVec2toWorld;
    //Compose skew matrix (0-based indices):
    // S = [    0, -T[2],  T[1];
    //       T[2],     0, -T[0];
    //      -T[1],  T[0],     0 ]
    S.at<double>(0, 1) = -T.at<double>(2); S.at<double>(0, 2) = T.at<double>(1);
    S.at<double>(1, 0) = T.at<double>(2); S.at<double>(1, 2) = -T.at<double>(0);
    S.at<double>(2, 0) = -T.at<double>(1); S.at<double>(2, 1) = T.at<double>(0);
    //Compute fundamental matrix (F = inverse(transpose(camMat2)) * R * S * inverse(camMat1))
    return cv::Mat(camMat2.t().inv() * R * S * camMat1.inv());
}
2019-11-26 03:40:00 -0600 received badge  Notable Question (source)
2019-01-14 07:48:07 -0600 received badge  Popular Question (source)
2018-03-20 06:55:55 -0600 received badge  Student (source)
2017-08-30 07:23:53 -0600 received badge  Necromancer (source)
2017-08-22 08:29:44 -0600 commented question Best Small Hardware to use with OpenCV for Position Tracking

Did you check out the Nvidia Jetson TX2/TX1? The basic HW without peripherals is the size of a credit card.

2017-08-22 08:23:17 -0600 commented question Conversion of channel 3 to channel 1

You are looking for this.

E.g.:
cv::Mat ThreeChannel;
cv::Mat OneChannel;
cv::cvtColor(ThreeChannel, OneChannel, CV_BGR2GRAY); //if your input image has another format, change the flag
2017-07-19 10:47:10 -0600 answered a question OpenCV 3.1 cmake error during configuration

Okay, I have found a solution ... The issue is that "libcuda.so" is not found.

Adding the following option to the CMake configuration solves the issue:

-DCMAKE_LIBRARY_PATH=/usr/local/cuda/lib64/stubs \
2017-07-19 09:34:13 -0600 commented question OpenCV 3.1 cmake error during configuration

:( RIP ErrorLOG :D ... well, I know that on Windows you can restrict the compiler exclusively to the x64 environment ... but how do I change this for Ubuntu? I have to say I am pretty new to Ubuntu and the CMake command line as well...

2017-07-19 08:54:47 -0600 commented question OpenCV 3.1 cmake error during configuration

-.- sorry ...

2017-07-19 08:18:31 -0600 asked a question OpenCV 3.1 cmake error during configuration

Hi,

configuring OpenCV 3.1 for a Tegra TX1 (Ubuntu 16.04, CUDA 8.0) leads me to the following error:

    CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_CUDA_LIBRARY (ADVANCED)
    linked by target "example_gpu_alpha_comp" in directory /home/aa/Desktop/OPENCV_INSTALL/opencv/samples/gpu
    linked by target "example_gpu_bgfg_segm" in directory /home/aa/Desktop/OPENCV_INSTALL/opencv/samples/gpu
    linked by target "example_gpu_cascadeclassifier" in directory /home/aa/Desktop/OPENCV_INSTALL/opencv/samples/gpu
   ... and so on

For the installation process I am following this guide. Anyway, all errors are related to "example_gpu_SomeExampleName". I googled it and found that other people have had similar problems, since CUDA only ships x64 libraries. How do I work around this issue? Is it possible to configure OpenCV 3.1 for Ubuntu in x64 mode only? Any help would be appreciated.

2017-07-19 08:14:46 -0600 asked a question OpenCV 3.1 cmake error during configuration


EDIT 1: added CMakeError.log & CMakeOutput.log

CMakeOutput.log

The system is: Linux - 4.4.0-31-generic - x86_64
Compiling the CXX compiler identification source file "CMakeCXXCompilerId.cpp" succeeded.
Compiler: /usr/bin/c++ 
Build flags: 
Id flags: 

The output was:
0


Compilation of the CXX compiler identification source "CMakeCXXCompilerId.cpp" produced "a.out"

The CXX compiler identification is GNU, found in "/home/aa/Desktop/OPENCV_INSTALL/build/CMakeFiles/2.8.12.2/CompilerIdCXX/a.out"

Compiling the C compiler identification source file "CMakeCCompilerId.c" succeeded.
Compiler: /usr/bin/cc 
Build flags: 
Id flags: 

The output was:
0


Compilation of the C compiler identification source "CMakeCCompilerId.c" produced "a.out"

The C compiler identification is GNU, found in "/home/aa/Desktop/OPENCV_INSTALL/build/CMakeFiles/2.8.12.2/CompilerIdC/a.out"

Determining if the CXX compiler works passed with the following output:
Change Dir: /home/aa/Desktop/OPENCV_INSTALL/build/CMakeFiles/CMakeTmp

Run Build Command:/usr/bin/make "cmTryCompileExec3282914228/fast"
/usr/bin/make -f CMakeFiles/cmTryCompileExec3282914228.dir/build.make CMakeFiles/cmTryCompileExec3282914228.dir/build
make[1]: Entering directory `/home/aa/Desktop/OPENCV_INSTALL/build/CMakeFiles/CMakeTmp'
/usr/bin/cmake -E cmake_progress_report /home/aa/Desktop/OPENCV_INSTALL/build/CMakeFiles/CMakeTmp/CMakeFiles 1
Building CXX object CMakeFiles/cmTryCompileExec3282914228.dir/testCXXCompiler.cxx.o
/usr/bin/c++     -o CMakeFiles/cmTryCompileExec3282914228.dir/testCXXCompiler.cxx.o -c /home/aa/Desktop/OPENCV_INSTALL/build/CMakeFiles/CMakeTmp/testCXXCompiler.cxx
Linking CXX executable cmTryCompileExec3282914228
/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTryCompileExec3282914228.dir/link.txt --verbose=1
/usr/bin/c++        CMakeFiles/cmTryCompileExec3282914228.dir/testCXXCompiler.cxx.o  -o cmTryCompileExec3282914228 -rdynamic 
make[1]: Leaving directory `/home/aa/Desktop/OPENCV_INSTALL/build/CMakeFiles/CMakeTmp'


Detecting CXX compiler ABI info compiled with the following output:
Change Dir: /home/aa/Desktop/OPENCV_INSTALL/build/CMakeFiles/CMakeTmp

Run Build Command:/usr/bin/make "cmTryCompileExec2523076562/fast"
/usr/bin/make -f CMakeFiles/cmTryCompileExec2523076562.dir/build.make CMakeFiles/cmTryCompileExec2523076562.dir/build
make[1]: Entering directory `/home/aa/Desktop/OPENCV_INSTALL/build/CMakeFiles/CMakeTmp'
/usr/bin/cmake -E cmake_progress_report /home/aa/Desktop/OPENCV_INSTALL/build/CMakeFiles/CMakeTmp/CMakeFiles 1
Building CXX object CMakeFiles/cmTryCompileExec2523076562.dir/CMakeCXXCompilerABI.cpp.o
/usr/bin/c++     -o CMakeFiles/cmTryCompileExec2523076562.dir ...
(more)
2017-07-12 04:34:20 -0600 commented question Pixel multiplication gives different result

You are using unsigned char, which ranges from 0 to 255, and writing the uchar value back into the uchar matrix... When you multiply e.g. 255 * 255, weird stuff happens, since the value exceeds 255. Just write the value to e.g. a CV_16UC1 matrix and it should work.

2017-07-10 02:50:52 -0600 commented answer calibrate stereo system without calling cv::stereoCalibrate

Hi, I was facing a similar problem. I wanted to compute a fundamental matrix from the two projection matrices, as described here. Anyway, what really helped me out is this. Just add a few lines of code as stated in the previous link and you can get R and t from r1, t1 and r2, t2. Here (in line 459) you have the principle to obtain the essential matrix from R and t. Hope I could help!

2017-06-30 10:55:06 -0600 commented answer Create a stereo projection matrix using rvec and tvec?

Sorry, I skipped that part ... when images are rectified there should indeed be no rotation. Hm ... are you sure about the 120 mm baseline?

2017-06-30 10:55:06 -0600 received badge  Commentator
2017-06-30 10:07:23 -0600 commented answer Create a stereo projection matrix using rvec and tvec?

I assume the error is caused by using the same rotation twice. P1 and P2 will only in very rare cases have the same rotation against the world. As I suggested, you have to get the relative rotation and translation between camera 1 and camera 2 and then add this to the solvePnP outcome (rVec1 & tVec1) in order to make rVec2 and tVec2 (from camera 2) be based on camera 1.

2017-06-30 09:10:01 -0600 commented answer Create a stereo projection matrix using rvec and tvec?

I don't think you can directly compare a projection matrix from time x against one from time y ... but you can use this in order to see whether the intrinsic values changed or not.

2017-06-30 08:43:38 -0600 commented answer Create a stereo projection matrix using rvec and tvec?

OK, I think from this post I understand your issue. Below is some pseudo code:

// R ... 3x3 relative rotation between camera 1 and 2
cv::Mat R = R2 * R1.t();
// t ... 3x1 relative translation between camera 1 and 2
cv::Mat t = t2 - R * t1;

Now add R and t respectively to rMat and tVec of camera 1 to get the correct rMat2 and tVec2 for camera 2. Then you can compute the projection matrix as given above. Note: the relative rotation and translation should not change, so you can compute them once and always add the same values to rMat1 and tVec1.

2017-06-30 06:51:11 -0600 commented answer Create a stereo projection matrix using rvec and tvec?

I still don't understand what you mean by "update". You compute the projection matrix once from intrinsic and extrinsic parameters, and it stays the same as long as you don't rearrange the camera setup or change the focal length etc. of the camera. If you do, then your camera is uncalibrated. Also, if you just have extrinsic parameters you can't compute a projection matrix; you also need the intrinsic parameters. I am also not certain how you obtain the intrinsic parameters for the second camera. Do you assume they are exactly the same? Further, in order to obtain rVec and tVec for the 2nd camera you need to know the relative rotation/translation from camera 1 to camera 2.

2017-06-30 06:28:24 -0600 commented answer Create a stereo projection matrix using rvec and tvec?

Just for clarification: you have a stereo camera setup, one camera is calibrated and the other one is not? And you want to add the baseline to the translation in order to obtain the tVec of the second camera?

2017-06-30 05:47:39 -0600 answered a question Create a stereo projection matrix using rvec and tvec?

I am quite confused by the topic "Update a projection matrix from rVec and tVec", but if you are looking for a way to obtain the projection matrix from a camera calibration with known intrinsic and extrinsic parameters, you can do the following:

cv::Mat computeProjMat(cv::Mat camMat, vector<cv::Mat> rotVec, vector<cv::Mat> transVec)
{
    cv::Mat rotMat(3, 3, CV_64F), rotTransMat(3, 4, CV_64F); //Init.
    //Convert rotation vector into rotation matrix 
    cv::Rodrigues(rotVec[0], rotMat);
    //Append translation vector to rotation matrix
    cv::hconcat(rotMat, transVec[0], rotTransMat);
    //Compute projection matrix by multiplying intrinsic parameter 
    //matrix (A) with 3 x 4 rotation and translation pose matrix (RT).
    //Formula: Projection Matrix = A * RT;
    return (camMat * rotTransMat);
}
2017-06-29 02:57:09 -0600 commented answer How to Use formula in c++ opencv
  • noArray() ... as the name suggests, an empty array; more or less a placeholder if you don't need that output
  • CV_8U & CV_32F ... the type of the matrix, e.g. CV_8UC1 is an unsigned char 1-channel image whose data ranges from 0 to 255. Here you can see all OpenCV data types and their meaning.
  • quoting @LBerger: "If you want to display results you will have to convert it to CV_8U"
2017-06-28 07:21:40 -0600 commented question How to record multiple cameras with VideoWriter?

As far as I can tell, the code looks fine to me. Anyway, this sounds to me like there is a bottleneck somewhere in your setup, so the frames can't be properly transmitted. I had similar issues when I connected 2x USB 3.0 cameras to a USB hub and connected the hub to the PC. Some frames were fine, but the majority were unusable ... Hard to tell when you don't have the same setup ...

EDIT: Did you try lowering your resolution to e.g. 640x480? Are your frames still corrupt?

2017-06-27 03:30:09 -0600 commented question How to record multiple cameras with VideoWriter?

Did you run your code serially and check whether it works there? How are your cameras connected to the PC? USB 3.0? Do you use a USB hub in between? Also, at least one corrupted image and the corresponding code would be helpful ... At the moment it sounds to me like there is a bottleneck somewhere in the connection to your PC, and the frames can't be properly transmitted.

2017-06-20 21:45:30 -0600 received badge  Nice Answer (source)
2017-06-16 04:08:59 -0600 answered a question How to use C-style scan to check pixel neighbors ?

I guess you are trying to implement your solution from this post.

Here is an alternative way to loop over the pixels. It is slower than the old C-style approach but faster than the unoptimized version:

size_t idx;
for (size_t y = 0; y < imageHeight - 1; ++y)
{
    for (size_t x = 0; x < imageWidth - 1; ++x)
    {
        idx = y * imageWidth + x;
        //Look straight right
        pimg[idx + 1];
        //Look straight down
        pimg[idx + imageWidth];
        //Look down right
        pimg[idx + imageWidth + 1];
    }
}

Of course, you have to stop at maxCol-1/maxRow-1 so you stay within the boundaries of the image. You can add a special case for the last row where you just look at the right side, or something like that. Don't get me wrong, you can also use the old C-style approach, but then you need additional checks in order to know when a new row starts.

Hope this helps.

2017-06-16 03:11:16 -0600 commented answer how to get angles of x y z in pose estimation?

"Each quadrant should be chosen by using the signs of the numerator and denominator of the argument. The numerator sign selects whether the direction will be above or below the x-axis, and the denominator selects whether the direction will be to the left or right of the y-axis. This is the same as the atan2 function in the C programming language, which nicely expands the range of the arctangent to [0,2pi]"

The part where it states "This is the same as the 2" ... for some reason it's cut off ... it's atan2.

Please also be aware of the last sentence: "Note that this method assumes r11 != 0 + r33 != 0."