2020-11-18 03:01:07 -0600 | received badge | ● Popular Question (source) |
2020-03-01 17:03:44 -0600 | marked best answer | Compute fundamental matrix from camera calibration Hello, I am trying to compute the fundamental matrix given the following camera calibration parameters: According to the following formula the fundamental matrix F is computed by: Anyway, I am quite lost on how to compute R and S. I know that R is the rotation matrix which brings image 1 into image 2. I also know that S is the translation vector to transform image 1 into image 2. My plan would be: 1) Rodrigues both rotation vectors and subtract rotation matrix 1 from rotation matrix 2. 2) Subtract translation vector 1 from translation vector 2. 3) Compose S. Would this be correct? Any help on this topic would be appreciated. I hope this is not off-topic since it is not directly related to OpenCV. Edit: I worked in the solution you provided and cross-checked it with cv::stereoCalibrate(). Unfortunately the matrices do not match. Any suggestions as to what I did wrong? |
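A note on steps 1) and 2) of the plan above: the relative motion between the two cameras is obtained by composition, not by subtracting matrices — R = R2·R1ᵀ and t = t2 − R·t1. The S matrix of the formula is then the cross-product matrix [t]ₓ, and F = K2⁻ᵀ·[t]ₓ·R·K1⁻¹. A minimal NumPy sketch (function and variable names are mine, not from the original post):

```python
import numpy as np

def skew(t):
    # cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_calib(K1, R1, t1, K2, R2, t2):
    # relative motion camera 1 -> camera 2: composed, not subtracted
    R = R2 @ R1.T
    t = t2 - R @ t1
    E = skew(t) @ R                                   # essential matrix
    F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)   # fundamental matrix
    return F / np.linalg.norm(F)                      # fix the arbitrary scale
```

Since F is only defined up to scale, a direct element-wise comparison against the cv::stereoCalibrate output will not match; checking x2ᵀ·F·x1 ≈ 0 for matched points (or comparing the matrices after normalization) is the meaningful test.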
2019-11-26 03:40:00 -0600 | received badge | ● Notable Question (source) |
2019-01-14 07:48:07 -0600 | received badge | ● Popular Question (source) |
2018-03-20 06:55:55 -0600 | received badge | ● Student (source) |
2017-08-30 07:23:53 -0600 | received badge | ● Necromancer (source) |
2017-08-22 08:29:44 -0600 | commented question | Best Small Hardware to use with OpenCV for Position Tracking Did you check out the Nvidia Jetson TX2/TX1? The basic HW without peripherals has the size of a credit card. |
2017-08-22 08:23:17 -0600 | commented question | Conversion of channel 3 to channel 1 You are looking for this. |
2017-07-19 10:47:10 -0600 | answered a question | OpenCV 3.1 cmake error during configuration Okay, I have found a solution ... The issue is that "libcuda.so" is not found. Adding the following command to the CMake configuration solves the issue: |
2017-07-19 09:34:13 -0600 | commented question | OpenCV 3.1 cmake error during configuration :( RIP ErrorLOG :D ... well, I know that on Windows you can restrict the compiler to an exclusively x64 environment ... but how do I change this on Ubuntu? I have to say I am pretty new to Ubuntu and the CMake command line as well... |
2017-07-19 08:54:47 -0600 | commented question | OpenCV 3.1 cmake error during configuration -.- sorry ... |
2017-07-19 08:18:31 -0600 | asked a question | OpenCV 3.1 cmake error during configuration Hi, configuring OpenCV 3.1 for a Tegra TX1 with Ubuntu 16.04 and CUDA 8.0 leads me to the following error: For the installation process I am following this guide. Anyway, all errors are related to "example_gpu_SomeExampleName". I googled it and found out that people have similar problems since CUDA only comes with x64 libraries. How can I work around this issue? Is it possible to configure OpenCV 3.1 for Ubuntu in x64 mode only? Any help would be appreciated. |
2017-07-19 08:14:46 -0600 | asked a question | OpenCV 3.1 cmake error during configuration Hi, configuring OpenCV 3.1 for a Tegra TX1 with Ubuntu 16.04 and CUDA 8.0 leads me to the following error: For the installation process I am following this guide. Anyway, all errors are related to "example_gpu_SomeExampleName". I googled it and found out that people have similar problems since CUDA only comes with x64 libraries. How can I work around this issue? Is it possible to configure OpenCV 3.1 for Ubuntu in x64 mode only? Any help would be appreciated. EDIT 1: added CMakeError.log & CMakeOutput.log |
2017-07-12 04:34:20 -0600 | commented question | Pixel multiplication gives different result You use unsigned char, which ranges from 0 to 255, and write the uchar value back into the uchar matrix... When you multiply e.g. 255 * 255, weird stuff is going to happen since the value exceeds 255. Just write the value to e.g. a CV_16UC1 matrix and it should work. |
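To make the comment above concrete — plain 8-bit arithmetic wraps modulo 256 (OpenCV's saturating matrix arithmetic would clamp to 255 instead), so squaring pixel values only works after widening the element type. A quick NumPy sketch of both behaviours:

```python
import numpy as np

a = np.array([[200]], dtype=np.uint8)                # stands in for a CV_8UC1 pixel
wrapped = a * a                                      # uint8 arithmetic wraps: 40000 % 256 == 64
widened = a.astype(np.uint16) * a.astype(np.uint16)  # CV_16UC1-style widening keeps 40000
```

This is exactly why writing the product back into the uchar matrix "gives different results".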
2017-07-10 02:50:52 -0600 | commented answer | calibrate stereo system without calling cv::stereoCalibrate Hi, I was facing a similar problem. I wanted to compute a fundamental matrix from the 2 projection matrices as described here. Anyway, what really helped me out is this. Just add a few lines of code as stated in the previous link and you can get R and t from r1, t1 and r2, t2. Here (in line 459) you have the principle to obtain the essential matrix from R and t. Hope I could help! |
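For reference, the standard multiple-view-geometry identity for getting F directly from two projection matrices is F = [e₂]ₓ·P2·P1⁺, where e₂ = P2·C is the epipole in image 2 and C the right null-vector of P1. A hedged NumPy sketch of that identity (the function name is mine, not from the linked code):

```python
import numpy as np

def fundamental_from_projections(P1, P2):
    # camera centre of P1 = right null-vector of the 3x4 matrix P1
    _, _, Vt = np.linalg.svd(P1)
    C = Vt[-1]
    e2 = P2 @ C                          # epipole of camera 1 seen in image 2
    e2x = np.array([[0.0, -e2[2], e2[1]],
                    [e2[2], 0.0, -e2[0]],
                    [-e2[1], e2[0], 0.0]])
    return e2x @ P2 @ np.linalg.pinv(P1)  # F = [e2]_x P2 P1^+
```

Like any fundamental matrix, the result is only defined up to scale.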
2017-06-30 10:55:06 -0600 | commented answer | Create a stereo projection matrix using rvec and tvec? Sorry, I skipped that part ... when the images are rectified there should indeed be no rotation. Hm ... are you sure about the 120 mm baseline? |
2017-06-30 10:55:06 -0600 | received badge | ● Commentator |
2017-06-30 10:07:23 -0600 | commented answer | Create a stereo projection matrix using rvec and tvec? I assume the error is caused by using the same rotation twice. P1 and P2 will have the same rotation relative to the world only in very rare cases. As I suggested, you have to get the relative rotation and translation between camera 1 and camera 2 and then add this to the solvePnP outcome (rVec1 & tVec1) in order to make rVec2 and tVec2 (from camera 2) be based on camera 1. |
2017-06-30 09:10:01 -0600 | commented answer | Create a stereo projection matrix using rvec and tvec? I don't think you can directly compare a projection matrix from time x against one from time y ... but you can use this in order to see whether the intrinsic values changed or not. |
2017-06-30 08:43:38 -0600 | commented answer | Create a stereo projection matrix using rvec and tvec? Ok, I think from this post I understand your issue. Below is some pseudo code: Now add R and t respectively to rMat and tVec of camera 1 to get the correct rMat2 and tVec2 for camera 2. Then you can compute the projection matrix as given above. Note: the relative rotation and translation should not change, so you can compute them once and always add the same values to rMat1 and tVec1. |
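The pseudo code referenced in the comment above is not included in the archived text. Under my reading of the comment, "adding" the fixed relative motion to camera 1's extrinsics means rMat2 = R·rMat1 and tVec2 = R·tVec1 + t, with R = R2·R1ᵀ and t = t2 − R·t1. A NumPy sketch (names are mine):

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    # motion that takes camera 1's frame into camera 2's frame
    R = R2 @ R1.T
    t = t2 - R @ t1
    return R, t

def apply_to_camera1(R, t, R1, t1):
    # "add" the fixed relative motion to camera 1's extrinsics
    return R @ R1, R @ t1 + t
```

As the comment notes, (R, t) is constant for a rigid stereo rig, so it can be computed once and reused for every new (rMat1, tVec1) from solvePnP.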
2017-06-30 06:51:11 -0600 | commented answer | Create a stereo projection matrix using rvec and tvec? I still don't understand what you mean by "update"? You compute the projection matrix once from intrinsic and extrinsic parameters, and they stay the same as long as you don't rearrange the camera setup or change the focal length, etc. of the camera. If you do, then your camera is uncalibrated. Also, if you just have extrinsic parameters you can't compute a projection matrix; you also need intrinsic parameters for this. I am also not certain how you obtain intrinsic parameters for the second camera? Do you assume they are exactly the same? Furthermore, in order to obtain rVec and tVec for the 2nd camera you need to know the relative rotation/translation from camera 1 to camera 2. |
2017-06-30 06:28:24 -0600 | commented answer | Create a stereo projection matrix using rvec and tvec? Just for clarification: you have a stereo camera setup. One camera is calibrated, the other one is not? Then you want to add the baseline to the translation in order to obtain the tVec of the second camera? |
2017-06-30 05:47:39 -0600 | answered a question | Create a stereo projection matrix using rvec and tvec? I am quite confused by the topic "Update a projection matrix from rVec and tVec", but if you are looking for a way to obtain the projection matrix from a camera calibration with given intrinsic and extrinsic parameters, you can do the following: |
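The code block of the answer above was lost in archiving. A minimal reconstruction of the idea — P = K·[R|t], with rVec converted to a matrix the way cv::Rodrigues does — could look like this NumPy sketch (assuming rVec is an axis-angle vector and tVec a 3-vector; names are mine):

```python
import math
import numpy as np

def rodrigues(rvec):
    # axis-angle vector -> 3x3 rotation matrix (same convention as cv::Rodrigues)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float) / theta
    cross = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
    return np.eye(3) + math.sin(theta) * cross + (1.0 - math.cos(theta)) * (cross @ cross)

def projection_matrix(K, rvec, tvec):
    # P = K [R | t], the 3x4 projection matrix
    Rt = np.hstack([rodrigues(rvec), np.asarray(tvec, dtype=float).reshape(3, 1)])
    return K @ Rt
```

In OpenCV itself this is just cv::Rodrigues plus a matrix product; the sketch spells out the math for checking intermediate values.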
2017-06-29 02:57:09 -0600 | commented answer | How to Use formula in c++ opencv |
2017-06-28 07:21:40 -0600 | commented question | How to record multiple cameras with VideoWriter? As far as I can tell, the code looks fine to me. Anyway, this sounds to me like there is a bottleneck somewhere in your setup, so the frames can't be properly transmitted. I had similar issues when I connected 2x USB 3.0 cameras to a USB hub and connected the hub to the PC. Some frames were fine but the majority were unusable ... Hard to tell when you don't have the same setup ... EDIT: Did you try lowering your resolution to e.g. 640x480 and running it again? Are your frames still corrupt? |
2017-06-27 03:30:09 -0600 | commented question | How to record multiple cameras with VideoWriter? Did you run your code serially and check whether it works there? How are your cameras connected to the PC? USB 3.0? Do you use a USB hub in between? Also, at least one corrupted image and the corresponding code would be helpful ... At the moment it sounds to me like there is a bottleneck somewhere in the connection to your PC and the frames can't be properly transmitted. |
2017-06-20 21:45:30 -0600 | received badge | ● Nice Answer (source) |
2017-06-16 04:08:59 -0600 | answered a question | How to use C-style scan to check pixel neighbors ? I guess you are trying to implement your solution from this post. Here is an alternative way to loop over the pixels. It is slower than old C-style but faster than the unoptimized version: Of course you have to stop at maxCol-1/maxRow-1 so you stay within the boundaries of the image. You can add a special case for the last row, where you just look at the right-side neighbour or something like that. Don't get me wrong, you can also use old C-style, but then you need additional checks in order to know when a new row starts. Hope this helps. |
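The answer's code block was lost in archiving; to make the boundary argument concrete, here is the same stop-one-short looping pattern sketched in Python over a plain 2-D list (only the pattern, not an OpenCV translation):

```python
def right_bottom_pairs(img):
    # compare every pixel with its right and bottom neighbours,
    # stopping at maxRow-1 / maxCol-1 so we never index out of bounds
    rows, cols = len(img), len(img[0])
    pairs = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            pairs.append((img[r][c], img[r][c + 1]))  # right neighbour
            pairs.append((img[r][c], img[r + 1][c]))  # bottom neighbour
    return pairs
```

As the answer says, the last row and column then need their own special case if their remaining neighbour pairs matter.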
2017-06-16 03:11:16 -0600 | commented answer | how to get angles of x y z in pose estimation? "Each quadrant should be chosen by using the signs of the numerator and denominator of the argument. The numerator sign selects whether the direction will be above or below the x-axis, and the denominator selects whether the direction will be to the left or right of the y-axis. This is the same as the atan2 function in the C programming language, which nicely expands the range of the arctangent to [0,2pi]" The part where it states "This is the same as the 2" is for some reason cut off ... it's atan2. Please also be aware of the last sentence: "Note that this method assumes r11 != 0 + r33 != 0." |
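The atan2 point above, sketched for a ZYX (yaw-pitch-roll) decomposition — atan2 picks the correct quadrant from the signs of both arguments, which a plain atan(r21/r11) cannot do (names and axis convention are mine, not from the linked answer):

```python
import math

def euler_zyx(R):
    # yaw (about z), pitch (about y), roll (about x) from a 3x3 rotation matrix
    yaw = math.atan2(R[1][0], R[0][0])    # atan2, not atan(R[1][0] / R[0][0])
    pitch = math.atan2(-R[2][0], math.hypot(R[2][1], R[2][2]))
    roll = math.atan2(R[2][1], R[2][2])
    return yaw, pitch, roll
```

The r11 != 0 / r33 != 0 caveat quoted above corresponds to the degenerate case here: when cos(pitch) ≈ 0 (gimbal lock), yaw and roll can no longer be separated.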