OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers. Copyright OpenCV foundation, 2012-2018. Sat, 16 May 2020 23:28:20 -0500

Find rigid 3D transform between two 3D point sets
http://answers.opencv.org/question/230253/find-rigid-3d-transform-between-two-3d-point-sets/

Hi,
I have two 3D point sets with known correspondences, and I want to find the rigid transform between them.
I did not find a suitable function in OpenCV to do this.
Can anybody tell me how to do it?
Thanks.
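For reference, the standard closed-form solution is the Kabsch/Umeyama algorithm: center both point sets, build the cross-covariance matrix, and recover R from its SVD (e.g. via cv::SVD), with t = q̄ − R·p̄. A dependency-free sketch of the first two steps (names are illustrative, not an OpenCV API):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Centroid of a point set.
Vec3 centroid(const std::vector<Vec3>& pts) {
    Vec3 c{0.0, 0.0, 0.0};
    for (const auto& p : pts)
        for (int i = 0; i < 3; ++i) c[i] += p[i];
    for (int i = 0; i < 3; ++i) c[i] /= static_cast<double>(pts.size());
    return c;
}

// Cross-covariance H = sum_k (p_k - p_bar) * (q_k - q_bar)^T.
// The rotation comes from the SVD of H (see note below).
Mat3 crossCovariance(const std::vector<Vec3>& P, const std::vector<Vec3>& Q) {
    Vec3 cp = centroid(P), cq = centroid(Q);
    Mat3 H{};
    for (std::size_t k = 0; k < P.size(); ++k)
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                H[i][j] += (P[k][i] - cp[i]) * (Q[k][j] - cq[j]);
    return H;
}
```

With H = U·S·V^T, the rotation is R = V·U^T (flip the sign of V's last column if det(R) < 0), and t = q̄ − R·p̄. If a general (non-rigid) fit is acceptable, cv::estimateAffine3D is also worth a look.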
YLARYL518, Sat, 16 May 2020 23:28:20 -0500
http://answers.opencv.org/question/230253/

Auto Registration
http://answers.opencv.org/question/225617/auto-registration/

Hi,
I have a physical reflective object and its 3D model. I placed the object on a turntable and captured images.
Now I want to auto-register (auto-align) the 3D object to its turntable images. How can I do this?
Please help.

MONKU76, Fri, 31 Jan 2020 17:50:37 -0600
http://answers.opencv.org/question/225617/

Image pixels of RGB frame to x and y coordinates in camera frame (in Gazebo simulation)
http://answers.opencv.org/question/208171/image-pixels-of-rgb-frame-to-x-and-y-coordinates-in-camera-frame-in-gazebo-simulation/

Hello,
I am using the Kinect sensor plugin in Gazebo. Since it is in Gazebo, I know the camera's intrinsic and extrinsic parameters without calibration.
I have u, v image pixel coordinates from the RGB camera of the Kinect, and I want to convert them into 2D coordinates (x and y) in the Kinect camera frame.
I am getting point-cloud data from kinect/depth/points, which gives 3D points, but I need them in the 2D plane. Is there an easier way that avoids depth registration?
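If the depth z at pixel (u, v) is known, the pinhole model gives the camera-frame coordinates directly, without going through the registered point cloud. A minimal sketch (the intrinsics fx, fy, cx, cy are whatever your Gazebo camera reports; the values in the self-check below are illustrative):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Back-project pixel (u, v) with known depth z into camera-frame
// coordinates using the pinhole model:
//   x = (u - cx) * z / fx,  y = (v - cy) * z / fy.
std::array<double, 3> pixelToCamera(double u, double v, double z,
                                    double fx, double fy,
                                    double cx, double cy) {
    return { (u - cx) * z / fx, (v - cy) * z / fy, z };
}
```

Dropping z gives the 2D (x, y) you ask for; note that without a depth value the back-projection of a pixel is only defined up to scale.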
Any help is appreciated!

Vigneshraja, Wed, 30 Jan 2019 17:36:10 -0600
http://answers.opencv.org/question/208171/

How to use Map class to implement image registration?
http://answers.opencv.org/question/191794/how-to-use-map-class-to-implement-image-registration/

Actually, I have read the official documentation [here](https://github.com/opencv/opencv_contrib/blob/master/modules/reg/samples/map_test.cpp) about the `Map` class in OpenCV while trying to use the `reg` module. This is my test image:
[![enter image description here][1]][1]
This is my code:
    #include <opencv2/opencv.hpp>
    #include "opencv2/reg/mapshift.hpp"
    #include "opencv2/reg/mappergradshift.hpp"
    #include "opencv2/reg/mapperpyramid.hpp"

    using namespace cv;
    using namespace std;
    using namespace cv::reg;

    Mat highlight1(const Mat src, const Mat t_mask) {
        Mat srcImg = src.clone(), mask = t_mask.clone();
        threshold(mask, mask, 0, 255, THRESH_BINARY_INV + THRESH_OTSU);
        cvtColor(mask, mask, COLOR_GRAY2BGR);
        cvtColor(srcImg, srcImg, COLOR_GRAY2BGR);
        dilate(mask - Scalar(0, 0, 255), mask, Mat(), Point(-1, -1), 1);
        return srcImg - mask;
    }

    int main() {
        Mat img1 = imread("img.jpg", 0);
        Mat img2;

        // Warp original image
        Vec<double, 2> shift(5., 5.);
        MapShift mapTest(shift);
        mapTest.warp(img1, img2);

        // Register
        Ptr<MapperGradShift> mapper = makePtr<MapperGradShift>();
        MapperPyramid mappPyr(mapper);
        Ptr<Map> mapPtr = mappPyr.calculate(img1, img2);
        MapShift* mapShift = dynamic_cast<MapShift*>(mapPtr.get());

        // Display registration result
        Mat result;
        mapShift->inverseWarp(img2, result);
        Mat registration_before = highlight1(img1, img2);
        Mat registration_after = highlight1(img1, result);
        return 0;
    }
But as we can see, `registration_after` is even worse than `registration_before`. What have I missed?
This is `registration_before`:
![](https://i.stack.imgur.com/bkn0f.png)
This is `registration_after`:
![](https://i.stack.imgur.com/hOWBf.png)
[1]: https://i.stack.imgur.com/RuesS.jpg

yode, Thu, 17 May 2018 22:20:17 -0500
http://answers.opencv.org/question/191794/

Speeding up image registration
http://answers.opencv.org/question/176461/speeding-up-image-registration/

Hi guys,
I am using **findTransformECC** to perform image registration and it has been working great so far. However, I need to speed up the process by a factor of 5x. I was thinking of using CUDA and delegating the registration to the GPU. What do you think? I have no experience with CUDA, though. Would I have to rewrite **findTransformECC** from scratch? Many thanks.

Matheus, Mon, 16 Oct 2017 12:08:36 -0500
http://answers.opencv.org/question/176461/

findTransformECC() with multiple warpMatrix's
http://answers.opencv.org/question/173097/findtransformecc-with-multiple-warpmatrixs/

I am exploring findTransformECC() in an application where there can be multiple matches of a template image in a larger candidate image (e.g., the template is a circular ball and the candidate is a repeating pattern of those balls). Do you know of a way to find all the matches (i.e., multiple warpMatrix's)?
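One common ECC speed-up that does not require CUDA (a hedged sketch, not from this thread): run findTransformECC on downscaled copies of the images, then rescale the estimated warp, since for an affine warp only the translation column depends on the image resolution:

```cpp
#include <array>
#include <cassert>

// A 2x3 affine warp, as estimated by findTransformECC with MOTION_AFFINE.
using Warp2x3 = std::array<std::array<double, 3>, 2>;

// Rescale a warp estimated on images downscaled by `scale` so it applies
// to the full-resolution pair: the 2x2 linear part is scale-invariant,
// only the translation column grows with the image size.
Warp2x3 upscaleWarp(Warp2x3 w, double scale) {
    w[0][2] *= scale;
    w[1][2] *= scale;
    return w;
}
```

Estimating at quarter resolution and then, if needed, refining at full resolution for a few iterations (passing the rescaled warp as the initial guess) often buys a large speed-up with little accuracy loss.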
I initially selected findTransformECC() as my method of choice, as it is pixel-intensity based and offers an input-mask option. However, I am open to other methods.

hgyoo, Tue, 29 Aug 2017 00:36:33 -0500
http://answers.opencv.org/question/173097/

Image Registration Module (OpenCV 3.3)
http://answers.opencv.org/question/172378/image-registration-module-opencv-33/

I am familiar with image registration in MATLAB. I now want to switch over completely to OpenCV in C++ and need some guidance.
OpenCV 3.3 has a module named Image Registration (http://docs.opencv.org/trunk/db/d61/group__reg.html). I found that users of older releases have used several techniques (e.g., ORB, SIFT, SURF) to calculate the transformation and then applied it to obtain registered images. How are these techniques different from the ones in the image registration module cv::reg?
Adding a clarification to my original question: I wanted to know how these techniques differ from each other (e.g., pixel-based vs. feature-based) before figuring out which one to use for my application.
Since my original posting, I have learned that the Image Registration module is part of the extra "contrib" package and is similar or identical to the one published by Alfonso Sanchez-Beato at least 3 years ago (https://github.com/opencv/opencv_contrib/tree/master/modules/reg). The technique is pixel-based.
I would appreciate it if anyone could chip in or point to a summary of the various registration techniques.

hgyoo, Fri, 18 Aug 2017 12:20:24 -0500
http://answers.opencv.org/question/172378/

cpp-tutorial-pnp_registration throws error
http://answers.opencv.org/question/156749/cpp-tutorial-pnp_registration-throw-error/

Hi everyone,
I am trying to use cpp-tutorial-pnp_registration from the OpenCV sample code, located at /home/***/opencv-3.2.0/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/main_registration.cpp.
I am just trying to use the original OpenCV data (resized_IMG_3875.JPG and box.ply) to get a textured 3D model (a yml file); however, OpenCV throws the following error:
    --------------------------------------------------------------------------
    This program shows how to create your 3D textured model.
    Usage:
    ./cpp-tutorial-pnp_registration
    --------------------------------------------------------------------------
    init done
    Click the box corners ...
    Waiting ...
    COMPUTING POSE ...
    OpenCV Error: Assertion failed (npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F))) in solvePnP, file /tmp/binarydeb/ros-kinetic-opencv3-3.2.0/modules/calib3d/src/solvepnp.cpp, line 63
    terminate called after throwing an instance of 'cv::Exception'
      what(): /tmp/binarydeb/ros-kinetic-opencv3-3.2.0/modules/calib3d/src/solvepnp.cpp:63: error: (-215) npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) in function solvePnP
    Aborted (core dumped)
I am not sure what the problem is; could you give me some ideas? Any idea will be appreciated.
Thanks in advance.
double, Mon, 05 Jun 2017 13:45:28 -0500
http://answers.opencv.org/question/156749/

RGBD contrib module depth registration (registerDepth)
http://answers.opencv.org/question/104638/rgbd-contrib-module-depth-registration-registerdepth/

Hi,
When I register my depth image **with distortion**, of size 640x480, to a larger image, I always get a 2x2 block of 0 depth at the center of the output registered depth. I activate the **dilation flag** for upsampling.
Can anyone tell me what the reason for this is?
Thanks.

hazirbas, Fri, 14 Oct 2016 06:52:30 -0500
http://answers.opencv.org/question/104638/

Phase correlation for image registration (image stitching)
http://answers.opencv.org/question/1624/phase-correlation-for-image-registrationimage-stitching/

I'm trying to stitch 2 images using cross correlation (phase correlation). The images are the same size; only a shift is present.
I tried OpenCV with cvDFT, and OpenCV + FFTW; it seems to work, but for some reason the correlation peak coordinate is not the shift coordinate. Maybe it depends on which quadrant the correlation point falls in.
So the question is how to obtain the shift from the correlation peak.
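The inverse DFT of the cross-power spectrum is circular, so a peak near the far edge of the result actually encodes a negative shift. A minimal conversion, applied per axis (a hedged sketch; the same wrap-around logic appears near the end of the update below):

```cpp
#include <cassert>

// Convert a correlation-peak coordinate into a signed shift: peaks in
// the upper half of the axis correspond to negative displacements
// because the inverse DFT wraps around (circular correlation).
int peakToShift(int peak, int size) {
    return (peak <= size / 2) ? peak : peak - size;
}
```

For example, with an axis size of 1024, a peak at x = 1019 corresponds to a shift of -5.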
Note: I don't need "interest point" (SIFT-like) approaches.
My suggestion is commented in the code.
Here is the code that I use:
    class Peak
    {
    public:
        CvPoint pt;
        double maxval;
    };

    Peak old_opencv_FFT(IplImage* src, IplImage* temp)
    {
        CvSize imgSize = cvSize(src->width, src->height);
        // Allocate floating point frames used for DFT (real, imaginary, and complex)
        IplImage* realInput = cvCreateImage(imgSize, IPL_DEPTH_64F, 1);
        IplImage* imaginaryInput = cvCreateImage(imgSize, IPL_DEPTH_64F, 1);
        IplImage* complexInput = cvCreateImage(imgSize, IPL_DEPTH_64F, 2);
        int nDFTHeight = cvGetOptimalDFTSize(imgSize.height);
        int nDFTWidth = cvGetOptimalDFTSize(imgSize.width);
        CvMat* src_DFT = cvCreateMat(nDFTHeight, nDFTWidth, CV_64FC2);
        CvMat* temp_DFT = cvCreateMat(nDFTHeight, nDFTWidth, CV_64FC2);
        CvSize dftSize = cvSize(nDFTWidth, nDFTHeight);
        IplImage* imageRe = cvCreateImage(dftSize, IPL_DEPTH_64F, 1);
        IplImage* imageIm = cvCreateImage(dftSize, IPL_DEPTH_64F, 1);
        IplImage* imageImMag = cvCreateImage(dftSize, IPL_DEPTH_64F, 1);
        IplImage* imageMag = cvCreateImage(dftSize, IPL_DEPTH_64F, 1);
        CvMat tmp;
        // Processing of src
        cvScale(src, realInput, 1.0, 0);
        cvZero(imaginaryInput);
        cvMerge(realInput, imaginaryInput, NULL, NULL, complexInput);
        cvGetSubRect(src_DFT, &tmp, cvRect(0, 0, src->width, src->height));
        cvCopy(complexInput, &tmp, NULL);
        if (src_DFT->cols > src->width)
        {
            cvGetSubRect(src_DFT, &tmp, cvRect(src->width, 0, src_DFT->cols - src->width, src->height));
            cvZero(&tmp);
        }
        cvDFT(src_DFT, src_DFT, CV_DXT_FORWARD, complexInput->height);
        cvSplit(src_DFT, imageRe, imageIm, 0, 0);
        // Processing of temp
        cvScale(temp, realInput, 1.0, 0);
        cvMerge(realInput, imaginaryInput, NULL, NULL, complexInput);
        cvGetSubRect(temp_DFT, &tmp, cvRect(0, 0, temp->width, temp->height));
        cvCopy(complexInput, &tmp, NULL);
        if (temp_DFT->cols > temp->width)
        {
            cvGetSubRect(temp_DFT, &tmp, cvRect(temp->width, 0, temp_DFT->cols - temp->width, temp->height));
            cvZero(&tmp);
        }
        cvDFT(temp_DFT, temp_DFT, CV_DXT_FORWARD, complexInput->height);
        // Multiply spectrums of the scene and the model (use CV_DXT_MUL_CONJ to get correlation instead of convolution)
        cvMulSpectrums(src_DFT, temp_DFT, src_DFT, CV_DXT_MUL_CONJ);
        // Split Fourier in real and imaginary parts
        cvSplit(src_DFT, imageRe, imageIm, 0, 0);
        // Compute the magnitude of the spectrum components: Mag = sqrt(Re^2 + Im^2)
        cvPow(imageRe, imageMag, 2.0);
        cvPow(imageIm, imageImMag, 2.0);
        cvAdd(imageMag, imageImMag, imageMag, NULL);
        cvPow(imageMag, imageMag, 0.5);
        // Normalize correlation (divide real and imaginary components by magnitude)
        cvDiv(imageRe, imageMag, imageRe, 1.0);
        cvDiv(imageIm, imageMag, imageIm, 1.0);
        cvMerge(imageRe, imageIm, NULL, NULL, src_DFT);
        // Inverse DFT
        cvDFT(src_DFT, src_DFT, CV_DXT_INVERSE_SCALE, complexInput->height);
        cvSplit(src_DFT, imageRe, imageIm, 0, 0);
        double minval = 0.0;
        double maxval = 0.0;
        CvPoint minloc;
        CvPoint maxloc;
        cvMinMaxLoc(imageRe, &minval, &maxval, &minloc, &maxloc, NULL);
        int x = maxloc.x; // log range
        //if (x > (imageRe->width / 2))
        //    x = x - imageRe->width; // positive or negative values
        int y = maxloc.y; // angle
        //if (y > (imageRe->height / 2))
        //    y = y - imageRe->height; // positive or negative values
        Peak pk;
        pk.maxval = maxval;
        pk.pt = cvPoint(x, y);
        return pk;
    }
    void phase_correlation2D(IplImage* src, IplImage* tpl, IplImage* poc)
    {
        int i, j, k;
        double tmp;
        /* get image properties */
        int width = src->width;
        int height = src->height;
        int step = src->widthStep;
        int fft_size = width * height;
        /* setup pointers to images */
        uchar* src_data = (uchar*)src->imageData;
        uchar* tpl_data = (uchar*)tpl->imageData;
        double* poc_data = (double*)poc->imageData;
        /* allocate FFTW input and output arrays */
        fftw_complex* img1 = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * width * height);
        fftw_complex* img2 = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * width * height);
        fftw_complex* res = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * width * height);
        /* setup FFTW plans */
        fftw_plan fft_img1 = fftw_plan_dft_2d(height, width, img1, img1, FFTW_FORWARD, FFTW_ESTIMATE);
        fftw_plan fft_img2 = fftw_plan_dft_2d(height, width, img2, img2, FFTW_FORWARD, FFTW_ESTIMATE);
        fftw_plan ifft_res = fftw_plan_dft_2d(height, width, res, res, FFTW_BACKWARD, FFTW_ESTIMATE);
        /* load images' data to FFTW input */
        for (i = 0, k = 0; i < height; i++) {
            for (j = 0; j < width; j++, k++) {
                img1[k][0] = (double)src_data[i * step + j];
                img1[k][1] = 0.0;
                img2[k][0] = (double)tpl_data[i * step + j];
                img2[k][1] = 0.0;
            }
        }
        ///* Hamming window */
        //double omega = 2.0 * M_PI / (fft_size - 1);
        //double A = 0.54;
        //double B = 0.46;
        //for (i = 0, k = 0; i < height; i++)
        //{
        //    for (j = 0; j < width; j++, k++)
        //    {
        //        img1[k][0] = (img1[k][0]) * (A - B * cos(omega * k));
        //        img2[k][0] = (img2[k][0]) * (A - B * cos(omega * k));
        //    }
        //}
        /* obtain the FFT of img1 */
        fftw_execute(fft_img1);
        /* obtain the FFT of img2 */
        fftw_execute(fft_img2);
        /* obtain the cross power spectrum */
        for (i = 0; i < fft_size; i++) {
            res[i][0] = (img2[i][0] * img1[i][0]) - (img2[i][1] * (-img1[i][1]));
            res[i][1] = (img2[i][0] * (-img1[i][1])) + (img2[i][1] * img1[i][0]);
            tmp = sqrt(pow(res[i][0], 2.0) + pow(res[i][1], 2.0));
            res[i][0] /= tmp;
            res[i][1] /= tmp;
        }
        /* obtain the phase correlation array */
        fftw_execute(ifft_res);
        /* normalize and copy to result image */
        for (i = 0; i < fft_size; i++) {
            poc_data[i] = res[i][0] / (double)fft_size;
        }
        /* deallocate FFTW arrays and plans */
        fftw_destroy_plan(fft_img1);
        fftw_destroy_plan(fft_img2);
        fftw_destroy_plan(ifft_res);
        fftw_free(img1);
        fftw_free(img2);
        fftw_free(res);
    }
    Peak FFTW_test(IplImage* src, IplImage* temp)
    {
        clock_t start = clock();
        int t_w = temp->width;
        int t_h = temp->height;
        /* create a new image, to store phase correlation result */
        IplImage* poc = cvCreateImage(cvSize(temp->width, temp->height), IPL_DEPTH_64F, 1);
        /* get phase correlation of input images */
        phase_correlation2D(src, temp, poc);
        /* find the maximum value and its location */
        CvPoint minloc, maxloc;
        double minval, maxval;
        cvMinMaxLoc(poc, &minval, &maxval, &minloc, &maxloc, 0);
        /* IplImage* poc_8 = cvCreateImage(cvSize(temp->width, temp->height), 8, 1);
        cvConvertScale(poc, poc_8, (double)255 / (maxval - minval), (double)(-minval) * 255 / (maxval - minval));
        cvSaveImage("poc.png", poc_8); */
        cvReleaseImage(&poc);
        clock_t end = clock();
        int time = end - start;
        //fprintf(stdout, "Time= %d using clock() \n", time);
        //fprintf(stdout, "Maxval at (%d, %d) = %2.4f\n", maxloc.x, maxloc.y, maxval);
        CvPoint pt;
        pt.x = maxloc.x;
        pt.y = maxloc.y;
        // 4 variants?
        //if (maxloc.x >= 0 && maxloc.x <= t_w / 2 && maxloc.y >= 0 && maxloc.y <= t_h / 2)
        //{
        //    pt.x = src->width - maxloc.x;
        //    pt.y = -maxloc.y;
        //}
        //if (maxloc.x >= t_w / 2 && maxloc.x <= t_w && maxloc.y >= 0 && maxloc.y <= t_h / 2)
        //{
        //    pt.x = src->width - maxloc.x;
        //    pt.y = src->height - maxloc.y;
        //}
        //if (maxloc.x >= 0 && maxloc.x <= t_w / 2 && maxloc.y >= t_h / 2 && maxloc.y <= t_h)
        //{
        //    /*pt.x = -maxloc.x;
        //    pt.y = -maxloc.y;*/
        //    pt.x = src->width - maxloc.x;
        //    pt.y = src->height - maxloc.y;
        //}
        //if (maxloc.x >= t_w / 2 && maxloc.x <= t_w && maxloc.y >= t_h / 2 && maxloc.y <= t_h)
        //{
        //    pt.x = -maxloc.x;
        //    pt.y = src->height - maxloc.y;
        //}
        Peak pk;
        pk.maxval = maxval;
        pk.pt = pt;
        return pk;
    }
----------
**UPDATE:**
I tried the new interface but it is still not working. If I try var1, the program fails on the first dft; if I try var2, the peak is at (0,0), which isn't right. var3 seems to work, but it also gives the wrong peak.
For example, ImageJ gives me a peak at (x=876, y=-5); the images are of size 1024x884, and var3 gives me (-440, 0).
    Mat image1 = imread("001_001.PNG", 0);
    Mat image2 = imread("001_002.PNG", 0);
    int width = getOptimalDFTSize(max(image1.cols, image2.cols));
    int height = getOptimalDFTSize(max(image1.rows, image2.rows));
    Mat fft1(Size(width, height), CV_32F, Scalar(0));
    Mat fft2(Size(width, height), CV_32F, Scalar(0));
    // var1
    copyMakeBorder(image1, fft1, 0, height - image1.rows, 0, width - image1.cols, BORDER_CONSTANT, Scalar::all(0));
    copyMakeBorder(image2, fft2, 0, height - image2.rows, 0, width - image2.cols, BORDER_CONSTANT, Scalar::all(0));
    // var2
    /*image1.copyTo(fft1(Rect(0, 0, image1.cols, image1.rows)));
    image2.copyTo(fft2(Rect(0, 0, image2.cols, image2.rows)));*/
    // var3
    image1.convertTo(fft1(Rect(0, 0, image1.cols, image1.rows)), CV_32F);
    image2.convertTo(fft2(Rect(0, 0, image2.cols, image2.rows)), CV_32F);
    dft(fft1, fft1, 0, image1.rows);
    dft(fft2, fft2, 0, image2.rows);
    mulSpectrums(fft1, fft2, fft1, 0, true);
    idft(fft1, fft1);
    double maxVal;
    Point maxLoc;
    minMaxLoc(fft1, NULL, &maxVal, NULL, &maxLoc);
    int resX = (maxLoc.x < width / 2) ? (maxLoc.x) : (maxLoc.x - width);
    int resY = (maxLoc.y < height / 2) ? (maxLoc.y) : (maxLoc.y - height);

mrgloom, Mon, 20 Aug 2012 04:27:44 -0500
http://answers.opencv.org/question/1624/

Register images with deformations (and evaluate quality)
http://answers.opencv.org/question/63924/register-images-with-deformations-and-evaluate-quality/

Hello everyone,
First, if any OpenCV developers read this: congratulations on OpenCV 3! You did a great job.
Second, here are my questions.
I would like to register a set of planetary images. These images can be deformed relative to each other due to atmospheric conditions.
To carry out the registration, I plan to proceed like this:
1. Look for the best image, of high quality.
2. Take this image as the reference and match keypoints of the other images (the x% best images) to it.
3. Compute and apply a transformation to the images to account for the possible deformation relative to the reference.
Now, my questions are:
For 1: What is the best way to rate quality? I have heard that contrast could be one of many criteria, but what else? In my algorithm I use entropy, but it is not so good. Do you have any idea of an OpenCV function that could help me do that?
For 2: I think this point is not the most difficult, because OpenCV provides very good functions for that, such as feature detection. If you have other ideas, of course let me know.
For 3: I think this part is the most difficult, and I don't know how to start. Indeed, how do I evaluate the deformation? (I think that once it is evaluated, it will be easy to apply.)
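For step 3, one simple way to evaluate local deformation is block matching: estimate a small translation for each block of the image and treat the resulting grid of vectors as a sparse deformation field. A dependency-free sketch (compatible with OpenCV 2-era code; the function name and parameters are illustrative, and the caller must keep the search window inside the image):

```cpp
#include <cassert>
#include <limits>
#include <utility>
#include <vector>

// Estimate the local displacement of one block by exhaustive SSD search:
// for each candidate shift (dx, dy) within +-range, compare the block of
// `ref` at (bx, by) against the shifted block in `img` and keep the
// minimum sum of squared differences. Running this over a grid of blocks
// yields a sparse deformation field. Images are row-major grayscale.
std::pair<int, int> blockShift(const std::vector<double>& ref,
                               const std::vector<double>& img,
                               int width, int bx, int by,
                               int bsize, int range) {
    double best = std::numeric_limits<double>::max();
    std::pair<int, int> bestShift{0, 0};
    for (int dy = -range; dy <= range; ++dy)
        for (int dx = -range; dx <= range; ++dx) {
            double ssd = 0.0;
            for (int y = 0; y < bsize; ++y)
                for (int x = 0; x < bsize; ++x) {
                    double a = ref[(by + y) * width + (bx + x)];
                    double b = img[(by + dy + y) * width + (bx + dx + x)];
                    ssd += (a - b) * (a - b);
                }
            if (ssd < best) { best = ssd; bestShift = {dx, dy}; }
        }
    return bestShift;
}
```

The per-block vectors can then be interpolated into a dense warp (e.g. with remap) to undo the deformation.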
Finally, I need to use OpenCV 2, not the latest version. Thanks.
My best regards,

lock042, Thu, 11 Jun 2015 07:53:06 -0500
http://answers.opencv.org/question/63924/

Registration module crashes
http://answers.opencv.org/question/58287/registration-module-crashes/

Hi,
I was trying to use the new registration module from opencv_contrib.
I used the [example code](https://github.com/Itseez/opencv_contrib/blob/master/modules/reg/samples/map_test.cpp "example code").
    cv::reg::MapperGradProj mapper;
    cv::Ptr<cv::reg::Map> mapPtr;
    cv::reg::MapperPyramid mappPyr(mapper);
    mappPyr.calculate(oimage, oimage2, mapPtr);
The program already fails calling the calculate function.
**Error in ... free(): invalid next size (normal): 0x0000000002a5a590**
Edit: Got it. I had to convert the images to float first.

ascentof, Wed, 25 Mar 2015 03:26:41 -0500
http://answers.opencv.org/question/58287/

Where is ICP?
http://answers.opencv.org/question/53168/where-is-icp/

This documentation article suggests that version 3.0 has ICP:
http://docs.opencv.org/trunk/modules/surface_matching/doc/surface_matching.html?highlight=icp
I downloaded the 3.0.0 beta but could not find ICP. Any ideas?

chris, Fri, 16 Jan 2015 08:48:07 -0600
http://answers.opencv.org/question/53168/

Multi-camera registration using images taken from a target
http://answers.opencv.org/question/37278/multi-camera-registration-using-images-taken-from-a-target/

Hi everyone,
I am trying to find the most accurate way to calculate the orientation of images taken from several cameras that encircle a common target. My idea is to produce a cylinder whose surface contains equally spaced dots or squares (some kind of simple pattern). Several cameras are then positioned around this target and images are captured from each camera. I need to find the 3-dimensional orientation of each of the images from the separate cameras relative to the target coordinate frame. The most important for me are the azimuth and inclination, but other information such as the pitch and roll angles is also useful. Does anyone have any useful information that would help me with this? I would be most interested to hear if there are algorithms that do something like this. I can develop it myself to suit my specific case, but a start would be great.
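Once each camera's rotation matrix is known (e.g. from solvePnP against the known cylinder pattern), azimuth, inclination, and roll can be read off with the usual ZYX Euler decomposition. A hedged, self-contained sketch (Euler conventions vary, so check the signs against your target frame; valid away from pitch = ±90°):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Compose R = Rz(yaw) * Ry(pitch) * Rx(roll), the common ZYX convention.
// Here yaw plays the role of azimuth and pitch the role of inclination.
Mat3 rotZYX(double yaw, double pitch, double roll) {
    double cy = std::cos(yaw),   sy = std::sin(yaw);
    double cp = std::cos(pitch), sp = std::sin(pitch);
    double cr = std::cos(roll),  sr = std::sin(roll);
    return {{{cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr},
             {sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr},
             {-sp,     cp * sr,                cp * cr}}};
}

// Recover the angles from R (undefined at pitch = +-90 degrees).
void anglesFromR(const Mat3& R, double& yaw, double& pitch, double& roll) {
    pitch = std::asin(-R[2][0]);
    yaw   = std::atan2(R[1][0], R[0][0]);
    roll  = std::atan2(R[2][1], R[2][2]);
}
```

Round-tripping a known rotation through `anglesFromR` is a quick sanity check that your convention matches the one your pose estimator uses.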
Thanks,

Mohri, Wed, 16 Jul 2014 03:06:55 -0500
http://answers.opencv.org/question/37278/

Aligning two cameras with similar views
http://answers.opencv.org/question/36727/aligning-two-cameras-with-similar-views/

Hi,
I have a pair of cameras (Pi cameras) set up in my vehicle at the top of the windshield, looking out the front. They are about 22" apart. I capture an image stream from each one and view them side by side using OpenCV. Are there OpenCV tools to align the two cameras? I believe this would be akin to having the two views registered to each other. I have started a blog discussing this project, which gives more detail: http://auvrnh.blogspot.com/ where there are some sample images taken from the captured image streams.
Thanks...

mikey, Wed, 09 Jul 2014 16:21:39 -0500
http://answers.opencv.org/question/36727/

Check whether two images are registered
http://answers.opencv.org/question/36593/check-whether-two-images-are-registered/

Hi,
I have two images that partially overlap. I know the projection matrix for each image. I would like to check whether the images are well aligned to each other. I was thinking of the following steps:
1. Find the similar features.
2. Find the features' world coordinates.
3. Calculate the distance between each pair of corresponding features.
I wonder if there is a quicker way to check whether the images are aligned. Let's assume I have marked one feature in one of the images, so I know [x, y, z, i, j]. Is there a way to find the corresponding feature in the other image?
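Given the second image's projection matrix, the quick check is to project the marked 3D feature [x, y, z] into that image and measure the pixel distance to the feature detected there. A minimal sketch of the projection step (P is the 3x4 projection matrix; the values in the self-check are illustrative):

```cpp
#include <array>
#include <cassert>

using Mat34 = std::array<std::array<double, 4>, 3>;

// Project a world point X = (x, y, z) with a 3x4 projection matrix P:
// [u', v', w']^T = P * [x, y, z, 1]^T, then pixel = (u'/w', v'/w').
std::array<double, 2> project(const Mat34& P, double x, double y, double z) {
    std::array<double, 3> h{};
    const double X[4] = {x, y, z, 1.0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 4; ++j) h[i] += P[i][j] * X[j];
    return {h[0] / h[2], h[1] / h[2]};
}
```

Averaging the projection error over several marked features gives a direct alignment score without computing full world coordinates for every match.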
Thanks in advance for any helpful answers.
Rakefet

moor, Tue, 08 Jul 2014 08:47:47 -0500
http://answers.opencv.org/question/36593/

Extrinsics matrix after cv::StereoCalibrate
http://answers.opencv.org/question/1055/extrinsics-matrix-after-cvstereocalibrate/

Hi all,
I have two cameras (two Kinects, specifically) and I am using cv::StereoCalibrate to compute the position and orientation of Kinect2 relative to Kinect1.
The function returns, among other things, the rotation matrix R and the translation vector T.
Is the final transformation matrix given by E = [R | T] (meaning I just use the rotation and translation as-is to compose the matrix, and then convert it to a homogeneous 4x4), or should I do something else, like E = [R | -R^(-1)*T], which I saw somewhere?
What I want to do is multiply the Kinect2 point cloud by this matrix to register the two point clouds.
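For what it's worth, a dependency-free sketch of both conventions, so you can test which direction your R, T pair goes (this is generic rigid-transform algebra, not something specific to stereoCalibrate; which form you need depends on which camera's frame R and T are expressed in):

```cpp
#include <array>
#include <cassert>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Compose the homogeneous transform E = [R | T; 0 0 0 1] from a 3x3
// rotation R and a translation T (the "[R | T] as-is" reading).
Mat4 compose(const std::array<std::array<double, 3>, 3>& R,
             const std::array<double, 3>& T) {
    Mat4 E{};
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) E[i][j] = R[i][j];
        E[i][3] = T[i];
    }
    E[3][3] = 1.0;
    return E;
}

// Inverse of a rigid transform: [R^T | -R^T * T]. For a rotation matrix
// R^(-1) = R^T, so this is the "[R | -R^(-1)*T]" variant from the question.
Mat4 invertRigid(const Mat4& E) {
    Mat4 I{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) I[i][j] = E[j][i];
    for (int i = 0; i < 3; ++i) {
        double s = 0.0;
        for (int j = 0; j < 3; ++j) s += E[j][i] * E[j][3];
        I[i][3] = -s;
    }
    I[3][3] = 1.0;
    return I;
}
```

A practical test: transform a few Kinect2 points with E = [R | T]; if they land away from their Kinect1 counterparts, try `invertRigid(E)` instead.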
Thanks so much
BobR, Wed, 01 Aug 2012 15:29:31 -0500
http://answers.opencv.org/question/1055/