OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers
Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018. Wed, 18 Nov 2020 22:32:15 -0600

**I do downhole tests using CCTV. Are there any open OpenCV codes I can use for digital image stitching?**
http://answers.opencv.org/question/237997/i-do-downhole-test-using-cctv-is-there-any-open-codes-of-the-opencv-i-can-use-for-a-digital-image-stitching/

A camera is lowered into a downhole from the ground and moved downward while video is recorded, capturing images of the internal surface of the downhole at different depths.
Later, a complete 2D digital image of the internal surface of the whole downhole needs to be formed by stitching each capture from the recorded video.

Fred Yang, Wed, 18 Nov 2020 22:32:15 -0600 - http://answers.opencv.org/question/237997/

**Strange stretching effects in photo due to transformation by matrix**
http://answers.opencv.org/question/229362/strange-streching-effects-in-photo-due-to-transformation-by-matrix/

I am working on a stitching algorithm, and sometimes there is strange stretching of the photo. It is caused by a low number of keypoints, BUT I am interested in what causes it in this matrix:
![matrix](/upfiles/15876576686657783.png)
and the effect of this matrix is this: https://imgur.com/a/dwzm9Af (sorry, I cannot upload the image here; it seems too big)
Can you suggest how to reduce this effect, or what causes it?
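In general, this kind of stretching appears when the estimated homography is close to degenerate: with few keypoints the perspective terms in the bottom row get large, so part of the image maps toward the plane at infinity. One quick sanity check, sketched here in plain C++ with hypothetical helper names (this is not an OpenCV API), is to map the image corners through the matrix and reject warps that flip orientation or throw a corner far away:

```cpp
#include <array>
#include <cmath>

// Apply a 3x3 homography (row-major) to a point; returns false if the
// point maps near the plane at infinity (w ~ 0), which is what produces
// the extreme stretching seen in bad stitches.
bool applyHomography(const std::array<double, 9>& H,
                     double x, double y, double& ox, double& oy) {
    double w = H[6] * x + H[7] * y + H[8];
    if (std::fabs(w) < 1e-8) return false;
    ox = (H[0] * x + H[1] * y + H[2]) / w;
    oy = (H[3] * x + H[4] * y + H[5]) / w;
    return true;
}

// Heuristic check: a homography estimated from good matches should keep
// all four image corners finite, preserve orientation (positive
// determinant of the top-left 2x2 block), and not fling a corner
// absurdly far from the image.
bool homographyLooksSane(const std::array<double, 9>& H,
                         double width, double height) {
    double det = H[0] * H[4] - H[1] * H[3];
    if (det <= 0) return false;
    const double corners[4][2] = {{0, 0}, {width, 0}, {0, height}, {width, height}};
    for (const auto& c : corners) {
        double ox, oy;
        if (!applyHomography(H, c[0], c[1], ox, oy)) return false;
        if (std::fabs(ox) > 20 * width || std::fabs(oy) > 20 * height) return false;
    }
    return true;
}
```

If the check fails, the usual fix is to reject that pairwise match (or fall back to an affine model) rather than trying to repair the matrix.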
RockStar1337, Thu, 23 Apr 2020 11:06:48 -0500 - http://answers.opencv.org/question/229362/

**Stitch images by translation**
http://answers.opencv.org/question/99932/stitch-images-by-translation/

I am currently using the OpenCV Stitcher class to stitch two images acquired by a camera translated parallel to a wall. The distance between the wall and the camera is around 75 cm, with a limited field of view (around 70 degrees). I am using the stitch() function to stitch such images, and it works well in most cases. The camera is moved manually, so there may also be some rotations in the process. I know that these are taken into account by OpenCV in bundle adjustment, but what about the translation? Is the stitching working because the scene is planar, so no parallax is involved? Has anyone worked with the OpenCV Stitcher to stitch images from a translating camera?

Ael, Wed, 10 Aug 2016 04:11:06 -0500 - http://answers.opencv.org/question/99932/

**Phase correlation for image registration (image stitching)**
http://answers.opencv.org/question/1624/phase-correlation-for-image-registrationimage-stitching/

I'm trying to stitch 2 images using cross-correlation (phase correlation). The images are the same size; only a shift is present.
I tried OpenCV with cvDFT, and OpenCV+FFTW; both seem to work, but for some reason the correlation peak coordinate is not the shift coordinate. Maybe it depends on the quadrant where the correlation point is.
So the question is: how do I obtain the shift point from the correlation point?
Note: I don't need "interest point" SIFT-like approaches.
My suggestions are commented in the code.
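For reference, the usual answer to this: the inverse DFT is periodic in both axes, so a correlation peak past the midpoint corresponds to a negative shift and should be wrapped back. A minimal sketch:

```cpp
// The inverse DFT output is periodic, so a correlation peak at column x
// encodes a shift of either x, or x - width when the true shift is
// negative. Wrapping coordinates past the midpoint recovers the signed
// shift; the same applies per axis to the row coordinate.
int wrapShift(int peak, int size) {
    return (peak > size / 2) ? peak - size : peak;
}
```

Applied to the example further down (images 884 pixels tall), a peak at row 879 wraps to a shift of -5, which matches the ImageJ result mentioned there.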
Here is the code that I use:
class Peak
{
public:
    CvPoint pt;
    double maxval;
};
Peak old_opencv_FFT(IplImage* src, IplImage* temp)
{
    CvSize imgSize = cvSize(src->width, src->height);
    // Allocate floating point frames used for DFT (real, imaginary, and complex)
    IplImage* realInput = cvCreateImage(imgSize, IPL_DEPTH_64F, 1);
    IplImage* imaginaryInput = cvCreateImage(imgSize, IPL_DEPTH_64F, 1);
    IplImage* complexInput = cvCreateImage(imgSize, IPL_DEPTH_64F, 2);
    int nDFTHeight = cvGetOptimalDFTSize(imgSize.height);
    int nDFTWidth = cvGetOptimalDFTSize(imgSize.width);
    CvMat* src_DFT = cvCreateMat(nDFTHeight, nDFTWidth, CV_64FC2);
    CvMat* temp_DFT = cvCreateMat(nDFTHeight, nDFTWidth, CV_64FC2);
    CvSize dftSize = cvSize(nDFTWidth, nDFTHeight);
    IplImage* imageRe = cvCreateImage(dftSize, IPL_DEPTH_64F, 1);
    IplImage* imageIm = cvCreateImage(dftSize, IPL_DEPTH_64F, 1);
    IplImage* imageImMag = cvCreateImage(dftSize, IPL_DEPTH_64F, 1);
    IplImage* imageMag = cvCreateImage(dftSize, IPL_DEPTH_64F, 1);
    CvMat tmp;
    // Processing of src
    cvScale(src, realInput, 1.0, 0);
    cvZero(imaginaryInput);
    cvMerge(realInput, imaginaryInput, NULL, NULL, complexInput);
    cvGetSubRect(src_DFT, &tmp, cvRect(0, 0, src->width, src->height));
    cvCopy(complexInput, &tmp, NULL);
    if (src_DFT->cols > src->width)
    {
        cvGetSubRect(src_DFT, &tmp, cvRect(src->width, 0, src_DFT->cols - src->width, src->height));
        cvZero(&tmp);
    }
    cvDFT(src_DFT, src_DFT, CV_DXT_FORWARD, complexInput->height);
    cvSplit(src_DFT, imageRe, imageIm, 0, 0);
    // Processing of temp
    cvScale(temp, realInput, 1.0, 0);
    cvMerge(realInput, imaginaryInput, NULL, NULL, complexInput);
    cvGetSubRect(temp_DFT, &tmp, cvRect(0, 0, temp->width, temp->height));
    cvCopy(complexInput, &tmp, NULL);
    if (temp_DFT->cols > temp->width)
    {
        cvGetSubRect(temp_DFT, &tmp, cvRect(temp->width, 0, temp_DFT->cols - temp->width, temp->height));
        cvZero(&tmp);
    }
    cvDFT(temp_DFT, temp_DFT, CV_DXT_FORWARD, complexInput->height);
    // Multiply spectrums of the scene and the model (use CV_DXT_MUL_CONJ to get correlation instead of convolution)
    cvMulSpectrums(src_DFT, temp_DFT, src_DFT, CV_DXT_MUL_CONJ);
    // Split Fourier in real and imaginary parts
    cvSplit(src_DFT, imageRe, imageIm, 0, 0);
    // Compute the magnitude of the spectrum components: Mag = sqrt(Re^2 + Im^2)
    cvPow(imageRe, imageMag, 2.0);
    cvPow(imageIm, imageImMag, 2.0);
    cvAdd(imageMag, imageImMag, imageMag, NULL);
    cvPow(imageMag, imageMag, 0.5);
    // Normalize correlation (divide real and imaginary components by magnitude)
    cvDiv(imageRe, imageMag, imageRe, 1.0);
    cvDiv(imageIm, imageMag, imageIm, 1.0);
    cvMerge(imageRe, imageIm, NULL, NULL, src_DFT);
    // Inverse DFT
    cvDFT(src_DFT, src_DFT, CV_DXT_INVERSE_SCALE, complexInput->height);
    cvSplit(src_DFT, imageRe, imageIm, 0, 0);
    double minval = 0.0;
    double maxval = 0.0;
    CvPoint minloc;
    CvPoint maxloc;
    cvMinMaxLoc(imageRe, &minval, &maxval, &minloc, &maxloc, NULL);
    int x = maxloc.x; // log range
    //if (x > (imageRe->width / 2))
    //    x = x - imageRe->width; // positive or negative values
    int y = maxloc.y; // angle
    //if (y > (imageRe->height / 2))
    //    y = y - imageRe->height; // positive or negative values
    Peak pk;
    pk.maxval = maxval;
    pk.pt = cvPoint(x, y);
    return pk;
}
void phase_correlation2D(IplImage* src, IplImage* tpl, IplImage* poc)
{
    int i, j, k;
    double tmp;
    /* get image properties (note: this assumes src and tpl are 8-bit
       single-channel images of equal size with equal widthStep, and that
       poc is IPL_DEPTH_64F with no row padding) */
    int width = src->width;
    int height = src->height;
    int step = src->widthStep;
    int fft_size = width * height;
    /* set up pointers to image data */
    uchar* src_data = (uchar*)src->imageData;
    uchar* tpl_data = (uchar*)tpl->imageData;
    double* poc_data = (double*)poc->imageData;
    /* allocate FFTW input and output arrays */
    fftw_complex* img1 = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * width * height);
    fftw_complex* img2 = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * width * height);
    fftw_complex* res  = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * width * height);
    /* set up FFTW plans */
    fftw_plan fft_img1 = fftw_plan_dft_2d(height, width, img1, img1, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_plan fft_img2 = fftw_plan_dft_2d(height, width, img2, img2, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_plan ifft_res = fftw_plan_dft_2d(height, width, res, res, FFTW_BACKWARD, FFTW_ESTIMATE);
    /* load the images' data into the FFTW input */
    for (i = 0, k = 0; i < height; i++) {
        for (j = 0; j < width; j++, k++) {
            img1[k][0] = (double)src_data[i * step + j];
            img1[k][1] = 0.0;
            img2[k][0] = (double)tpl_data[i * step + j];
            img2[k][1] = 0.0;
        }
    }
    ///* Hamming window */
    //double omega = 2.0 * M_PI / (fft_size - 1);
    //double A = 0.54;
    //double B = 0.46;
    //for (i = 0, k = 0; i < height; i++) {
    //    for (j = 0; j < width; j++, k++) {
    //        img1[k][0] = (img1[k][0]) * (A - B * cos(omega * k));
    //        img2[k][0] = (img2[k][0]) * (A - B * cos(omega * k));
    //    }
    //}
    /* obtain the FFT of img1 */
    fftw_execute(fft_img1);
    /* obtain the FFT of img2 */
    fftw_execute(fft_img2);
    /* obtain the cross-power spectrum: F2 * conj(F1) / |F2 * conj(F1)| */
    for (i = 0; i < fft_size; i++) {
        res[i][0] = (img2[i][0] * img1[i][0]) - (img2[i][1] * (-img1[i][1]));
        res[i][1] = (img2[i][0] * (-img1[i][1])) + (img2[i][1] * img1[i][0]);
        tmp = sqrt(pow(res[i][0], 2.0) + pow(res[i][1], 2.0));
        res[i][0] /= tmp;
        res[i][1] /= tmp;
    }
    /* obtain the phase correlation array */
    fftw_execute(ifft_res);
    /* normalize and copy to the result image */
    for (i = 0; i < fft_size; i++) {
        poc_data[i] = res[i][0] / (double)fft_size;
    }
    /* deallocate FFTW arrays and plans */
    fftw_destroy_plan(fft_img1);
    fftw_destroy_plan(fft_img2);
    fftw_destroy_plan(ifft_res);
    fftw_free(img1);
    fftw_free(img2);
    fftw_free(res);
}
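As an aside on the commented-out Hamming window above: it indexes a single 1-D curve by the flattened pixel index k, so it tapers along scan order rather than along each image axis. A separable 2-D window (a sketch, not part of the original code) is the outer product of two 1-D Hamming windows:

```cpp
#include <cmath>
#include <vector>

// 2-D Hamming window as the outer product of two 1-D Hamming windows,
// stored row-major. Multiplying both input images by these weights
// before the forward FFT suppresses the edge discontinuities that the
// implicit periodic extension introduces.
std::vector<double> hamming2D(int height, int width) {
    std::vector<double> w(static_cast<size_t>(height) * width);
    const double pi = 3.14159265358979323846;
    for (int i = 0; i < height; i++) {
        double wy = (height > 1)
            ? 0.54 - 0.46 * std::cos(2.0 * pi * i / (height - 1)) : 1.0;
        for (int j = 0; j < width; j++) {
            double wx = (width > 1)
                ? 0.54 - 0.46 * std::cos(2.0 * pi * j / (width - 1)) : 1.0;
            w[static_cast<size_t>(i) * width + j] = wy * wx;
        }
    }
    return w;
}
```

The weights peak at 1.0 in the image center and fall to 0.08 x 0.08 at the corners, so the window attenuates but never zeroes the data.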
Peak FFTW_test(IplImage* src, IplImage* temp)
{
    clock_t start = clock();
    int t_w = temp->width;
    int t_h = temp->height;
    /* create a new image to store the phase correlation result */
    IplImage* poc = cvCreateImage(cvSize(temp->width, temp->height), IPL_DEPTH_64F, 1);
    /* get the phase correlation of the input images */
    phase_correlation2D(src, temp, poc);
    /* find the maximum value and its location */
    CvPoint minloc, maxloc;
    double minval, maxval;
    cvMinMaxLoc(poc, &minval, &maxval, &minloc, &maxloc, 0);
    /* IplImage* poc_8 = cvCreateImage(cvSize(temp->width, temp->height), 8, 1);
    cvConvertScale(poc, poc_8, (double)255 / (maxval - minval), (double)(-minval) * 255 / (maxval - minval));
    cvSaveImage("poc.png", poc_8); */
    cvReleaseImage(&poc);
    clock_t end = clock();
    int time = end - start;
    //fprintf(stdout, "Time = %d using clock()\n", time);
    //fprintf(stdout, "Maxval at (%d, %d) = %2.4f\n", maxloc.x, maxloc.y, maxval);
    CvPoint pt;
    pt.x = maxloc.x;
    pt.y = maxloc.y;
    // 4 variants?
    //if (maxloc.x >= 0 && maxloc.x <= t_w / 2 && maxloc.y >= 0 && maxloc.y <= t_h / 2)
    //{
    //    pt.x = src->width - maxloc.x;
    //    pt.y = -maxloc.y;
    //}
    //if (maxloc.x >= t_w / 2 && maxloc.x <= t_w && maxloc.y >= 0 && maxloc.y <= t_h / 2)
    //{
    //    pt.x = src->width - maxloc.x;
    //    pt.y = src->height - maxloc.y;
    //}
    //if (maxloc.x >= 0 && maxloc.x <= t_w / 2 && maxloc.y >= t_h / 2 && maxloc.y <= t_h)
    //{
    //    /*pt.x = -maxloc.x;
    //    pt.y = -maxloc.y;*/
    //    pt.x = src->width - maxloc.x;
    //    pt.y = src->height - maxloc.y;
    //}
    //if (maxloc.x >= t_w / 2 && maxloc.x <= t_w && maxloc.y >= t_h / 2 && maxloc.y <= t_h)
    //{
    //    pt.x = -maxloc.x;
    //    pt.y = src->height - maxloc.y;
    //}
    Peak pk;
    pk.maxval = maxval;
    pk.pt = pt;
    return pk;
}
----------
**UPDATE:**
I tried the new interface, but it is still not working. If I try var1, the program fails on the first dft; if I try var2, the peak is at (0,0), which isn't right. var3 seems to work, but also doesn't give the right peak.
For example, ImageJ gives me a peak at (x=876, y=-5); the images are of size 1024x884, and var3 gives me (-440, 0).
Mat image1= imread("001_001.PNG",0);
Mat image2= imread("001_002.PNG",0);
int width = getOptimalDFTSize(max(image1.cols,image2.cols));
int height = getOptimalDFTSize(max(image1.rows,image2.rows));
Mat fft1(Size(width,height),CV_32F,Scalar(0));
Mat fft2(Size(width,height),CV_32F,Scalar(0));
//var1
copyMakeBorder(image1,fft1,0,height-image1.rows,0,width-image1.cols,BORDER_CONSTANT,Scalar::all(0));
copyMakeBorder(image2,fft2,0,height-image2.rows,0,width-image2.cols,BORDER_CONSTANT,Scalar::all(0));
//var2
/*image1.copyTo(fft1(Rect(0,0,image1.cols,image1.rows)));
image2.copyTo(fft2(Rect(0,0,image2.cols,image2.rows)));*/
//var3
image1.convertTo(fft1(Rect(0,0,image1.cols,image1.rows)),CV_32F);
image2.convertTo(fft2(Rect(0,0,image2.cols,image2.rows)),CV_32F);
dft(fft1,fft1,0,image1.rows);
dft(fft2,fft2,0,image2.rows);
mulSpectrums(fft1,fft2,fft1,0,true);
idft(fft1,fft1);
double maxVal;
Point maxLoc;
minMaxLoc(fft1,NULL,&maxVal,NULL,&maxLoc);
int resX = (maxLoc.x<width/2) ? (maxLoc.x) : (maxLoc.x-width);
int resY = (maxLoc.y<height/2) ? (maxLoc.y) : (maxLoc.y-height);

mrgloom, Mon, 20 Aug 2012 04:27:44 -0500 - http://answers.opencv.org/question/1624/

**I need help with mosaic construction**
http://answers.opencv.org/question/62452/i-need-help-about-mosaic-construction/

I need to solve this problem:
I am constructing a mosaic of the environment, but some problems have occurred with my method:
I want to remove these borders from my mosaic. Does anyone know how to solve this problem, please?
![image description](/upfiles/14323202748188302.jpg)

diegomoreira, Fri, 22 May 2015 13:46:33 -0500 - http://answers.opencv.org/question/62452/

**Image stitching - why does the pipeline include resizing 2 times?**
http://answers.opencv.org/question/32100/image-stitching-why-does-the-pipeline-include-2-times-resizing/

Hi all!
I have been working on a project involving image stitching of aerial photography. The stitching pipeline given in the OpenCV documentation (http://docs.opencv.org/modules/stitching/doc/introduction.html) I have actually encountered in many different books and papers, and frankly it makes perfect sense. Except for one thing. In the two stages presented there (image acquisition being the first of three, but there is no point including it here) - registration and composition - I encounter resizing, first to a medium and then to a low resolution. Can someone explain to me why that is? Does the resizing in the registration stage have anything to do with the feature extraction? The only thing that makes sense to me in all this is that we obviously need the same resolution for all images in a stitching. Another reason for the additional resizing, this time in the composition stage, is the computation of masks, which are then applied to the high-resolution images that we give as input at the very beginning.
Thanks a lot for your help!
PS: By resolution, what is obviously meant here is the number of pixels (since resizing is used in the stitching example), which is somewhat imprecise, since resolution by definition also depends on the size of each pixel, not only on their number, as it defines the amount of detail in an image.

rbaleksandar, Tue, 22 Apr 2014 04:43:18 -0500 - http://answers.opencv.org/question/32100/

**Panorama building using spanning tree**
http://answers.opencv.org/question/13277/panorama-building-using-spanning-tree/

I'm coding an application for panorama stitching.
I'm using cross-correlation to calculate the relative displacement of each image (only (dx,dy)).
So I have a matrix (a fully connected graph) of relations between images (peak + (dx,dy) displacement), but the algorithm sometimes gives "fake/false peaks".
Something like [1][2] or [2][3]
My thoughts:
1. Find a minimum spanning tree (there can be false peaks, and it does not take into account the geometric relationships).
2. Some global optimization (I'm not sure which function/criterion to minimize). Maybe I can find all spanning trees (or the best k spanning trees) and then find the best one using a criterion like (sum of peaks)/(number of rect intersections), or maybe sum(pixel difference)/(intersection area). The number of spanning trees grows quickly with the number of images, even if I don't use a fully connected graph but cut some connections below a threshold.
Maybe there is some opencv-based solution?
Also I found [this paper][4] but I'm not sure this is my case.
Also found [this paper][5] about spanning trees.
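Idea 1 above can be sketched with Kruskal's algorithm run as a *maximum* spanning tree, using the correlation peak value as the edge weight, so the weakest ("fake") peaks are only kept when no stronger path connects two images. The type and function names here are illustrative, not an OpenCV API:

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// An alignment hypothesis between images a and b, weighted by the
// correlation peak value (higher = more trustworthy).
struct Edge { int a, b; double peak; };

// Union-find with path compression, for Kruskal's cycle test.
struct DSU {
    std::vector<int> parent;
    explicit DSU(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int x, int y) {
        x = find(x); y = find(y);
        if (x == y) return false;
        parent[x] = y;
        return true;
    }
};

// Kruskal's algorithm on edges sorted by descending peak: greedily keep
// the strongest alignments that do not form a cycle, yielding a maximum
// spanning tree of pairwise displacements to chain into global positions.
std::vector<Edge> maxSpanningTree(int numImages, std::vector<Edge> edges) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& l, const Edge& r) { return l.peak > r.peak; });
    DSU dsu(numImages);
    std::vector<Edge> tree;
    for (const Edge& e : edges)
        if (dsu.unite(e.a, e.b)) tree.push_back(e);
    return tree;
}
```

This only picks which pairwise displacements to trust; it does not enforce geometric consistency (loop closure), which is what the global-optimization idea in point 2 would add.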
[1]: http://i.stack.imgur.com/aUiFy.jpg
[2]: http://www.flickr.com/photos/brunopostle/2321777799/
[3]: http://www.flickr.com/photos/sbprzd/2324271815/
[4]: http://www.inf.ethz.ch/personal/chzach/pdf/cvpr2010-preprint.pdf
[5]: http://www.isprs.org/proceedings/XXXVII/congress/5_pdf/125.pdf

mrgloom, Mon, 13 May 2013 04:03:03 -0500 - http://answers.opencv.org/question/13277/

**Adaptation of OpenCV stitcher**
http://answers.opencv.org/question/2784/adaptation-of-opencv-stitcher/

I'm working on a project where an image stitcher is required. The camera is fixed (no rotation
and no translation), and the object is translating on a plane about 60-70 mm below the camera.
The image plane is not quite parallel to the object plane.
I know the exact translation between successive images in the object plane.
I've successfully calibrated the camera using OpenCV's calibrateCamera.
Now I would like to adapt the stitching library to fit my application. One idea is to bypass the feature matching etc. and go straight to the composePanorama() function (as I know the translation). Of course, this means I would have to add methods to access the various members, such as the camera data structure. It seems the PlaneWarper takes both a rotation matrix and a translation vector, so perhaps that could be used.
Any ideas about this approach? Is there a better approach?
Thanks.

ktj, Mon, 01 Oct 2012 08:08:43 -0500 - http://answers.opencv.org/question/2784/
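On the known-translation idea in the question above: if the image plane is roughly parallel to the object plane, a metric translation maps to a pure pixel offset via the pinhole model, and the composition step reduces to computing a common canvas origin and per-image paste positions. A rough sketch with hypothetical helper names (fx is the focal length in pixels from calibrateCamera, and the depth is in the same units as the translation):

```cpp
#include <algorithm>
#include <vector>

// Pinhole model: a metric translation t (parallel to the object plane)
// at depth z projects to a pixel shift of t * fx / z.
double mmToPixels(double t_mm, double fx, double z_mm) {
    return t_mm * fx / z_mm;
}

// Integer pixel offset of one image relative to the first.
struct Offset { int x, y; };

// Given each image's pixel shift relative to the first image, compute
// the common canvas origin (the minimum shift per axis) and return
// non-negative paste positions on that canvas.
std::vector<Offset> canvasOffsets(const std::vector<Offset>& shifts,
                                  int& originX, int& originY) {
    originX = 0; originY = 0;
    for (const Offset& s : shifts) {
        originX = std::min(originX, s.x);
        originY = std::min(originY, s.y);
    }
    std::vector<Offset> out;
    for (const Offset& s : shifts)
        out.push_back({s.x - originX, s.y - originY});
    return out;
}
```

Because the image plane in the question is *not* quite parallel to the object plane, a per-image homography from the calibration (as the PlaneWarper's R and t would provide) is the more accurate route; the offset model above is only the parallel-plane approximation.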