我干过豪哥's profile - activity

2020-12-21 06:26:58 -0600 received badge  Notable Question (source)
2018-10-18 08:42:13 -0600 received badge  Notable Question (source)
2017-06-20 04:12:12 -0600 received badge  Popular Question (source)
2017-01-17 22:41:35 -0600 received badge  Popular Question (source)
2015-05-26 00:29:39 -0600 asked a question A problem with a global vector&lt;cv::Mat&gt; and a temporary cv::Mat variable
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cstdio>
#include <cstdlib>


using namespace std;
using namespace cv;


cv::Mat Right(Size(768,576),CV_8UC3);
cv::Mat Left(Size(768,576),CV_8UC3);

vector<Mat> mapx;
vector<Mat> mapy;


cv::VideoCapture capture1("test.avi");


void MapInit(const Mat &src)
{
    Mat map_x(src.size(),CV_32FC1);
    Mat map_y(src.size(),CV_32FC1);
    for( int i = 0; i < src.rows; ++i)
    {
        for( int j = 0; j < src.cols; ++j)
        {
            map_x.at<float>(i, j) = j ;
            map_y.at<float>(i, j) = src.rows - i ;
        }
    }
    mapx.push_back(map_x);
    mapy.push_back(map_y);
    //Mat map_x1(src.size(),CV_32FC1);
    //Mat map_y1(src.size(),CV_32FC1);
    for( int i = 0; i < src.rows; ++i)
    {
        for( int j = 0; j < src.cols; ++j)
        {
            map_x.at<float>(i, j) = src.cols - j ;
            map_y.at<float>(i, j) =  i ;
            //map_x1.at<float>(i, j) = src.cols - j ;
            //map_y1.at<float>(i, j) =  i ;
        }
    }
    //mapx.push_back(map_x1);
    //mapx.push_back(map_y1);
    mapx.push_back(map_x);
    mapy.push_back(map_y);
}

void mapPlane(const Mat &src)
{
    remap(src,Left,mapx[0],mapy[0],0);
    flip(Left,Left,0);
    remap(src,Right,mapx[1],mapy[1],0);
    //mapx[0] is the same as mapx[1];mapy[0] is the same as mapy[1] ???
    flip(Right,Right,0);
}
void runCam()
{
    namedWindow("test1",0);
    namedWindow("test2",0);
    for(;;)
    {   
        cv::Mat image;
        capture1 >> image;

        if(image.empty())
            exit(0) ;
        cvtColor(image,image,CV_BGR2RGB);
        if(mapx.empty() && mapy.empty())
            MapInit(image);
        mapPlane(image);
        imshow("test1",Left);
        imshow("test2",Right);
        waitKey(20);
    }
}

int main(int argc, char **argv)
{
    if(!capture1.isOpened())
    {
        printf("open failed \n");
        return -1;
    }
    runCam();
    return 0;
}

As shown above, I create the temporary variables Mat map_x and map_y in the function MapInit(). To my surprise, when I push_back the two different map_x/map_y into the vector<Mat> mapx and mapy in order, the elements of the vectors end up identical to the most recently modified map_x/map_y. Similarly, I wrote the following snippet, where the elements of the vector result are different, which contradicts the example above. I am quite confused by this (see the sketch after the second example).

#include <iostream>
#include <vector>
using namespace std;

vector<int> result;

void fun()
{
    int i = 0;
    result.push_back(i);
    i = 1;
    result.push_back(i);
}
int main()
{
    fun();
    cout<<result[0]<<endl;
    cout<<result[1]<<endl;
}
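The difference between the two snippets comes from cv::Mat's reference-counted header: the copy that push_back stores shares the pixel buffer with map_x/map_y, so refilling them in the second loop also changes the elements already in the vectors, whereas an int is copied by value. A minimal standalone sketch of the behaviour and of the usual clone() fix (a toy example, not the original program):

#include <opencv2/core/core.hpp>
#include <iostream>
#include <vector>

int main()
{
    std::vector<cv::Mat> shallow, deep;
    cv::Mat m(1, 1, CV_32FC1, cv::Scalar(0));

    shallow.push_back(m);           // header copy: shares m's pixel buffer
    deep.push_back(m.clone());      // deep copy: independent buffer
    m.at<float>(0, 0) = 1.f;        // refill the original buffer in place
    shallow.push_back(m);
    deep.push_back(m.clone());

    std::cout << shallow[0].at<float>(0, 0) << std::endl;  // prints 1: first element changed too
    std::cout << deep[0].at<float>(0, 0)    << std::endl;  // prints 0: unaffected
    return 0;
}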
2015-02-26 01:09:51 -0600 asked a question Confusion about the camera-calibration formula derivation between the source code and the tutorial documents

The interpretation of initUndistortRectifyMap in the OpenCV tutorial (formula shown in the attached image):

The source code in undistort.cpp:

void cv::initUndistortRectifyMap( InputArray _cameraMatrix, InputArray _distCoeffs,
                              InputArray _matR, InputArray _newCameraMatrix,
                              Size size, int m1type, OutputArray _map1, OutputArray _map2 )
{
    // ...
    for( int i = 0; i < size.height; i++ )
    {
        float* m1f = (float*)(map1.data + map1.step*i);
        float* m2f = (float*)(map2.data + map2.step*i);
        short* m1 = (short*)m1f;
        ushort* m2 = (ushort*)m2f;
        double _x = i*ir[1] + ir[2], _y = i*ir[4] + ir[5], _w = i*ir[7] + ir[8];

        for( int j = 0; j < size.width; j++, _x += ir[0], _y += ir[3], _w += ir[6] )
        {
            double w = 1./_w, x = _x*w, y = _y*w;
            double x2 = x*x, y2 = y*y;
            double r2 = x2 + y2, _2xy = 2*x*y;
            double kr = (1 + ((k3*r2 + k2)*r2 + k1)*r2)/(1 + ((k6*r2 + k5)*r2 + k4)*r2);
            double u = fx*(x*kr + p1*_2xy + p2*(r2 + 2*x2)) + u0;
            double v = fy*(y*kr + p1*(r2 + 2*y2) + p2*_2xy) + v0;
            if( m1type == CV_16SC2 )
            {
                int iu = saturate_cast<int>(u*INTER_TAB_SIZE);
                int iv = saturate_cast<int>(v*INTER_TAB_SIZE);
                m1[j*2] = (short)(iu >> INTER_BITS);
                m1[j*2+1] = (short)(iv >> INTER_BITS);
                m2[j] = (ushort)((iv & (INTER_TAB_SIZE-1))*INTER_TAB_SIZE + (iu & (INTER_TAB_SIZE-1)));
            }
            else if( m1type == CV_32FC1 )
            {
                m1f[j] = (float)u;
                m2f[j] = (float)v;
            }
            else
            {
                m1f[j*2] = (float)u;
                m1f[j*2+1] = (float)v;
            }
        }
    }
}

Some things puzzle me when comparing the source code with the tutorial:

1. In the tutorial, [u,v] is a point in the undistorted image coordinates, but in the source code it is a point in the distorted image.

2. Does the radial distortion happen when converting from world coordinates to camera coordinates, or from camera coordinates to image coordinates? In the source code, [_x,_y,_w] is in world coordinates and is normalized into [x,y] as camera coordinates; is that correct?

3. Why is R (the rotation matrix) assumed to be the identity?
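For reference, the inner loop above can be read as a single-point function: it starts from a pixel of the undistorted, rectified output, turns it into normalized camera coordinates through the inverse of newCameraMatrix*R (the ir array), applies the distortion model, and finally projects with the original camera matrix to find where to sample the distorted source image. A small sketch of just the distortion-plus-projection step, using the same symbol names as the excerpt (an illustration, not the library code):

#include <opencv2/core/core.hpp>

// Maps a normalized, undistorted camera coordinate (x, y) to the corresponding
// pixel (u, v) in the distorted source image; fx, fy, u0, v0 come from the
// original cameraMatrix, k1..k6 and p1, p2 from distCoeffs.
cv::Point2d distortPoint(double x, double y,
                         double fx, double fy, double u0, double v0,
                         double k1, double k2, double k3,
                         double k4, double k5, double k6,
                         double p1, double p2)
{
    double x2 = x * x, y2 = y * y;
    double r2 = x2 + y2, _2xy = 2 * x * y;
    // rational radial factor, same expression as in undistort.cpp
    double kr = (1 + ((k3 * r2 + k2) * r2 + k1) * r2) /
                (1 + ((k6 * r2 + k5) * r2 + k4) * r2);
    double u = fx * (x * kr + p1 * _2xy + p2 * (r2 + 2 * x2)) + u0;
    double v = fy * (y * kr + p1 * (r2 + 2 * y2) + p2 * _2xy) + v0;
    return cv::Point2d(u, v);
}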

2014-08-29 23:24:13 -0600 asked a question How to show a GpuMat in a window created with the GLUT library

Because I want to use mouse scroll-wheel functionality in my project, I want to know how to display a GpuMat in a window created by the glutCreateWindow function. I hope somebody can help me.
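One simple approach, sketched under the assumption that a device-to-host copy per frame is acceptable: download the GpuMat inside the GLUT display callback and draw it with glDrawPixels. (Zero-copy display would need CUDA/OpenGL interop with a pixel buffer object and texture, which is more involved.) The global g_frame below is a placeholder for illustration:

#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>
#include <GL/glut.h>

cv::gpu::GpuMat g_frame;   // filled elsewhere by the GPU pipeline (placeholder)

void display()
{
    cv::Mat host;
    g_frame.download(host);                  // device -> host copy
    cv::flip(host, host, 0);                 // OpenGL's raster origin is bottom-left
    glClear(GL_COLOR_BUFFER_BIT);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are not necessarily 4-byte aligned
    // GL_BGR may be spelled GL_BGR_EXT with older Windows GL headers
    glDrawPixels(host.cols, host.rows, GL_BGR, GL_UNSIGNED_BYTE, host.data);
    glutSwapBuffers();
}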

2014-08-23 09:57:08 -0600 commented question How to display a GpuMat in a Qt window

Thanks, I understand it now.

2014-08-23 09:19:30 -0600 asked a question How to display a GpuMat in a Qt window

I have built OpenCV with CMake enabling Qt, CUDA, and OpenGL, so I can display a Mat in a Qt-style window and can resize, drag, and save the picture through the GUI. But when I create a window with CV_WINDOW_OPENGL to show a GpuMat, I only get a window without the extended GUI. So I want to ask how to display a GpuMat in a Qt-style window, and I hope somebody can help me.
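A minimal sketch of the usual workaround, under the assumption that the Qt toolbar/overlay features are only provided by the default (non-OpenGL) highgui windows: do the processing on the GpuMat, then download and show the host copy in a normal window. This costs one device-to-host copy per displayed frame:

#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat src = cv::imread("1.jpg");
    cv::gpu::GpuMat d_img;
    d_img.upload(src);
    // ... GPU processing on d_img ...

    cv::Mat shown;
    d_img.download(shown);                     // bring the result back to host memory
    cv::namedWindow("view", CV_WINDOW_NORMAL); // Qt-enhanced window when built WITH_QT
    cv::imshow("view", shown);
    cv::waitKey(0);
    return 0;
}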

2014-04-19 03:05:40 -0600 answered a question A problem with CMake + CUDA 5.5 + OpenCV 2.4.7 or higher

I hope someone can help me.

2014-04-18 11:39:52 -0600 asked a question A problem with CMake + CUDA 5.5 + OpenCV 2.4.7 or higher

My compiler is VS2010. Several months ago I built it successfully on the same computer, but today, trying to build a new OpenCV library, it always reports the error "6>CMake Error at cuda_compile_generated_matrix_operations.cu.obj.cmake:264 (message): 6> Error generating file 6> D:/opencv249/modules/core/CMakeFiles/cuda_compile.dir/__/dynamicuda/src/cuda/Debug/cuda_compile_generated_matrix_operations.cu.obj" first, and then reports

LINK : fatal error LNK1104: cannot open file "....\lib\Debug\opencv_core247d.lib". When I turn the CUDA option off, the build succeeds.

2014-03-31 22:33:05 -0600 asked a question A problem reading 1080p HD video

My program needs the OpenCV CPU reader to read four 1080p (AVI) videos. It works well on any PC or laptop, but when I run it on the graphics workstation in my lab, the error "mpeg4@ 0735d060 cannot allocate memory" appears. When I use the OpenCV GPU reader, or use the CPU reader to capture 720p, it works well. I have installed ffdshow and Xvid on the graphics workstation (both Server 2008 and Win7 x64 fail). I cannot figure this out and hope somebody can help me.

2014-03-31 07:01:38 -0600 asked a question A problem with TBB and vector&lt;Mat&gt;
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <iostream>
using namespace std;
using namespace cv;
using namespace tbb;

class ApplyFoo {
    vector<Mat>my_a;
public:
    void operator()( const blocked_range<size_t>& r ) const {
        vector<Mat> a = my_a;
        for( size_t i=r.begin(); i!=r.end(); ++i ) 
           //Foo(a[i]);
           pyrDown(a[i],a[i],cv::Size(),4);
    }
    ApplyFoo( vector<Mat> a ) :
        my_a(a)
    {}
};


void test(vector<Mat> &src)
{
    parallel_for(blocked_range<size_t>(0,3),ApplyFoo(src));
}

int main(int argc,char **argv)
{
    vector<Mat> Imgs(3);
    vector<Mat> omp(3);
    vector<Mat> tbb(3);
    Imgs[0] = imread("1.jpg",-1);
    Imgs[1] = imread("2.jpg",-1);
    Imgs[2] = imread("3.jpg",-1);
    tbb = Imgs;
    omp = Imgs;
    int64 t = getTickCount();
    for(size_t i = 0 ;i < Imgs.size(); ++i)
    {
        pyrDown(Imgs[i],Imgs[i],cv::Size(),4);
    }
    cout<<"serial cost is "<<(getTickCount() - t)/getTickFrequency()<<endl;
    t = getTickCount();
#pragma omp parallel num_threads(4)
    for(int i = 0 ;i < Imgs.size(); ++i)
    {
        pyrDown(omp[i],omp[i],cv::Size(),4);
    }
    cout<<"omp cost is "<<(getTickCount() - t)/getTickFrequency()<<endl;    
    t = getTickCount();
    test(tbb);
    cout<<"tbb cost is "<<(getTickCount() - t)/getTickFrequency()<<endl;    

    return 0;
}

I want to compare OpenMP and TBB results in OpenCV code (as above), but in the TBB section the result images are the same as the original images, so it is not working. I did not use cv::ParallelLoopBody because I think there is no difference between them. I am new to TBB and hope someone can point out my TBB mistake.
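For what it's worth, the likely reason the TBB output matches the input: the functor works on a copy of my_a (and my_a itself is a by-value copy of the argument), and pyrDown reallocates a[i] to a new, smaller buffer, so the Mats in main are never touched. (The OpenMP block has a separate issue: without a "for" clause, every thread runs the whole loop.) A minimal sketch of one way to make the results visible to the caller, writing into a caller-owned output vector held by reference; the names here are illustrative:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>

class PyrDownBody {
    const std::vector<cv::Mat>& src_;
    std::vector<cv::Mat>&       dst_;    // results written here survive the call
public:
    PyrDownBody(const std::vector<cv::Mat>& src, std::vector<cv::Mat>& dst)
        : src_(src), dst_(dst) {}
    void operator()(const tbb::blocked_range<size_t>& r) const {
        for (size_t i = r.begin(); i != r.end(); ++i)
            cv::pyrDown(src_[i], dst_[i]);   // dst_[i] is allocated at half size
    }
};

void pyrDownAll(const std::vector<cv::Mat>& src, std::vector<cv::Mat>& dst)
{
    dst.resize(src.size());
    tbb::parallel_for(tbb::blocked_range<size_t>(0, src.size()),
                      PyrDownBody(src, dst));
}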

2014-03-28 02:22:19 -0600 asked a question A problem using cudaMemcpy to copy an IplImage to a GpuMat
IplImage * tmp = cvLoadImage("1.jpg",1);
GpuMat tmp_gpu;
tmp_gpu.create(cvSize(tmp->width, tmp->height), CV_8UC3);
cudaMemcpy2D(tmp_gpu.data,tmp_gpu.step,(unsigned char *)tmp->imageData,tmp->widthStep,tmp->width*tmp->nChannels*sizeof(unsigned char),tmp->height,cudaMemcpyHostToDevice);

The result is correct.

IplImage * tmp = cvLoadImage("1.jpg",1);
GpuMat tmp_gpu;
tmp_gpu.create(cvSize(tmp->width, tmp->height), CV_8UC3);
cudaMemcpy(tmp_gpu.data,(unsigned char *)tmp->imageData,tmp->widthStep*tmp->height,cudaMemcpyHostToDevice);

The result is wrong. I thought there was no difference between them.
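A likely explanation, sketched under the assumption of an 8-bit, 3-channel image: both the IplImage (widthStep) and the GpuMat (step) may pad each row, and the paddings generally differ, so a flat cudaMemcpy misplaces every row after the first. cudaMemcpy2D copies row by row and honours both pitches, which is why the first version works:

#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cuda_runtime.h>

void uploadIplImage(const IplImage* tmp, cv::gpu::GpuMat& tmp_gpu)
{
    tmp_gpu.create(cvSize(tmp->width, tmp->height), CV_8UC3);
    size_t rowBytes = (size_t)tmp->width * tmp->nChannels;     // real pixel bytes per row
    bool hostPacked   = (size_t)tmp->widthStep == rowBytes;    // IplImage rows are 4-byte aligned
    bool devicePacked = tmp_gpu.isContinuous();                // GpuMat rows are usually pitched

    if (hostPacked && devicePacked)
        cudaMemcpy(tmp_gpu.data, tmp->imageData,
                   rowBytes * tmp->height, cudaMemcpyHostToDevice);
    else
        cudaMemcpy2D(tmp_gpu.data, tmp_gpu.step, tmp->imageData, tmp->widthStep,
                     rowBytes, tmp->height, cudaMemcpyHostToDevice);
}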

2014-03-18 09:57:38 -0600 commented answer The "setTo" function of GpuMat

Thanks! Given GpuMat A, GpuMat B, and int k, GpuMat does not support the expression A = B > k. How can I do it? Can you share your idea?
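A short sketch of one way to get the A = B > k mask (assuming B is single-channel): the 2.4 gpu module has no overloaded comparison operators, but gpu::threshold produces the same result, and, depending on the minor version, gpu::compare may also accept a Scalar:

#include <opencv2/gpu/gpu.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// A = 255 where B > k, 0 elsewhere (B assumed CV_8UC1 or CV_32FC1)
void greaterThan(const cv::gpu::GpuMat& B, double k, cv::gpu::GpuMat& A)
{
    cv::gpu::threshold(B, A, k, 255, cv::THRESH_BINARY);
}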

2014-03-18 04:41:18 -0600 answered a question a weird result of a project with GpuMat and CUDA

Thumbs up!!!

2014-03-17 14:34:13 -0600 commented question a weird result of a project with GpuMat and CUDA

I am new to the GpuMat structure and PtrStep; I hope you can spot my mistake.

2014-03-17 14:23:59 -0600 commented question a weird result of a project with GpuMat and CUDA

I edited my post and the picture ^_^

2014-03-17 14:05:25 -0600 asked a question a weird result of a project with GpuMat and CUDA

The code is as follows:

#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>
#include <opencv2/core/cuda_devptrs.hpp>
#include <opencv2/gpu/stream_accessor.hpp>
#include <opencv2/gpu/device/common.hpp>  
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <cuda_runtime.h>
using namespace std;
using namespace cv;
using namespace cv::gpu;

__global__ void AddColor_Kernel(const PtrStepSz<uchar3> src,const PtrStepSz<uchar> mask, 
PtrStep<uchar3> dst, PtrStep<uchar> dst_mask)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.x;

    if(y < mask.rows && x < mask.cols)
    {
        if(mask(y,x))
            dst.ptr(y)[x] = src.ptr(y)[x];
        dst_mask.ptr(y)[x] |= mask.ptr(y)[x];
    }
}




void AddColorCaller(const PtrStepSz<uchar3> &src,const PtrStepSz<uchar> &mask, PtrStep<uchar3> dst,
PtrStep<uchar> dst_mask,cudaStream_t stream)
    {

    dim3 block(32,8);
    //  dim3 grid(divUp(src.cols ,block.x),divUp(src.rows ,block.y));
    dim3 grid((mask.cols + block.x - 1)/block.x,(mask.rows + block.y - 1)/block.y);
    AddColor_Kernel<<<grid,block,0,stream>>>(src,mask,dst,dst_mask);
    cudaSafeCall(cudaGetLastError());
    if(stream == 0)
        cudaSafeCall(cudaDeviceSynchronize());
}

void AddColor(const GpuMat &src,const GpuMat &mask,  GpuMat &dst,  GpuMat &dst_mask,
Stream& stream = Stream::Null())
    {   
        dst.create(src.size(),src.type());
        dst.setTo(Scalar::all(0));
        dst_mask.create(mask.size(),mask.type());
        dst_mask.setTo(Scalar::all(0));
        cudaStream_t st = StreamAccessor::getStream(stream);
        AddColorCaller(src,mask,dst,dst_mask,st);
    }

int main()
{
    Mat image = imread("1.jpg");
    Mat mask  = imread("mask.jpg",-1);
    cout<<mask.channels()<<endl;
    resize(mask,mask,image.size());

    imshow("src",image);
    imshow("mask",mask);
    GpuMat gpuMat,output,mask_gpu,dst,dst_mask;

    gpuMat.upload(image);
    mask_gpu.upload(mask);
    AddColor(gpuMat,mask_gpu,dst,dst_mask);
    namedWindow("1",CV_WINDOW_OPENGL);
    imshow("1",dst);
    waitKey(0); 
    return 0;
}

(attached images: original mask, processed final mask)

According to the code, the final mask should be the same as the original mask, and the picture should be cut out according to the mask. But the result is disappointing, so I hope somebody can point out the error in my code. Thanks!
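One likely culprit, offered as an observation rather than a confirmed fix: the row index in the kernel is built from threadIdx.x instead of threadIdx.y, so within each 32x8 block most threads compute a wrong y and read/write the wrong rows. The corrected index pair would be:

    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // the posted kernel uses threadIdx.x here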

2014-03-16 02:16:11 -0600 asked a question The "setTo" function of GpuMat

I define GpuMat Img and Mask and use Img.setTo(Scalar::all(0), Mask == 0), but VS2010 cannot compile it and reports an error.
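The expression fails to compile because GpuMat, unlike cv::Mat, has no overloaded comparison operators. A minimal sketch of the usual workaround, assuming Mask is CV_8UC1: build the "Mask == 0" mask explicitly, then pass it to setTo:

#include <opencv2/gpu/gpu.hpp>
#include <opencv2/imgproc/imgproc.hpp>

void zeroWhereMaskIsZero(cv::gpu::GpuMat& Img, const cv::gpu::GpuMat& Mask)
{
    cv::gpu::GpuMat zeroMask;
    cv::gpu::threshold(Mask, zeroMask, 0, 255, cv::THRESH_BINARY_INV); // 255 where Mask == 0
    Img.setTo(cv::Scalar::all(0), zeroMask);
}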

2014-03-15 11:28:36 -0600 asked a question GPU API call (unknown error) in unknown function in gpumat.hpp

When I upload to or create a GpuMat variable, the program stops at gpumat.hpp line 1415 (cudaSafeCall( cudaMallocPitch(devPtr, step, width, height) )). But I had already created several GpuMat variables before the failing one, and they work fine.

2014-02-08 06:27:43 -0600 asked a question A question about the homography matrix

I get H = findHomography(left_points, right_points);

Then I modify H with H.at<double>(0,2) += X_translation; // to show all of the left image in the result

and then I call warpPerspective(left, result, H); finally I copy rightImage into the ROI result(Rect(X_translation, 0, cols, rows)). To my surprise, the stitched image looks bad; it is not accurate. I can't figure it out and hope somebody can help me!
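One possible reason for the inaccuracy, offered as a hedged guess: adding X_translation to H(0,2) only shifts points whose homogeneous w equals 1; for a general homography the shift has to be applied after the perspective divide, i.e. by left-multiplying H with a translation matrix. A minimal sketch (function and variable names are illustrative):

#include <opencv2/core/core.hpp>

// Returns a homography that first applies H, then shifts the result tx pixels to the right.
cv::Mat shiftHomography(const cv::Mat& H, double tx)
{
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, tx,
                                           0, 1, 0,
                                           0, 0, 1);
    return T * H;   // translation composed after the projective mapping
}
// usage sketch: warpPerspective(left, result, shiftHomography(H, X_translation), result_size);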

2014-01-22 02:04:26 -0600 asked a question A detail of the OpenCV stitching module

I get K (the intrinsic matrix) and R (the rotation matrix) from the bundle adjuster, so I can calculate a homography matrix with "H = K(R|T)". For example, I want to stitch 4 pictures, so I calculate four homography matrices (H1, H2, H3, H4) corresponding to the pictures (P1, P2, P3, P4). Finally I want to use the homography matrices to map the 4 pictures onto the final panorama instead of the warper(K, R) used in the OpenCV stitching module. I tried to implement this idea this afternoon and failed. Does anyone know whether my idea makes sense? Thank you for your reply!

2014-01-21 08:28:49 -0600 commented answer A question about the relation of (K, R, T, H)

I choose the point pairs manually instead of using SURF features. My purpose is to use the homography matrix to map the pictures onto the final panorama instead of warping the images with K and R. I have already obtained a good final panorama following the stitching module's order. So I want to know: is the idea of using the homography matrix to map the pictures instead of the stitching module correct?

2014-01-21 05:53:52 -0600 commented answer A question about the relation of (K, R, T, H)

Thanks, bro! I got the homography matrix as above, but I use the K and R refined by the bundle adjuster in the OpenCV stitching module to calculate it. For example, I have four pictures (P1, P2, P3, P4) to stitch into a panorama, so I calculate four homographies (H1, H2, H3, H4) corresponding to P1 to P4, and finally I try to use warpPerspective(H) to map each picture onto the final panorama instead of the remap(K, R) that the OpenCV stitching module uses. I tried to implement this idea this afternoon and failed. So I want to know: is my idea right?

2014-01-21 03:04:03 -0600 received badge  Teacher (source)
2014-01-21 02:10:08 -0600 received badge  Student (source)
2014-01-20 11:55:01 -0600 asked a question A question about the relation of (K, R, T, H)

H is a homography matrix, R = (r1, r2, r3) is a rotation matrix, t is a translation vector, and K is an intrinsic matrix. I want to get H from K, R, and T, so I use the equation H = K(R|T). But I want the homography matrix between two 2D images, and I only use r1 and r2, as in H = K(r1, r2, T). Is that right?? Thank you for your reply!!
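For reference, the two formulas answer different questions: H = K[r1 r2 t] maps points of a world plane (Z = 0) into one image, while the image-to-image homography for two views sharing the camera centre (the pure-rotation model used for rotational panoramas) is H = K2 * R * K1^{-1}. A minimal sketch of the latter:

#include <opencv2/core/core.hpp>

// Homography mapping pixels of view 1 into view 2 for a pure camera rotation R,
// where R rotates view-1 camera coordinates into view-2 camera coordinates.
// All matrices are 3x3 CV_64F.
cv::Mat rotationHomography(const cv::Mat& K1, const cv::Mat& K2, const cv::Mat& R)
{
    return K2 * R * K1.inv();
}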

2014-01-07 22:27:59 -0600 marked best answer The relationship between the homography matrix and scaling images

I have computed the homography matrix H1 between two images. After that I just use the function resize(origin_image, img, Size(), workscale, workscale). Finally I want an H' to apply to the resized img, but neither H1 nor H' (H1 * workscale) works. So I hope someone can help me get the correct H' that can be applied to the resized images. ^_^
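For the record, multiplying H1 by workscale cannot work, because a homography is only defined up to scale, so s*H1 describes exactly the same mapping. When both images are resized by the same factor s, the homography for the resized pair is S * H1 * S^{-1} with S = diag(s, s, 1). A minimal sketch:

#include <opencv2/core/core.hpp>

// Homography between the resized images, given H between the originals and the
// common resize factor s (both images resized by the same workscale).
cv::Mat scaleHomography(const cv::Mat& H, double s)
{
    cv::Mat S = (cv::Mat_<double>(3, 3) << s, 0, 0,
                                           0, s, 0,
                                           0, 0, 1);
    return S * H * S.inv();
}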

2014-01-07 22:27:55 -0600 received badge  Scholar (source)
2014-01-07 22:25:55 -0600 commented answer The relationship between the homography matrix and scaling images

Thanks, bro!! Your answer is correct; I have solved it ^_^