
Josep Bosch's profile - activity

2016-11-11 06:13:54 -0600 received badge  Necromancer (source)
2014-02-17 09:39:44 -0600 answered a question How can I reference a standalone OpenCV installation in spite of having another version of OpenCV in ROS

I had the same problem. The thing is, ROS sets the CPATH variable to something like:

/home/jep/catkin_ws/devel/include:/opt/ros/groovy/include

Then when compiling, gcc always finds the ROS OpenCV headers instead of the standalone ones... So I ended up just deleting CPATH (e.g. running `unset CPATH` in the shell) whenever I need to compile something against the standalone OpenCV version.

2013-12-17 03:47:49 -0600 asked a question Pose from Fundamental matrix and vice versa

I have computed the fundamental matrix between two cameras using OpenCV's findFundamentalMat. Then I plot the epipolar lines in the image, and I get something like:

Epipolar lines ok
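For context, the lines are drawn roughly like this (a sketch, not my exact code; pts1/pts2 are the matched float32 point arrays and img2 is the second image):

    import cv2
    import numpy as np

    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

    # Epipolar lines in image 2 for the points of image 1: l' = F * x
    lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)
    h2, w2 = img2.shape[:2]
    for a, b, c in lines2.reshape(-1, 3):
        # a*x + b*y + c = 0, drawn across the full image width
        p0 = (0, int(round(-c / b)))
        p1 = (w2, int(round(-(c + a * w2) / b)))
        cv2.line(img2, p0, p1, (0, 255, 0), 1)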

Now, I tried to get the pose from that fundamental matrix, first computing the essential matrix and then using the Hartley & Zisserman approach.

    K2 = np.mat(self.calibration.getCameraMatrix(1))
    K1 = np.mat(self.calibration.getCameraMatrix(0))
    E = K2.T * np.mat(F) * K1  # E = K2^T * F * K1

    # Decompose E following Hartley & Zisserman
    w, u, vt = cv2.SVDecomp(np.mat(E))
    if np.linalg.det(u) < 0:
        u *= -1.0
    if np.linalg.det(vt) < 0:
        vt *= -1.0
    # Find R and T from Hartley & Zisserman
    W = np.mat([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    R = np.mat(u) * W * np.mat(vt)
    T = u[:, 2]  # u3; unit norm from the SVD, so the translation is up to scale

In order to check that everything up to here was correct, I recomputed E and F and plotted the epipolar lines again.

    T = np.asarray(T).ravel()  # flatten so the components can be indexed
    S = np.mat([[0, -T[2], T[1]], [T[2], 0, -T[0]], [-T[1], T[0], 0]])  # skew matrix [T]_x
    E = S * np.mat(R)
    F = np.linalg.inv(K2).T * np.mat(E) * np.linalg.inv(K1)

But, surprise: the lines have moved and don't go through the points anymore. Have I done something wrong?

epilines bad

It looks similar to this question: http://answers.opencv.org/question/18565/pose-estimation-produces-wrong-translation-vector/

The matrices I get are:

Original F=[[ -1.62627683e-07  -1.38840952e-05   8.03246936e-03]
 [  5.83844799e-06  -1.37528349e-06  -3.26617731e-03]
 [ -1.15902181e-02   1.23440336e-02   1.00000000e+00]]

E=[[-0.09648757 -8.23748182 -0.6192747 ]
 [ 3.46397143 -0.81596046  0.29628779]
 [-6.32856235 -0.03006961 -0.65380443]]

R=[[  9.99558381e-01  -2.72074658e-02   1.19497464e-02]
  [  3.50795548e-04   4.12906861e-01   9.10773189e-01]
  [ -2.97139627e-02  -9.10366782e-01   4.12734058e-01]]

T=[[-8.82445166e-02]
 [8.73204425e-01]
 [4.79298380e-01]]

Recomputed E=
[[-0.0261145  -0.99284189 -0.07613091]
 [ 0.47646462 -0.09337537  0.04214901]
 [-0.87284976 -0.01267909 -0.09080531]]

Recomputed F=
[[ -4.40154169e-08  -1.67341327e-06   9.85070691e-04]
 [  8.03070680e-07  -1.57382143e-07  -4.67389530e-04]
 [ -1.57927152e-03   1.47100268e-03   2.56606003e-01]]
2013-11-29 09:14:14 -0600 commented question Selecting pixel with mouse

That could be the reason... But it happens as well with the whole icon inside the same pixel!

2013-11-29 08:20:40 -0600 asked a question Selecting pixel with mouse

I'm using the cv2.setMouseCallback function to select a pixel of an image shown in a window. The callback function returns x and y integers that represent the position of the pixel in the image, but paying attention to its behaviour, it seems to me that it doesn't return the pixel you are over, but rather the rounded value of a point on an imaginary axis.

image description

If you look at the first two images, in both the mouse is over pixel (0,0), but the result is different if you move closer to other pixels.

Ok. I know in a real image the error is insignificant, but is this a bug?

   cv2.namedWindow('image', cv2.WINDOW_NORMAL)        # Can be resized
   cv2.resizeWindow('image', self.w, self.h)          # Reasonable size window
   cv2.setMouseCallback('image', self.mouse_callback) # Mouse callback
   while(not self.finished):
      cv2.imshow('image', self.img)
      k = cv2.waitKey(4) & 0xFF
      if k == 27:
         break
   cv2.destroyAllWindows()


   # mouse callback function
   def mouse_callback(self,event,x,y,flags,param):
      if event == cv2.EVENT_LBUTTONDOWN:
         print x, y
2013-11-26 05:22:52 -0600 commented answer rectify fisheye stereo setup

Hi Jenseb. How did you manage to solve your situation? Did you finally implement these constraints? I had a look at the code, but it doesn't seem easy to implement to me...

2013-11-26 05:18:19 -0600 commented answer rectify fisheye stereo setup

Would it be possible to share the piece of code you modified somewhere, Kristian? It would help other users a lot. Cheers

2013-11-21 11:13:08 -0600 commented answer Stereo Rectification - Rectified image larger than original one

Invalid means that there will be black pixels, as they aren't in the original images... Alpha should go from 0 to 1; -1 is like a default value. I don't get which part of the image you want to cut...

2013-11-21 09:05:34 -0600 answered a question Stereo Rectification - Rectified image larger than original one

What is happening is that only the valid points of the images are shown. Have a look at the documentation of the alpha parameter of stereoRectify. If you set alpha to 1 you will see all the original pixels in the rectified images.

StereoRectifyDoc
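For example (a rough sketch of the Python call; K1, dist1, K2, dist2, R, T are placeholders coming from a previous stereoCalibrate):

    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        K1, dist1, K2, dist2, (w, h), R, T,
        alpha=1)  # alpha=1 keeps all original pixels (black borders may appear)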

2013-11-21 08:37:26 -0600 asked a question Wide angle lenses calibration with Opencv

I'm using a wide-angle lens (178º diagonal FOV) and I'm trying to calibrate it properly using the OpenCV calibration module. The detection and the calibration process both run fine, but the result is very poor.

I have tried many different configurations:

  • Different sets of images
  • Different numbers of radial coefficients: 2, 3, 4, 5, even 6 (CV_CALIB_FIX_K1, ..., CV_CALIB_FIX_K6)
  • Fixing the aspect ratio and principal point, and setting tangential distortion to 0 (CV_CALIB_FIX_ASPECT_RATIO, CV_CALIB_FIX_PRINCIPAL_POINT, CV_CALIB_ZERO_TANGENT_DIST)
  • Using the expected focal length as the initial camera matrix (CV_CALIB_USE_INTRINSIC_GUESS)
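
The calls look roughly like this (a minimal sketch; objpoints/imgpoints are the detected pattern points, and the flag combination is just one of those listed above):

    flags = (cv2.CALIB_FIX_PRINCIPAL_POINT |
             cv2.CALIB_ZERO_TANGENT_DIST |
             cv2.CALIB_RATIONAL_MODEL)  # rational model enables k4..k6
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        objpoints, imgpoints, (w, h), None, None, flags=flags)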

The best I can get is something like: Bad calibration

Any ideas about how I could get a good calibration? Do you think using two calibration patterns at the same time, or using a circles grid as the calibration pattern, would help?

I've also tried the OpenCV 3.0 thin prism coefficients, but they didn't help.

2013-09-10 04:48:23 -0600 received badge  Supporter (source)
2013-09-04 02:34:51 -0600 asked a question projectPoints fails with points behind the camera

I'm using the projectPoints OpenCV function to get the projection of a 3D point onto a camera's image plane.

    cv::projectPoints(inputPoint, rvec, tvec, cameraMatrix, distCoeffsMat, outputPoint); // the 4th argument is the intrinsic matrix K

The problem I'm facing is that when Z (in the camera's local frame) is negative, instead of returning a point outside the image boundaries, it returns the symmetric point (as if Z were positive). I was expecting that function to check for positive Z values...

I can check this manually myself, but is there a better way?
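
The manual check I mean looks roughly like this (a Python sketch; X, K, dist and the pose are placeholder names):

    import cv2
    import numpy as np

    R, _ = cv2.Rodrigues(rvec)       # rotation vector -> 3x3 matrix
    p_cam = R.dot(X) + tvec.ravel()  # the point in the camera frame
    if p_cam[2] > 0:                 # only project points in front of the camera
        pts, _ = cv2.projectPoints(X.reshape(1, 1, 3), rvec, tvec, K, dist)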

Thanks!

2013-07-11 07:21:14 -0600 received badge  Editor (source)
2013-07-11 07:18:27 -0600 commented answer Merge separated bayer channels.

Thanks Alberto, but it's not that. I think it's actually the other way around: what I want to do is rebuild the original Bayer image from the JPEGs in the most efficient way possible.

2013-07-08 10:19:17 -0600 received badge  Student (source)
2013-07-08 09:06:56 -0600 asked a question Merge separated bayer channels.

Hi,

I have a camera that gives me 4 separate JPEG images for the 4 different Bayer channels (B, G1, G2, R).

I want to transform this into a colour image.

What I'm doing at the moment is decompressing the JPEGs, restoring the "original" Bayer image manually, and converting it to a colour image using cvtColor. But this is too slow. How could I do it better?

        cv::Mat imgMat[4]; // one Mat per Bayer channel, each 616x808 (height, width)
        for (k=0;k<4;k++) {
            ........
            imgMat[k] = cv::imdecode(buffer, CV_LOAD_IMAGE_GRAYSCALE);
        }
        //Reconstruct the original image from the four channels! RGGB
        cv::Mat Reconstructed=cv::Mat::zeros(1232, 1616, CV_8U);
        int x,y;
        for(x=0;x<1616;x++){
            for(y=0;y<1232;y++){
                if(y%2==0){
                    if(x%2==0){
                        //R
                        Reconstructed.at<uint8_t>(y,x)=imgMat[0].at<uint8_t>(y/2,x/2);
                    }
                    else{
                        //G1
                        Reconstructed.at<uint8_t>(y,x)=imgMat[1].at<uint8_t>(y/2,floor(x/2));
                    }
                }
                else{
                    if(x%2==0){
                        //G2
                        Reconstructed.at<uint8_t>(y,x)=imgMat[2].at<uint8_t>(floor(y/2),x/2);
                    }
                    else{
                        //B
                        Reconstructed.at<uint8_t>(y,x)=imgMat[3].at<uint8_t>(floor(y/2),floor(x/2));
                    }
                }
            }
        }
        //Debayer
        cv::Mat ReconstructedColor;
        cv::cvtColor(Reconstructed, ReconstructedColor, CV_BayerBG2BGR);
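
As a side note, the per-pixel loop above could be replaced by strided copies. A rough NumPy sketch of the same interleave (r, g1, g2, b stand for the four decoded channel images, same RGGB layout as above):

    import cv2
    import numpy as np

    mosaic = np.zeros((1232, 1616), dtype=np.uint8)
    mosaic[0::2, 0::2] = r     # R  (imgMat[0])
    mosaic[0::2, 1::2] = g1    # G1 (imgMat[1])
    mosaic[1::2, 0::2] = g2    # G2 (imgMat[2])
    mosaic[1::2, 1::2] = b     # B  (imgMat[3])
    color = cv2.cvtColor(mosaic, cv2.COLOR_BayerBG2BGR)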

Edit: It seems that most of the time is spent decoding the JPEG images. Has anybody found a workaround for this?