Aruco: Z-Axis flipping perspective

asked 2017-01-20 17:52:21 -0600

MrZander

I am trying to do some simple AR with Aruco tags and I am having trouble determining the correct perspective.

The problem occurs when it is unclear which side of the tag is closer to the camera.

For example, in the image, the two codes lie on the same plane and point in the same direction, but their z-axes point in different directions (the code on the bottom shows the correct orientation):

Image is posted in comments, I don't have high enough karma for links yet.

I am not doing anything fancy, just a simple detectMarkers call with drawAxis for the results.

What can be done to ensure I don't get these false perspective reads?


Comments

http://imgur.com/a/0fe6I

MrZander ( 2017-01-20 17:52:29 -0600 )

Have you calibrated the camera to get the distortion coefficients? It's not flipping the z-axis; both of those are right-handed. It just appears that way because of board curvature, image distortion, printing errors, or a combination of all of them.

Tetragramm ( 2017-01-20 18:07:09 -0600 )

@Tetragramm Yes, I have calibrated the camera; I will try re-calibrating in case it wasn't sufficient. The video flickers between the correct and incorrect direction, though I will say it is usually correct. The board is pretty darn straight, too. Is there any way to account for these errors in software? For example, if I know how the camera is oriented in the world, can I exclude (or fix) the incorrect marker result?
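
One cheap sanity check along those lines, assuming a visible marker must roughly face the camera, is to reject poses whose marker normal points away from it. A minimal sketch (isPosePlausible is a hypothetical helper; it only catches grossly wrong poses, not the small ambiguous tilts discussed later in this thread):

#include <opencv2/calib3d.hpp>

// Sketch: a visible marker must face the camera, so its z-axis, rotated
// into camera coordinates, has to point against the camera-to-marker ray.
bool isPosePlausible(const cv::Vec3d& rvec, const cv::Vec3d& tvec) {
  cv::Mat R;
  cv::Rodrigues(rvec, R);  // marker-to-camera rotation
  cv::Mat z = (cv::Mat_<double>(3, 1) << 0, 0, 1);
  cv::Mat normal = R * z;  // marker z-axis in camera coordinates
  cv::Mat ray = (cv::Mat_<double>(3, 1) << tvec[0], tvec[1], tvec[2]);
  return normal.dot(ray) < 0;  // negative: marker faces the camera
}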

MrZander ( 2017-01-20 18:25:13 -0600 )

I dunno. I can't be sure without seeing the original image, but those markers don't look square. Maybe it's image distortion, maybe it's a bent board, maybe they're printed wrong, but it just looks skewed.

Tetragramm ( 2017-01-21 00:57:57 -0600 )

Hi, I am having exactly the same problem. I have 4 markers which are co-planar. Most of the time 3 are OK and one is "flipping" back and forth. Looking at the tvecs and rvecs, they are quite similar for the OK markers, while the flipped one is different. Since these vectors are estimated internally by solvePnP, I suspect the bug lies in that direction; however, solvePnP is way too complicated to change. I am looking in the same direction as MrZander, to exclude those wrong pose estimations, but how? Any ideas?

@MrZander: Do you have a solution?

chnbr ( 2017-02-06 06:33:41 -0600 )

Can you post two images (one flipped and one not) and the relevant section of code so I can test?

Tetragramm ( 2017-02-06 18:11:11 -0600 )

1 answer

answered 2017-02-07 06:16:46 -0600

chnbr

updated 2017-02-07 06:18:50 -0600

Hi,

thanks for trying to help. Links to the pics are given in my comment below (thanks to the 'karma' rule):

Below are two examples of the rvecs. Interestingly, you can 'see' when it goes wrong by comparing the values of the correct vectors with the wrong one, see below. Both examples were taken from the exact same scene, one flipped, one normal.

Normal frame:
rvec[0]: [3.0265, 0.19776, -0.32671]
rvec[1]: [2.9941, 0.20444, -0.39449]
rvec[2]: [3.1457, 0.18338, -0.41779]
rvec[3]: [3.1319, 0.17826, -0.39777]


Another frame, same scene but z-axis flipped:
rvec[0]: [3.0238, 0.19807, -0.32165]
rvec[1]: [2.9897, 0.20444, -0.38867]
rvec[2]: [3.0786, 0.17836, -0.39851]
rvec[3]: [2.9951, 0.17127,  0.79585] // Wrong! Is the negative sign on the 3rd element missing?
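
Since the flipped marker's rvec stands out so clearly against its co-planar neighbours, one pragmatic workaround is a cross-marker consistency check. A minimal sketch, assuming all four markers share the same orientation (the helper names are illustrative):

#include <opencv2/calib3d.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Angle of the relative rotation between two rotation vectors.
static double rotationAngleBetween(const cv::Vec3d& r1, const cv::Vec3d& r2) {
  cv::Mat R1, R2;
  cv::Rodrigues(r1, R1);
  cv::Rodrigues(r2, R2);
  cv::Mat Rrel = R1.t() * R2;
  double tr = cv::trace(Rrel)[0];  // trace(R) = 1 + 2*cos(angle)
  return std::acos(std::min(1.0, std::max(-1.0, (tr - 1.0) / 2.0)));
}

// Flag every rvec whose mean angular distance to the others is too large.
static std::vector<bool> flagFlippedMarkers(const std::vector<cv::Vec3d>& rvecs,
                                            double maxMeanAngle /* rad, e.g. 0.35 */) {
  std::vector<bool> bad(rvecs.size(), false);
  for (size_t i = 0; i < rvecs.size(); i++) {
    double sum = 0.0;
    for (size_t j = 0; j < rvecs.size(); j++)
      if (j != i) sum += rotationAngleBetween(rvecs[i], rvecs[j]);
    bad[i] = (sum / (rvecs.size() - 1)) > maxMeanAngle;
  }
  return bad;
}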

The relevant code follows here. The code is somewhat condensed (I stripped out the unnecessary parts), but there is nothing special about it, just normal pose estimation.

void display() {
      std::vector< int > ids;
      std::vector< std::vector< Point2f > > corners, rejected;
      std::vector< Vec3d > rvecs, tvecs;

      Ptr<aruco::Dictionary> dictionary = aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
      Ptr<aruco::DetectorParameters> detectorParams = aruco::DetectorParameters::create();
      detectorParams->doCornerRefinement = true; // enable sub-pixel corner refinement

      Mat image_copy;
      // variable 'image' contains the input image
      cv::cvtColor(image, image_copy, CV_BGRA2BGR);

      //Mat tmp = image_copy.clone();
      //undistort(tmp, image_copy, cameraMatrix, distCoeffs);

      // detect markers and estimate pose
      aruco::detectMarkers(image_copy, dictionary, corners, ids, detectorParams, rejected);

      if(ids.size() == 4) {
        // bring the 4 markers into a defined order: tl,tr,br,bl
        std::sort(corners.begin(), corners.end(), compare_y);
        std::sort(corners.begin(), corners.end()-2, compare_x1);
        std::sort(corners.begin()+2, corners.end(), compare_x2);

        // estimate all the poses at once
        rvecs.clear();
        tvecs.clear();
        float markerLength = 150; // mm
        estimatePoseSingleMarkers(corners, markerLength, cameraMatrix, distCoeffs, rvecs, tvecs);

        for(unsigned int i = 0; i < ids.size(); i++) {
          // draw axis systems for debugging
          aruco::drawAxis(image_copy, cameraMatrix, distCoeffs, rvecs[i], tvecs[i], 6.0 * markerLength);
        }
      }
    // display the image with the axes; 
    // Note: 'image' is the img that is displayed on screen i.e. the output image
    cvtColor(image_copy, image, CV_BGR2BGRA);
}


void getSingleMarkerObjectPoints(float markerLength, OutputArray _objPoints) {
  CV_Assert(markerLength > 0);
  _objPoints.create(4, 1, CV_32FC3);
  Mat objPoints = _objPoints.getMat();
  // set the coordinate system at the top-left corner of the marker, with Z pointing out of the marker plane
  objPoints.ptr< Vec3f >(0)[0] = Vec3f(0, 0, 0);
  objPoints.ptr< Vec3f >(0)[1] = Vec3f(markerLength, 0, 0);
  objPoints.ptr< Vec3f >(0)[2] = Vec3f(markerLength, -markerLength, 0);
  objPoints.ptr< Vec3f >(0)[3] = Vec3f(0, -markerLength, 0);
}


void estimatePoseSingleMarkers(InputArrayOfArrays _corners, float markerLength,
                               InputArray _cameraMatrix, InputArray _distCoeffs,
                               OutputArray _rvecs, OutputArray _tvecs) {
  CV_Assert(markerLength > 0);
  Mat markerObjPoints;
  getSingleMarkerObjectPoints(markerLength, markerObjPoints);
  int nMarkers = (int)_corners.total();
  _rvecs.create(nMarkers, 1, CV_64FC3);
  _tvecs.create(nMarkers, 1, CV_64FC3);
  Mat rvecs = _rvecs.getMat(), tvecs = _tvecs.getMat();
  // for each marker, calculate its pose
  for (int i = 0; i < nMarkers; i++) {
    solvePnP(markerObjPoints, _corners.getMat(i), _cameraMatrix, _distCoeffs,
             rvecs.at<Vec3d>(i), tvecs.at<Vec3d>(i));
  }
}
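
For anyone hitting this later: OpenCV 4.1 and newer address this planar ambiguity directly. cv::solvePnPGeneric with cv::SOLVEPNP_IPPE_SQUARE returns both ambiguous solutions together with their reprojection errors, so the caller can pick or reject explicitly. A sketch under that assumption (estimatePoseIPPE is a hypothetical replacement for the per-marker solvePnP call above):

#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch, assuming OpenCV >= 4.1. SOLVEPNP_IPPE_SQUARE is specialized for
// square planar markers; the object points must be exactly these four
// corners, centred on the marker, in this order.
void estimatePoseIPPE(const std::vector<cv::Point2f>& corners, float markerLength,
                      const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                      cv::Vec3d& rvec, cv::Vec3d& tvec) {
  const float h = markerLength / 2.0f;
  std::vector<cv::Point3f> objPoints = {
    {-h,  h, 0}, { h,  h, 0}, { h, -h, 0}, {-h, -h, 0}
  };
  std::vector<cv::Mat> rvecs, tvecs;
  cv::Mat reprojErrors;
  cv::solvePnPGeneric(objPoints, corners, cameraMatrix, distCoeffs,
                      rvecs, tvecs, false, cv::SOLVEPNP_IPPE_SQUARE,
                      cv::noArray(), cv::noArray(), reprojErrors);
  // Solutions are sorted by reprojection error. If the second error is almost
  // as small as the first, the pose is ambiguous: reject it or disambiguate
  // with scene knowledge instead of silently taking the wrong one.
  rvec = cv::Vec3d(rvecs[0]);
  tvec = cv::Vec3d(tvecs[0]);
}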

Any help is highly appreciated! Thanks

Chris


Comments

Images here:

https://s28.postimg.org/4gq2mdibx/snapshot_normal.jpg

https://s28.postimg.org/ff1c4k6x9/snapshot_flipped.jpg

chnbr ( 2017-02-07 06:18:05 -0600 )

Please have a look at these two screenshots as well. They show that it is not a pure z-axis sign flip!

https://s24.postimg.org/7xakfmr9h/snapshot_good.jpg

https://s24.postimg.org/6tqg3o6md/snapshot_flipped.jpg

The pics show the following: the red-green frame is the z=0 plane (where the markers are). The blue frame is the z=+1 m plane and the cyan one is the z=-1 m plane. In the first pic everything is as it should be. In the second, the pose estimation of the upper-left marker was corrupted. You can see that the z-axis is inverted; however, the projected points do not swap exactly. You can visually estimate from the good pic where the join points of blue and cyan should lie if it were an exact swap of the z-axis. However, it is not. There ...(more)

chnbr ( 2017-02-07 08:39:01 -0600 )

Are you sure the markers are totally flat? I can't quite tell, but the paper looks a little curved.

Does the "flipping" happen more when the marker is at the edges of the frame? That would point to distortion (not sure, I'm still new to CV).

Try turning on corner refinement and playing with cornerRefinementWinSize? solvePnP is only as good as the data it has to work with.

solvePnP also allows you to pass in rvec and tvec as initial guesses. Maybe you could check the angle to the camera and pass it in as an initial guess somehow; a rough sketch of the mechanism follows.
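
A sketch of that idea (prevRvec/prevTvec are hypothetical state carried over from the previous frame, imageCorners the current detection):

#include <opencv2/calib3d.hpp>

// Sketch: seed the iterative solver with the previous frame's pose so it
// converges to the nearby solution instead of the flipped one.
cv::Vec3d rvec = prevRvec, tvec = prevTvec;
cv::solvePnP(markerObjPoints, imageCorners, cameraMatrix, distCoeffs,
             rvec, tvec, true /* useExtrinsicGuess */, cv::SOLVEPNP_ITERATIVE);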

I recently switched to AprilTags (still using OpenCV's solvePnP for pose) and am getting much better results, but it's a little slow.

dpizzle ( 2017-02-07 10:05:53 -0600 )

You are right, the markers are not completely flat, but I have to cope with that in real life. Besides, I don't see a reason for the algorithm to go crazy because of such a slight curvature...

Flipping is independent of where the marker sits in the frame. I have done a quite good calibration (RMS < 0.20 px) over the whole FoV.

Corner refinement is already switched on. Playing with cornerRefinementWinSize is a good suggestion. Thanks.

I already tried feeding in initial guesses, but then the results went completely crazy. So far I haven't debugged why.

I will have a look into AprilTags. I didn't know about them yet, but they look good at first glance.

Thanks again

chnbr ( 2017-02-07 12:11:38 -0600 )

The most important thing is that it's not actually flipping the z-axis; it just appears that way. It's actually rotating by about 30 degrees, so that in perspective the blue axis appears to go into the image. I've put it into the VIZ module, and it's just a trick of perspective.

Picture of VIZ module

I'm afraid the only suggestion I have is that the markers might be a little small. I'd print them larger and see if that helps.

I like your code though, very clean.

Tetragramm ( 2017-02-07 18:18:22 -0600 )

@chnbr @Tetragramm We eventually fixed this by increasing the size of our codes... We still have the issue sometimes, but are filtering it out using temporal smoothing. It is working okay; unfortunately, I think the only way to truly fix something like this is to use stereo vision. I've tried a lot of different methods in an attempt to "pick" the right perspective, but none really worked aside from the temporal approach. Thank you for the help though, Tetra.
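
A minimal sketch of that kind of temporal filtering (the threshold is illustrative; rotationAngleBetween is the helper from the consistency-check sketch in the answer above):

#include <opencv2/core.hpp>

// Sketch: per-marker temporal rejection. If the new rotation jumps too far
// from the last accepted one between consecutive frames, keep the old pose.
struct PoseFilter {
  bool hasPrev = false;
  cv::Vec3d lastRvec, lastTvec;
  double maxJumpRad = 0.5;  // ~30 degrees per frame; tune to your frame rate

  void update(cv::Vec3d& rvec, cv::Vec3d& tvec) {
    if (hasPrev && rotationAngleBetween(lastRvec, rvec) > maxJumpRad) {
      rvec = lastRvec;  // reject the jump and reuse the last good pose
      tvec = lastTvec;
    } else {
      lastRvec = rvec; lastTvec = tvec; hasPrev = true;
    }
  }
};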

MrZander ( 2017-02-07 18:36:16 -0600 )

The other thing would be: if you know the board with the four markers is rigid, take a median/average over the markers. That's what the ChArUco board does; it combines the results from all of the markers.

It doesn't help MrZander much, since he only had 2 markers, but it should help chnbr.
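
With the aruco contrib module this boils down to describing the rigid layout as a board and estimating one joint pose. A sketch, assuming the four markers form a regular 2x2 grid and reusing the variable names from the answer's code (for an arbitrary rigid layout, a custom cv::aruco::Board built from measured corner coordinates works the same way):

#include <opencv2/aruco.hpp>

// Sketch: one joint pose from all detected markers of a known rigid layout.
// The grid parameters (2x2 grid, 150 mm markers, 50 mm gaps) are illustrative.
cv::Ptr<cv::aruco::GridBoard> board =
    cv::aruco::GridBoard::create(2, 2, 150.0f, 50.0f, dictionary);
cv::Vec3d boardRvec, boardTvec;
int used = cv::aruco::estimatePoseBoard(corners, ids, board,
                                        cameraMatrix, distCoeffs,
                                        boardRvec, boardTvec);
// 'used' is the number of markers that contributed; a joint solution over
// 3-4 markers is much less prone to the single-marker ambiguity.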

Tetragramm ( 2017-02-07 19:01:44 -0600 )

I think I know what the problem is. Our markers are square, which means they are rotationally symmetric under 0, 90, 180 and 270 degree rotations. Sure, the upper-left corner of the marker is known and the Aruco code takes care of that, BUT this information is not passed into solvePnP. All that solvePnP sees is a square shape with 4 corners ordered clockwise, starting top left. Since solvePnP is an RMS optimizer, there are four possible solutions where the RMS error is minimal. Most of the time it returns the wanted rotation of 0 degrees, and sometimes one of the others. This 'theory' is supported by the observation that the flips occur more often in cases where the camera looks nearly perpendicularly onto the marker. [contd. in next comment]

chnbr ( 2017-02-08 05:39:57 -0600 )

This leads to the situation that the marker poses are estimated better when looked at from the 'side', or at least not 'frontally'. So, what could we do?

1. Make the markers slightly rectangular (not square). We would get rid of the 90/270 degree ambiguity, but this is not a general solution.

2. Pass another (5th) point into solvePnP which breaks the 0/90/180/270 degree rotational symmetry of the square. In fact this worked in some configurations, but the problem is that we don't have the object-point-to-image-point correspondence for this 5th point. It needs to be known, however, otherwise the pose estimation is no longer correct. This could be done internally in aruco if someone takes care of the detection details and passes out a 5th point with known object coordinates.

Other ideas?

chnbr ( 2017-02-08 05:50:24 -0600 )

Here are some more discussions of the same issue:

https://github.com/chili-epfl/chilitags/issues/19

https://github.com/chili-epfl/chilitags/issues/82

dpizzle ( 2017-02-08 10:01:10 -0600 )


Stats

Asked: 2017-01-20 17:52:21 -0600

Seen: 5,593 times

Last updated: Feb 07 '17