chnbr's profile - activity

2020-10-04 12:17:12 -0600 received badge  Nice Question (source)
2019-12-18 13:12:46 -0600 asked a question detectMultiScale(...) internal principle ?

detectMultiScale(...) internal principle ? Hi, I am using detectMultiScale on src image sizes around 960x540 with a LBP

2019-09-04 15:03:53 -0600 received badge  Notable Question (source)
2018-11-11 13:10:41 -0600 received badge  Popular Question (source)
2017-06-01 15:37:41 -0600 commented answer Traffic Sign Recognition Concept

Thanks for your suggestions. Meanwhile I have a cascade classifier running for finding ROIs and a CNN for classifying the sign. The CNN works fine; however, I still have problems with the cascade classifier. No matter how I train, it always detects "round signs" much better than "triangle signs" or "filled signs". I have trained many cascades with positive image counts from 1000 up to 40000, and the same numbers of negatives. I tried to equalize the distribution between these sign types (to avoid getting biased towards the better-represented signs). Question: is it possible at all to train a _single_ cascade for "round", "quadratic" and "triangle" features altogether? If not, I am stuck, because I don't have the computation power to do 3 or 4 detect_multiscale calls. Any suggestions?

2017-05-11 04:28:17 -0600 received badge  Student (source)
2017-05-11 01:37:36 -0600 asked a question Traffic Sign Recognition Concept

Hi,

I am working on a traffic sign recognition project and have tried several different approaches with no luck.

Goals

  • detect _all_ traffic signs in the frame ("all" refers to all sign classes, not to detection statistics)
  • discriminate round speed limit signs from other signs
  • recognize speed limits

Basic approach

My approach is the classical two-phase one: cascade classification is run on the entire frame to detect potential sign ROIs, followed by a second recognition phase on the ROIs only.

Status quo: I have trained a CascadeClassifier which detects signs quite well. It is trained on all traffic signs and delivers a certain (but tolerable) amount of false detections, which I want to rule out in the recognition phase. ROIs are square, ranging between 40 and 90 pixels, in color. I decided to do all processing on gray images only, due to CPU limitations and the requirement to work at night as well.
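
For illustration, a minimal sketch of this first phase; the cascade file name and the detectMultiScale parameters are assumptions, not the settings actually used here:

#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Rect> detectSignROIs(const cv::Mat& frameBGR)
{
    // "signs_cascade.xml" is a placeholder for the trained cascade file
    static cv::CascadeClassifier cascade("signs_cascade.xml");

    cv::Mat gray;
    cv::cvtColor(frameBGR, gray, cv::COLOR_BGR2GRAY); // all processing on gray, as described
    cv::equalizeHist(gray, gray);

    std::vector<cv::Rect> rois;
    // min/max sizes match the 40-90 px ROI range mentioned above;
    // scaleFactor 1.1 and minNeighbors 3 are typical starting values
    cascade.detectMultiScale(gray, rois, 1.1, 3, 0,
                             cv::Size(40, 40), cv::Size(90, 90));
    return rois;
}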


Problem

My problem is the recognition phase. I describe what I tried so far:

a) "Multi Cascade Classifier": I trained several Cascade Classifiers, each on a particular speed limit class, e.g. one for the 50s, on the 60s and so on. This worked somehow but performance was bad. Main problems:

  • 30 and 80 signs (and others as well) got confused.
  • I am not able to tell whether my ROI shows a speed limit at all, because this approach always delivers a result, even when I did not feed in a speed limit sign.

b) "Features2D": I tried feature based classifiers, "ORB followed by Knn brute-force-hamming" and "SIFT followed by Knn brute force". I used the Lowe criterion but the discrimination was again not good enough:

  • signs still got confused
  • no possibility to reject "non speed limit" signs
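
A minimal sketch of this matching pipeline; the helper name, the template image argument and the 0.75 ratio threshold are illustrative assumptions:

#include <opencv2/features2d.hpp>
#include <vector>

// counts ORB matches between an ROI and a sign template that survive
// Lowe's ratio test (brute-force Hamming, k=2 nearest neighbours)
int countGoodMatches(const cv::Mat& roiGray, const cv::Mat& templGray)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(roiGray, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(templGray, cv::noArray(), kp2, desc2);
    if (desc1.empty() || desc2.empty()) return 0;

    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    int good = 0;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) // Lowe criterion
            ++good;
    return good;
}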

c) "Neuronal net": I trained a Convolutional Neuronal Net with 2 convolution layers and 2 FC layers based on the 9 speed limit sign classes. It somehow worked but again with the same problems a the other approaches. In addition a huge computational burden comes with this. So I would exclude solution (c) for the future.

Question

  • What concept would you recommend to solve the problem?
  • Is there a benefit from bringing color into the game?
  • Do you think, from your experience, that a) or b) should do the job, and if so, which one?
  • Any suggestions for completely different approaches?

Remark: I have read nearly everything that deals with this problem on the net...

Thanks and regards Chris

2017-02-11 06:08:02 -0600 commented answer How can I link with native OpenCV in Android studio

I got the armeabi-v7a directory. This seems to be for ARM-based devices. How do I compile so that I can run on the emulator? I think I need to set APP_ABI, but where, and to what?

Thanks Chris

2017-02-11 03:37:18 -0600 received badge  Enthusiast
2017-02-10 12:47:39 -0600 commented answer How can I link with native OpenCV in Android studio

To what should I set <library name="">? Is it arbitrary, as long as it is the same in all locations where it occurs?

2017-02-08 05:50:24 -0600 commented answer Aruco: Z-Axis flipping perspective

This leads to the situation that the marker poses are better estimated when looked at from the 'side', or at least not 'frontally'. So, what could we do? 1. Make the markers slightly rectangular (not square). We would get rid of the 90/270-degree ambiguity, but this is not a general solution. 2. Pass another (5th) point into solvePNP which breaks the 0/90/180/270-degree rotational symmetry of the marker square. In fact this worked in some constellations, but the problem is that we don't have the object-point-to-image-point correspondence for this 5th point. It needs to be known, however, otherwise the pose estimation is no longer correct. This could be done internally in Aruco, if someone takes care of the detection details and passes out the 5th point with known object coordinates.

Other ideas ?

2017-02-08 05:39:57 -0600 commented answer Aruco: Z-Axis flipping perspective

I think I know what the problem is. The problem is that our markers are square, which means they are rotationally symmetric with respect to 0, 90, 180 and 270 degrees. Sure, the upper-left corner of the marker is known and the Aruco code takes care of that, BUT this information is not passed into solvePNP. All that solvePNP sees is a square shape with 4 corners ordered clockwise, starting top left. Since solvePNP is an RMS optimizer, there are four possible solutions where the RMS error goes minimal. Most of the time it returns the wanted rotation of 0 degrees, and sometimes one of the others. This 'theory' is supported by the observation that the flips occur more often in cases where the camera looks nearly perpendicularly onto the marker. [contd. in next comment]
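
As a side note for later readers: newer OpenCV releases (4.1+) expose this planar ambiguity explicitly. cv::solvePnPGeneric with SOLVEPNP_IPPE_SQUARE returns the candidate poses of a square marker sorted by reprojection error, so ambiguous detections can be rejected instead of flipping back and forth. A sketch; the acceptance ratio of 0.6 is an assumption to be tuned:

#include <opencv2/calib3d.hpp>
#include <vector>

bool pickUnambiguousPose(const std::vector<cv::Point2f>& corners, // tl,tr,br,bl from detectMarkers
                         float markerLength,
                         const cv::Mat& K, const cv::Mat& dist,
                         cv::Mat& rvec, cv::Mat& tvec)
{
    const float h = markerLength / 2.f;
    // corner order required by SOLVEPNP_IPPE_SQUARE (centered on the marker, z = 0)
    std::vector<cv::Point3f> obj = { {-h,  h, 0}, { h,  h, 0},
                                     { h, -h, 0}, {-h, -h, 0} };
    std::vector<cv::Mat> rvecs, tvecs;
    std::vector<double> err; // reprojection error per candidate, ascending
    cv::solvePnPGeneric(obj, corners, K, dist, rvecs, tvecs,
                        false, cv::SOLVEPNP_IPPE_SQUARE,
                        cv::noArray(), cv::noArray(), err);
    if (rvecs.empty()) return false;
    // reject if the best candidate is not clearly better than the runner-up
    if (err.size() > 1 && err[0] > 0.6 * err[1]) return false;
    rvec = rvecs[0];
    tvec = tvecs[0];
    return true;
}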

2017-02-07 12:11:38 -0600 commented answer Aruco: Z-Axis flipping perspective

You are right, the markers are not completely flat, but I have to cope with that in real life. Furthermore, I don't see a reason for the algorithm to go crazy because of a minimal convexity...

Flipping is independent of the margins. I have quite a good calibration (RMS < 0.20 px) over the whole FoV.

Corner refinement is already switched on. Playing with cornerRefinementWinSize is a good suggestion. Thanks.
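
For anyone tuning this: the refinement knobs live in DetectorParameters (the old-style API with doCornerRefinement, as used in this thread). The values below are starting points to experiment with, not settings recommended anywhere in this thread:

#include <opencv2/aruco.hpp>
using namespace cv;

Ptr<aruco::DetectorParameters> makeTunedParams() {
    Ptr<aruco::DetectorParameters> p = aruco::DetectorParameters::create();
    p->doCornerRefinement = true;           // enable subpixel corner refinement
    p->cornerRefinementWinSize = 7;         // default is 5; a larger window may stabilize corners
    p->cornerRefinementMaxIterations = 50;  // allow more iterations than the default
    p->cornerRefinementMinAccuracy = 0.05;  // tighter stopping criterion
    return p;
}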

I already tried to feed in initial guesses, but then the results got completely crazy. So far I haven't debugged why this is so.

I will have a look into AprilTags. I didn't know about them yet, but they look good at first glance.

Thanks again

2017-02-07 08:39:01 -0600 commented answer Aruco: Z-Axis flipping perspective

Please have a look at these two screenshots as well. They show that it is not a pure z-axis sign flip!

https://s24.postimg.org/7xakfmr9h/snapshot_good.jpg

https://s24.postimg.org/6tqg3o6md/snapshot_flipped.jpg

The pics show the following: the red-green frame is the z=0 plane (where the markers are). The blue frame is the z=+1 m plane and the cyan one is the z=-1 m plane. In the first pic everything is as it should be. In the second, the pose estimation of the upper-left marker was corrupted. You can see that the z-axis is inverted; however, the projection points do not swap exactly. You can visually estimate from the good pic where the join points of blue and cyan should lie if it were an exact swap of the z-axis. However, it is not. There ... (more)

2017-02-07 06:18:50 -0600 received badge  Editor (source)
2017-02-07 06:18:05 -0600 commented answer Aruco: Z-Axis flipping perspective

Images here:

https://s28.postimg.org/4gq2mdibx/snapshot_normal.jpg

https://s28.postimg.org/ff1c4k6x9/snapshot_flipped.jpg

2017-02-07 06:16:46 -0600 answered a question Aruco: Z-Axis flipping perspective

Hi,

thanks for trying to help. Links to the pics are given in my comment below (thanks to 'karma' restrictions):

Now, below are two examples of the rvecs. Interestingly, you can 'see' when it goes wrong by comparing the values of the correct vectors with the wrong one, see below... Both examples were taken from the exact same scene, one flipped, one normal.

Normal frame:
rvec[0]: [3.0265, 0.19776, -0.32671]
rvec[1]: [2.9941, 0.20444, -0.39449]
rvec[2]: [3.1457, 0.18338, -0.41779]
rvec[3]: [3.1319, 0.17826, -0.39777]


Another frame, same scene but z-axis flipped:
rvec[0]: [3.0238, 0.19807, -0.32165]
rvec[1]: [2.9897, 0.20444, -0.38867]
rvec[2]: [3.0786, 0.17836, -0.39851]
rvec[3]: [2.9951, 0.17127,  0.79585] //  Wrong !!! Negative sign on 3rd element is missing ?

The relevant code follows here. The code is somewhat "compressed", i.e. I stripped out unnecessary parts... but anyhow there is nothing special, just normal pose estimation.

void display() {
      std::vector< int > ids;
      std::vector< std::vector< Point2f > > corners, rejected;
      std::vector< Vec3d > rvecs, tvecs;

      Ptr<aruco::Dictionary> dictionary = aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
      Ptr<aruco::DetectorParameters> detectorParams = aruco::DetectorParameters::create();
      detectorParams->doCornerRefinement = true;

      Mat image_copy;
      // variable 'image' contains the input image
      cv::cvtColor(image, image_copy, CV_BGRA2BGR);

      //Mat tmp = image_copy.clone();
      //undistort(tmp, image_copy, cameraMatrix, distCoeffs);

      // detect markers and estimate pose
      aruco::detectMarkers(image_copy, dictionary, corners, ids, detectorParams, rejected);

      if(ids.size() == 4) {
        // bring the 4 markers into a defined order: tl,tr,br,bl
        std::sort(corners.begin(), corners.end(), compare_y);
        std::sort(corners.begin(), corners.end()-2, compare_x1);
        std::sort(corners.begin()+2, corners.end(), compare_x2);

        // estimate all the poses at once
        rvecs.clear();
        tvecs.clear();
        float markerLength = 150; // mm
        estimatePoseSingleMarkers(corners, markerLength, cameraMatrix, distCoeffs, rvecs, tvecs);

        for(unsigned int i = 0; i < ids.size(); i++) {
          // draw axis systems for debugging
          aruco::drawAxis(image_copy, cameraMatrix, distCoeffs, rvecs[i], tvecs[i], 6.0 * markerLength);
        }
    }
    // display the image with the axes;
    // note: 'image' is the image that is shown on screen, i.e. the output image
    cvtColor(image_copy, image, CV_BGR2BGRA);
}


void getSingleMarkerObjectPoints(float markerLength, OutputArray _objPoints) {
  CV_Assert(markerLength > 0);
  _objPoints.create(4, 1, CV_32FC3);
  Mat objPoints = _objPoints.getMat();
  // set the coordinate system at the top-left marker corner, with Z pointing out of the marker plane
  objPoints.ptr< Vec3f >(0)[0] = Vec3f(0, 0, 0);
  objPoints.ptr< Vec3f >(0)[1] = Vec3f(markerLength, 0, 0);
  objPoints.ptr< Vec3f >(0)[2] = Vec3f(markerLength, -markerLength, 0);
  objPoints.ptr< Vec3f >(0)[3] = Vec3f(0, -markerLength, 0);
}


void estimatePoseSingleMarkers(InputArrayOfArrays _corners, float markerLength,
                               InputArray _cameraMatrix, InputArray _distCoeffs,
                               OutputArray _rvecs, OutputArray _tvecs) {
  CV_Assert(markerLength > 0);
  Mat markerObjPoints;
  getSingleMarkerObjectPoints(markerLength, markerObjPoints);
  int nMarkers = (int)_corners.total();
  _rvecs.create(nMarkers, 1, CV_64FC3);
  _tvecs.create(nMarkers, 1, CV_64FC3);
  Mat rvecs = _rvecs.getMat(), tvecs = _tvecs.getMat();
  // for each marker, calculate its pose
  for (int i = 0; i < nMarkers; i++) {
    solvePnP(markerObjPoints, _corners.getMat(i), _cameraMatrix, _distCoeffs,
             rvecs.at<Vec3d>(i), tvecs.at<Vec3d>(i));
  }
}
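
A possible sanity check for flagging such corrupted poses: after cv::Rodrigues, the third column of R is the marker's z-axis in camera coordinates; for a marker whose front side is visible, that axis should point back toward the camera, i.e. oppose tvec. A sketch, assuming the markers roughly face the camera:

#include <opencv2/calib3d.hpp>

bool poseLooksFlipped(const cv::Vec3d& rvec, const cv::Vec3d& tvec)
{
    cv::Matx33d R;
    cv::Rodrigues(rvec, R);
    // marker z-axis expressed in the camera frame
    cv::Vec3d zAxis(R(0, 2), R(1, 2), R(2, 2));
    // tvec points from the camera to the marker; a visible front face
    // should have its z-axis opposing that direction
    return zAxis.dot(tvec) > 0;
}

In the drawing loop above, poses for which poseLooksFlipped(rvecs[i], tvecs[i]) returns true could then simply be skipped instead of drawn.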

Any help is highly appreciated! Thanks

Chris

2017-02-07 06:14:58 -0600 commented question Aruco Z-axis randomly flipped

Image links here:

https://s28.postimg.org/4gq2mdibx/snapshot_normal.jpg

https://s28.postimg.org/ff1c4k6x9/snapshot_flipped.jpg

2017-02-07 06:14:36 -0600 asked a question Aruco Z-axis randomly flipped

Hi,

I am using Aruco markers and have a problem with the z-axis flipping randomly. First of all two pics of the situation:

please see the image links in my comment below.

Now, below are two examples of the rvecs. Interestingly, you can 'see' when it goes wrong by comparing the values of the correct vectors with the wrong one, see below... Both examples were taken from the exact same scene, one flipped, one normal.

Normal frame:
rvec[0]: [3.0265, 0.19776, -0.32671]
rvec[1]: [2.9941, 0.20444, -0.39449]
rvec[2]: [3.1457, 0.18338, -0.41779]
rvec[3]: [3.1319, 0.17826, -0.39777]


Another frame, same scene but z-axis flipped:
rvec[0]: [3.0238, 0.19807, -0.32165]
rvec[1]: [2.9897, 0.20444, -0.38867]
rvec[2]: [3.0786, 0.17836, -0.39851]
rvec[3]: [2.9951, 0.17127,  0.79585] //  Wrong !!! Negative sign on 3rd element is missing ?

The relevant code follows here. The code is somewhat "compressed", i.e. I stripped out unnecessary parts... but anyhow there is nothing special, just normal pose estimation.

void display() {
      std::vector< int > ids;
      std::vector< std::vector< Point2f > > corners, rejected;
      std::vector< Vec3d > rvecs, tvecs;

      Ptr<aruco::Dictionary> dictionary = aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
      Ptr<aruco::DetectorParameters> detectorParams = aruco::DetectorParameters::create();
      detectorParams->doCornerRefinement = true;

      Mat image_copy;
      // variable 'image' contains the input image
      cv::cvtColor(image, image_copy, CV_BGRA2BGR);

      //Mat tmp = image_copy.clone();
      //undistort(tmp, image_copy, cameraMatrix, distCoeffs);

      // detect markers and estimate pose
      aruco::detectMarkers(image_copy, dictionary, corners, ids, detectorParams, rejected);

      if(ids.size() == 4) {
        // bring the 4 markers into a defined order: tl,tr,br,bl
        std::sort(corners.begin(), corners.end(), compare_y);
        std::sort(corners.begin(), corners.end()-2, compare_x1);
        std::sort(corners.begin()+2, corners.end(), compare_x2);

        // estimate all the poses at once
        rvecs.clear();
        tvecs.clear();
        float markerLength = 150; // mm
        estimatePoseSingleMarkers(corners, markerLength, cameraMatrix, distCoeffs, rvecs, tvecs);

        for(unsigned int i = 0; i < ids.size(); i++) {
          // draw axis systems for debugging
          aruco::drawAxis(image_copy, cameraMatrix, distCoeffs, rvecs[i], tvecs[i], 6.0 * markerLength);
        }
    }
    // display the image with the axes;
    // note: 'image' is the image that is shown on screen, i.e. the output image
    cvtColor(image_copy, image, CV_BGR2BGRA);
}


void getSingleMarkerObjectPoints(float markerLength, OutputArray _objPoints) {
  CV_Assert(markerLength > 0);
  _objPoints.create(4, 1, CV_32FC3);
  Mat objPoints = _objPoints.getMat();
  // set the coordinate system at the top-left marker corner, with Z pointing out of the marker plane
  objPoints.ptr< Vec3f >(0)[0] = Vec3f(0, 0, 0);
  objPoints.ptr< Vec3f >(0)[1] = Vec3f(markerLength, 0, 0);
  objPoints.ptr< Vec3f >(0)[2] = Vec3f(markerLength, -markerLength, 0);
  objPoints.ptr< Vec3f >(0)[3] = Vec3f(0, -markerLength, 0);
}


void estimatePoseSingleMarkers(InputArrayOfArrays _corners, float markerLength,
                               InputArray _cameraMatrix, InputArray _distCoeffs,
                               OutputArray _rvecs, OutputArray _tvecs) {
  CV_Assert(markerLength > 0);
  Mat markerObjPoints;
  getSingleMarkerObjectPoints(markerLength, markerObjPoints);
  int nMarkers = (int)_corners.total();
  _rvecs.create(nMarkers, 1, CV_64FC3);
  _tvecs.create(nMarkers, 1, CV_64FC3);
  Mat rvecs = _rvecs.getMat(), tvecs = _tvecs.getMat();
  // for each marker, calculate its pose
  for (int i = 0; i < nMarkers; i++) {
    solvePnP(markerObjPoints, _corners.getMat(i), _cameraMatrix, _distCoeffs,
             rvecs.at<Vec3d>(i), tvecs.at<Vec3d>(i));
  }
}

I have ... (more)

2017-02-06 06:40:58 -0600 commented question Aruco: Z-Axis flipping perspective

Hi, I am having exactly the same problem. I have 4 markers which are co-planar. Most of the time 3 are OK and one is "flipping" back and forth. Looking at the tvecs and rvecs, they are quite similar for the OK markers, and the flipped one is different. Since these vectors are estimated internally using solvePNP, I suspect the bug lies in that direction. However, solvePNP is way too complicated to change. I am looking in the same direction as MrZander, to exclude those wrong pose estimations, but how? Any ideas?

@MrZander: Do you have a solution ?
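
A sketch of one possible exclusion heuristic for this co-planar four-marker setup: convert each rvec with cv::Rodrigues and compare each marker's z-axis against the average z-axis of the other three; co-planar markers should agree closely, so the outlier is the flipped one. The angle threshold is an assumption:

#include <opencv2/calib3d.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// returns the index of the marker whose z-axis disagrees with the others,
// or -1 if all poses are mutually consistent
int findFlippedMarker(const std::vector<cv::Vec3d>& rvecs, double maxAngleRad = 0.5)
{
    std::vector<cv::Vec3d> z(rvecs.size());
    for (size_t i = 0; i < rvecs.size(); i++) {
        cv::Matx33d R;
        cv::Rodrigues(rvecs[i], R);
        z[i] = cv::Vec3d(R(0, 2), R(1, 2), R(2, 2)); // marker z-axis in the camera frame
    }
    for (size_t i = 0; i < z.size(); i++) {
        cv::Vec3d mean(0, 0, 0);
        for (size_t j = 0; j < z.size(); j++)
            if (j != i) mean += z[j];
        mean *= 1.0 / cv::norm(mean); // normalize the average direction
        double c = std::max(-1.0, std::min(1.0, z[i].dot(mean)));
        if (std::acos(c) > maxAngleRad)
            return (int)i;
    }
    return -1;
}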