2020-10-04 12:17:12 -0600 | received badge | ● Nice Question (source) |
2019-12-18 13:12:46 -0600 | asked a question | detectMultiScale(...) internal principle? Hi, I am using detectMultiScale on src images sized around 960x540 with a LBP |
2019-09-04 15:03:53 -0600 | received badge | ● Notable Question (source) |
2018-11-11 13:10:41 -0600 | received badge | ● Popular Question (source) |
2017-06-01 15:37:41 -0600 | commented answer | Traffic Sign Recognition Concept Thanks for your suggestions. Meanwhile I have a cascade classifier running for finding ROIs and a CNN for classifying the sign. The CNN works fine; however, I still have problems with the cascade classifier. No matter how I train, it always detects "round signs" much better than "triangle signs" or "filled signs". I have trained many cascades with positive image counts from 1000 up to 40000, and the same numbers of negatives. I tried to equalize the distribution between these sign types (to avoid getting biased towards the better represented signs). Question: Is it at all possible to train a _single_ cascade for "round", "square" and "triangle" features altogether? If not, I am stuck, because I don't have the computation power for 3 or 4 detect_multiscale calls. Any suggestions? |
2017-05-11 04:28:17 -0600 | received badge | ● Student (source) |
2017-05-11 01:37:36 -0600 | asked a question | Traffic Sign Recognition Concept Hi, I am working on a traffic sign recognition project and have tried several different approaches with no luck. Goals
Basic approach My approach is the classical two-phase one: cascade classification is done on the entire frame to detect potential sign ROIs, followed by a second recognition phase run only on those ROIs. Status quo: I have trained a CascadeClassifier which detects signs quite well. It is trained on all traffic signs and delivers a certain (but tolerable) amount of false detections, which I want to rule out in the recognition phase. ROIs are square, ranging between 40-90 pixels, in color. I decided to do all processing on gray images only, due to CPU limitations and the requirement to work at night as well.
Problem My problem is the recognition phase. Here is what I have tried so far: a) "Multi Cascade Classifier": I trained several cascade classifiers, each on a particular speed limit class, e.g. one for the 50s, one for the 60s, and so on. This worked somehow, but performance was bad. Main problems:
b) "Features2D": I tried feature-based classifiers, "ORB followed by kNN brute-force Hamming" and "SIFT followed by kNN brute force". I used the Lowe ratio criterion, but the discrimination was again not good enough:
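Lowe's ratio criterion mentioned in (b) can be written down in a few lines. A self-contained numpy sketch with synthetic ORB-like 256-bit binary descriptors (in practice cv2.BFMatcher with NORM_HAMMING and knnMatch does the same job; the function name here is made up for illustration):

```python
import numpy as np

def lowe_ratio_filter(query, train, ratio=0.75):
    """Keep a match only if the best Hamming distance is clearly
    smaller than the second best (Lowe's ratio criterion)."""
    # Expand each uint8 descriptor row into its individual bits.
    bits_q = np.unpackbits(query, axis=1)
    bits_t = np.unpackbits(train, axis=1)
    # Pairwise Hamming distances: count of differing bits.
    dists = (bits_q[:, None, :] != bits_t[None, :, :]).sum(axis=2)
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        best, second = row[order[0]], row[order[1]]
        if best < ratio * second:
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(0)
train = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)  # 5 ORB-like descriptors
query = train.copy()
query[0, 0] ^= 0b1  # flip one bit in descriptor 0 -> still an obvious match
print(lowe_ratio_filter(query, train))  # each query matches its own train descriptor
```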
c) "Neural net": I trained a convolutional neural net with 2 convolution layers and 2 FC layers on the 9 speed limit sign classes. It somehow worked, but again with the same problems as the other approaches. In addition, a huge computational burden comes with this, so I would exclude solution (c) for the future. Question
Remark: I have read nearly everything that deals with that problem on the net... Thanks and regards Chris |
2017-02-11 06:08:02 -0600 | commented answer | How can I link with native OpenCV in Android studio I got the armeabi-v7a directory. This seems to be for ARM-based devices. How do I compile so that I can run on the emulator? I think I need to set APP_ABI, but where, and to what? Thanks Chris |
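For ndk-build projects, APP_ABI is typically set in `jni/Application.mk`. A hedged sketch with illustrative values (the exact ABI list and STL choice depend on your NDK version and the emulator image you use):

```make
# jni/Application.mk -- illustrative values, adjust to your setup
APP_ABI := armeabi-v7a x86   # add x86 so a typical AVD emulator image can load the lib
APP_STL := gnustl_static     # common choice for NDK versions of that era
```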
2017-02-11 03:37:18 -0600 | received badge | ● Enthusiast |
2017-02-10 12:47:39 -0600 | commented answer | How can I link with native OpenCV in Android studio To what should I set <library name=""> ? Is it arbitrary, as long as it is the same in all locations where it occurs? |
2017-02-08 05:50:24 -0600 | commented answer | Aruco: Z-Axis flipping perspective This leads to the situation that the marker poses are better estimated when looked at from the 'side', or at least not 'frontally'. So, what could we do? 1. Make the markers slightly rectangular (not square). We would get rid of the 90/270-degree ambiguity, but this is not a general solution. 2. Pass another (5th) point into solvePnP which breaks the 0/90/180/270-degree rotational symmetry of the marker square. In fact this worked in some constellations, but the problem is that we don't have info about the object-point-to-image-point relationship for this 5th point. It needs, however, to be known, otherwise the pose estimation is not correct anymore. This could be done internally in aruco if someone takes care of the detection details and passes out the 5th point with known object coords. Other ideas? |
2017-02-08 05:39:57 -0600 | commented answer | Aruco: Z-Axis flipping perspective I think I know what the problem is. The problem is that our markers are square, which means they are rotationally symmetric under 0, 90, 180 and 270 degrees. Sure, the upper-left corner of the marker is known and the Aruco code takes care of that, BUT this information is not passed into solvePnP. All that solvePnP sees is a square shape with 4 corners ordered clockwise, starting top left. Since solvePnP is an RMS optimizer, there are four possible solutions where the RMS goes minimal. Most of the time it returns the wanted rotation of 0 degrees, and sometimes one of the others. This 'theory' is supported by the observation that in cases where the camera looks nearly perpendicularly onto the marker, the flips occur more often. [contd. in next comment] |
2017-02-07 12:11:38 -0600 | commented answer | Aruco: Z-Axis flipping perspective You are right, the markers are not completely flat, but I have to cope with that in real life. Further, I don't see a reason for the algorithm to go crazy because of minimal non-planarity... Flipping is independent of margins. I have a quite good calibration set (RMS < 0.20 px) for the whole FoV. Corner refinement is already switched on. Playing with cornerRefinementWinSize is a good suggestion, thanks. I already tried to feed in initial guesses, but then the results got completely crazy. So far I didn't debug why this is so. I will have a look into AprilTags; I didn't know about them yet, but they look good at first glance. Thanks again |
2017-02-07 08:39:01 -0600 | commented answer | Aruco: Z-Axis flipping perspective Please have a look at these two screenshots as well. They show that it is not a pure z-axis sign flip! https://s24.postimg.org/7xakfmr9h/snapshot_good.jpg https://s24.postimg.org/6tqg3o6md/snapshot_flipped.jpg The pics show the following: The red-green frame is the z=0 plane (where the markers are). The blue frame is the z=1m plane and the cyan one is the z=-1m plane. In the first pic everything is as it should be. In the second, the pose estimation of the upper-left marker was corrupted. You can see that the z-axis is inverted; however, the projection points do not swap exactly. You can visually estimate from the good pic where the join points of blue and cyan should lie if it were an exact swap of the z-axis. However, it is not. There ... (more) |
2017-02-07 06:18:50 -0600 | received badge | ● Editor (source) |
2017-02-07 06:18:05 -0600 | commented answer | Aruco: Z-Axis flipping perspective Images here: https://s28.postimg.org/4gq2mdibx/snapshot_normal.jpg https://s28.postimg.org/ff1c4k6x9/snapshot_flipped.jpg |
2017-02-07 06:16:46 -0600 | answered a question | Aruco: Z-Axis flipping perspective Hi, thanks for trying to help. Links to the pics are given in my comment below (thanks to 'karma'): Now, below there are two examples of the rvecs. Interestingly, you can 'see' when it goes wrong by comparing the values of the correct vectors with the wrong one, see below... Both examples were taken from the exact same scene, one flipped, one normal. The relevant code follows here. The code is somewhat "compressed", so I stripped out unnecessary parts... but anyhow there is nothing special, just normal pose estimation. Any help is highly appreciated! Thanks Chris |
2017-02-07 06:14:58 -0600 | commented question | Aruco Z-axis randomly flipped Image links here: https://s28.postimg.org/4gq2mdibx/snapshot_normal.jpg https://s28.postimg.org/ff1c4k6x9/snapshot_flipped.jpg |
2017-02-07 06:14:36 -0600 | asked a question | Aruco Z-axis randomly flipped Hi, I am using Aruco markers and have a problem with the z-axis flipping randomly. First of all, two pics of the situation: please see the comment. Now, below there are two examples of the rvecs. Interestingly, you can 'see' when it goes wrong by comparing the values of the correct vectors with the wrong one, see below... Both examples were taken from the exact same scene, one flipped, one normal. The relevant code follows here. The code is somewhat "compressed", so I stripped out unnecessary parts... but anyhow there is nothing special, just normal pose estimation. I have ... (more) |
2017-02-06 06:40:58 -0600 | commented question | Aruco: Z-Axis flipping perspective Hi, I am having exactly the same problem. I have 4 markers which are co-planar. Most of the time 3 are OK and one is "flipping" back and forth. Looking at the tvecs and rvecs, they are quite similar for the OK markers, and those of the flipped one are different. Since these vectors are estimated internally using solvePnP, I suspect the bug lies in that direction. However, solvePnP is way too complicated to change. I am looking in the same direction as MrZander, to exclude those wrong pose estimations, but how? Any ideas? @MrZander: Do you have a solution? |