
SimonH's profile - activity

2020-08-24 12:25:02 -0500 received badge  Popular Question (source)
2015-02-28 15:43:01 -0500 asked a question solvePnPRansac on coplanar planes gives randomly one of two solutions

I have a problem when using solvePnPRansac to track coplanar markers. I recorded a video of the problem here:

In the video you can see the pink cube drawn with OpenCV flipping about every 2-3 frames. On top you also see a stable coordinate system I render with OpenGL, but you can ignore it: I don't apply the rotation values calculated by the tracker there, which is why the flipping is not passed on to the OpenGL scene.

I assume the problem lies in OpenCV's solvePnPRansac, since the inputs (the 2D and 3D points) are very similar on each frame but the calculated projection matrix differs a lot, as you can see in the video. The tracked points all look fine. In the video I also render the coordinate axes, and you can see that the Z-axis is flipped/inverted while the other axes (Y-axis: green, pointing down; X-axis: red, pointing to the right) stay correct.

My parameters for solvePnPRansac: I used cv::ITERATIVE with the standard reprojection error of 8.0 (I tried several different values, but no luck) and I set the extrinsic guess flag to false.

Are there always two valid PnP solutions for coplanar markers, and is that why it randomly flips between the two? Can I avoid this behaviour? I searched for people with similar problems, but all I could find are two old questions which sound like they could be the same problem: and
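
A minimal sketch of one possible workaround, assuming you can obtain both plausible planar poses per frame (the function names here are hypothetical, not OpenCV API): disambiguate by temporal consistency, i.e. keep the candidate rotation closest to the previous frame's rotation so the pose cannot flip between frames.

```python
# Sketch: disambiguate the two plausible planar-PnP poses by temporal
# consistency. Assumes both candidate rotation matrices per frame are
# available; rotations are plain 3x3 nested lists.
import math

def rotation_angle_between(Ra, Rb):
    """Angle (radians) of the relative rotation Ra^T * Rb, via its trace."""
    # trace(Ra^T * Rb) = sum over i, j of Ra[i][j] * Rb[i][j]
    t = sum(Ra[i][j] * Rb[i][j] for i in range(3) for j in range(3))
    # clamp for numerical safety; angle = acos((trace - 1) / 2)
    return math.acos(max(-1.0, min(1.0, (t - 1.0) / 2.0)))

def pick_consistent_pose(candidates, previous_R):
    """Choose the candidate rotation closest to last frame's rotation."""
    return min(candidates, key=lambda R: rotation_angle_between(R, previous_R))
```

For a flat marker the two ambiguous solutions differ by roughly a 180-degree rotation, so the angular distance to the previous frame separates them cleanly.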

2015-02-25 02:34:57 -0500 answered a question cv2.solvePnp axis flip with rotation

Yes, I can confirm having the same problem using the iterative solvePnP approach on a set of coplanar points (about 30 on a flat marker). The flip is always the same, so it is not some random error produced by outliers.

Did you ever manage to fix it?

2015-02-19 12:15:14 -0500 answered a question Error Opencv4Android: Caused by: java.lang.IllegalArgumentException: Service Intent must be explicit

@Alexander I had the same idea; you can do it by specifying the package to make the intent explicit:

    Intent intent = new Intent("org.opencv.engine.BIND");
    intent.setPackage("org.opencv.engine"); // makes the intent explicit
    if (AppContext.bindService(intent, helper.mServiceConnection,
            Context.BIND_AUTO_CREATE)) {
        // OpenCV Manager service bound successfully
    }
2015-02-11 03:40:21 -0500 received badge  Enthusiast
2015-02-09 04:50:34 -0500 commented question Problem when automatically calculating the intrinsic camera values for reduced camera resolutions

the problem is that not only the resolution is changed but also the part of the image which is provided; parts are cropped out when you pick the 640x480 resolution

2015-02-06 04:21:01 -0500 asked a question Problem when automatically calculating the intrinsic camera values for reduced camera resolutions

I have a problem automatically calculating the intrinsic camera values from the size of the frame and the fovY value the camera hardware reports. It works fine for frames at full resolution and is nearly as good as a manually calibrated camera, but if I set the resolution to 640x480 I get the following problem:

If I tell the OpenCV camera preview to use the resolution 640x480, it provides frames in that resolution. The problem is that the Android Camera class still reports the same values for getHorizontalViewAngle() and getVerticalViewAngle(), even though the frames now have a different aspect ratio and do not show the same region as at full resolution. So instead of giving me the full sensor frame at a smaller resolution, the camera gives me a cropped image but still the same fovY values. Is it possible to calculate the correct fovY based on the new resolution?
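
If the lower-resolution stream really is a centered crop with the focal length fixed in sensor pixels, the effective field of view could be recomputed from the crop ratio. A sketch under that assumption (this is plain pinhole-camera math, not something the Android API provides; whether the crop is centered has to be verified per device):

```python
import math

def focal_px(fov_deg, size_px):
    """Focal length in pixels from a field of view and an image dimension."""
    return (size_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

def fov_for_crop(fov_full_deg, full_px, cropped_px):
    """FOV of a centered crop, given the full-sensor FOV.

    The focal length in sensor pixels stays fixed; only the visible
    extent of the sensor shrinks, so the angle must be recomputed
    rather than scaled linearly.
    """
    f = focal_px(fov_full_deg, full_px)
    return math.degrees(2.0 * math.atan((cropped_px / 2.0) / f))
```

Note the relation is non-linear: halving the visible sensor height gives somewhat more than half the original fovY, because FOV goes through a tangent.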

2014-02-10 03:44:55 -0500 commented question Color invariance when matching frames

That sounds promising, but will it help if there are other objects in the scene in addition to the object which should be detected? Those other objects would also change the histogram, right?

2014-02-07 09:36:56 -0500 asked a question Color invariance when matching frames

Is there a way to normalize incoming frames with regard to color? I am extracting ORB features from incoming frames and matching them to a reference frame, but different lighting conditions often make it really hard to match, while manually adjusting the brightness/contrast etc. improves the matching. Is there a way to tell the ORB descriptors to compensate for larger lighting and color differences? Or is there a way to normalize the brightness of the complete frame? Or a completely different concept I should try?
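
One common normalization is histogram equalization of the grayscale frame before feature extraction (OpenCV provides equalizeHist and CLAHE for this). A minimal pure-Python sketch of plain histogram equalization on a flat list of 8-bit pixel values, just to illustrate the idea:

```python
def equalize_hist(pixels, levels=256):
    """Plain histogram equalization for a flat list of grayscale values."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function
    cdf = []
    total = 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:  # constant image: nothing to spread
        return list(pixels)
    scale = (levels - 1) / (n - cdf_min)
    # map each original level through the normalized CDF
    lut = [round((c - cdf_min) * scale) for c in cdf]
    return [lut[p] for p in pixels]
```

The effect is to stretch a narrow brightness range across the full 0-255 scale, which makes frames taken under different lighting look more alike before descriptors are computed.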

2014-01-25 02:58:56 -0500 commented answer Which Android phones support CUDA

OK, but there are already OpenCL-capable devices out there, right?

2014-01-19 01:37:01 -0500 asked a question Which Android phones support CUDA

The latest release notes say "NVidia CUDA support on CUDA capable SoCs", but I could not find any concrete information on which phones already support CUDA (and how many will support it in the future).

Where can I find more info, or which phones do you know of that already have CUDA support?

And how do CUDA and OpenCL play together? For example, will there be an abstraction in the OpenCL module in the future that wraps both CUDA and OpenCL? Is there any information about this I can read to understand more?

2014-01-16 07:39:19 -0500 received badge  Nice Question (source)
2014-01-16 07:26:35 -0500 received badge  Editor (source)
2013-10-07 05:19:41 -0500 asked a question ORB pyramid not working

I am using the following detector and descriptor for the features I want to track:

Ptr<FeatureDetector> detector = Ptr<FeatureDetector>(
    new ORB(500,                // nfeatures
            2,                  // scaleFactor
            9,                  // nlevels
            31,                 // edgeThreshold
            0,                  // firstLevel
            2,                  // WTA_K
            ORB::HARRIS_SCORE,  // scoreType
            31));               // patchSize
Ptr<DescriptorExtractor> descriptorExtractor = DescriptorExtractor::create("ORB");

So I have 9 pyramid levels with a scale factor of 2, meaning every smaller pyramid image is 50% of the previous one, right? And this 9 times. (I also tried other values, e.g. the default scale factor of 1.2.)

Still, if I take a marker image that I want to detect in the current frame, and the marker in the current frame is only 50% of the size of the original marker, then there are no matches anymore; matches are only found for scale changes of roughly 0.8 to 1.2 relative to the original marker.

Now, if I scale the image down manually to 50% of its size, it can be found again, so a manual pyramid approach works, but not the one done directly by the ORB detector. Am I doing something wrong here?
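
For reference, the relative image size of each pyramid level follows directly from the scaleFactor and nlevels parameters; a small pure-Python sketch (parameter names mirroring the ORB constructor) of the scales the settings above should produce:

```python
def pyramid_scales(scale_factor, nlevels, first_level=0):
    """Relative image scale of each pyramid level (level 0 = input size)."""
    return [float(scale_factor) ** -(first_level + k) for k in range(nlevels)]
```

With scale_factor=2 and nlevels=9 the levels run from full size down to 1/256, so a marker at 50% scale should fall exactly on level 1, which is why the missing matches are surprising.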

Here are some images I took to explain the problem:

Marker and scene have same size, everything is fine:

(image)

Different scale of marker and current scene, no matches anymore:

(image)

Two images where I resized the marker image manually; then it is found again as expected:

(image)

(image)

2013-10-01 08:24:02 -0500 received badge  Student (source)
2013-10-01 06:20:08 -0500 commented answer Galaxy s3 1920x1280 resolution

Did you try the normal Android camera preview and check the resolution of the frames passed there?

2013-10-01 06:18:11 -0500 asked a question Model-based tracking / Edge-Based tracking

I have a question about the concept behind POSIT: is it the same idea as in model-based / edge-based tracking, like these:

Conics Tracking (CT)

So there are also Sparse Line Tracking (SLT) and Dense Line Tracking (DLT)

And there is also ( ), where I don't know what approach they are using.

So does POSIT do the same? I couldn't find information on whether there is already an existing implementation of edge-based tracking in OpenCV, and I think I would not be able to implement it based on the existing papers ;) So I thought I should ask whether POSIT can be used in a similar way, or whether I should do something else.

2013-10-01 03:27:11 -0500 received badge  Supporter (source)
2013-10-01 03:18:53 -0500 answered a question Galaxy s3 1920x1280 resolution

The highest resolution the camera supports for live streams is 1280x720; I think you are talking about taking single pictures, right?