
ajayramesh's profile - activity

2018-03-01 03:54:24 -0500 asked a question Plotting the normal vector of a plane in OpenCV

Plotting the normal vector of a plane in OpenCV I'm using a 2D barcode to identify a plane in 3D space and I want to plo

2017-11-09 12:43:50 -0500 received badge  Enthusiast
2017-10-03 02:01:19 -0500 asked a question solvePnP not working

solvePnP not working I'm trying to run the simplest possible example of solvePnP in pyOpenCV (3.3) camera_matrix_left =

2017-06-01 14:43:56 -0500 commented question Using OpenCV solvePnP for Augmented Reality in OpenGL

@LBerger I'm asking about the general method for doing something that people have done in OpenCV, except that I'm using a solvePnP function from a different library that behaves exactly the same.

2017-06-01 12:40:39 -0500 asked a question Using OpenCV solvePnP for Augmented Reality in OpenGL

I'm trying to build an Augmented Reality application in Android using BoofCV (OpenCV alternative for Java) and OpenGL ES 2.0. I have a marker which I can get the image points of using BoofCV's solvePnP function. I want to be able to draw the marker in 3D using OpenGL. Here's what I have so far:

On every frame of the camera, I call solvePnP

Se3_F64 worldToCam = MathUtils.worldToCam(__qrWorldPoints, imagePoints);
mGLAssetSurfaceView.setWorldToCam(worldToCam);

This is what I have defined as the world points

static float qrSideLength = 79.365f; // mm

private static final double[][] __qrWorldPoints = {
        {qrSideLength * -0.5, qrSideLength * 0.5, 0},
        {qrSideLength * -0.5, qrSideLength * -0.5, 0},
        {qrSideLength * 0.5, qrSideLength * -0.5, 0},
        {qrSideLength * 0.5, qrSideLength * 0.5, 0}
};

I'm feeding it a square with its origin at the center, with a side length in millimeters.

I can confirm that the rotation vector and translation vector I'm getting back from solvePnP are reasonable, so I don't know if there's a problem here.
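One way to confirm the pose is reasonable is to reproject the world points through the pinhole model by hand and check that they land near the detected marker corners. A minimal NumPy sketch of that check; the intrinsics and pose here are hypothetical placeholders, not the actual calibration values:

```python
import numpy as np

qr_side = 79.365  # mm, same side length as the Java world points

# Marker corners centred on the origin, in the same order as __qrWorldPoints
world = np.array([
    [-0.5 * qr_side,  0.5 * qr_side, 0.0],
    [-0.5 * qr_side, -0.5 * qr_side, 0.0],
    [ 0.5 * qr_side, -0.5 * qr_side, 0.0],
    [ 0.5 * qr_side,  0.5 * qr_side, 0.0],
])

# Hypothetical pose: marker facing the camera, 400 mm away (R = identity)
R = np.eye(3)
t = np.array([0.0, 0.0, 400.0])

# Hypothetical intrinsics: fx = fy = 1000 px, principal point at (960, 540)
fx = fy = 1000.0
cx, cy = 960.0, 540.0

# Pinhole projection: u = fx * X/Z + cx, v = fy * Y/Z + cy
cam = world @ R.T + t
pixels = np.stack([fx * cam[:, 0] / cam[:, 2] + cx,
                   fy * cam[:, 1] / cam[:, 2] + cy], axis=1)
```

If the reprojected pixels are nowhere near the detected corners, the rotation/translation pair (or the point ordering) is suspect.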

I pass the result from solvePnP into my renderer

public void setWorldToCam(Se3_F64 worldToCam) {

    DenseMatrix64F _R = worldToCam.R;
    Vector3D_F64 _T = worldToCam.T;

    // Concatenating the rotation matrix and translation vector into
    // a view matrix
    double[][] __view = {
        {_R.get(0, 0), _R.get(0, 1), _R.get(0, 2), _T.getX()},
        {_R.get(1, 0), _R.get(1, 1), _R.get(1, 2), _T.getY()},
        {_R.get(2, 0), _R.get(2, 1), _R.get(2, 2), _T.getZ()},
            {0, 0, 0, 1}
    };

    DenseMatrix64F _view = new DenseMatrix64F(__view);

    // Matrix to convert from BoofCV (OpenCV) coordinate system to OpenGL coordinate system
    double[][] __cv_to_gl = {
            {1, 0, 0, 0},
            {0, -1, 0, 0},
            {0, 0, -1, 0},
            {0, 0, 0, 1}
    };

    DenseMatrix64F _cv_to_gl = new DenseMatrix64F(__cv_to_gl);

    // Multiply the View Matrix by the BoofCV to OpenGL matrix to apply the coordinate transform
    DenseMatrix64F view = new SimpleMatrix(__view).mult(new SimpleMatrix(__cv_to_gl)).getMatrix();

    // BoofCV stores matrices in row major order, but OpenGL likes column major order
    // I transpose the view matrix and get a flattened list of 16,
    // Then I convert them to floating point
    double[] viewd = new SimpleMatrix(view).transpose().getMatrix().getData();

    for (int i = 0; i < mViewMatrix.length; i++) {
        mViewMatrix[i] = (float) viewd[i];
    }
}
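The coordinate flip above can be checked numerically. A common convention is to left-multiply the [R|t] view matrix by diag(1, -1, -1, 1), flipping the camera's Y and Z axes from OpenCV's convention (+Y down, +Z forward) to OpenGL's (+Y up, -Z forward). A NumPy sketch with a placeholder pose:

```python
import numpy as np

# Placeholder pose: identity rotation, marker 500 mm in front of the camera
R = np.eye(3)
t = np.array([0.0, 0.0, 500.0])

# 4x4 view matrix in the OpenCV/BoofCV camera convention
view_cv = np.eye(4)
view_cv[:3, :3] = R
view_cv[:3, 3] = t

# Flip Y and Z: OpenCV uses +Y down / +Z forward, OpenGL +Y up / -Z forward
cv_to_gl = np.diag([1.0, -1.0, -1.0, 1.0])
view_gl = cv_to_gl @ view_cv

# OpenGL expects column-major storage, so flatten the transpose
view_flat = view_gl.T.flatten().astype(np.float32)
```

Note the flip matrix is applied on the left (camera-side), whereas right-multiplying would flip the world axes instead.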

I'm also using the camera intrinsics I get from camera calibration to feed into the projection matrix of OpenGL

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {

    // this projection matrix is applied to object coordinates
    // in the onDrawFrame() method

    double fx = MathUtils.fx;
    double fy = MathUtils.fy;
    float fovy = (float) (2 * Math.atan(0.5 * height / fy) * 180 / Math.PI);
    float aspect = (float) ((width * fy) / (height * fx));

    // be careful with this, it could explain why you don't see certain objects
    float near = 0.1f;
    float far = 100.0f;

    Matrix.perspectiveM(mProjectionMatrix, 0, fovy, aspect, near, far);

    GLES20.glViewport(0, 0, width, height);

}
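The fovy/aspect derivation above can be checked in isolation. The sketch below reproduces the two formulas with hypothetical intrinsics (fx = fy = 1000 px on a 1920x1080 frame; placeholders, not real calibration values):

```python
import math

# Hypothetical intrinsics from calibration (placeholders, not real values)
fx, fy = 1000.0, 1000.0
width, height = 1920, 1080

# Vertical field of view in degrees: half the image height subtends atan((h/2) / fy)
fovy = 2 * math.atan(0.5 * height / fy) * 180 / math.pi

# Aspect ratio, corrected for non-square pixels when fx != fy
aspect = (width * fy) / (height * fx)
```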

The square I'm drawing is the one defined in this Google example.

@Override
public void onDrawFrame(GL10 gl) {

    // redraw background color
    GLES20 ...
2017-04-24 12:18:45 -0500 received badge  Editor (source)
2017-04-24 11:40:46 -0500 asked a question OpenCV tracker: The model is not initialized in function init

On the first frame of a video, I'm running an object detector that returns the bounding box of an object like this:

<type 'tuple'>: ((786, 1225), (726, 1217), (721, 1278), (782, 1288))

I want to pass this bounding box as the initial bounding box to the tracker. However, I get the following error:

OpenCV Error: Backtrace (The model is not initialized) in init, file /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486588158526/work/opencv-3.1.0/build/opencv_contrib/modules/tracking/src/tracker.cpp, line 81
Traceback (most recent call last):
  File "/Users/mw/Documents/Code/motion_tracking/motion_tracking.py", line 49, in <module>
    tracker.init(frame, bbox)
cv2.error: /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486588158526/work/opencv-3.1.0/build/opencv_contrib/modules/tracking/src/tracker.cpp:81: error: (-1) The model is not initialized in function init

The frame shape is 1080 x 1920, and the bounding-box values I'm passing into the tracker are the corner tuple shown above.

I'm not sure if the order I'm sending the bounding box in is wrong, or if I'm doing something else wrong. It doesn't error out when I do something like bbox = (1, 1, 1, 1), but obviously that doesn't do anything useful.
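One likely culprit: the tracker's init expects an axis-aligned (x, y, w, h) rectangle rather than four corner points. A sketch converting the detector's corner tuple into that form (corner values copied from the example above):

```python
# Corner points as returned by the detector, (x, y) pairs
corners = ((786, 1225), (726, 1217), (721, 1278), (782, 1288))

# tracker.init wants (x, y, w, h); take the axis-aligned bounds of the corners
xs = [p[0] for p in corners]
ys = [p[1] for p in corners]
bbox = (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
```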

tracker = cv2.Tracker_create("MIL")
init_once = False

while True:

    (grabbed, frame) = camera.read()

    if not grabbed:
        break

    symbols = scan(frame)

    for symbol in symbols:
        if not init_once:
            bbox = (float(symbol.location[0][0]), float(symbol.location[0][1]), float(symbol.location[2][0]), float(symbol.location[2][1]))
            tracker.init(frame, bbox)
            init_once = True
            break
        # draw_symbol(symbol, frame)

    ok, newbox = tracker.update(frame)

    if ok:
        top_left = (int(newbox[0]), int(newbox[1]))
        bottom_right = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
        cv2.rectangle(frame, top_left, bottom_right, (200, 0, 0))
        cv2.imshow("asd", frame)

    out.write(frame)

out.release()

X-posting from this SO question, as this is kind of time-sensitive - would appreciate any help!

2016-12-29 15:07:48 -0500 received badge  Student (source)
2016-10-06 18:53:31 -0500 asked a question py-opencv import error in Bash for Windows 10

Hello,

I installed py-opencv using the following command on Bash for Windows 10:

conda install -c conda-forge opencv

I try importing cv2 in my Python3 interpreter and get the following error:

ImportError: libopencv_reg.so.3.1: cannot enable executable stack as shared object requires: Invalid argument

This works fine in regular Ubuntu, but for some reason it doesn't work in Bash for Windows 10. If anyone has a workaround, it would be much appreciated!