
maystroh10's profile - activity

2018-06-28 21:46:43 -0600 received badge  Famous Question (source)
2017-04-05 06:05:41 -0600 received badge  Notable Question (source)
2016-09-07 20:28:16 -0600 received badge  Popular Question (source)
2016-01-20 13:41:42 -0600 commented question compute global motion opencv 2.4.x C++

What do you mean by local deformation? For a better understanding of the problem, you can check this link.

2016-01-20 08:51:42 -0600 commented question compute global motion opencv 2.4.x C++

@StevenPuttemans, I have edited the question.

2016-01-20 08:14:50 -0600 commented question compute global motion opencv 2.4.x C++

Even if I can do that, how would it get me what I want? I didn't quite get it.

2016-01-20 06:32:02 -0600 commented question compute global motion opencv 2.4.x C++

@StevenPuttemans, can you clarify what you said? I just don't want to ask dumb questions.

2016-01-19 11:17:56 -0600 asked a question compute global motion opencv 2.4.x C++

Here are two images, one captured before an action was performed by the surgeon and the other captured afterwards.

BEFORE:

[image]

AFTER:

[image]

Difference: (After - Before) + 128. (The offset of 128 is added only to make the difference easier to see.)

[image]

As indicated by the white arrows, there has been a global motion affecting all the objects, and I need to estimate it in order to extract more useful information about what is happening in the scene. I know that OpenCV 3.0 implements methods that estimate the dominant motion between two images or two lists of points, but so far I am stuck on OpenCV 2.4.x because of dependencies on libraries already installed on my machine, so I am looking for alternative solutions or any other code that does what I want.

Optical Flow:

[image]

As you can see above, computing the optical flow alone does not let me separate the global motion from the local motions.
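
For reference, the kind of approach I am hoping is feasible in 2.4.x is sketched below. It is only a rough idea, not working code from my project: it assumes the two frames are available as grayscale Mats named before and after, and that fitting one dominant rigid/affine transform to tracked corners is acceptable.

    // Track a sparse set of corners from the "before" frame into the "after" frame
    // and fit a single dominant (global) transform to the surviving point pairs.
    vector<Point2f> ptsBefore, ptsAfter;
    goodFeaturesToTrack(before, ptsBefore, 500, 0.01, 5);

    vector<uchar> status;
    vector<float> err;
    calcOpticalFlowPyrLK(before, after, ptsBefore, ptsAfter, status, err);

    vector<Point2f> srcGood, dstGood;
    for (size_t i = 0; i < status.size(); i++)
    {
        if (status[i])
        {
            srcGood.push_back(ptsBefore[i]);
            dstGood.push_back(ptsAfter[i]);
        }
    }

    // 2x3 matrix describing the dominant (global) motion; available in OpenCV 2.4.x
    Mat globalMotion = estimateRigidTransform(srcGood, dstGood, false);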

Thanks in advance.

2015-07-21 11:50:34 -0600 asked a question Decompose 3D affine matrix

Is there any method in OpenCV to decompose a 3D affine transformation matrix?
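
To make the question more concrete, this is the kind of manual decomposition I could fall back on if no ready-made method exists. It is only a sketch: it assumes the transform is a 3x4 CV_64F matrix M = [A | t] and splits the linear part A into a rotation and a symmetric stretch via SVD.

    // A is the 3x3 linear part, t the translation column of the affine transform M.
    Mat A = M(Rect(0, 0, 3, 3)).clone();
    Mat t = M(Rect(3, 0, 1, 3)).clone();

    // A = U * W * Vt, so R = U * Vt is the closest rotation and
    // S = V * W * Vt is a symmetric stretch, giving A = R * S.
    SVD svd(A);
    Mat R = svd.u * svd.vt;
    Mat S = svd.vt.t() * Mat::diag(svd.w) * svd.vt;
    // (If det(R) is negative, the affine part contains a reflection.)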

2015-07-16 08:17:18 -0600 commented question 3D rotation matrix between 2 axis

Edited the question.

2015-07-16 07:26:33 -0600 commented question 3D rotation matrix between 2 axis

I know, but I don't see anything there that I can use to do what I want.

2015-07-16 06:46:30 -0600 commented question 3D rotation matrix between 2 axis

Edited the question.

2015-07-16 05:18:52 -0600 received badge  Editor (source)
2015-07-16 05:18:00 -0600 asked a question 3D rotation matrix between 2 axis

I have two known 3D points which are the origins of two coordinate frames in space, and I need to compute the 3D rotation matrix between them. I also didn't really understand the difference between Euler angles and the other kinds of angles. Any help please?

EDITED:

[image]

I have two known points Oc1 and Oc2 in space, and I know that using R1 & T1 I can get to Oc1 and using R2 & T2 I can get to Oc2, but I need to compute the 3D rotation matrix between Oc1 and Oc2. Is there any OpenCV method that computes such a rotation?

EDITED: Here is my sample code to test c1Mc2 = (oMc1)^-1 * oMc2:

    vector<Point3f> listOfPointsOnTable;
    cout << "******** DATA *******" << endl;
    listOfPointsOnTable.push_back(Point3f(0,0,0));
    listOfPointsOnTable.push_back(Point3f(100,0,0));
    listOfPointsOnTable.push_back(Point3f(100,100,0));
    listOfPointsOnTable.push_back(Point3f(0,100,0));

    cout << endl << "Scene points :" << endl;
    for (int i = 0; i < listOfPointsOnTable.size(); i++)
    {
        cout << listOfPointsOnTable[i] << endl;
    }

    //Define the optical center of each camera
    Point3f centreOfC1 = Point3f(23,0,50);
    Point3f centreOfC2 = Point3f(0,42,20);
    cout << endl << "Center Of C1: " << centreOfC1 << " , Center of C2 : " << centreOfC2 << endl;

    //Define the translation and rotation between main axis and the camera 1 axis
    Mat translationOfC1 = (Mat_<double>(3, 1) << (0-centreOfC1.x), (0-centreOfC1.y), (0-centreOfC1.z));
    float rotxC1 = 0, rotyC1 = 0, rotzC1 = -45;
    int focaleC1 = 2;
    Mat rotationOfC1 = rotation3D(rotxC1, rotyC1,rotzC1);
    cout << endl << "Translation from default axis to C1: " << translationOfC1 << endl;
    cout << "Rotation from default axis to C1: " << rotationOfC1 << endl;
    Mat transformationToC1 = buildTransformationMatrix(rotationOfC1, translationOfC1);
    cout << "Transformation from default axis to C1: " << transformationToC1 << endl << endl;

    //Define the translation and rotation between main axis and the camera 2 axis
    Mat translationOfC2 = (Mat_<double>(3, 1) << (0-centreOfC2.x), (0-centreOfC2.y), (0-centreOfC2.z));
    float rotxC2 = 0, rotyC2 = 0, rotzC2 = -90;
    int focaleC2 = 2;
    Mat rotationOfC2 = rotation3D(rotxC2, rotyC2,rotzC2);
    cout << endl << "Translation from default axis to C2: " << translationOfC2 << endl;
    cout << "Rotation from default axis to C2: " << rotationOfC2 << endl;
    Mat transformationToC2 = buildTransformationMatrix(rotationOfC2, translationOfC2);
    cout << "Transformation from default axis to C2: " << transformationToC2 << endl << endl;

    Mat centreOfC2InMat = (Mat_<double>(3, 1) << centreOfC2.x, centreOfC2.y, centreOfC2.z);
    Mat centreOfC2InCamera1 = rotationOfC1 * centreOfC2InMat + translationOfC1;
    Mat translationBetweenC1AndC2 = -centreOfC2InCamera1;
    cout << endl << "****Translation from C2 to C1" << endl;
    cout << translationBetweenC1AndC2 << endl;
    Mat centreOfC1InMat = (Mat_<double>(3, 1) << centreOfC1.x, centreOfC1.y, centreOfC1.z);
    Mat centreOfC1InCamera2 = rotationOfC2 * centreOfC1InMat + translationOfC2;
    Mat translationBetweenC2AndC1 = -centreOfC1InCamera2;
    cout << "****Translation from C1 to C2" << endl;
    cout << translationBetweenC2AndC1 << endl;

    cout << "Tran1-1 * Trans2 = " << transformationToC1.inv() * transformationToC2 << endl;
    cout << "Tran2-1 * Trans1 = " << transformationToC2.inv() * transformationToC1 << endl;

Mat rotation3D(float alpha, float beta, float gamma)
{
    // Rotation matrices around the X, Y, and Z axes (angles given in degrees)
    double alphaInRadian = alpha * M_PI / 180.0;
    double betaInRadian = beta * M_PI / 180.0;
    double gammaInRadian = gamma * M_PI / 180.0;
    Mat RX = (Mat_<double>(3, 3) <<
                  1,                    0,                   0,
                  0,  cos(alphaInRadian), sin(alphaInRadian),
                  0, -sin(alphaInRadian), cos(alphaInRadian));
    Mat RY = (Mat_<double>(3, 3) <<
                  cos(betaInRadian), 0, sin(betaInRadian),
                                  0, 1,                0,
                 -sin(betaInRadian), 0, cos(betaInRadian));
    Mat RZ = (Mat_<double>(3, 3) <<
                  cos(gammaInRadian), sin(gammaInRadian), 0,
                 -sin(gammaInRadian), cos(gammaInRadian), 0,
                                   0,                  0, 1);
    ...
2015-07-06 03:44:11 -0600 answered a question not able to include non free OpenCV 3.0

I was able to figure it out. The problem was in the path to the contrib modules; pointing CMake to the correct one fixed it.
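
For anyone hitting the same problem: the flag just has to point at the modules directory inside the opencv_contrib checkout, along these lines (the path below is only an example of where the checkout might live):

cmake -DOPENCV_EXTRA_MODULES_PATH=/home/user/opencv_contrib/modules ..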

2015-07-01 05:55:17 -0600 asked a question not able to include non free OpenCV 3.0

I'm trying to use SURF/SIFT in the alpha version of OpenCV 3.0. I already checked these links 1, 2, 3, 4 without being able to solve the error that occurs when I include "opencv2/xfeatures2d/nonfree.hpp".

Below is a summary of what I've tried:

  1. I know that SURF/SIFT are now in a separate module that has to be added when building OpenCV, so I got the contrib modules repository from this link and built OpenCV using the following command:

cmake -DOPENCV_EXTRA_MODULES_PATH=/home/opencvContrib/modules ..

  2. Here is an extract of the output displayed after running this command:

OpenCV modules:

    To be built:             core flann imgproc imgcodecs videoio highgui features2d calib3d ml objdetect photo video shape stitching superres ts videostab
    Disabled:                world
    Disabled by dependency:  -
    Unavailable:             androidcamera cuda cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaoptflow cudastereo cudawarping cudev java python2 python3 viz

It seems weird to me that the listed modules are only the ones from the main OpenCV repository and that none of them come from the contrib modules repository, so I assume this is the reason for my problem.
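
For context, this is the minimal kind of snippet I am ultimately trying to compile once the contrib modules are actually picked up by the build (the image file name is only a placeholder):

    #include <vector>
    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/xfeatures2d/nonfree.hpp>

    using namespace cv;

    int main()
    {
        Mat img = imread("scene.png", IMREAD_GRAYSCALE);

        // SURF/SIFT live in the xfeatures2d (contrib) module in OpenCV 3.x
        Ptr<xfeatures2d::SURF> surf = xfeatures2d::SURF::create(400);

        std::vector<KeyPoint> keypoints;
        Mat descriptors;
        surf->detectAndCompute(img, noArray(), keypoints, descriptors);
        return 0;
    }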

Any idea on what's going on?

2015-06-30 08:47:12 -0600 commented answer How to compile nonfree module in opencv 3.0 beta ?

I built OpenCV with the modules as you mentioned, but it still throws an error when I include it. Also, what do you mean by "link to opencv_xfeatures2d(.lib)"?

2015-06-30 03:41:32 -0600 commented answer install multiple versions of OpenCV on ubuntu

So what exactly does the flag "-DCMAKE_INSTALL_PREFIX" do? Also, as I understand it, I should end up with two separate Makefiles, right?

2015-06-29 20:05:53 -0600 asked a question install multiple versions of OpenCV on ubuntu

I'm already using OpenCV version 2.8 with CMake, but now I want to test some of the new functionality added in version 3.0, so I installed it; however, I'm not able to link my Qt project to the newly installed version. I already checked this link, where they explain how to have two different versions of OpenCV on the same PC, but it's not clear enough how to link a project to the new one. Any hints on how to achieve this? What should I modify in the CMake file?
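
To make the question more concrete, this is roughly what I imagine the project's CMakeLists.txt would need in order to pick up the 3.0 install instead of the default one (the prefix /usr/local/opencv3 is only an assumed example of where version 3.0 was installed):

    cmake_minimum_required(VERSION 2.8)
    project(my_qt_project)

    # Point CMake at the OpenCV 3.0 installation living under its own prefix
    set(OpenCV_DIR "/usr/local/opencv3/share/OpenCV")
    find_package(OpenCV 3.0 REQUIRED)

    include_directories(${OpenCV_INCLUDE_DIRS})

    add_executable(my_app main.cpp)
    target_link_libraries(my_app ${OpenCV_LIBS})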

2015-06-08 03:15:54 -0600 commented question Applying homography on non planar surface

What do you mean by 3D model points? Can you please clarify?

2015-06-08 03:14:42 -0600 received badge  Supporter (source)
2015-06-07 14:16:30 -0600 received badge  Enthusiast
2015-06-06 10:58:03 -0600 commented answer Applying homography on non planar surface

So it detects just planar objects, right?

2015-06-04 07:00:46 -0600 asked a question Applying homography on non planar surface

As far as I know, a homography (projective transformation) can be used in computer vision to detect objects in images, but all the objects I've seen detected this way are planar. Does a homography only work for objects on a planar surface, or can it detect any kind of object? I'm asking because I tried to detect a non-planar object in an image and it didn't work.

2015-06-04 06:57:37 -0600 received badge  Scholar (source)
2015-05-24 11:47:25 -0600 received badge  Self-Learner (source)
2015-05-20 16:38:25 -0600 answered a question Compute SURF/SIFT descriptors of non key points

I figured it out. The problem was in the way I was computing the descriptors: as you can see in the code in my question, I was computing the descriptors on a small part of the image rather than on the image itself. When I passed the full image instead of partOfImageScene, i.e. something like extractor.compute( img_scene, keypoints_scene, descriptors_scene );, it worked perfectly and I didn't lose any keypoints from the list I had.
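
Concretely, the fix was along these lines (a sketch using the same variable names as in my question; the important point is that the contour points are in full-image coordinates, so the descriptors have to be computed on the full scene image img_scene rather than on the cropped region):

    vector<Point2f> listOfContourPoints = convertPointsToPoints2f(realContoursOfRects[index]);
    vector<KeyPoint> keypoints_scene;
    KeyPoint::convert(listOfContourPoints, keypoints_scene, 100, 1000);

    Mat descriptors_scene;
    // Computing on the full image keeps every keypoint in the list;
    // computing on partOfImageScene dropped the ones outside or near its border.
    extractor.compute( img_scene, keypoints_scene, descriptors_scene );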

2015-05-19 03:37:21 -0600 received badge  Student (source)
2015-05-18 12:14:56 -0600 asked a question Compute SURF/SIFT descriptors of non key points

I'm trying to match a list of key points extracted from one image to another list of key points extracted from a second image. I tried SURF/SIFT to detect the key points, but the results were not as expected in terms of the accuracy of the key points detected in each image. I thought about skipping the key point detector and just using the points of the connected regions, then computing the descriptors of these points with SIFT/SURF, but most of the time calling the compute method empties the keypoint list.

Sample of code is below:

int minHessian = 100;
SurfFeatureDetector detector(minHessian);
Mat descriptors_object;
SurfDescriptorExtractor extractor;
detector.detect( img_object, keypoints_object );
extractor.compute( img_object, keypoints_object, descriptors_object );
for (int index = 0; index < listOfObjectsExtracted.size(); index++)
{
    Mat partOfImageScene = listOfObjectsExtracted[index];
    vector<Point2f> listOfContourPoints = convertPointsToPoints2f(realContoursOfRects[index]);
    vector<KeyPoint> keypoints_scene;
    KeyPoint::convert(listOfContourPoints, keypoints_scene, 100, 1000);
    //detector.detect( partOfImageScene, keypoints_scene );
    if (keypoints_scene.size() > 0)
    {
        //-- Step 2: Calculate descriptors (feature vectors)
        Mat descriptors_scene;
        extractor.compute( partOfImageScene, keypoints_scene, descriptors_scene );
        //Logic of matching between descriptors_scene and descriptors_object
    }
}

So, after calling compute in Step 2, keypoints_scene becomes empty most of the time.

I know the OpenCV documentation states the following:

Note that the method can modify the keypoints vector by removing the keypoints such that a descriptor for them is not defined (usually these are the keypoints near image border). The method makes sure that the ouptut keypoints and descriptors are consistent with each other (so that the number of keypoints is equal to the descriptors row count).

But is there any way to get better results, i.e. to have descriptors for all the points I've chosen? Am I violating the way the keypoints are supposed to be used? Should I try a different feature extractor than SIFT/SURF to get what I want? Or is the same kind of problem to be expected with every feature detector implemented in OpenCV?