2018-06-28 21:46:43 -0600 | received badge | ● Famous Question (source) |
2017-04-05 06:05:41 -0600 | received badge | ● Notable Question (source) |
2016-09-07 20:28:16 -0600 | received badge | ● Popular Question (source) |
2016-01-20 13:41:42 -0600 | commented question | compute global motion opencv 2.4.x C++ What do you mean by local deformation? For a better understanding of the problem, you can check this link |
2016-01-20 08:51:42 -0600 | commented question | compute global motion opencv 2.4.x C++ @StevenPuttemans, Edited question |
2016-01-20 08:14:50 -0600 | commented question | compute global motion opencv 2.4.x C++ Even if I can do so, how can I get what I want? I didn't get it. |
2016-01-20 06:32:02 -0600 | commented question | compute global motion opencv 2.4.x C++ @StevenPuttemans, Can you just clarify what you said? Just so that I don't ask dumb questions |
2016-01-19 11:17:56 -0600 | asked a question | compute global motion opencv 2.4.x C++ Here are 2 images, one captured before an action was performed by the surgeon and the other afterwards. BEFORE: AFTER: Difference: (After - Before) + 128. (The addition of 128 is just to have a better image.) As the white arrows point out, there has been a global motion affecting all the objects, so I need to estimate it in order to get more valuable information on what's happening in the scene. I already know that OpenCV 3.0 helps in this context, since it implements some methods that estimate the dominant motion between 2 images or 2 lists of points. But so far I'm using OpenCV 2.4.x, because I have dependencies on libraries already installed on my machine, so I'm looking for alternative solutions or any other code that does what I want. Optical Flow: As you can see above, I can't differentiate between motions after computing the optical flow. Thanks in advance. |
2015-07-21 11:50:34 -0600 | asked a question | Decompose 3D affine matrix Is there any method in OpenCV to decompose a 3D affine transformation matrix? |
2015-07-16 08:17:18 -0600 | commented question | 3D rotation matrix between 2 axis @edited question |
2015-07-16 07:26:33 -0600 | commented question | 3D rotation matrix between 2 axis I know, but I don't see anything related that I can use to do what I want. |
2015-07-16 06:46:30 -0600 | commented question | 3D rotation matrix between 2 axis @Edited question. |
2015-07-16 05:18:52 -0600 | received badge | ● Editor (source) |
2015-07-16 05:18:00 -0600 | asked a question | 3D rotation matrix between 2 axis I have 2 known 3D points which are the origins of 2 axis frames in space, and I need to compute the 3D rotation matrix between them. I didn't really get the difference between Euler angles and the other types of angles. Any help please? EDITED: I have known points Oc1 and Oc2 in space, and I know that using R1 & T1 I can get to Oc1 and using R2 & T2 I can get to Oc2, but I need to compute the 3D rotation matrix between Oc1 and Oc2. Is there any OpenCV method that computes such a rotation? EDITED
Here is my code for a sample to test (more) |
2015-07-06 03:44:11 -0600 | answered a question | not able to include non free OpenCV 3.0 I was able to figure it out. The problem was in the path of the |
2015-07-01 05:55:17 -0600 | asked a question | not able to include non free OpenCV 3.0 I'm trying to use SURF/SIFT in the alpha version of OpenCV 3.0. I already checked these links 1,2,3,4 without being able to solve the error that occurs when I include "opencv2/xfeatures2d/nonfree.hpp". I describe below what I've tried:
It seems weird to me to have these modules, because they are just the modules in the OpenCV repository and none of them are from the module repository, so I would assume that is the reason for my problem. Any idea on what's going on? |
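For reference, the usual recipe for getting `xfeatures2d` (SURF/SIFT) into an OpenCV 3.0 build is to point the main build at a checkout of the opencv_contrib repository and then link against the resulting library; the paths below are placeholders:

```shell
# From a build directory; <...> paths are placeholders for your checkouts.
cmake -DOPENCV_EXTRA_MODULES_PATH=<path-to-opencv_contrib>/modules \
      -DBUILD_opencv_xfeatures2d=ON \
      <path-to-opencv-source>
make -j4 && sudo make install
# The project must then link opencv_xfeatures2d in addition to the core libs.
```

Without `OPENCV_EXTRA_MODULES_PATH`, only the modules of the main repository are configured, which matches the module list described above.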
2015-06-30 08:47:12 -0600 | commented answer | How to compile nonfree module in opencv 3.0 beta ? I built OpenCV with the modules as you're mentioning, but it always throws me an error after including it. Also, what do you mean by "link to opencv_xfeatures2d(.lib)"? |
2015-06-30 03:41:32 -0600 | commented answer | install multiple versions of OpenCV on ubuntu So what exactly does the flag "-DCMAKE_INSTALL_PREFIX" do? Also, as I understand it, I should have 2 separate Makefiles, right? |
2015-06-29 20:05:53 -0600 | asked a question | install multiple versions of OpenCV on ubuntu I'm already using OpenCV version 2.8 with CMake, but now I want to test some of the new functionalities that have been added in version 3.0, so I installed it. However, I'm not able to link my QT project to the newly installed version. I already checked this link, where they explain how to have 2 different versions of OpenCV on the same PC, but it's not clear enough how to link the project to the new one. Any hints on how to achieve it? What should I modify in the CMake file? |
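A sketch of the usual two-install setup, with placeholder paths: each version is configured with its own `CMAKE_INSTALL_PREFIX` (which only sets where `make install` copies headers and libraries), and the project's CMakeLists.txt points `find_package` at the install it wants:

```shell
# Build each version in its own build directory (so yes, two independent
# Makefiles are generated), each with its own install prefix:
cmake -DCMAKE_INSTALL_PREFIX=/opt/opencv-3.0 <path-to-opencv-3.0-source>
make -j4 && sudo make install

# In the QT project's CMakeLists.txt, select that install before find_package:
#   set(OpenCV_DIR /opt/opencv-3.0/share/OpenCV)
#   find_package(OpenCV REQUIRED)
```

`OpenCV_DIR` must point at the directory containing `OpenCVConfig.cmake` inside the chosen prefix; the system-wide install stays untouched for other projects.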
2015-06-08 03:15:54 -0600 | commented question | Applying homography on non planar surface What do you mean by 3D model points? Can you please clarify? |
2015-06-08 03:14:42 -0600 | received badge | ● Supporter (source) |
2015-06-07 14:16:30 -0600 | received badge | ● Enthusiast |
2015-06-06 10:58:03 -0600 | commented answer | Applying homography on non planar surface So it detects just planar objects, right? |
2015-06-04 07:00:46 -0600 | asked a question | Applying homography on non planar surface As far as I know, homography (projective transformation) in computer vision can be used to detect objects in images, but all the objects I've seen are planar. Does homography only work on planar-surface objects, or can it detect any kind of object? I'm asking because I tried to detect a non-planar object in an image and it didn't work. |
2015-06-04 06:57:37 -0600 | received badge | ● Scholar (source) |
2015-05-24 11:47:25 -0600 | received badge | ● Self-Learner (source) |
2015-05-20 16:38:25 -0600 | answered a question | Compute SURF/SIFT descriptors of non key points I figured it out. The problem was in the way I was computing the descriptors: as you can see in the code above, I was trying to compute the descriptors on a small part of the image and not on the image itself. When I passed the image itself instead of partOfImageScene, something like extractor.compute( img_scene, keypoints_scene, descriptors_scene );, it worked perfectly and I didn't lose any keypoints from the list I had. |
2015-05-19 03:37:21 -0600 | received badge | ● Student (source) |
2015-05-18 12:14:56 -0600 | asked a question | Compute SURF/SIFT descriptors of non key points Actually, I'm trying to match a list of key points extracted from one image to another list of key points extracted from another image. I tried SURF/SIFT to detect the key points, but the results were not as expected in terms of accuracy of the keypoints detected in each image. I thought about not using a keypoint detector and just using the points of the connected regions, then computing the descriptors of these points using SIFT/SURF, but most of the time calling the compute method will empty the keypoint list. A sample of code is below: So, after calling, I know they state the following in the OpenCV documentation:
But is there any way to get better results? I mean, to have descriptors for all the points I've chosen? Am I violating the way the keypoints should be used? Should I try a different feature extractor than SIFT/SURF to get what I want? Or is it expected to have the same kind of problem with every feature detector implemented in OpenCV? |