
Potato's profile - activity

2018-07-31 03:40:40 -0500 received badge  Famous Question (source)
2018-07-26 06:58:43 -0500 received badge  Notable Question (source)
2018-06-26 06:31:09 -0500 received badge  Notable Question (source)
2018-01-17 06:53:37 -0500 received badge  Notable Question (source)
2018-01-03 22:57:10 -0500 received badge  Popular Question (source)
2017-11-06 21:53:26 -0500 received badge  Popular Question (source)
2017-09-27 08:56:08 -0500 received badge  Popular Question (source)
2016-12-17 21:11:08 -0500 received badge  Notable Question (source)
2016-10-28 07:17:09 -0500 marked best answer How does the perspectiveTransform( ) function work?

In the tutorial ->
I understand that the syntax for this function is: perspectiveTransform(obj_corners, scene_corners, H);
where obj_corners and scene_corners are vectors of points, obj_corners contains the 4 corners of the object to be found, and H is the homography matrix. My question is: what exactly does this function do?
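For the curious, the answer can be sketched without any OpenCV types: perspectiveTransform( ) treats each input point (x, y) as the homogeneous vector (x, y, 1), multiplies it by H, and divides by the resulting w component. A minimal stand-alone version of that math:

```cpp
#include <array>
#include <cassert>
#include <utility>

// Apply a 3x3 homography H to a 2D point, the way perspectiveTransform does:
// treat (x, y) as the homogeneous vector (x, y, 1), multiply by H, then divide
// by the resulting w component to get back to 2D coordinates.
std::pair<double, double> applyHomography(const std::array<std::array<double, 3>, 3>& H,
                                          double x, double y) {
    double xp = H[0][0] * x + H[0][1] * y + H[0][2];
    double yp = H[1][0] * x + H[1][1] * y + H[1][2];
    double w  = H[2][0] * x + H[2][1] * y + H[2][2];
    return { xp / w, yp / w };
}
```

So passing the 4 object corners through the H found by findHomography( ) tells you where those corners land in the scene image.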

2016-01-25 05:26:49 -0500 received badge  Popular Question (source)
2016-01-07 13:36:59 -0500 marked best answer Image Registration (OpenCV 3.0.0)

I have been trying to run the sample code given in the opencv_contrib repo -->

It is the image registration module. I am compiling the map_test.cpp program. -->

Now, the code compiles the first time. However, when COMPARE_FEATURES is #defined to compare feature-based vs. pixel-based registration, the program no longer compiles and gives these errors:

map_test.cpp: In function ‘void calcHomographyFeature(const cv::Mat&, const cv::Mat&)’:
map_test.cpp:299:41: error: no matching function for call to ‘cv::xfeatures2d::SURF::SURF(int&)’
map_test.cpp:299:41: note: candidates are:
In file included from /app_MS/ocv3/include/opencv2/xfeatures2d.hpp:43:0,
                 from map_test.cpp:51:
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:116:20: note: cv::xfeatures2d::SURF::SURF()
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:116:20: note:   candidate expects 0 arguments, 1 provided
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:116:20: note: cv::xfeatures2d::SURF::SURF(const cv::xfeatures2d::SURF&)
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:116:20: note:   no known conversion for argument 1 from ‘int’ to ‘const cv::xfeatures2d::SURF&’
map_test.cpp:299:22: error: cannot declare variable ‘detector’ to be of abstract type ‘cv::xfeatures2d::SURF’
In file included from /app_MS/ocv3/include/opencv2/xfeatures2d.hpp:43:0,
                 from map_test.cpp:51:
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:116:20: note:   because the following virtual functions are pure within ‘cv::xfeatures2d::SURF’:
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:132:26: note:  virtual void cv::xfeatures2d::SURF::setHessianThreshold(double)
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:133:28: note:  virtual double cv::xfeatures2d::SURF::getHessianThreshold() const
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:135:26: note:  virtual void cv::xfeatures2d::SURF::setNOctaves(int)
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:136:25: note:  virtual int cv::xfeatures2d::SURF::getNOctaves() const
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:138:26: note:  virtual void cv::xfeatures2d::SURF::setNOctaveLayers(int)
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:139:25: note:  virtual int cv::xfeatures2d::SURF::getNOctaveLayers() const
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:141:26: note:  virtual void cv::xfeatures2d::SURF::setExtended(bool)
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:142:26: note:  virtual bool cv::xfeatures2d::SURF::getExtended() const
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:144:26: note:  virtual void cv::xfeatures2d::SURF::setUpright(bool)
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:145:26: note:  virtual bool cv::xfeatures2d::SURF::getUpright() const
map_test.cpp:307:26: error: cannot declare variable ‘extractor’ to be of abstract type ‘cv::xfeatures2d::SURF’
In file included from /app_MS/ocv3/include/opencv2/xfeatures2d.hpp:43:0,
                 from map_test.cpp:51:
/app_MS/ocv3/include/opencv2/xfeatures2d/nonfree.hpp:116:20: note:   since type ‘cv::xfeatures2d::SURF’ has pure virtual functions

There seems to be a problem with the SURF feature detector, the matcher, etc. Does anyone know a fix for this? Are there library linking issues? I am ...
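Judging from the "abstract type" and missing-constructor errors, the likely cause is that map_test.cpp still uses the OpenCV 2.4-style SURF constructor. A sketch of the usual OpenCV 3 fix, assuming that is indeed what line 299 does:

```cpp
#include "opencv2/features2d.hpp"
#include "opencv2/xfeatures2d.hpp"

// In OpenCV 3 cv::xfeatures2d::SURF became an abstract interface, so the
// 2.4-style "SURF detector(minHessian);" no longer compiles. Creating the
// detector through the factory method resolves both errors:
cv::Ptr<cv::xfeatures2d::SURF> detector =
    cv::xfeatures2d::SURF::create(400);   // 400 = Hessian threshold

// The same object now handles both detection and extraction, replacing the
// separate detector/extractor variables:
// detector->detectAndCompute(img, cv::noArray(), keypoints, descriptors);
```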

2015-10-05 03:59:43 -0500 marked best answer Creation of images with warp/distortion due to natural effects on camera lens.

I am trying to take an image stream and make the images appear as though they have been taken by a relatively simple camera, like a CCD camera. I have added some Gaussian and salt-and-pepper noise to give them a 'normal' look.

My next step is to warp or distort the images. In reality, this could happen due to temperature changes and lens defects of the camera; my goal is to recreate it artificially. Note that the distortion would be extremely minimal but present nonetheless. Ideally, I would like the distortion to vary per image in the stream and over the sequence of images as well.

So far I have tried using perspectiveTransform( ) and warpPerspective( ), but I cannot seem to get the right parameter adjustments. Can anyone help me with this? I have also thought of a fish-eye effect, but could only find OpenCV functions that 'undistort' (i.e. remove) fish-eye distortion, and nothing that would add it.

Any thoughts or ideas on other approaches I could take to solve this problem? Thank you.
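One way to synthesize such minimal distortion is the radial term of the Brown-Conrady lens model applied to normalized coordinates. A sketch of the math (k1 here is a hypothetical tuning knob, not an OpenCV parameter):

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Radial distortion of a point in normalized image coordinates (origin at the
// image center). Positive k1 bulges the image outward (barrel, fish-eye-like);
// negative k1 pinches it inward (pincushion). Keep |k1| tiny for the
// "extremely minimal" distortion described above.
std::pair<double, double> radialDistort(double x, double y, double k1) {
    double r2 = x * x + y * y;          // squared distance from the center
    double scale = 1.0 + k1 * r2;       // radial scaling factor
    return { x * scale, y * scale };
}
```

To warp actual pixels, evaluate this per pixel into the map arrays of cv::remap( ); drifting k1 slowly from frame to frame would mimic the thermal variation over the sequence.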

2015-08-14 11:35:35 -0500 commented question How to convert tiff to png?

Try reading it with imread("imagename.tiff", 1) and saving it with imwrite("imagename.png", image);

2015-07-31 13:25:28 -0500 commented answer Where can i find the steps to install latest version of openCV in Linux(Ubuntu 14.04)?

Would make -j $(nproc) be safer?

2015-07-31 13:23:37 -0500 commented question face detection and matching with another face

You can try looking into descriptor extraction and matching. OpenCV has many techniques like SURF, SIFT etc. that you can use for your task. It might be a good place to start.

2015-07-23 16:00:27 -0500 asked a question What do the numbers in the descriptor matrix represent?

I am using the xfeatures2d::SURF detectAndCompute( ) function to extract descriptors from an image.

I see that the descriptors are stored in a Mat object. (Mat descriptors). On visually displaying the Mat I get a very strange image.

And on printing the Mat to the console, I get a sequence of seemingly random floating-point numbers. Can anyone explain what these actually represent? I am trying to improve my understanding of descriptors.

Thank you.

2015-07-22 10:03:25 -0500 commented answer How would you get all the points on a line?

I was initially trying to implement something along these lines. Thank you for the code; it helped. In the end, though, the line iterator was exactly what I needed.

2015-07-22 10:02:20 -0500 commented answer How would you get all the points on a line?

This was exactly what I was looking for. Thank you.

2015-07-21 16:08:36 -0500 commented question Setting CV_CAP_PROP_POS_FRAMES fails when reading image sequence

Could you please provide your code?

2015-07-21 16:06:44 -0500 asked a question How would you get all the points on a line?

I have detected the presence of lines in an image using the HoughLinesP() function. However, as an output, only the start and end points (coordinates) of the line are calculated by the function. I would however like to store all the points that fall on this line.

I initially thought of iterating from the start coordinate to the end coordinate, but that gets quite complicated depending on the orientation of the line (horizontal, vertical, diagonal).

Any ideas on how I can achieve this?
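For reference, cv::LineIterator does exactly this. The underlying idea can also be sketched directly: step along the longer axis and interpolate the other, so every orientation is handled the same way:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <utility>
#include <vector>

// Enumerate every integer pixel on the segment (x0,y0)-(x1,y1), regardless of
// whether the line is horizontal, vertical, or diagonal.
std::vector<std::pair<int, int>> linePoints(int x0, int y0, int x1, int y1) {
    int dx = x1 - x0, dy = y1 - y0;
    int steps = std::max(std::abs(dx), std::abs(dy));   // pixels along longer axis
    std::vector<std::pair<int, int>> pts;
    for (int i = 0; i <= steps; ++i) {
        double t = (steps == 0) ? 0.0 : static_cast<double>(i) / steps;
        pts.emplace_back(static_cast<int>(std::lround(x0 + t * dx)),
                         static_cast<int>(std::lround(y0 + t * dy)));
    }
    return pts;
}
```

Each HoughLinesP( ) output segment can be fed straight into this to recover the full set of pixels on the line.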

2015-06-24 11:51:28 -0500 commented answer A Ubuntu step by step installation of OpenCV for dummies?

You might have to disable OpenCL in your CMAKE statement. cmake ... -D WITH_OPENCL=OFF

2015-05-29 11:07:19 -0500 commented answer How can custom keypoints / descriptors be created?

That is what I wanted to know. What I am trying to achieve is being able to represent the features in an image in a way that can be fed as input into a neural network. Do you have any ideas or suggestions for this?

2015-05-28 13:39:25 -0500 commented answer How can custom keypoints / descriptors be created?

Along the same lines, is there a way to reverse the computation? For example, to take a set of descriptors and convert it into, say, a floating-point value?

2015-05-28 11:19:21 -0500 commented answer How can custom keypoints / descriptors be created?

Thank you for the concise answer. The SO link was quite helpful. I might just have to crack open the code and take a look inside.

2015-05-28 09:54:05 -0500 asked a question How can custom keypoints / descriptors be created?

I am aware of the DescriptorExtractor class, but I wanted to know if there is a way to create my own keypoints and my own descriptors. Say, for example, I detect the coordinates of corners in an image and want to convert them to keypoints and descriptors; how can this be done?

While researching, I have seen that descriptors are calculated based on a 16x16 neighbourhood and using histograms. Can someone please explain how this can be achieved?

Thank you.
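To illustrate the histogram idea, here is a toy descriptor, a sketch rather than the real SIFT/SURF implementation: it bins gradient orientations over a patch around a detected corner into a single 8-bin histogram, where real descriptors do this per 4x4 cell of the 16x16 neighbourhood and concatenate the cells:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

// Build an 8-bin gradient-orientation histogram over the patch of the given
// radius centered at (cx, cy). img is a grayscale image as rows of floats;
// the caller must keep the patch (plus a 1-pixel border) inside the image.
std::array<float, 8> patchDescriptor(const std::vector<std::vector<float>>& img,
                                     int cx, int cy, int radius) {
    const double PI = std::acos(-1.0);
    std::array<float, 8> hist{};
    for (int y = cy - radius; y <= cy + radius; ++y) {
        for (int x = cx - radius; x <= cx + radius; ++x) {
            float gx = img[y][x + 1] - img[y][x - 1];   // horizontal gradient
            float gy = img[y + 1][x] - img[y - 1][x];   // vertical gradient
            double angle = std::atan2(gy, gx) + PI;     // shift into [0, 2*pi]
            int bin = std::min(7, static_cast<int>(angle / (PI / 4.0)));
            hist[bin] += static_cast<float>(std::hypot(gx, gy)); // magnitude-weighted
        }
    }
    return hist;
}
```

Detected corner coordinates become the (cx, cy) inputs, and the histograms become the descriptor rows; in OpenCV terms, the coordinates can also be wrapped in cv::KeyPoint objects so existing matchers accept them.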

2015-05-25 11:45:13 -0500 asked a question What does the Mat of descriptors represent in a feature/descriptor extractor?

I have been using the SURF feature detector and trying to understand how it works. The usage of the SURF detector according to OpenCV 3 is --> surf->detectAndCompute(image, Mat(), keypoints, descriptors);

I want to know what exactly the Mat of descriptors represents. On using imshow( ), a weird image with pixels of different grayscale intensities is shown. On printing to the console, the matrix output contained values like -0.65433e-05, etc.

Does anyone know what this represents?
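For context on how those numbers get used: each row of the descriptor Mat is one keypoint's feature vector (64 floats for SURF, 128 in extended mode), built from Haar-wavelet responses around the keypoint, which is why imshow( ) renders it as noise rather than a picture. Matchers simply compare rows by distance; a sketch of that comparison:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Euclidean (L2) distance between two descriptor rows, the metric a
// brute-force matcher uses for float descriptors like SURF: the smaller the
// distance, the more similar the two keypoints' neighbourhoods.
float l2Distance(const std::vector<float>& a, const std::vector<float>& b) {
    assert(a.size() == b.size());       // rows must have equal length
    float sum = 0.0f;
    for (size_t i = 0; i < a.size(); ++i) {
        float d = a[i] - b[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}
```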

2015-05-19 09:23:18 -0500 commented answer Registering images

What is your compile command?

2015-05-15 10:18:18 -0500 commented answer Registering images

You're absolutely right. When I said it, I meant the ideal case. As a matter of fact, I also worked on registering images of a starfield, and I had a hard time detecting corresponding points, as there is only so much information you can pull out of a starfield. I found that pixel-based registration worked well for me.

2015-05-15 09:05:19 -0500 commented answer Registering images

Yes, you need the same number of corresponding points in each set, and there have to be at least 4 of them. No problem; if you get stuck, just come back and ask. Take a look into pixel-based registration as well. You might find something there.

2015-05-15 08:48:23 -0500 answered a question Registering images

There are two ways that I know of to register images with OpenCV: feature-based registration and pixel-based registration. I can see that you already have reference points to work with, so you could use the techniques of feature-based registration.

Like you say, you have a reference image and an image to register. With the points computed by your homemade algorithm, you can use the findHomography( ) function --> Mat H = findHomography(ImageToRegister_pts, ReferenceImage_pts, CV_RANSAC); where ImageToRegister_pts and ReferenceImage_pts are vector<Point2f> of your points. This will give you the homography matrix between the two images, which you can use to warp/align the given image to the reference image.

Then use the warpPerspective( ) function to actually warp the image --> warpPerspective(ImageToWarp, OutputImage, H, ImageToWarp.size()); where ImageToWarp is the Mat image, OutputImage is a Mat to store the final output, and H is the homography matrix from the previous step (with the point order above, H itself maps the image into the reference frame; its inverse H.inv() is only needed if the point sets are swapped). There are alternatives to the warpPerspective function, like perspectiveTransform( ), as used in this example -->

Another registration technique can be found in OpenCV 3 under the opencv_contrib functionality. It uses pixel based registration methods. Follow this link for more information -->

In my experience, feature-based registration is simpler, while pixel-based registration is more accurate. You will have to look into the source code to really understand what is happening. Your choice depends entirely on your application.

Hope this helps!
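The feature-based path above, collected into one sketch (the function name and parameter names are mine, for illustration):

```cpp
#include <vector>
#include "opencv2/calib3d.hpp"
#include "opencv2/imgproc.hpp"

// Warp imageToRegister into the reference image's frame. img_pts and ref_pts
// are the corresponding vector<Point2f> from the homemade point detector.
void registerToReference(const cv::Mat& imageToRegister, cv::Mat& registered,
                         const std::vector<cv::Point2f>& img_pts,
                         const std::vector<cv::Point2f>& ref_pts) {
    // H maps points of imageToRegister onto the reference frame; RANSAC
    // rejects mismatched point pairs during the fit.
    cv::Mat H = cv::findHomography(img_pts, ref_pts, cv::RANSAC);
    // warpPerspective applies H as a forward transform, so H is passed as-is.
    cv::warpPerspective(imageToRegister, registered, H,
                        imageToRegister.size());
}
```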

2015-05-14 09:10:14 -0500 asked a question What truly 'digital' information can I pull out from an Image?

My main reason for asking is that I want to pull digital information out of an image and use it to create a training set for a neural network. So far, I have pulled out the RGB values of each individual pixel in a 3-channel image. This is the only usable data I see that can serve as input to the neural network.

Does anyone have other ideas as to what I could use from a regular image that could truly be represented 'digitally'? For example, could I do something along the lines of representing edges as simple floating-point numbers that I could plug into a neural network?
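One concrete version of the "edges as floating-point numbers" idea, sketched with a normalization constant chosen here rather than taken from any library: compute central-difference gradient magnitudes and scale them into [0, 1], giving the network the same shape of input as the per-pixel RGB values:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Flatten a grayscale image (rows of floats in [0, 255]) into one edge-strength
// feature per interior pixel, each normalized into [0, 1] for training stability.
std::vector<float> edgeFeatures(const std::vector<std::vector<float>>& img) {
    const float maxMag = 255.0f * std::sqrt(2.0f);  // largest possible magnitude
    std::vector<float> feats;
    for (size_t y = 1; y + 1 < img.size(); ++y) {
        for (size_t x = 1; x + 1 < img[y].size(); ++x) {
            float gx = img[y][x + 1] - img[y][x - 1];   // central differences
            float gy = img[y + 1][x] - img[y - 1][x];
            feats.push_back(static_cast<float>(std::hypot(gx, gy)) / maxMag);
        }
    }
    return feats;
}
```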

2015-04-11 19:06:53 -0500 marked best answer estimateRigidTransform( ) in Opencv 3.0.0

Does anyone know which header to include to use this function in OpenCV 3? I get the error: estimateRigidTransform() was not declared in this scope

My includes list:

#include "opencv2/core.hpp"
#include "opencv2/core/utility.hpp"
#include "opencv2/core/ocl.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/calib3d/calib3d_c.h"
#include "opencv2/imgproc.hpp"
#include "opencv2/xfeatures2d.hpp"
#include "opencv2/imgproc/types_c.h"
#include "opencv2/imgcodecs/imgcodecs_c.h"
#include "opencv2/highgui/highgui_c.h"
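If memory serves, the function lives in the video module in OpenCV 3, which none of the headers above pull in; adding this include (and linking against the opencv_video library) should make the declaration visible:

```cpp
// estimateRigidTransform( ) is declared in the video module in OpenCV 3:
#include "opencv2/video/tracking.hpp"

// cv::Mat M = cv::estimateRigidTransform(src, dst, /*fullAffine=*/false);
```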

2015-04-07 15:56:41 -0500 commented question find a point on a line?

Do all 5 points fall on the same line?