
# How can I convert vector<Point2f> to vector<KeyPoint>?

I want to extract SIFT descriptors from my chosen points. How can I convert vector<Point2f> to vector<KeyPoint>? Here is my code segment. The image resolution is 256 by 256. Thanks in advance.

int main(...)
{
.......................
Mat image = imread("test0.png",CV_LOAD_IMAGE_GRAYSCALE);
std::vector<cv::Point2f> inputs;

inputs.push_back(cv::Point2f(0,0));
inputs.push_back(cv::Point2f(0,30));
inputs.push_back(cv::Point2f(0,90));
inputs.push_back(cv::Point2f(0,120));

vector<KeyPoint> keypoints;

// Here how can I convert vector<cv::Point2f> inputs to vector<KeyPoint> keypoints?

//Similarly, we create a smart pointer to the SIFT extractor.
Ptr<DescriptorExtractor> featureExtractor = DescriptorExtractor::create("SIFT");

// Compute the 128 dimension SIFT descriptor at each keypoint.
// Each row in "descriptors" correspond to the SIFT descriptor for each keypoint
Mat descriptors;
featureExtractor->compute(image, keypoints, descriptors);
.........................
}


## 2 answers


Just create a new KeyPoint with size 1 for each point:

std::vector<cv::KeyPoint> keypoints;
for( size_t i = 0; i < inputs.size(); i++ ) {
    keypoints.push_back(cv::KeyPoint(inputs[i], 1.f));
}


As berak mentioned, this typically doesn't make sense, since your descriptor is no longer scale- or rotation-invariant.

However, in several circumstances it is totally valid! E.g. the combination of dense grid sampling + SIFT is typically used for bag-of-words approaches. Dense grid sampling over the image is nothing else than a KeyPoint of size 1 at each step-size (e.g. 5) location in the image (note that you don't need to implement it yourself, since OpenCV has the "Dense" detector which does exactly this for you).


## Comments

Oh, of course. Thanks for highlighting this!

(2013-11-27 03:36:02 -0500)

"I want to extract SIFT descriptors from my chosen points." - No, you can't choose the points yourself; instead, leave it to SIFT:

Ptr<FeatureDetector> detector  = FeatureDetector::create( "SIFT" );
vector<KeyPoint> keypoints;
detector->detect( img, keypoints );


## Comments

Thank you. Is it possible to convert vector<Point2f> to vector<KeyPoint>, neglecting the issue related to the SIFT extractor? I have tried the following way, but it failed: vector<KeyPoint> keypoints;

KeyPoint::convert(inputs,keypoints, 1, 1, 0, -1);

(2013-11-27 02:55:31 -0500)

not really.

(Yes, you could construct a new KeyPoint for each Point and push it into a vector, but it does not make any sense, since you won't be able to come up with sensible values for anything besides the position.)

again, what you want is not feasible.

(2013-11-27 03:00:40 -0500)

Have you read Lowe's paper on SIFT? If you try to understand the properties of the linear scale space, you will understand on your own why your idea is not as good as you may think!

(2013-11-27 05:51:20 -0500)


## Stats

Asked: 2013-11-27 02:22:46 -0500

Seen: 14,695 times

Last updated: Nov 27 '13