Inspired by the MathWorks tutorial for augmented reality, I wanted to create a similar application for Android in which both recognition and tracking are implemented.
After some research I found that the pointTracker mentioned by MathWorks uses the KLT algorithm, which is also available in OpenCV as calcOpticalFlowPyrLK.
I implemented the tracking step in the following method: it takes the last frame, in which I recognized my points, and tries to estimate their new positions in the current frame:
int BruteForceMatcher::trackWithOpticalFlow(cv::Mat prevImg, cv::Mat nextImg, std::vector<cv::Point2f> &srcPoints, std::vector<cv::Point2f> &srcCorners){

    std::vector<cv::Point2f> estPoints;
    std::vector<cv::Point2f> estCorners;
    std::vector<cv::Point2f> goodPoints;
    std::vector<cv::Point2f> leftsrc;
    std::vector<uchar> status;
    std::vector<float> error;

    if(srcPoints.size() > 0) {

        // Estimate the new position of every previously recognized point.
        cv::calcOpticalFlowPyrLK(prevImg, nextImg, srcPoints, estPoints, status, error);

        // Keep only points that were tracked successfully and have a small error.
        for (size_t i = 0; i < estPoints.size(); i++) {
            if (status[i] && error[i] < 20.f) {
                //LOGW("ERROR : %f\n", error[i]);
                goodPoints.push_back(estPoints[i]);
                leftsrc.push_back(srcPoints[i]);
            }
        }

        //LOGD("Left Points (est/src): %i, %i", goodPoints.size(), leftsrc.size());

        if(goodPoints.size() <= 0){
            //LOGD("No good Points calculated");
            return 0;
        }

        // Estimate the homography between the surviving source points and their tracked positions.
        cv::Mat f = cv::findHomography(leftsrc, goodPoints);

        if(cv::countNonZero(f) < 1){
            //LOGD("Homography Matrix is empty!");
            return 0;
        }

        // Move the object corners with the same homography.
        cv::perspectiveTransform(srcCorners, estCorners, f);

        srcCorners.swap(estCorners);
        srcPoints.swap(goodPoints);

        status.clear();
        error.clear();

        return (int) srcPoints.size();
    }

    return 0;
}
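For completeness, the call above uses OpenCV's default parameters for calcOpticalFlowPyrLK. Written out explicitly it looks roughly like this (the values are, as far as I know, the documented defaults and not something I have tuned; winSize and maxLevel are presumably the knobs that matter most for runtime):

cv::calcOpticalFlowPyrLK(
        prevImg, nextImg, srcPoints, estPoints, status, error,
        cv::Size(21, 21),   // winSize: search window at each pyramid level
        3,                  // maxLevel: maximal pyramid level (0 = no pyramid)
        cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 30, 0.01),
        0,                  // flags
        1e-4);              // minEigThreshold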
And here is the method that is called through a JNICALL:
std::vector<cv::Point2f> findBruteForceMatches(cv::Mat img){

    int matches = 0;
    std::vector<cv::Point2f> ransacs;
    BruteForceMatcher *bruteForceMatcher = new BruteForceMatcher();
    double tf = cv::getTickFrequency();

    if(trackKLT){

        LOGD("TRACK WITH KLT");

        double kltTime = (double) cv::getTickCount();

        matches = bruteForceMatcher->trackWithOpticalFlow(prevImg, img, srcPoints, scene_corners);

        kltTime = (double) cv::getTickCount() - kltTime;
        LOGD("KLT Track Time: %f\n", kltTime*1000./tf);

        if(matches > 3){
            trackKLT = true;
            prevImg = img;
            delete bruteForceMatcher;
            return scene_corners;
        }else{
            trackKLT = false;
            prevImg.release();
            srcPoints.clear();
            scene_corners.clear();
            delete bruteForceMatcher;
            return scene_corners;
        }

    } else{

        LOGD("RECOGNIZE OBJECT");

        [... FAST/SIFT Recognition! ...]

    }
}
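I have not shown the JNI wrapper itself; roughly sketched (package/class names and the way the Mats are passed are simplified placeholders, not my exact code), it is a thin function along these lines:

#include <jni.h>
#include <vector>
#include <opencv2/core.hpp>

// Placeholder JNI entry point: the Java side passes the native addresses of two
// org.opencv.core.Mat objects (via Mat.getNativeObjAddr()), one holding the
// current camera frame and one receiving the tracked corners.
extern "C" JNIEXPORT jint JNICALL
Java_com_example_ar_NativeTracker_track(JNIEnv *env, jobject /*thiz*/,
                                        jlong frameAddr, jlong cornersAddr) {
    cv::Mat &frame = *reinterpret_cast<cv::Mat *>(frameAddr);

    std::vector<cv::Point2f> corners = findBruteForceMatches(frame);

    // Hand the corners back as an Nx1 CV_32FC2 Mat (left empty if tracking failed).
    if (!corners.empty()) {
        cv::Mat(corners).copyTo(*reinterpret_cast<cv::Mat *>(cornersAddr));
    }
    return static_cast<jint>(corners.size());
}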
Unfortunately this method takes about 200 ms per frame (~5 FPS), which is too slow for my application. Is there another, similar algorithm that could track a couple of points in an image? Or is there a way to speed up my algorithm?
In a paper I read that they use a cross-correlation tracking algorithm; is there something like that in OpenCV?
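To make clearer what I mean by cross-correlation tracking: I picture it as searching for a small template patch around the last known position with a normalized cross-correlation score, roughly like the sketch below (cv::matchTemplate with TM_CCORR_NORMED; the patch and search-window sizes are arbitrary guesses, border handling is omitted, and I don't know whether this matches what the paper actually does):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

cv::Point2f trackByCrossCorrelation(const cv::Mat &prevImg, const cv::Mat &nextImg,
                                    const cv::Point2f &lastPos,
                                    int patchSize = 15, int searchSize = 40) {
    cv::Rect imgRect(0, 0, nextImg.cols, nextImg.rows);

    // Template patch around the point in the previous frame and a larger
    // search window around the same position in the current frame.
    cv::Rect patchRect(cv::Point(lastPos) - cv::Point(patchSize / 2, patchSize / 2),
                       cv::Size(patchSize, patchSize));
    cv::Rect searchRect(cv::Point(lastPos) - cv::Point(searchSize / 2, searchSize / 2),
                        cv::Size(searchSize, searchSize));
    patchRect &= imgRect;
    searchRect &= imgRect;

    // Normalized cross-correlation of the patch over the search window.
    cv::Mat response;
    cv::matchTemplate(nextImg(searchRect), prevImg(patchRect), response, cv::TM_CCORR_NORMED);

    cv::Point bestMatch;
    cv::minMaxLoc(response, nullptr, nullptr, nullptr, &bestMatch);

    // Convert the best match back to full-image coordinates (patch center).
    return cv::Point2f(searchRect.x + bestMatch.x + patchRect.width / 2.f,
                       searchRect.y + bestMatch.y + patchRect.height / 2.f);
}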
Some specs:
Phone: Nexus 5X (Android 6.0.1)
OpenCV: C++ native library (master branch, downloaded 26.09.2016)
Android SDK: 21 - 24