ShadowTS's profile - activity

2019-10-15 09:47:23 -0600 received badge  Popular Question (source)
2019-01-07 03:24:08 -0600 received badge  Notable Question (source)
2017-06-29 11:59:55 -0600 received badge  Popular Question (source)
2015-03-20 02:16:51 -0600 received badge  Nice Answer (source)
2014-04-29 09:19:22 -0600 received badge  Teacher (source)
2013-09-09 04:30:20 -0600 asked a question Replicate OpenCV resize with bilinear interpolation in C (shrink only)

Hello, I'm trying to reimplement OpenCV's resizing algorithm with bilinear interpolation in C. What I want to achieve is a resulting image that is pixel-for-pixel identical to the one produced by OpenCV. I am particularly interested in shrinking, not magnification, and I want to use it on single-channel grayscale images. On the net I read that the bilinear interpolation algorithm differs between shrinking and enlargement, but I did not find formulas or implementations, so it is likely that the code I wrote is simply wrong. What I wrote comes from the knowledge of interpolation I acquired in a university course on Computer Graphics and OpenGL. The results of the algorithm I wrote are images visually identical to those produced by OpenCV, but whose pixel values are not perfectly identical.

Mat rescale(Mat src, float ratio){

    float width = src.cols * ratio; //resized width
    int i_width = cvRound(width);
    float step = (float)src.cols / (float)i_width; //size of new pixels mapped over old image
    float center = step / 2; //V1 - center position of new pixel
    //float center = step / src.cols; //V2 - other possible center position of new pixel
    //float center = 0.099f; //V3 - Lena 512x512, lowest difference to OpenCV found

    Mat dst(src.rows, i_width, CV_8UC1);

    //cycle through all rows
    for(int j = 0; j < src.rows; j++){
        //in each row compute new pixels
        for(int i = 0; i < i_width; i++){
            float pos = (i*step) + center; //position of (the center of) new pixel in old map coordinates
            int pred = floor(pos); //predecessor pixel in the original image
            int succ = ceil(pos);  //successor pixel in the original image
            if(succ >= src.cols)   //clamp at the right border
                succ = src.cols - 1;
            float d_pred = pos - pred; //pred and succ distances from the center of new pixel
            float d_succ = succ - pos;
            int val_pred = src.at<uchar>(j, pred); //pred and succ values
            int val_succ = src.at<uchar>(j, succ);

            float val = (val_pred * d_succ) + (val_succ * d_pred); //weights swapped on purpose, since "d_succ = 1 - d_pred"
            int i_val = cvRound(val);
            if(pred == succ) //pos is a perfect int "x.0000": pred and succ are the same pixel and both weights are 0
                i_val = val_pred;
            dst.at<uchar>(j, i) = (uchar)i_val;
            //printf("-- Pos & val %d %d \n", i, i_val);
        }
    }

    return dst;
}
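
For reference, my understanding (worth verifying against the OpenCV sources) is that cv::resize with INTER_LINEAR maps the center of each destination pixel back into the source as (i + 0.5) * scale - 0.5 and quantizes the two interpolation weights to 11-bit fixed point. Both details differ from my code above, and the fixed-point step alone would explain small off-by-one pixel differences. A minimal sketch under those assumptions (same headers and namespace as above):

Mat rescaleCenters(Mat src, float ratio){
    int i_width = cvRound(src.cols * ratio);
    double scale = (double)src.cols / (double)i_width; //inverse scale factor
    Mat dst(src.rows, i_width, CV_8UC1);

    for(int j = 0; j < src.rows; j++){
        for(int i = 0; i < i_width; i++){
            double pos = (i + 0.5) * scale - 0.5; //pixel-center mapping
            int pred = (int)floor(pos);
            double d_pred = pos - pred;
            if(pred < 0){ pred = 0; d_pred = 0.0; } //clamp left border
            int succ = pred + 1;
            if(succ > src.cols - 1) succ = src.cols - 1; //clamp right border

            //11-bit fixed-point weights (2048 = 1 << 11), as OpenCV appears to use
            int w_succ = cvRound(d_pred * 2048);
            int w_pred = 2048 - w_succ;
            int val = w_pred * src.at<uchar>(j, pred) + w_succ * src.at<uchar>(j, succ);
            dst.at<uchar>(j, i) = (uchar)((val + 1024) >> 11); //round and scale back
        }
    }
    return dst;
}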
2013-07-24 06:01:32 -0600 commented answer Accessing Point<int> (Point2i) data from memory

Yes, probably I implemented the assembly incorrectly the first time. Today I tried again to check what data is extracted from memory, and I found that it is "x"s and "y"s in sequence.

2013-07-24 03:31:55 -0600 answered a question Accessing Point<int> (Point2i) data from memory

Probably I made a mistake before asking this question.

Using the memory address of the pointer:

const Point* points

You can access the Points' data. Only "x"s and "y"s are stored; more precisely, starting from the address held by the pointer, the layout is:

points[0].x, points[0].y, points[1].x, points[1].y, ...

where each element is 4 bytes (32 bits).
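
A quick way to verify this layout (a minimal, hypothetical check; it relies on cv::Point being two plain 32-bit ints with no padding, which is what I observed):

#include <opencv2/core/core.hpp>
#include <cstdio>

int main(){
    cv::Point pts[2] = { cv::Point(10, 20), cv::Point(30, 40) };
    // treat the Point array as a flat int array
    const int* raw = reinterpret_cast<const int*>(pts);
    std::printf("%d %d %d %d\n", raw[0], raw[1], raw[2], raw[3]); // 10 20 30 40
    return 0;
}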

2013-07-19 08:26:01 -0600 commented answer Porting from Desktop C++ to Android NDK

But in my project settings, under C/C++ Paths and Symbols, there is ${OPENCVROOT}/sdk/native/jni/include; in fact Eclipse does not flag every line as an error, only some. For example, "int x = keypoint.pt.x" is marked as an error, but calling "Point pt = keypoint.pt" and then "int x = pt.x" is not.

2013-07-19 08:19:23 -0600 commented question Accessing Point<int> (Point2i) data from memory

Probably I did not explain myself well. Extending this to an array is not the problem here.

Let's suppose we have a single Point (or Point2i, they are the same type). Starting from the memory address of the Point, I have to access Point.x and Point.y without writing .x and .y, because "Point.x" is not usable as a memory address. I'll use the memory address to access Point.x and Point.y in an assembly routine. The question is really: how is the Point2i structure laid out in memory?

2013-07-19 04:51:49 -0600 received badge  Self-Learner (source)
2013-07-19 03:37:07 -0600 answered a question Porting from Desktop C++ to Android NDK

The problem is due to Android ADT and/or the Android NDK. The editor flags false errors; in fact, the console shows that the C code compiles without errors. However, Eclipse does not allow you to run code/apps that, according to it, contain errors. So the fastest way to get around this is to modify the project properties under:

Project -> Properties -> C/C++ General -> Code Analysis

and change the "severity" of the "problems" that appear in the editor to "warning" or another level.

2013-07-19 03:35:31 -0600 asked a question Accessing Point<int> (Point2i) data from memory

I've a:

const Point* points

How is the data stored in memory? I need to access points[i].x and points[i].y in ARM assembly. I've tried to load 32 bits (the standard int size) from memory starting at the points address, but my assumption that the array is stored this way seems wrong:

points[0].x, points[0].y, points[1].x, points[1].y, ...
2013-07-14 10:19:00 -0600 asked a question Replicate GaussianBlur in C++

I'm interested in replicating the GaussianBlur filter with specific settings (7x7 mask, sigma = 2) in C/C++. I've implemented it as a double pass of a 1D filter, as shown below; however, the result is not identical to OpenCV's. Checking the per-pixel difference (cv::absdiff), the resulting Mat is not all zeros but contains values from 0 to 2 (with higher values in the lighter regions of the original image). It is probably a rounding or precision problem, so how can I reproduce the OpenCV filter in "readable" C code?

int reflect101(int M, int x); //forward declaration, defined below

Mat myGaussianBlur(Mat src){

    Mat temp(src.rows, src.cols, CV_8UC1);
    float sum;
    int x1;

    double coeffs[] = {0.070159, 0.131075, 0.190713, 0.216106, 0.190713, 0.131075, 0.070159};

    // first pass: filter horizontally - inside the image
    for(int y = 0; y < src.rows; y++){
        for(int x = 3; x < (src.cols - 3); x++){
            sum = src.at<uchar>(y, x) * coeffs[3];
            for(int i = -3; i < 0; i++){
                int tmp = src.at<uchar>(y, x + i) + src.at<uchar>(y, x - i);
                sum += coeffs[i + 3] * tmp;
            }
            temp.at<uchar>(y, x) = (uchar)(sum + 0.5f);
        }
    }
    // first pass: filter horizontally - left edge
    for(int y = 0; y < src.rows; y++){
        for(int x = 0; x <= 2; x++){
            sum = 0.0f;
            for(int i = -3; i <= 3; i++){
                x1 = reflect101(src.cols, x + i);
                sum += coeffs[i + 3] * src.at<uchar>(y, x1);
            }
            temp.at<uchar>(y, x) = (uchar)(sum + 0.5f);
        }
    }
    // first pass: filter horizontally - right edge
    for(int y = 0; y < src.rows; y++){
        for(int x = (src.cols - 3); x < src.cols; x++){
            sum = 0.0f;
            for(int i = -3; i <= 3; i++){
                x1 = reflect101(src.cols, x + i);
                sum += coeffs[i + 3] * src.at<uchar>(y, x1);
            }
            temp.at<uchar>(y, x) = (uchar)(sum + 0.5f);
        }
    }

    // transpose, so the vertical pass can reuse the horizontal loops
    transpose(temp, temp);

    Mat dst(temp.rows, temp.cols, CV_8UC1);

    // second pass: filter the transposed image - inside the image
    for(int y = 0; y < temp.rows; y++){
        for(int x = 3; x < (temp.cols - 3); x++){
            sum = temp.at<uchar>(y, x) * coeffs[3];
            for(int i = -3; i < 0; i++){
                int tmp = temp.at<uchar>(y, x + i) + temp.at<uchar>(y, x - i);
                sum += coeffs[i + 3] * tmp;
            }
            dst.at<uchar>(y, x) = (uchar)(sum + 0.5f);
        }
    }
    // second pass: filter the transposed image - left edge
    for(int y = 0; y < temp.rows; y++){
        for(int x = 0; x <= 2; x++){
            sum = 0.0f;
            for(int i = -3; i <= 3; i++){
                x1 = reflect101(temp.cols, x + i);
                sum += coeffs[i + 3] * temp.at<uchar>(y, x1);
            }
            dst.at<uchar>(y, x) = (uchar)(sum + 0.5f);
        }
    }
    // second pass: filter the transposed image - right edge
    for(int y = 0; y < temp.rows; y++){
        for(int x = (temp.cols - 3); x < temp.cols; x++){
            sum = 0.0f;
            for(int i = -3; i <= 3; i++){
                x1 = reflect101(temp.cols, x + i);
                sum += coeffs[i + 3] * temp.at<uchar>(y, x1);
            }
            dst.at<uchar>(y, x) = (uchar)(sum + 0.5f);
        }
    }

    // transpose back to the original orientation
    transpose(dst, dst);

    return dst;
}

// BORDER_REFLECT_101-style index mirroring (OpenCV's default border mode)
int reflect101(int M, int x){
    if(x < 0){
        return -x;
    }
    if(x >= M){
        return 2*M - x - 2;
    }
    return x;
}
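
A possibly simpler route (assuming it is acceptable to keep linking against OpenCV) is to read the coefficients straight from cv::getGaussianKernel, which GaussianBlur itself uses to build its separable kernel; that removes my hard-coded coeffs[] as a source of mismatch:

#include <opencv2/imgproc/imgproc.hpp>
#include <cstdio>

int main(){
    // 7-tap Gaussian kernel with sigma = 2, in double precision
    cv::Mat k = cv::getGaussianKernel(7, 2.0, CV_64F);
    for(int i = 0; i < k.rows; i++)
        std::printf("%.9f\n", k.at<double>(i, 0));
    return 0;
}

Even with identical coefficients a small difference can remain, since as far as I know OpenCV may run the 8-bit path in fixed-point rather than float arithmetic.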
2013-06-25 11:33:33 -0600 asked a question Porting from Desktop C++ to Android NDK

My intention is to test some modifications to an OpenCV algorithm, in particular the ORB feature detector and descriptor. So I started developing on my desktop in C++ and copied some .cpp files from the OpenCV sources, specifically orb.cpp, fast.cpp, precomp.cpp, and fast_score.cpp. The first is the main file of the algorithm, the second is the feature detector used by ORB, and the third and fourth are needed because they are included by fast.cpp. Naturally I also copied the headers and method declarations from the OpenCV .hpp files, creating my own headers. I changed the names of the two classes involved (ORB and FAST -> myORB and myFAST) in all files, so that calls to my versions of the algorithm can be distinguished from OpenCV's, since OpenCV is still needed for all the imgproc and core functions, the Mat class, and everything else used inside the modified .cpp files.

So far so good: it works, and I am able to use my copied version of ORB and eventually apply changes to the algorithm.

Problems arise when I switch to the Android NDK. After setting up a project with the NDK, I create a JNI method in which I implement the code that uses the ORB algorithm, then import my .cpp and .hpp files from above, set up the .mk file and the other settings needed to compile my files, and of course configure the project to use OpenCV4Android.

The problem is the following: my algorithm works, the code is compiled/built and launched by Eclipse ADT, and I'm able to call the myORB class. However, when I open my version of the files, for instance (my)orb.cpp, in the ADT editor, problems arise: the code shows dozens of errors about (OpenCV) methods that do not exist ("could not be resolved"), (OpenCV) methods called with invalid arguments, and so on. Once I've opened the file and it shows the errors, it becomes impossible to build the project in ADT again; but if I delete the .cpp file and copy it back into the project, it compiles again without problems, until I open it again...

2013-06-18 16:34:40 -0600 commented question match() & knnMatch duplicates results

No, I do NOT use crossCheck. I manually apply the ratio test to discard results whose first and second matches have similar "distance" values. The threshold is: firstmatch.distance / secondmatch.distance < 0.75. In that case I save the first match in my "good matches". However, this is completely independent, because I simply delete some matches; even among the single (best) matches there are duplicate query indexes and duplicate train indexes.

2013-06-18 04:44:28 -0600 received badge  Editor (source)
2013-06-18 04:43:29 -0600 asked a question match() & knnMatch duplicates results

Hello, I have a problem in my Android (NDK) implementation of feature detector + descriptor + matcher.

I'm using knnMatch with k = 2 to match descriptors extracted from the camera frame against descriptors extracted from a fixed image.

Then I filter outliers with the ratio test, checking for each result the ratio between the first match's "distance" property and the second match's "distance" property.

If the ratio is smaller than a threshold, I save the first match in a new vector containing only "good matches". I also save the trainIdx and queryIdx corresponding to each "good match" in two vectors.

So now I expect that one of the two vectors, trainIdx or queryIdx, contains no duplicates. I think this because I expect the knnMatch() and match() functions to cycle through all the descriptors of the train (OR query) descriptor vector and find the best (or best 2) matches in the other descriptor vector. This way there could be duplicates among the query indexes, because 2 train descriptors CAN match the same query descriptor, but there should be no duplicates among the train indexes! Even if the matches are computed the opposite way (the functions cycle through all the descriptors of the query descriptor vector and find the best (or best 2) matches in the train descriptor vector), one of the two index vectors must still be free of duplicates.

So am I doing something wrong, or do the matching functions work differently from what I think?

Notes: I'm using the ORB keypoint detector and descriptor extractor, and BFMatcher.
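
To make the setup concrete, here is a minimal sketch of the flow described above (variable names are illustrative; it assumes binary ORB descriptors and the usual OpenCV headers):

// descriptors_scene = query set, descriptors_object = train set
BFMatcher matcher(NORM_HAMMING); //Hamming distance for binary ORB descriptors
std::vector< std::vector<DMatch> > knn;
matcher.knnMatch(descriptors_scene, descriptors_object, knn, 2);

std::vector<DMatch> good_matches;
for(size_t m = 0; m < knn.size(); m++){
    // knn holds ONE entry per query descriptor; if the matcher really cycles
    // through the query side, queryIdx should be unique in good_matches,
    // while the same trainIdx may legitimately appear several times
    if(knn[m].size() == 2 && knn[m][0].distance < 0.75f * knn[m][1].distance)
        good_matches.push_back(knn[m][0]);
}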

2013-06-13 08:43:08 -0600 received badge  Critic (source)
2013-05-30 10:18:29 -0600 commented question Performance issue cloning features algorithm sources into my project

@Guanta you're right! I was using Eclipse with the "debug" build configuration. Switching to "release" brought performance back to values comparable to OpenCV's!

2013-05-29 04:09:31 -0600 asked a question Performance issue cloning features algorithm sources into my project

Hello, I'm working on a project for which I want to modify the ORB algorithm. To do this, within my C++ project I copied the sources orb.cpp, fast.cpp, fast_score.cpp, fast_score.hpp, precomp.cpp, and precomp.hpp. The FAST source is needed because it is called by the ORB algorithm, and I would like to optimize it as well. The last four sources are needed because they are included by fast.cpp. I also created a new header by copying the class declarations (FAST and ORB) from features2d.hpp into a new header file.

Because I want to reuse OpenCV's data structures in my project, I added includes for features2d.hpp, core.hpp, and so on, since I need structures such as Mat, KeyPoint...

As a first trial, to test whether it works, I simply changed the names of all the classes in the copied sources so they can be instantiated separately.

The problem is the following: the cloned algorithm works, but its performance (computation time, FPS) is almost 50% worse than invoking the OpenCV algorithm. What could be the problem? I did not expect such a large gap!

At the moment I'm working and testing on a Mac laptop, but I will eventually move my project to Android.

2013-05-29 04:08:49 -0600 received badge  Supporter (source)
2013-05-13 05:57:19 -0600 received badge  Student (source)
2013-05-13 05:52:00 -0600 asked a question [Features2D] implementation differences of different Features Detectors/Extractors/Matchers

Hello, I'm testing image matching with feature detectors, extractors, and matchers. I've found a problem when I try to use ORB features. I'll try to explain.

If i use SURF in this way:

SurfFeatureDetector detector;
SurfDescriptorExtractor extractor;
BFMatcher matcher; //or Flann, not relevant
[...]//features detection and extraction here both for object and scene
std::vector< Mat > descriptors;
descriptors.push_back(descriptors_object);
matcher.clear();
matcher.add(descriptors);
matcher.train();
matcher.match( descriptors_scene, matches);

Then I print the DMatches this way:

//"match" iterates over all the matches
printf("-- Match queryIdx, trainIdx, distance: %d, %d, %f \n", match.queryIdx, match.trainIdx, match.distance );

The result is a list of matches ordered by trainIdx.

But when I use ORB this way:

OrbFeatureDetector detector;
OrbDescriptorExtractor extractor;
BFMatcher matcher;
[...]//features detection and extraction here both for object and scene
std::vector< Mat > descriptors;
descriptors.push_back(descriptors_object);
matcher.clear();
matcher.add(descriptors);
matcher.train();
matcher.match( descriptors_scene, matches);

and then print the results the same way, the matches are ordered by queryIdx!

The problem is not the order of the matches, but it seems that the ORB results are not correct: it is as if the matcher takes every feature from the scene image (query) and tries to find the most similar feature in the object image (train), opposite to the SURF case, where it seems the matcher takes every feature from the object image (train) and tries to find the most similar feature in the scene image (query).
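
As far as I can tell from the documentation, match() always iterates over the query descriptors: the result holds one DMatch per row of the query Mat, with trainIdx pointing at the closest train descriptor, whatever the matcher type. A minimal sketch of how that can be checked (same setup as above):

//one DMatch per query descriptor: queryIdx should run over the scene
//features in both the SURF and the ORB case, while trainIdx is free to repeat
std::vector<DMatch> matches;
matcher.match( descriptors_scene, matches ); //query against the added train set
for( size_t k = 0; k < matches.size(); k++ )
    printf("-- queryIdx %d -> trainIdx %d, distance: %f \n",
           matches[k].queryIdx, matches[k].trainIdx, (double)matches[k].distance);

If that holds, the ordering difference between SURF and ORB is probably just an implementation detail of the float (L2) versus binary (Hamming) code paths, not a change of match direction.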