
Franz Kaiser's profile - activity

2019-01-21 13:30:07 -0500 received badge  Popular Question (source)
2017-09-04 07:34:20 -0500 commented answer Feature Matching; findObject: Concept behind MultiDetection

Thanks again. At the moment I: Extract the features as KeyPoints of scene (2500) and object (110) compute the descript

2017-09-04 07:33:07 -0500 commented answer Feature Matching; findObject: Concept behind MultiDetection

Thanks again. At the moment I: Extract the features as KeyPoints of scene (2500) and object (110) compute the descript

2017-09-04 07:02:38 -0500 edited question Feature Matching; findObject: Concept behind MultiDetection

Feature Matching; findObject: Concept behind MultiDetection Hello everyone, for the feature-matching modul there is a g

2017-09-04 06:58:22 -0500 commented answer Feature Matching; findObject: Concept behind MultiDetection

thank you for your help, i tried to implement your approach (see code in the original question) - but either i have a mi

2017-09-04 06:51:37 -0500 edited question Feature Matching; findObject: Concept behind MultiDetection

Feature Matching; findObject: Concept behind MultiDetection Hello everyone, for the feature-matching modul there is a g

2017-09-04 06:50:42 -0500 edited question Feature Matching; findObject: Concept behind MultiDetection

Feature Matching; findObject: Concept behind MultiDetection Hello everyone, for the feature-matching modul there is a g

2017-09-04 06:49:19 -0500 edited question Feature Matching; findObject: Concept behind MultiDetection

Feature Matching; findObject: Concept behind MultiDetection Hello everyone, for the feature-matching modul there is a g

2017-09-04 06:41:38 -0500 edited question Feature Matching; findObject: Concept behind MultiDetection

Feature Matching; findObject: Concept behind MultiDetection Hello everyone, for the feature-matching modul there is a g

2017-09-04 04:23:38 -0500 asked a question Feature Matching; findObject: Concept behind MultiDetection

Feature Matching; findObject: Concept behind MultiDetection Hello everyone, for the feature-matching modul there is a g

2017-07-26 05:36:56 -0500 commented question how can i reshape a Mat 2D to vector<vector<Mat>>

Okay, I didn't use the reshape command before. Is the performance sufficient for you?

2017-07-25 12:03:00 -0500 commented question how can i reshape a Mat 2D to vector<vector<Mat>>

So you want a vector<vector<vector<uint8>>> containing 10 continuous elements? I don't think that is useful, but you could write a function for this purpose. Do you know how to access the pixels in a Mat?

2017-07-21 05:23:57 -0500 commented question UndistortPoints randomly returning huge values

Just a guess: are your data types overflowing?

2017-07-20 08:18:33 -0500 commented question Sorting by size

I think in this video someone is doing a similar detection for tracking. You can also take a look at the SimpleBlobDetector.

2017-07-18 08:05:25 -0500 commented answer CV_64F ConvertTo CV_8U Problem with cast

Thank you, that was exactly what I was looking for! For my understanding: the scaling factor 255 means the double was multiplied by 255 and cast afterwards, is that right?

Also thank you for your second comment: I didn't know the range operator. But for single-pixel access, .at<> is the right way?

2017-07-18 07:27:12 -0500 asked a question CV_64F ConvertTo CV_8U Problem with cast

Hi everyone, I am trying to convert a CV_64F Mat into a CV_8U Mat for thresholding (see code below). Unfortunately this doesn't work the way I want it to. If you uncomment the cout command you will see that the whole Mat contains ones. What am I doing wrong? Thank you for your help!

Mat testMat = Mat(10, 10, CV_64F);
double above = 0.9111;
double below = 0.6665;
double minDbl = 0.9;
// Fill Mat:
for (int rows=0; rows<testMat.rows; rows++)
{
    for (int cols=0; cols<testMat.cols; cols++)
    {
        if (rows < 5)
            testMat.at<double>(rows, cols) = above;
        else
            testMat.at<double>(rows, cols) = below;
    }
}
//cout << testMat;

Mat convertedMat;
testMat.convertTo(convertedMat, CV_8U);

//cout << convertedMat;

//What i actually want:
cv::threshold(convertedMat, convertedMat, minDbl, 1., CV_THRESH_TOZERO);
//cout << convertedMat;
2017-07-13 06:23:54 -0500 commented question Plugins for Visual Debugging of OpenCV applications in c++

Thank you @berak, this looks great. Unfortunately I was not able to test it yet -> the cvv module seems to be missing in my contrib installation.

2017-07-13 02:21:53 -0500 received badge  Critic (source)
2017-07-12 16:09:33 -0500 commented question Plugins for Visual Debugging of OpenCV applications in c++

It's more a question of comfort than necessity: both tools seem to provide a nearly automatically updating display of the Mat objects in use.

2017-07-12 16:06:47 -0500 commented question Detect a specific Voronoi like pattern

I "accidentally" read saw something about a simular problem which could help you here. They also have code on github. Unfortunately i can't help you more with this.

2017-07-12 15:49:07 -0500 asked a question Plugins for Visual Debugging of OpenCV applications in c++

Unfortunately my search for a useful debugging tool has not been successful so far. For VS there is the Image Watch plugin. Python applications can be supplemented with visual-logging. I was wondering if there are any adequate solutions with this functionality for other IDEs (Qt Creator). Thanks a lot.

2017-07-12 04:56:12 -0500 commented question Descriptors/Features of non-textured parts

Thank you @gfx for your help. The question was as you described in case (1). The box doesn't need to be in the image. Training a classifier for finding the same tool with nearly no perspective changes seems to be overkill, doesn't it? But thanks again for your help.

2017-07-12 04:49:07 -0500 commented question C++ opencv3 no ideas how to divide these thresholded image

Curvature. Perhaps getting the points where the curvature is over a threshold can help you? I have tried it (see picture) but there seems to be a problem left. Also, I don't know if it is repeatable for your specific problem.

2017-07-11 11:10:28 -0500 commented question C++ opencv3 no ideas how to divide these thresholded image

To get this right: the problem you have is to find the marked corners, and the structure we are seeing shows 4 objects (perhaps overlapping each other)? Regarding the line you have drawn: is it always nearly horizontal and the smallest distance to another corner?

2017-07-11 11:01:22 -0500 commented answer Rotate points by an angle

Seems to be a good solution for the rotation as well. My problem was phrased a bit unclearly: instead of fitting a Mat to the corresponding points, my output points should begin at the point (0,0) and be positive. I managed to do this by subtracting the bb.x and bb.y values from every single point. The question is answered - thank you again @LBerger, and sorry for the duplicate with SO.

2017-07-11 10:56:13 -0500 marked best answer chamfer Matching error in implementation

I found an implementation of chamfer matching here, which seems to have an error in this line:

Point& new_point = Point(model_column,model_row);

-> see berak's comment - thank you!

The program runs, but the results are not as I expected. I translated the image 7 pixels in each direction and still get (0,0) as a match, because the matching image is just 1x1 px.

I would divide the matching part into the following steps:

1. The model points from the Canny output are stored in a vector.
2. A matching space is created -> *if the model dimensions are subtracted, does this mean that the template has to fit on the image?*
3. For every template point the value of the distance transform is added to a matching score. This is where I especially don't understand the following line:

        matching_score += (float) *(chamfer_image.ptr<float>(model_points[point_count].y+search_row) +
        search_column + model_points[point_count].x*image_channels);

Thank you for your help!

Whole code:

cv::Mat imgTranslate(cv::Mat src, int col,  int dx, int dy)
{
    cv::Mat dst(src.size(), src.type(), cv::Scalar::all(col) );
    src(cv::Rect(dy,dx, src.cols-dy,src.rows-dx)).copyTo(dst(cv::Rect(0,0,src.cols-dy,src.rows-dx)));
    return dst;
}

void ChamferMatching( Mat& chamfer_image, Mat& model, Mat& matching_image )
{
    // Extract the model points (as they are sparse).
    vector<Point> model_points;
    int image_channels = model.channels();
    for (int model_row=0; (model_row < model.rows); model_row++)
    {
        uchar *curr_point = model.ptr<uchar>(model_row);
        for (int model_column=0; (model_column < model.cols); model_column++)
        {
            if (*curr_point > 0)
            {
                const Point& new_point = Point(model_column,model_row);
                model_points.push_back(new_point);
            }
            curr_point += image_channels;
        }
    }
    int num_model_points = model_points.size();
    image_channels = chamfer_image.channels();
    // Try the model in every possible position
    matching_image = Mat(chamfer_image.rows-model.rows+1, chamfer_image.cols-model.cols+1, CV_32FC1);
    for (int search_row=0; (search_row <= chamfer_image.rows-model.rows); search_row++)
    {
        float *output_point = matching_image.ptr<float>(search_row);
        for (int search_column=0; (search_column <= chamfer_image.cols-model.cols); search_column++)
        {
            float matching_score = 0.0;
            for (int point_count=0; (point_count < num_model_points); point_count++)
            {
                matching_score += (float) *(chamfer_image.ptr<float>(model_points[point_count].y+search_row) +
                    search_column + model_points[point_count].x*image_channels);
            }
            *output_point = matching_score;
            output_point++;
        }
    }
}

int main()
{
    Mat templateImage = imread(img1, IMREAD_GRAYSCALE);
    Mat queryImage = imgTranslate(templateImage, 255, 7, 7);

    Mat edge_image, chamfer_image, model_edge;
    Canny( queryImage, edge_image, 100, 200, 3);
    threshold( edge_image, edge_image, 127, 255, THRESH_BINARY_INV );
    distanceTransform( edge_image, chamfer_image, CV_DIST_L2, 3);

    Canny( templateImage, model_edge, 100, 200, 3);

    Mat resultImage;
    ChamferMatching(chamfer_image, model_edge, resultImage);

    double min, max;
    cv::Point min_loc, max_loc;
    cv::minMaxLoc(resultImage, &min, &max, &min_loc, &max_loc);

    cout << min_loc << endl;
    return 0;
}
2017-07-11 10:54:21 -0500 received badge  Scholar (source)
2017-07-11 09:33:40 -0500 commented question Rotate points by an angle

Thank you LBerger, I thought about asking the question directly on Stack Overflow, because the solution was from there, but you were faster with your comment than I was with deleting the question. If the duplication is troublesome, I can remove it here.

Regarding your comment: with Mat(p[i]) you mean the input point, don't you?

2017-07-11 09:14:30 -0500 commented question How can I detect the color of a certain portion of an image if I know the coordinates of this portion as a rectangle?

To get the color of an image at a specific position you can use Vec3b color = image.at<Vec3b>(Point(x,y)); What do you need exactly? How many different colors are in your images?

2017-07-11 08:54:18 -0500 asked a question Rotate points by an angle

Hello, I am trying to rotate a set of points in a vector<Point> by a user-defined angle and found a solution on SO. In the following code the dimension of the output image (rotated by 45 degrees) is correct, but the positions of the points seem to be shifted. Can someone give me a tip what the problem is?

cv::Point rotate2d(const cv::Point& inPoint, const double& angRad)
{
    cv::Point outPoint;
    //CW rotation
    outPoint.x = std::cos(angRad)*inPoint.x - std::sin(angRad)*inPoint.y;
    outPoint.y = std::sin(angRad)*inPoint.x + std::cos(angRad)*inPoint.y;
    return outPoint;
}

cv::Point rotatePoint(const cv::Point& inPoint, const cv::Point& center, const double& angRad)
{
    return rotate2d(inPoint - center, angRad) + center;
}


int main( int, char** argv )
{
    // Create a dark image with a gray line in the middle
    Mat img = Mat(83, 500, CV_8U);
    img = Scalar(0);
    vector<Point> pointsModel;

    for ( int i = 0; i<500; i++)
    {
        pointsModel.push_back(Point(i , 41));
    }

    for ( int i=0; i<pointsModel.size(); i++)
    {
        circle(img, pointsModel[i], 1, Scalar(120,120,120), 1, LINE_8, 0);
    }
    imshow("Points", img);

    // Rotate Points
    vector<Point> rotatedPoints;
    Point tmpPoint;
    cv::Point pt( img.cols/2.0, img.rows/2.0 );
    for ( int i=0; i<pointsModel.size(); i++)
    {
        tmpPoint = rotatePoint(pointsModel[i] , pt , 0.7854);
        rotatedPoints.push_back(tmpPoint);
    }
    Rect bb = boundingRect(rotatedPoints);
    cout << bb;
    Mat rotatedImg = Mat(bb.height, bb.width, img.type());
    rotatedImg = Scalar(0);

    for (int i=0; i<rotatedPoints.size(); i++ )
    {
        circle(rotatedImg, rotatedPoints[i], 1, Scalar(120,120,120), 1, LINE_8, 0);
    }
    imshow("Points Rotated", rotatedImg);
    waitKey();

    return 0;
}
2017-06-28 09:19:13 -0500 asked a question Fast Matching Pyramids Rotationinvariance

An approach to fast matching seems to be downsizing the template and query image into pyramids. What would be a good strategy to make this method rotation-invariant in a situation where more than one instance of the (non-scaled) template can arise? I am fairly sure that the first step has to be to rotate the

templateImage on the smallest pyramid level with big steps of the rotation angle

But I do not know how to do a fit where in each step of the following levels the estimation of the rotation angle gets more precise. In this SO question the last comment refers to the unclear part as "rotate your template roughly on the low resolution levels, and when you trace the template back down to the higher resolution levels".

Can someone give me an idea - perhaps in the form of pseudocode - of how such an algorithm could work?
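One possible coarse-to-fine scheme, sketched as pseudocode (an assumption about how such a pyramid search could be organized, not a tested implementation; the angle steps and thresholds are placeholders):

```text
build image pyramids for query Q and template T
    (level 0 = full resolution, level L = smallest)

# coarse level: brute-force over the angle
candidates = empty list
for angle = 0 to 360 step coarseStep (e.g. 30 degrees):
    R = rotate(T[L], angle)
    score map = match R against Q[L] (e.g. normalized cross-correlation)
    keep every local maximum above a loose threshold
        as a candidate (x, y, angle, score)

# finer levels: refine each candidate locally
step = coarseStep
for level = L-1 down to 0:
    step = step / 2                        # halve the angle step per level
    for each candidate (x, y, angle):
        x = 2*x; y = 2*y                   # positions double per level
        for a in {angle - step, angle, angle + step}:
            R = rotate(T[level], a)
            search a small window around (x, y) with R
        keep the best (x, y, a, score) as the new candidate
    drop candidates below a (now stricter) threshold

return the surviving candidates            # several instances possible
```

Keeping several candidates per coarse angle, instead of only the single best, is what allows multiple instances of the template to survive down to full resolution.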