
Viatorus's profile - activity

2016-01-22 07:00:33 -0600 received badge  Nice Answer (source)
2016-01-22 06:59:49 -0600 received badge  Student (source)
2015-10-29 11:33:38 -0600 received badge  Teacher (source)
2014-12-07 04:44:55 -0600 received badge  Scholar (source)
2014-12-07 04:43:36 -0600 answered a question How do you use OpenCV TLD from Python?

Have a look here: https://github.com/Itseez/opencv_contrib

It should also be included in the final release.

2014-10-31 14:48:11 -0600 commented question How To Load Large Image About 10g

What format are these images in?

2014-02-10 02:07:57 -0600 commented question Image Stitching (Java API)

When you separate the matches into good and bad ones, also have a look at where they are located. In a horizontal panorama there cannot be a match between pixel (y, x) = (5, 20) and (70, 40). --> Point point1 = keyPoints1[matchesList.get(i).queryIdx].pt; Point point2 = keyPoints2[matchesList.get(i).trainIdx].pt; if (Math.abs(point1.y - point2.y) > 10) // can't be a good match. Also try to use grayscale images for the detector.
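A rough sketch of that filter with the OpenCV Java bindings (a minimal sketch only: the class, the method name and the threshold parameter are illustrative, and keyPoints1/keyPoints2/matches stand for whatever your detector and matcher returned):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.Point;
// In OpenCV 2.4.x DMatch and KeyPoint live in org.opencv.features2d;
// from 3.x on they moved to org.opencv.core.
import org.opencv.features2d.DMatch;
import org.opencv.features2d.KeyPoint;

public class MatchRowFilter {
    // Keeps only matches whose two keypoints sit on (roughly) the same row,
    // which is what a purely horizontal panorama implies.
    public static List<DMatch> filterByRow(MatOfKeyPoint keyPoints1, MatOfKeyPoint keyPoints2,
                                           MatOfDMatch matches, double maxRowOffset) {
        List<KeyPoint> kp1 = keyPoints1.toList();
        List<KeyPoint> kp2 = keyPoints2.toList();
        List<DMatch> good = new ArrayList<DMatch>();
        for (DMatch m : matches.toList()) {
            Point p1 = kp1.get(m.queryIdx).pt;
            Point p2 = kp2.get(m.trainIdx).pt;
            if (Math.abs(p1.y - p2.y) <= maxRowOffset) {
                good.add(m);
            }
        }
        return good;
    }
}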

2014-02-07 00:36:28 -0600 commented question Image Stitching (Java API)

For feature detection I (like the author of the tutorial) also use SURF, the same as for the extractor; the matcher is FLANN-based. If you have a look at your detected features, it should be clear what is happening: there are diagonal matches in a horizontal panorama. They are wrong, and you must sort them out.

2014-02-06 02:28:23 -0600 commented question Image Stitching (Java API)

The constructor would be: Mat half = new Mat(result, new Rect(0,0,image2.cols(),image2.rows())); Also have a look at Features2d.drawMatches to see if your matches are okay.
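As a hedged fragment only (result and image2 are the names from this thread; image1, the keypoint Mats and the matches are assumed to come from your detector and matcher), the copy plus a visual check could look like this:

// "half" is a view into "result", so copying into it writes directly into the panorama.
Mat half = new Mat(result, new Rect(0, 0, image2.cols(), image2.rows()));
image2.copyTo(half);

// Draw the matches side by side to inspect them visually.
Mat matchImage = new Mat();
Features2d.drawMatches(image1, keyPoints1, image2, keyPoints2, matches, matchImage);
// Show or save matchImage with whatever image I/O you already use.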

2014-02-06 02:20:34 -0600 commented question Image stitching module for Java

Actually, there is no official binding of the stitching library's C++ interface, but you could create your own JNI mapping.

2014-02-06 02:11:12 -0600 answered a question How can you use K-Means clustering to posterize an image using c++?

You are looking for clustering. Have a look here: http://stackoverflow.com/a/10242156/1611317

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace cv;

int main( int argc, char** argv )
{
  Mat src = imread( argv[1], 1 );
  Mat samples(src.rows * src.cols, 3, CV_32F);
  for( int y = 0; y < src.rows; y++ )
    for( int x = 0; x < src.cols; x++ )
      for( int z = 0; z < 3; z++)
        samples.at<float>(y + x*src.rows, z) = src.at<Vec3b>(y,x)[z];


  int clusterCount = 15;
  Mat labels;
  int attempts = 5;
  Mat centers;
  kmeans(samples, clusterCount, labels, TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 10000, 0.0001), attempts, KMEANS_PP_CENTERS, centers );


  Mat new_image( src.size(), src.type() );
  for( int y = 0; y < src.rows; y++ )
    for( int x = 0; x < src.cols; x++ )
    { 
      int cluster_idx = labels.at<int>(y + x*src.rows,0);
      new_image.at<Vec3b>(y,x)[0] = centers.at<float>(cluster_idx, 0);
      new_image.at<Vec3b>(y,x)[1] = centers.at<float>(cluster_idx, 1);
      new_image.at<Vec3b>(y,x)[2] = centers.at<float>(cluster_idx, 2);
    }
  imshow( "clustered image", new_image );
  waitKey( 0 );
}
2014-01-28 07:38:41 -0600 asked a question Histogram alignment of two images

Hey Community,

I want to stitch two images together, and it works well with feature detection, findHomography and warpPerspective. The only thing missing is the color correction.

But I do not know how to change the histogram of an image. I know this page for comparing histograms, but once I have the numeric results, how do I align them?

Thanks in advance!

2014-01-10 03:46:02 -0600 commented answer Binaries for MinGW on Windows 7

I got it to work, but the problem is that CMake only generates .dll.a files, whereas I want static libraries. :/

2014-01-09 08:29:03 -0600 asked a question Binaries for MinGW on Windows 7

Hey Community,

I want to use OpenCV in C++, but the problem is that I can't find the MinGW libraries under build/x86/. There are only the vcXX libraries. Where can I get them, or do I have to compile them myself? That would be terrible, because I don't have admin rights and CMake needs them. >.<

Thanks in advance!

Got it:

I had not seen that there is also a zipped CMake available for download, which does not need admin permissions.

Thank you anyway.

2013-12-20 05:26:36 -0600 asked a question Non linear blending - How to?

Hey Community,

I want to blend two images non-linearly. addWeighted can only blend two images linearly. I am looking for something like this:

non linear blending

(http://graphics.cs.cmu.edu/courses/15-463/2008_fall/Lectures/blending.pdf)

It would be best if I could use my own mask for that. I thought about using an alpha mask (explained here), but I don't know how to get back to RGB and blend again.

Does anyone have an idea? Thanks in advance!

2013-12-17 09:00:23 -0600 received badge  Supporter (source)
2013-12-17 08:49:00 -0600 commented answer KeyPoints&DMatch - What does distance stands for?

Ah... am I correct that the distance does not give me the pixel position (row, column) offset, but rather the pixel-to-pixel difference? So if the query is COLOR(255,0,0) and the train is (254,0,0), the Hamming distance would be 1... For me, distance was a measure of length.
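For what it is worth, here is a tiny self-contained illustration of exactly that case (plain Java, no OpenCV needed; it only assumes binary descriptors, where the Hamming distance counts the differing bits):

public class HammingDemo {
    // Counts the bits in which two byte arrays (binary descriptors) differ.
    static int hamming(byte[] a, byte[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++) {
            d += Integer.bitCount((a[i] ^ b[i]) & 0xFF);
        }
        return d;
    }

    public static void main(String[] args) {
        byte[] query = { (byte) 255, 0, 0 }; // the "COLOR(255,0,0)" case from above
        byte[] train = { (byte) 254, 0, 0 }; // the "(254,0,0)" case
        System.out.println(hamming(query, train)); // prints 1
    }
}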

2013-12-17 06:35:15 -0600 received badge  Editor (source)
2013-12-17 06:34:48 -0600 asked a question KeyPoints&DMatch - What does distance stands for?

Hey Community,

I don't understand the parameter distance in the struct DMatch.

This code for keeping only the "good matches" comes up often:

// Find the smallest and largest descriptor distance over all matches.
double max_dist = 0;
double min_dist = 100;
for( int i = 0; i < descriptors.rows; i++ ){
  double dist = matches[i].distance;
  if( dist < min_dist ) min_dist = dist;
  if( dist > max_dist ) max_dist = dist;
}

// Keep only matches whose distance is close to the best one found.
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors.rows; i++ ){
  if( matches[i].distance <= 2*min_dist ) {
    good_matches.push_back( matches[i]);
  }
}

But distance? What is it? It is a float, less than 1, but I can't see the connection between this distance and the pixel-to-pixel distance.

Say a DMatch has its query at pixel (25, 50) [first image] and its train at (35, 50) [second image]. For me, the distance would be 10... but DMatch.distance is something like 0.065. Another comparison would be pixels (30, 65) and (40, 65): again 10 pixels, but DMatch.distance would be 0.062?!?

Can anyone help me, please?