
Tomas's profile - activity

2020-07-04 04:18:16 -0600 received badge  Famous Question (source)
2019-02-04 21:51:00 -0600 received badge  Good Question (source)
2018-10-08 11:40:28 -0600 received badge  Notable Question (source)
2018-03-12 00:32:54 -0600 received badge  Popular Question (source)
2018-01-17 22:38:35 -0600 received badge  Popular Question (source)
2015-11-21 22:16:51 -0600 received badge  Nice Question (source)
2014-01-10 05:58:24 -0600 received badge  Student (source)
2014-01-08 18:53:29 -0600 asked a question Background color similar to object color - How isolate it?

I would like to isolate an object (in my case, a tuna) from the background. The problem is that they are very similar in color. Here is an example image:

(example image attached)

By "isolate" I mean creating a contour around the tuna, changing its color, or anything else that separates the object from the background, because afterwards I would like to do object detection based on shape.

Is there any processing, transformation or technique I can apply to my image to do that? Is it possible?

If not, what object detection technique should I use? P.S.: I think I cannot use background subtraction because my camera moves a little.

I'm very new to this field, so I would be glad if someone could help me :)
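To make the question more concrete, this sketch shows the kind of thing I mean by "isolate" (edge detection plus contours); the file name, blur size and Canny thresholds are all made up for the example and would need tuning:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

int main()
{
    // Hypothetical input file, just for illustration
    cv::Mat img = cv::imread("tuna.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (img.empty()) return -1;

    // Smooth a little, then look for edges; both thresholds are guesses
    cv::Mat blurred, edges;
    cv::GaussianBlur(img, blurred, cv::Size(5, 5), 0);
    cv::Canny(blurred, edges, 50, 150);

    // Extract the outer contours and draw them over the image,
    // hoping one of them outlines the tuna
    std::vector< std::vector<cv::Point> > contours;
    cv::findContours(edges, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    cv::Mat result;
    cv::cvtColor(img, result, CV_GRAY2BGR);
    cv::drawContours(result, contours, -1, cv::Scalar(0, 0, 255), 2);

    cv::imshow("contours", result);
    cv::waitKey(0);
    return 0;
}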

Thank you!!

2014-01-08 14:44:39 -0600 asked a question traincascade strange crash when creating classifier

I'm trying to create a cascade classifier with traincascade.

I run this command:

opencv_traincascade -vec vett.vec -data trained -bg NEGATIVE\neg.txt -numPos 5 -numNeg 30 -w 265 -h 182

where vett.vec is the vec file previously created from the positive samples with opencv_createsamples (using the same -w and -h parameters), and trained is an empty directory.

After about 2 seconds the program crashes, with no error report or anything else. I don't know why; maybe it's a bug, or maybe something in my setup is wrong.
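For comparison, from what I have read the training window is normally kept much smaller than the source images, since memory use grows enormously with the window size, and the samples would have to be recreated with opencv_createsamples at that smaller size. What I mean is something along these lines, with my file names kept and the window size being only an example:

opencv_traincascade -vec vett.vec -data trained -bg NEGATIVE\neg.txt -numPos 5 -numNeg 30 -w 24 -h 16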

Thank you

2014-01-08 14:15:29 -0600 asked a question Problem with face detection(haar features) example

Hi all, I'm trying to run the face detection example you can find here: http://docs.opencv.org/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html in order to test the cascade classifier.

But when it performs a detection the program crashes; here is the error:

(error screenshot attached)

The program crashes only when it performs a successful detection, and only after the return statement of the detectAndDisplay function. I tried changing the frame source to a video file, but nothing changed.

I hope someone can help me! Thanks in advance.

2014-01-08 14:06:23 -0600 commented answer Problems with CascadeClassifier detection. False positives

well it's a good idea! thank you

2014-01-08 09:32:01 -0600 asked a question Problems with CascadeClassifier detection. False positives

Hi, I'm running some tests with traincascade (to detect tuna). I create my positive samples with the opencv_createsamples tool, then I create my own cascade.xml with opencv_traincascade. I'm doing very simple tests, so I use only 5 positive images and 1 negative. My positive samples are 530x364 (some of them contain only the object to detect); when I run opencv_createsamples I use -w 26 -h 18, because with the original size opencv_traincascade needs too much memory for my PC. So my first question is: with those parameters, will the generated cascade work properly?

And here is my main problem: I'm trying to run detection on one of my positive samples, using the sample code from the Haar cascade face-detection example. Here's the result.

(result image attached)

Maybe to build a better classifier I should remove the background and keep only the tuna in the positive images?
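For completeness, the detection step I am running is essentially the one from the face-detection sample, along these lines (a sketch, with placeholder paths instead of my real ones; the minimum size matches the -w/-h I used for training):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <vector>

int main()
{
    // Placeholder paths, only to illustrate the call sequence
    cv::CascadeClassifier cascade;
    if (!cascade.load("trained/cascade.xml")) return -1;

    cv::Mat frame = cv::imread("positive_sample.jpg");
    if (frame.empty()) return -1;

    cv::Mat gray;
    cv::cvtColor(frame, gray, CV_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    // Minimum detection size set to the training window, other parameters at the sample's defaults
    std::vector<cv::Rect> tunas;
    cascade.detectMultiScale(gray, tunas, 1.1, 3, 0, cv::Size(26, 18));

    for (size_t i = 0; i < tunas.size(); i++)
        cv::rectangle(frame, tunas[i], cv::Scalar(0, 255, 0), 2);

    cv::imshow("detections", frame);
    cv::waitKey(0);
    return 0;
}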

After showing the image I get this error, but that is a separate problem:

(error screenshot attached)

I hope someone can help me! Thanks in advance!

2013-09-29 13:01:32 -0600 commented question How may I solve this?

Thanks a lot! :)

2013-09-24 13:18:12 -0600 commented question How may I solve this?

Thank you very much :) And... can you suggest some practical ways to use motion detection? I ask because I don't know precisely what you mean by motion detection in practice.

2013-09-20 04:21:52 -0600 asked a question How may I solve this?

Hi all, I have to perform object detection in this image:

(image attached)

I need to detect only the tunas that have passed the grid, i.e. only those on the right side of the image (where they are more horizontal). What do you think is the best way to do that?
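In case it helps to be concrete, the only idea I have so far is to restrict processing to the right part of the frame with a region of interest, roughly like this (a sketch; the file name and the split point are made up):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Hypothetical frame grabbed from the video, just for illustration
    cv::Mat frame = cv::imread("frame.jpg");
    if (frame.empty()) return -1;

    // Keep only the right half, i.e. the side past the grid
    cv::Rect rightSide(frame.cols / 2, 0, frame.cols - frame.cols / 2, frame.rows);
    cv::Mat roi = frame(rightSide);

    // Any detection would then run on "roi" instead of the whole frame
    cv::imshow("right side only", roi);
    cv::waitKey(0);
    return 0;
}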

Thank you all!

2013-09-20 03:54:07 -0600 commented answer Svm error unsupported format or combination of formats ..

ok no problem :)

2013-09-19 11:58:05 -0600 commented answer Svm error unsupported format or combination of formats ..

I thought it did... oops. I'm quite confused because there are a lot of techniques and I'm new to this field. OK, so if I have to do object detection I can use something like Haar cascades (which work well when trained on many images), or something like SIFT, SURF and the other techniques that extract keypoints from images?

2013-09-18 09:14:59 -0600 commented answer How improve object detection robustness (it gives me false positives)

I've modified the code and posted the changes by editing the question. I hope it's right!

2013-09-18 08:29:19 -0600 commented answer Svm error unsupported format or combination of formats ..

Problem solved, thank you very much! But I have another question: how can I locate the object(s) found in the big image, i.e. get a rectangle that contains each object?

2013-09-18 08:27:03 -0600 received badge  Scholar (source)
2013-09-18 07:12:44 -0600 asked a question Svm error unsupported format or combination of formats ..

I'm trying to do object detection by training an SVM. The training part is OK: it reads the 2 images and creates a file with the learned information. But when the code executes the predict function, it crashes, and I don't understand the error message (posted below).

Do you have any idea why this happens?

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/ml/ml.hpp>

#include <io.h>       // access(), to check whether the model file already exists
#include <iostream>

using namespace cv;

int main()
{
    // First run: "learned.svm" does not exist yet, so train the SVM and save it
    if(access("learned.svm",0)){
        // Set up training data
        int width = 1920, height = 1080;

        int num_files = 2;
        std::string files[2] = {"tonno1.jpg",
                                "tonno2.jpg"};

        int img_area = width*height;
        // One row per training image, one column per pixel
        Mat training_mat(num_files, img_area, CV_32FC1);

        for(int z = 0; z < num_files; z++){
            Mat img_mat = imread(files[z], CV_32FC1);
            int ii = 0; // Current column in training_mat
            for (int i = 0; i < img_mat.rows; i++) {
                for (int j = 0; j < img_mat.cols; j++) {
                    training_mat.at<float>(z, ii++) = img_mat.at<uchar>(i, j);
                }
            }
        }

        // One label per training image
        Mat labels(num_files, 1, CV_32FC1);
        labels.at<float>(0,0) = 1.0;
        labels.at<float>(1,0) = 0.0;

        // Set up SVM's parameters
        CvSVMParams params;
        params.svm_type    = CvSVM::C_SVC;
        params.kernel_type = CvSVM::POLY;
        params.gamma = 3;
        params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
        params.degree = 10;

        // Train the SVM and save the learned model to disk
        CvSVM svm;
        svm.train(training_mat, labels, Mat(), Mat(), params);
        svm.save("learned.svm");

        return 1;
    }

    // Second run: load the saved model and predict on a test image
    CvSVM svm;
    svm.load("learned.svm");

    Mat img;
    img = imread("Sequenza 01.Immagine001.jpg", CV_32FC1);

    float f = svm.predict(img);   // this is where it crashes
    std::cout << f << std::endl;

    waitKey(0);
    return 0;
}

Here is the image:

(image attached)
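For what it's worth, my current guess is that predict wants a single 1 x (width*height) CV_32FC1 row laid out like the training rows, rather than the raw image. The fragment below is that guess written out; it continues from the loaded svm above and is not what my code currently does:

// Assumption: flatten the test image into one CV_32FC1 row,
// built exactly like a training row, before calling predict
Mat test_img = imread("Sequenza 01.Immagine001.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat test_row(1, test_img.rows * test_img.cols, CV_32FC1);
int col = 0;
for (int i = 0; i < test_img.rows; i++)
    for (int j = 0; j < test_img.cols; j++)
        test_row.at<float>(0, col++) = test_img.at<uchar>(i, j);

float response = svm.predict(test_row);
std::cout << response << std::endl;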

2013-09-17 15:17:06 -0600 commented answer How improve object detection robustness (it gives me false positives)

thanks! tomorrow i'll try.

2013-09-17 14:40:41 -0600 asked a question How improve object detection robustness (it gives me false positives)

Hi all, I have to improve the robustness of my object detection (I need very reliable detection; time constraints are not a problem), because as you can see in the image, it produces false positives and gives wrong results. Do you have any idea how I can increase the robustness? I use the brute-force matcher because I thought it would find the best matches, but it doesn't.

Here is the code:

#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/legacy/legacy.hpp"
#include "opencv2/legacy/compat.hpp"
#include "opencv2/flann/flann.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include "opencv2/nonfree/nonfree.hpp"

using namespace cv;

void readme();

/** @function main */
int main( int argc, char** argv )
{
  if( argc != 3 )
  { readme(); return -1; }

  Mat img_object = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
  Mat img_scene = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );

  if( !img_object.data || !img_scene.data )
  { std::cout<< " --(!) Error reading images " << std::endl; return -1; }

  //-- Step 1: Detect the keypoints using SURF Detector
  int minHessian = 400;//20000

  SurfFeatureDetector detector( minHessian );

  std::vector<KeyPoint> keypoints_object, keypoints_scene;

  detector.detect( img_object, keypoints_object );
  detector.detect( img_scene, keypoints_scene );

  //-- Step 2: Calculate descriptors (feature vectors)
  SurfDescriptorExtractor extractor;

  Mat descriptors_object, descriptors_scene;

  extractor.compute( img_object, keypoints_object, descriptors_object );
  extractor.compute( img_scene, keypoints_scene, descriptors_scene );

  //-- Step 3: Matching descriptor vectors using FLANN matcher
  //FlannBasedMatcher matcher;
  BFMatcher matcher(NORM_L2,true);

  std::vector< DMatch > matches;
  matcher.match( descriptors_object, descriptors_scene, matches );

  double max_dist = 0; double min_dist = 100;

  //-- Quick calculation of max and min distances between keypoints
  for( unsigned int i = 0; i < descriptors_object.rows; i++ )
  { 
      if(i==matches.size()) break;

      double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
  }

  printf("-- Max dist : %f \n", max_dist );
  printf("-- Min dist : %f \n", min_dist );

  //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
  std::vector< DMatch > good_matches;

  for( unsigned int i = 0; i < descriptors_object.rows; i++ )
  { 
      if(i==matches.size()) break;

      if( matches[i].distance < 3*min_dist )
     { good_matches.push_back( matches[i]); }
  }

  Mat img_matches;
  drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
               good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
               vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

  //-- Localize the object
  std::vector<Point2f> obj;
  std::vector<Point2f> scene;

  for( unsigned int i = 0; i < good_matches.size(); i++ )
  {
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
  }

  Mat H = findHomography( obj, scene, CV_RANSAC );

  //-- Get the corners from the image_1 ( the object to be "detected" )
  std::vector<Point2f> obj_corners(4);
  obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img_object.cols, 0 );
  obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
  std::vector<Point2f> scene_corners(4);

  perspectiveTransform( obj_corners, scene_corners, H);

  //-- Draw lines between the corners (the mapped object in the scene - image_2 )
  line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners ...
(more)
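Separately from the code above (which the forum cuts off), one change I have read about but not tried yet is replacing the single nearest-neighbour match with Lowe's ratio test via knnMatch. Here is a minimal sketch of that idea, reusing the descriptor matrices from above; the 0.75 threshold is just a commonly quoted value, not something from my code:

  // Sketch: keep a match only when it is clearly better than the second-best candidate.
  // Cross-checking must be off so that knnMatch can return two neighbours per descriptor.
  BFMatcher ratio_matcher( NORM_L2 );
  std::vector< std::vector<DMatch> > knn_matches;
  ratio_matcher.knnMatch( descriptors_object, descriptors_scene, knn_matches, 2 );

  std::vector<DMatch> ratio_good_matches;
  for( size_t i = 0; i < knn_matches.size(); i++ )
  {
    if( knn_matches[i].size() == 2 &&
        knn_matches[i][0].distance < 0.75f * knn_matches[i][1].distance )
    { ratio_good_matches.push_back( knn_matches[i][0] ); }
  }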
2013-07-21 12:01:25 -0600 received badge  Editor (source)
2013-07-02 14:21:04 -0600 asked a question Multiple object tracking

Hi all, I have to detect many objects passing through a small area of my video. I was thinking of using the SIFT feature detector (the best one, from what I've read), since I don't have to do it in real time. I know how I can detect the objects that are passing, but I don't know how to count them when they exit the area... can anyone help me? And I have another small problem: the objects and the background have very little color difference (like blue and azure); are there particular settings I can use to improve the detection?
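To make the counting part concrete, what I have in mind is something like the sketch below (purely illustrative: the rectangles are assumed to come from whatever detector I end up using, and the naive pairing by index is not a real tracker):

#include <opencv2/core/core.hpp>
#include <algorithm>
#include <vector>

// Count objects whose centre was inside the counting area in the previous
// frame and is outside it in the current frame (illustrative sketch only)
int countExits(const std::vector<cv::Rect>& prev, const std::vector<cv::Rect>& curr,
               const cv::Rect& area)
{
    int exits = 0;
    size_t n = std::min(prev.size(), curr.size());  // naive pairing by index
    for (size_t i = 0; i < n; i++)
    {
        cv::Point before(prev[i].x + prev[i].width / 2, prev[i].y + prev[i].height / 2);
        cv::Point after(curr[i].x + curr[i].width / 2, curr[i].y + curr[i].height / 2);
        if (area.contains(before) && !area.contains(after))
            exits++;
    }
    return exits;
}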