Ask Your Question

MikeStrike's profile - activity

2017-03-17 20:01:38 -0600 received badge  Popular Question (source)
2013-11-07 16:07:06 -0600 received badge  Student (source)
2013-11-07 14:03:39 -0600 asked a question How to establish a database for object detection?

I want to set up object detection based on camera images. The idea is that I start with an empty database and subsequently add objects to it with the camera. So far I have a tracking algorithm (https://code.google.com/p/opencv-cookbook/source/browse/trunk/Chapter%2010/featuretracker.h?r=2) (modified, but basically the same).

What I am not sure about now is how to set up the database.

The idea is the following:

I detect SIFT feature points in the first frame (let's say 1000 feature points (FPs) in frame1). In the subsequent frame2 I carry over the features recognized in frame1 by means of the tracking algorithm (say 500). With these 500 I go to frame3 and carry over those FPs I can track from frame2 (say 400), and so on. This means that the number of feature points is constantly dropping over the frames.

In addition to the algorithm, I want to introduce two thresholds of, say, 100 FPs and 40 FPs. If I drop below 100 FPs (assume this happens in frame6), then I detect SIFT features in frame6 again and push the count back up to 1000. Unless the drop is so steep that in frame6 I fall below not only 100 FPs but also 40 FPs; then I consider this frame to show a new object.
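The two-threshold logic above can be sketched in plain code (the function and enum names are made up for illustration; 100 and 40 are the thresholds proposed above):

```cpp
#include <cstddef>

// Decide what to do after tracking a frame: keep going, re-detect SIFT
// features, or treat the frame as showing a new object.
enum class Action { Keep, Redetect, NewObject };

Action classifyFrame(std::size_t trackedPoints,
                     std::size_t redetectThreshold = 100,
                     std::size_t newObjectThreshold = 40) {
    if (trackedPoints < newObjectThreshold)
        return Action::NewObject;  // dropped below 40: assume a new object
    if (trackedPoints < redetectThreshold)
        return Action::Redetect;   // dropped below 100: refill to ~1000 FPs
    return Action::Keep;           // enough points survived, keep tracking
}
```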

Let's say I have to refresh my FPs every 6 frames, and after 7 refreshes I drop below 40 and want to classify the object. This means I have recognized 7000 FPs for object 1. But many of them are very similar or even identical. Therefore I thought about putting these FPs into an SVM to learn the object. Now the question is what kind of SVM is best for my purpose. I cannot do "negative training", i.e. I have no images showing the wrong object to train the classifier with. I could do that once I have detected a few objects.

Additionally I want to save my results to the hard disk, so that I can load them later and proceed with learning/matching other objects.
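Since SIFT descriptors are just rows of floats, a minimal sketch of saving/loading them with plain streams could look like this (cv::FileStorage would be the more idiomatic OpenCV route; the function names and file name here are made up):

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Persist a flat descriptor buffer (e.g. rows of 128 floats for SIFT)
// as length-prefixed binary data.
bool saveDescriptors(const std::string& path, const std::vector<float>& data) {
    std::ofstream out(path.c_str(), std::ios::binary);
    if (!out) return false;
    std::uint64_t n = data.size();
    out.write(reinterpret_cast<const char*>(&n), sizeof n);
    out.write(reinterpret_cast<const char*>(data.data()), n * sizeof(float));
    return bool(out);
}

// Load the buffer back; returns an empty vector if the file is unreadable.
std::vector<float> loadDescriptors(const std::string& path) {
    std::ifstream in(path.c_str(), std::ios::binary);
    std::uint64_t n = 0;
    in.read(reinterpret_cast<char*>(&n), sizeof n);
    std::vector<float> data(static_cast<std::size_t>(n));
    in.read(reinterpret_cast<char*>(data.data()), n * sizeof(float));
    return data;
}
```

The binary round trip is exact for floats, so descriptors can be matched again after reloading.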

I hope my problem is clear and somebody can give me inspiration or good advice on how to proceed.

Best Regards

2013-11-07 11:04:19 -0600 asked a question Feature tracking - calcOpticalFlowPyrLK??

Hello,

I use the following feature tracker: https://code.google.com/p/opencv-cookbook/source/browse/trunk/Chapter%2010/featuretracker.h?r=2

What I do not understand is how it handles the input/output point positions in the images. If there is no previous image, the algorithm stores the current image (gray) in gray_prev and then calls

 cv::calcOpticalFlowPyrLK(gray_prev, gray, // 2 consecutive images
                        points[0], // input point position in first image
                        points[1], // output point position in the second image
                        status,    // tracking success
                        err);      // tracking error

In the first iteration gray_prev and gray are the same, and therefore all points in points[0] can be copied to points[1] - fine. Then the algorithm discards some points from points[1] and swaps gray_prev with gray, and points[1] with points[0].

Which means that in the next iteration - right after calling the processing method - the keypoints in points[0] correspond to the previous frame (gray_prev). Now, if the number of points in points[0] is too low, new keypoints - which were found in the current image (gray) - are added:

                        points[0].insert(points[0].end(),features.begin(),features.end());
                        initial.insert(initial.end(),features.begin(),features.end());

This means that we mix keypoints from both images in points[0]?! How does this work?
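To make the question concrete, here is how I understand the bookkeeping, with plain ints standing in for cv::Point2f (the Tracker struct and its method names are made up for illustration, not the cookbook's own code):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Tracker {
    std::vector<int> points0; // point positions in the previous frame
    std::vector<int> points1; // tracked positions in the current frame

    // Keep only the points whose status flag is nonzero (tracking succeeded).
    void pruneByStatus(const std::vector<int>& status) {
        std::vector<int> kept;
        for (std::size_t i = 0; i < status.size(); ++i)
            if (status[i] != 0)
                kept.push_back(points1[i]);
        points1 = kept;
    }

    // Append freshly detected features to points0, like the insert() calls above.
    void addNewFeatures(const std::vector<int>& features) {
        points0.insert(points0.end(), features.begin(), features.end());
    }

    // End of an iteration: the current frame's points become the previous ones.
    void swapPoints() { std::swap(points0, points1); }
};
```

After swapPoints(), points0 holds positions located in the frame that has just become gray_prev, while new detections come from the next frame; the two sets are at most one frame apart, which is presumably why Lucas-Kanade still copes with the mix - but that is exactly what I would like confirmed.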

2013-10-22 13:46:37 -0600 commented question errormessage: undefined reference to '...' ???

Thank you very much, that was the reason!

2013-10-21 14:51:09 -0600 asked a question errormessage: undefined reference to '...' ???

Hi,

I get the error message mentioned above when I try to compile the following (extract from my VideoProcessor.cpp file):

...

    void VideoProcessor::run() {

        // current frame
        cv::Mat frame;
        // output frame
        cv::Mat output;

        // if no capture device has been set
        if (!isOpened())
            return;

        stop = false;

        while (!isStopped()) {

            // read next frame if any
            if (!readNextFrame(frame))
                break;

            // display input frame
            if (windowNameInput.length() != 0)
                cv::imshow(windowNameInput, frame);

            // calling the process function or method
            if (callIt) {

                // process the frame
                if (process)
                    process(frame, output);
                else if (frameProcessor)
                    frameProcessor->process(frame, output);
                // increment frame number
                fnumber++;

            } else {

                output = frame;
            }

            // write output sequence
            if (outputFile.length() != 0)
                writeNextFrame(output);

            // display output frame
            if (windowNameOutput.length() != 0)
                cv::imshow(windowNameOutput, output);

            // introduce a delay
            if (delay >= 0 && cv::waitKey(delay) >= 0)
                stopIt();

            // check if we should stop
            if (frameToStop >= 0 && getFrameNumber() == frameToStop)
                stopIt();
        }
    }

...

The two error messages say:

  1. undefined reference to 'VideoProcessor::readNextFrame(cv::Mat&)'
  2. undefined reference to 'VideoProcessor::writeNextFrame(cv::Mat&)'

while the corresponding header file looks like this:

...

private:

bool readNextFrame(cv::Mat& frame);

void writeNextFrame(cv::Mat& frame);

public:

...

void run();

...

I included the header file, and all other methods in this class work fine.

Thanks

2013-10-07 16:35:14 -0600 asked a question BoundingBoxes with CannyEdgeDetector

Hi,

I tried to modify the code presented in the tutorial you can find at the link below.

http://docs.opencv.org/trunk/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html

The goal is to find the edges (or contours) of an image and then build bounding boxes around the objects surrounded by a contour line. I found that the contour lines of the gray image often do not give me the desired result, and I think I could get a better result by finding the contour lines of a Canny-filtered image.

I tried the following:

    #include "opencv2/core/core.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/imgproc/imgproc.hpp"
    #include <stdio.h>
    #include <iostream>

    using namespace cv;
    using namespace std;

    /// Global variables
    Mat src, src_gray;
    Mat dst, detected_edges;

    int edgeThresh = 1;
    int lowThreshold;
    int const max_lowThreshold = 100;
    int ratio = 3;
    int kernel_size = 3;
    char* window_name = "Edge Map";

    /**
     * @function CannyThreshold
     * @brief Trackbar callback - Canny thresholds input with a ratio 1:3
     */
    void CannyThreshold(int, void*)
    {
      /// Reduce noise with a 3x3 kernel
      blur( src_gray, detected_edges, Size(3,3) );

      /// Canny detector
      Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );

      /// Using Canny's output as a mask, we display our result
      dst = Scalar::all(0);

      imshow( window_name, dst );
    }

    /** @function main */
    int main( )
    {
      /// Load an image
      src = imread("depth1.png");
      Mat tmp;
      src.convertTo(tmp, CV_8UC1);
      tmp.copyTo(src);

      if( !src.data )
      { return -1; }

      /// Create a matrix of the same type and size as src (for dst)
      dst.create( src.size(), src.type() );

      /// Convert the image to grayscale
      cvtColor( src, src_gray, CV_BGR2GRAY );

      /// Create a window
      namedWindow( window_name, CV_WINDOW_AUTOSIZE );

      /// Create a Trackbar for user to enter threshold
      createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );

      /// Show the image
      CannyThreshold(0, 0);

      vector<vector<Point> > contours;

      dst.convertTo(tmp, CV_8UC1);
      tmp.copyTo(dst);

      findContours(checkitout,contours,0,1); // Here the error occurs
      drawContours(dst,contours,-1,Scalar(192,0,0),2,3);

      int numContours = contours.size();
      vector<vector<Point> > contours_poly( numContours );
      vector<Rect> boundRect( numContours );

      vector<Mat> subregions;
      namedWindow("Rects");

      Mat gray_copy;

      dst.create( src.size(), src.type() );

      for( int i = 0; i < numContours; i++ ){
        // Approximate the polygonal curve with the specified precision;
        // if the last argument is true, the curve is closed.
        approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
        boundRect[i] = boundingRect( Mat(contours_poly[i]) );
        Mat mask = Mat::zeros(src.size(), CV_8UC1);
        drawContours(mask, contours, -1, Scalar(255), -1);

        Mat contour_roi;
        Mat img_roi;
        Mat BGR_sample;

        src.copyTo(BGR_sample, mask);

        contour_roi = BGR_sample(boundRect[i]);
        subregions.push_back(contour_roi);
      }

      // Loop just to have a look at the results
      vector<Mat>::iterator it;
      int count = 0;
      for (it = subregions.begin(); it != subregions.end(); it++)
      {
        count += 1;
        char buf[10];
        sprintf(buf, "%d", count);
        namedWindow(buf);
        imshow(buf, *it);
      }

      /// Wait until the user exits the program by pressing a key
      waitKey(0);
    }

But when I try to run the code, an error occurs at the findContours(checkitout,contours,0,1) call (marked above), telling me "OpenCV Error: Unsupported format or combination of formats ([Start]FindContours support only 8uC1 and 32sC1 images), file ... (more)

2013-10-02 08:38:43 -0600 commented question Run Asus Xtion with HighGUI - HOWTO?!

thank you very much! It works now :).

2013-10-01 08:08:04 -0600 commented question Run Asus Xtion with HighGUI - HOWTO?!

Thanks for the quick reply! getBuildInformation() gave me the following:

    Video I/O:
      OpenNI: NO
      OpenNI PrimeSensor Modules: NO

Sorry for the stupid question, but does that mean it is not supported or just not activated? And when I reinstall OpenCV, do I have to uninstall the current version first, or can I just rebuild it using cmake?

2013-10-01 07:32:01 -0600 asked a question Run Asus Xtion with HighGUI - HOWTO?!

Hi,

I am trying to make the Asus Xtion Pro Live work with my computer. I am using Ubuntu 12.04 with OpenCV 2.4.6.1. I have been trying for quite some time now, but I cannot make it work.

I downloaded the tools from http://www.asus.com/Multimedia/Xtion_PRO_LIVE/#support_Download_5, namely the PrimeSense Software Package 20.4.2.20:

 1. OpenNI Framework (version 1.5.2.23)
 2. Sensor DDK (version 5.1.0.41)
 3. NITE (version 1.5.2.21)
 4. USB driver (version 3.1.3.1)

Then I followed the installation instructions and tried to run the Xtion in NiViewer, which works fine. After that I wanted to run the example code from http://docs.opencv.org/doc/user_guide/ug_highgui.html.

But when I try to run the code, I get the error message "Can not open a capture object", which is caused by:

    if( isVideoReading )
        capture.open( filename );
    else
        capture.open( CV_CAP_OPENNI );

    cout << "done." << endl;

    if( !capture.isOpened() )
    {
        cout << "Can not open a capture object." << endl;
        return -1;
    }

This is what is mentioned in point 2 on http://docs.opencv.org/doc/user_guide/ug_highgui.html. How can I change the CMake variables in Code::Blocks? I have no CMakeLists.txt, as I just created an empty project and added a .cpp file.
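In case it helps: as far as I understand, the CMake variables belong to the OpenCV source build, not to the Code::Blocks project, so the usual route (assuming a from-source build; the path below is made up) would be to reconfigure and rebuild OpenCV with OpenNI enabled:

```shell
# From the build directory of the OpenCV source tree (hypothetical path)
cd ~/opencv-2.4.6.1/build
cmake -D WITH_OPENNI=ON ..
make
sudo make install
```

Afterwards, getBuildInformation() should report OpenNI as YES, and capture.open(CV_CAP_OPENNI) should succeed.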

Thanks!