petititi's profile - activity

2015-03-13 08:18:55 -0600 received badge  Scholar (source)
2015-01-20 06:13:24 -0600 commented question [ATTENTION] Regarding topics and answers not appearing right away, but requesting moderation

OK, no problem then! I know you have done a lot for this QA forum, thanks!

2015-01-20 03:56:24 -0600 commented question [ATTENTION] Regarding topics and answers not appearing right away, but requesting moderation

Same as @kbarni: if our answers are stuck in the moderation queue, we lose motivation...

2015-01-20 03:50:46 -0600 answered a question How to specify modules with CMake?

You can also use ccmake (if you are a Linux user), which will help you select the features you want (or not)... There is also a nice graphical interface (cmake-gui), depending on the version you have installed...
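
For instance, a minimal command-line sketch (the module names here are only examples; every module follows the BUILD_opencv_<name> pattern):

 cmake -D BUILD_opencv_ml=OFF -D BUILD_opencv_gpu=OFF ..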

2014-12-02 13:43:22 -0600 answered a question Limiting the no. of keypoints detected by detector created by FeatureDetector

Hi!

In 2.4.9, you can still use the Algorithm::set function to change parameter values... In your case, you can raise the "thres" parameter to reduce the number of detected points:

 cv::Ptr<FeatureDetector> detect = FeatureDetector::create("BRISK");
 detect->set("thres", 50);//default value is 30

But in OpenCV 3, you will have to use the parameters of the BRISK class directly...
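
For reference, a minimal OpenCV 3 sketch (BRISK::create takes the threshold as its first argument):

 cv::Ptr<cv::BRISK> detect = cv::BRISK::create(50);//threshold, default is 30
 std::vector<cv::KeyPoint> keypoints;
 detect->detect(img, keypoints);//img is your input image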

2014-12-01 13:22:19 -0600 received badge  Great Question (source)
2014-12-01 03:59:34 -0600 received badge  Nice Answer (source)
2014-11-30 14:34:19 -0600 answered a question how blury the background apart from ROI

Hi!

This can be done with a masked addition (easy to code, but not the most efficient approach...):

 Mat imgSrc;//here is your input img
 vector<Rect> boundingBoxes;//and the bounding boxes

 Mat imgBlur, imgMask;
 blur( imgSrc, imgBlur, Size( 5, 5 ) );
 imgMask = Mat::zeros(imgSrc.size(), CV_8UC1);
 for(size_t i=0; i<boundingBoxes.size(); i++)
 {
   //zero out the ROI in the blurred image
   rectangle(imgBlur, boundingBoxes[i], Scalar(0), CV_FILLED);
   //and mark it in the mask
   rectangle(imgMask, boundingBoxes[i], Scalar(255), CV_FILLED);
 }

 Mat output = imgBlur.clone();
 add(imgBlur, imgSrc, output, imgMask);
2014-11-30 06:44:07 -0600 edited question Opencv3 and Algorithm (create, getList, etc.)

Hi, I just moved my project from 2.4.9 to 3.0.0 alpha and I noticed that most feature2d algorithms no longer use the CV_INIT_ALGORITHM macro, which allowed us to write:

 cv::Ptr<FeatureDetector> detect = FeatureDetector::create("SIFT");

My project relies heavily on this feature (and also on get/set parameter, listing algorithms and parameters...), so my question is: will this feature be included in OpenCV 3, or is Algorithm going to be a dead end?

2014-11-25 08:45:09 -0600 asked a question Vision challenge: tracking and Pedestrian Detection

Hi!

I'm a bit confused by the OpenCV Vision Challenge: on the main page, they give the list of categories covered by the challenge, and you can find the "tracking" category. In the PDF, you also have a link to the database, but there is nothing on the dataset page... Is this topic still covered by the challenge?

There is also a new category, named "Pedestrian Detection", but the footnote seems to state that this dataset is not used in the challenge?

Can someone (Dmitry Anisimov?) give us some details? Thanks in advance!

2014-11-23 12:46:44 -0600 commented question Videowriter and capturing - memory leak ?

Hi, please post your code here, from opening the webcam to writing the video. I don't have this problem on my side...

2014-11-23 12:43:29 -0600 answered a question How to get SURF to work in these scenarios?

Indeed, SURF is not affine-invariant. This means that non-uniform scaling (as well as perspective transformations) introduces too much distortion, and the SURF feature vectors end up very far apart.

Unfortunately, there is not yet a good algorithm which can deal with such transformations... So you will have to find another way to match your images :-(

For the bounding-box problem, you have to draw the rotated bounding box yourself, using the four corner points and lines...
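
A minimal sketch, assuming corners is a std::vector<cv::Point2f> already holding the four projected corners in order (e.g. from perspectiveTransform):

 for (int i = 0; i < 4; i++)
     cv::line(img, corners[i], corners[(i + 1) % 4], cv::Scalar(0, 255, 0), 2);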

2014-11-23 12:32:25 -0600 answered a question OpenCV ref count: function returning cv::Mat?

Hi!

This should not be a problem... Please add:

 CV_Assert(src.depth()==CV_64F);

That way, you are sure the matrix type is OK... You should also use direct pointer access to speed up the iterations a little:

 cv::Mat rowwise_maximum(const cv::Mat& src, int32_t width)
 {
    CV_Assert(src.depth()==CV_64F);
    cv::Mat dest = src.clone();
    const double* singleRowSrc;//src is const, so this pointer must be too
    double* singleRowDest;//beware: "double* a, b;" would declare b as a double, not a pointer!
    for(int row = 0; row < dest.rows; ++row)
    {
        singleRowSrc = src.ptr<double>(row);
        singleRowDest= dest.ptr<double>(row);
        for(int col = 0; col < dest.cols; ++col)
        {
            singleRowDest[col] = 0.5 * singleRowSrc[col];
        }
    }

    std::cout << "ok" << std::endl;

    return dest;
 }

max_foo.refcount is still 0 because the code probably crashes before the assignment (I think it happens while the stack is being unwound).

2014-11-23 12:10:19 -0600 answered a question Simple matrix operation not working : is it normal ?

Hi!

There is probably something wrong with your code, because this works for me... Can you post the code you use?

2014-11-22 09:38:16 -0600 answered a question A way to ensure Canny runs in serial?

Hi,

Indeed, you have to make sure you build OpenCV without TEGRA_OPTIMIZATION or IPP (these libraries have to be installed separately, so they are not enabled by default). Then, you have to disable OpenCL using cv::ocl::setUseOpenCL(false) to prevent OpenCV from using your graphics card.

After a brief look at the source code (2.4.9), it seems Canny (as well as Sobel) is not parallelized with TBB... It should then run on a single core. Please tell us if that's not the case!
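
For the OpenCL part, a minimal sketch (this is the 3.x API; in 2.4 the ocl module is only used when you call its functions explicitly):

 #include <opencv2/core/ocl.hpp>

 cv::ocl::setUseOpenCL(false);//disable the OpenCL code paths
 cv::Canny(gray, edges, 50, 150);//now runs on the CPU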

2014-11-22 09:23:39 -0600 answered a question How to extract data from edge detection of a video

Hi!

I think the best way to do this is to compare the mean edge response of the original video with that of the blurred one. You can do it like this:

VideoCapture srcVideo("original.avi"), testVideo("test.avi");//hypothetical file names
Mat imgSrc, imgTest;
Mat edgeSrc, edgeTest;
double totalSum = 0, nbFrame = 0;
while(srcVideo.read(imgSrc) && testVideo.read(imgTest))
{
   //use a float depth so negative gradients are not truncated
   Sobel(imgSrc, edgeSrc, CV_32F, 1, 0);
   Sobel(imgTest, edgeTest, CV_32F, 1, 0);
   totalSum += sum(abs(edgeSrc))[0] - sum(abs(edgeTest))[0];
   nbFrame += 1.;
}
totalSum /= nbFrame;

Then, using totalSum, you know whether the second video is blurred or not (if totalSum > 0, the test video has a weaker edge response than the original, so it is blurred...)

2014-11-20 16:41:46 -0600 commented question How to extract data from edge detection of a video

Hi! Please clarify your problem: if you want to detect whether someone blurred a video, I would do it differently than with Canny. So do you really need a binary edge map (Canny) comparison, or not?

2014-11-20 07:18:51 -0600 answered a question How is OpenCVConfig.cmake detectable by cmake on windows

This behavior depends on the operating system. I assume you are on Windows; the path is found thanks to cmake-gui: according to the documentation (section 5), find_package searches the build trees recently configured in cmake-gui. But you can deactivate this behavior with the NO_CMAKE_BUILDS_PATH option in your CMakeLists.txt file...
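
A minimal sketch of what this looks like (note that NO_CMAKE_BUILDS_PATH is ignored by recent CMake versions, which removed the build-tree registry):

 find_package(OpenCV REQUIRED NO_CMAKE_BUILDS_PATH)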

2014-11-20 06:57:15 -0600 commented answer Coordinate System Transform

A homography is like an affine transformation, but it also allows perspective deformations...

If you just want to correct the motion of the camera, you can follow the tutorial I posted before... Instead of matching points from an object image to the "live" image, you will match points from the first image of your sequence to the current image. Then, using the homography computed by findHomography, you will be able to correct the transformation with warpPerspective. Depending on the way you match your points, you may need to invert the transformation (or use the WARP_INVERSE_MAP flag).
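
A minimal sketch, assuming ptsFirst and ptsCurrent are the matched point vectors (findHomography maps its first argument onto its second):

 cv::Mat H = cv::findHomography(ptsCurrent, ptsFirst, CV_RANSAC);
 cv::Mat corrected;
 cv::warpPerspective(current, corrected, H, current.size());//current frame warped back onto the first one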

2014-11-19 18:35:24 -0600 answered a question Coordinate System Transform

Hi!

I am assuming that your object is flat, because otherwise you can't find a homography (rotation, scale, translation...) which puts the object back in the same position: the perspective changes the object's 2D projection (for example, one face of the object can become hidden)...

If your object is flat, this tutorial will probably help you! If not, please ask for details!

2014-11-19 18:23:22 -0600 answered a question OpenCV Function Not in Docs... Need Help!

Hi!

opencv_createsamples and opencv_traincascade are not OpenCV functions... They are executables you have to build (using the CMake option BUILD_opencv_apps).

The source code can be found in /apps/traincascade and /apps/haartraining of the OpenCV repository... Then you can follow this tutorial to understand how to use the programs!

Good luck ;)

2014-11-14 03:01:59 -0600 received badge  Nice Answer (source)
2014-11-11 13:46:59 -0600 received badge  Self-Learner (source)
2014-11-05 09:54:01 -0600 answered a question Opencv3 and Algorithm (create, getList, etc.)

A small bump, as this is not answered yet... Maybe someone knows how to list every feature2d algorithm the user has access to (at run time, of course), without Algorithm::getList()?

2014-11-05 09:35:49 -0600 answered a question how to pass command line argument to excute this code in visual studio 2013

I'm not sure I understand your question, but if you want to know how to pass command line arguments in Visual Studio 2013, watch this video!

2014-11-04 11:20:16 -0600 commented answer Guarantee continuous Mat Image

Please mark it as solved ;-)

2014-11-03 15:48:50 -0600 commented answer Guarantee continuous Mat Image

Hi,

your code is correct, you just made two small mistakes: you forgot the "u" in isContinuous, and the function clone() needs parentheses:

  if(!Frame.isContinuous()) Frame=Frame.clone();
2014-10-31 07:58:35 -0600 commented question how to calibrate camera (stereo)

I'm sorry, but I have the same problem as you (black rectified images...). How do you acquire your images? Are you sure the relative position of the two cameras doesn't change (it's very important, as you are working with a very small pattern)? Did you process your images afterwards (rescaling, rotation...)? As I don't have any clue, I edited your question with the information you gave; maybe someone with more experience will give it a try ;-)

2014-10-31 07:30:57 -0600 edited question how to calibrate camera (stereo)

Hi, I am trying to calibrate two cameras which are looking at a very small object through a microscope. The calibration target is a circle grid.

I have to mark these images appropriately in order to find the circles (images with markers)

Markers data (points only; the format is objectX,objectY,objectZ,img1X,img1Y,img2X,img2Y per line): if you want, you can load these points as the input for stereoCalibrate.

My program is very similar to the one from the OpenCV example. I tried both:

  • First: calibrating each camera individually before the stereo calibration (of course, the CV_CALIB_FIX_INTRINSIC flag is set when calling stereoCalibrate)
  • Second: using stereoCalibrate directly

I still have black images as rectification output.

From what I read, the calibration should be done by moving the calibration target around, so that it fulfills some criteria:

  1. size (by moving the target towards and away from the camera)
  2. area (by moving it along the x and y axis directions)
  3. skew (by tilting the target at various angles)

But due to space and lighting limitations, I can only move the calibration target up and down. Is it impossible to do the (stereo) calibration like this? I only get black images after using stereoRectify.

Thank you

2014-10-30 13:42:09 -0600 answered a question Cannot detect yellow balls using OpenCV

It's very strange, because your binary image is very good! Try adjusting the two last parameters:

CvSeq *circles = cvHoughCircles(hough_in, storage,
    CV_HOUGH_GRADIENT, 4, size.height/10, 100, 40, 1, size.height);

If it's not working, try to analyse the storage value (I don't use the C version, so I'm not sure what the function returns). But as suggested by thdrksdfthmn, use the C++ version of OpenCV ;-) Look for this example in C++; it's easier and well tested!

2014-10-30 13:31:15 -0600 answered a question getting area points inside contour

You can get the pixels inside the shape using drawContours: create a black image of the same size as your image, then draw the shape on it in white (be careful to use thickness=CV_FILLED). You will then be able to tell whether pixel (x,y) is inside the shape or not...
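
A minimal sketch, assuming contours comes from findContours and contourIdx is the index of your shape:

 cv::Mat mask = cv::Mat::zeros(img.size(), CV_8UC1);
 cv::drawContours(mask, contours, contourIdx, cv::Scalar(255), CV_FILLED);
 bool inside = mask.at<uchar>(y, x) != 0;//pixel (x,y) is inside the shape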

You can also use pointPolygonTest to test individual points, but the performance will probably be worse than with the previous approach...

2014-10-30 13:22:28 -0600 commented question how to calibrate camera (stereo)

Yes it is possible. Your problem is probably elsewhere. Can you show us your code and the calibration pattern you use?

2014-10-30 05:37:26 -0600 answered a question Capturing Video - OpenCV

Hi!

Do you apply any particular processing to your video? Can you post your code?

2014-10-30 05:34:50 -0600 answered a question Guarantee continuous Mat Image

Hi! Are you sure your images are not continuous (img.isContinuous())?

If they aren't, the solution you posted seems correct; it works for me. Can you give us more details on your code and the error?

2014-10-30 05:08:35 -0600 answered a question Interfaces std::Vector to cv::Mat

Hi!

  1. It is possible to convert a vector to a cv::Mat, but only if the vector's element type is supported by OpenCV. That is, there is a traits class called "DataType" which defines how the data can be used by the Mat constructor (look in modules\core\include\opencv2\core\traits.hpp for examples). It is defined for the classic scalar types (float, int...) and some more complex types, like cv::Complex<>, std::complex<>, cv::Vec<>, etc. If you want to add your own type, you just have to specialize this class (see the sketch after the snippet below). But be careful: the element type can't be a structure mixing different scalar types, like cv::KeyPoint is (it holds both float and int members)! Indeed, a Mat with such data wouldn't make sense, as a Mat is defined with a single data type (CV_32F for instance), so the values would be incoherent.

  2. The constructor of Mat allows you to specify whether you want to copy the data or not. If you don't copy the data, the overhead is zero!

  3. I don't understand this question ;-)

  4. Assuming your Mat is continuous, the solution you posted is the best you can do!

So here is a snippet of how to do this with classical types:

 std::vector<cv::Point3d> test;
 test.push_back(cv::Point3d(0, 1, 2)); test.push_back(cv::Point3d(3, 4, 5));
 cv::Mat cvt(test, false);//second param controls data copy (here, the data is not duplicated!)
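
And a rough sketch of a DataType specialization for a hypothetical custom type (the exact set of required members depends on your OpenCV version, so mimic the existing specializations):

 struct MyPoint { float x, y, z; };

 namespace cv {
 template<> class DataType<MyPoint>
 {
 public:
     typedef MyPoint value_type;
     typedef value_type work_type;
     typedef float channel_type;
     enum { generic_type = 0,
            depth = DataDepth<channel_type>::value,
            channels = 3,
            fmt = ((channels - 1) << 8) + DataDepth<channel_type>::fmt,
            type = CV_MAKETYPE(depth, channels) };
     typedef Vec<channel_type, channels> vec_type;
 };
 }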
2014-10-29 16:14:10 -0600 answered a question image processing to improve tesseract OCR accuracy

Hi! Without seeing your images, it's difficult to know which specific problems they have. But I suggest resizing the images if they are too small, converting them to grayscale, and normalizing them.

You can also try to deblur (sharpen) your images using, for instance, a Wiener filter. Unfortunately, there is no such function in OpenCV, so you would have to implement it yourself. As a simpler alternative, here is a small snippet which sharpens the image (unsharp masking):

  //convert first your image to float to improve precision...
  img.convertTo(imgTmp, CV_32F);
  GaussianBlur(imgTmp, imgResult, cv::Size(0, 0), 3);
  addWeighted(imgTmp, 1.5, imgResult, -0.5, 0, imgResult);

  // convert back to 8bits gray scale
  imgResult.convertTo(imgResult, CV_8U);
2014-10-29 15:57:37 -0600 answered a question Categorizing lines on a shirt

Hi! Your first guess is, I think, the best way to handle this problem. The fact that a line is broken into multiple segments is not a problem: just take the orientation of each line (in degrees, for example), put them into a vector, and count the vertical lines ((angle>80 && angle<100) || (angle>260 && angle<280)) and the horizontal ones (angle<10 || angle>350 || (angle>170 && angle<190)). Using the ratio of those counts to the total number of lines, you will probably be able to find the right class... But of course, if the image is slightly rotated, this method will not work...
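
A minimal sketch of the counting part, assuming the segments come from HoughLinesP on a Canny edge map:

 std::vector<cv::Vec4i> lines;
 cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 30, 10);
 int nbVertical = 0, nbHorizontal = 0;
 for (size_t i = 0; i < lines.size(); i++)
 {
     double angle = std::atan2((double)(lines[i][3] - lines[i][1]),
                               (double)(lines[i][2] - lines[i][0])) * 180. / CV_PI;
     angle = std::abs(angle);//atan2 returns (-180, 180]; fold to [0, 180]
     if (angle > 80 && angle < 100) nbVertical++;
     else if (angle < 10 || angle > 170) nbHorizontal++;
 }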

If you want to be more robust, you can try to cluster this vector using K-Means, for instance. Using 2 or 3 classes, and analyzing the variance of each class, you can make a guess about the type of the shirt (horizontal/vertical; both; neither). Then the problem is easier: you only have to choose between a vertical and a horizontal shirt...

2014-10-28 16:17:21 -0600 commented answer Circular Region of Interest before thresholding

The problem with such methods is the window size: a window that is too small will produce a binary edge map (the interior of the circular object will be white), and a window that is too big leads to the same problem as Otsu's method...

2014-10-27 17:19:24 -0600 answered a question Multi object tracking with Haar cascade detection

I suppose you are working on a video with decent temporal resolution (at least 10 frames per second). Below that, you will have a lot of trouble tracking your objects.

The solution you can implement also depends on the processing time you can spend tracking objects. If you need a close-to-realtime solution, I suggest the sparse Lucas-Kanade tracking algorithm: take a point at the center of each object detected by your classifier, then use calcOpticalFlowPyrLK to track those points along the sequence.
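
A minimal sketch, assuming boxes holds the detections and prevGray/currGray are two consecutive grayscale frames:

 std::vector<cv::Point2f> prevPts, nextPts;
 std::vector<uchar> status;
 std::vector<float> err;
 for (size_t i = 0; i < boxes.size(); i++)
     prevPts.push_back(cv::Point2f(boxes[i].x + boxes[i].width * 0.5f,
                                   boxes[i].y + boxes[i].height * 0.5f));
 cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, nextPts, status, err);
 //status[i] != 0 means point i was tracked; nextPts[i] is its new position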

If you can spend more CPU time, you can use a dense optical-flow algorithm, which is more robust. You will then have to compute the mean translation of the flow field inside the bounding box of each object to get its movement.

2014-10-27 16:57:45 -0600 edited question How to find angle of corners detected?

I want to find the angles between the corners detected from a sketched rectangle. I have used Harris corner detection and Canny edge detection, and have also drawn circles around the corners. Can I use all of these together to find the angle of each corner?

I want to find the angles from this image for rectangle classification (image attached).

I have so far done this (image attached):

Here is a snippet of code (I have saved the vertices of the drawn circle and want to use it with detected edges from canny):

       cornerHarris( detected_edges, dst, blockSize, apertureSize, k, BORDER_DEFAULT );
       normalize( dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );
       convertScaleAbs( dst_norm, dst_norm_scaled );

       /// Drawing a circle around corners
       for( int j = 0; j < dst_norm.rows ; j++ )
       {
         for( int i = 0; i < dst_norm.cols; i++ )
         {
           if( (int) dst_norm.at<float>(j,i) > thresh )
           {
             circle( dst_norm_scaled, Point( i, j ), 5,  Scalar(0), 2, 8, 0 );
             count++;
           }
         }
       }

       vector<Point>* circle_points;  
       circle_points= new vector<Point>[count];
       int pt=0;
       for( int j = 0; j < dst_norm.rows ; j++ )
       {
         for( int i = 0; i < dst_norm.cols; i++ )
         {
            if( (int) dst_norm.at<float>(j,i) > thresh )
            {
               ellipse2Poly( Point(i,j), axes, 0, 0, 360, 1, circle_points[pt] ); 
               pt++;
            }
         }
       }
2014-10-27 16:52:00 -0600 answered a question Circular Region of Interest before thresholding

Hi! This problem is classic with global-thresholding methods (like Otsu's). It's a very difficult task, and to my knowledge, there is no general method to handle this case.

During my thesis, I developed a binarization algorithm which is focused on text, but you can try it on your images here: demoFAIR.

As your images contain a lot of blur, I suggest using a very low value for K (for instance 0.07); or, if you can, try with the images before they were blurred, which will probably give better results. Of course, I would be happy to give you more details if you think this method is worth a try!

2014-10-27 05:33:20 -0600 received badge  Critic (source)
2014-10-27 05:04:50 -0600 answered a question How to tag persons in an image in OpenCV?

Hi, I would decompose the problem into two steps: creating a learning database of the persons you want to tag, and then the recognition step. For the first step, you have to build a database containing multiple views of the face of each person you want to recognize. That database will consist of a list of pictures with the corresponding names. To sum up:

vector<Mat> images;
vector<int> labels;
vector<string> names;
images = [pic1;pic2;pic3;pic4;...]
labels = [0   ;0   ;1   ;2   ;...]
names  = [name1;name2;name3]

In this example, you have 3 persons, with some pictures of each. Each label corresponds to an index in the names vector.

You can then use a recognizer (I suggest the LBPH face recognizer):

Ptr<FaceRecognizer> model = createLBPHFaceRecognizer();
model->train(images, labels);

Now model contains everything you need to tag people. To get the class, you just have to do this:

int predicted = model->predict(askingImg);
cout<<"Hello "<<names[predicted]<<"!"<<endl;