
alexcv's profile - activity

2017-08-02 01:25:00 -0600 received badge  Famous Question (source)
2016-07-29 19:05:32 -0600 received badge  Necromancer (source)
2015-06-08 05:02:34 -0600 received badge  Notable Question (source)
2014-12-09 13:43:29 -0600 marked best answer sphinx documentation validation process ?

Hi, I would like to know the best way to check whether the Sphinx class documentation or tutorials are properly built. A couple of simple questions:

- Once the documentation has been compiled with make docs_html, how can I make sure the online documentation will be valid before committing? Some CSS, etc. may be missing on the local machine, so the local rendering can differ from the final online version.

- When do changes/commits appear in the online documentation?

Thanks a lot

2014-08-28 07:56:35 -0600 received badge  Popular Question (source)
2013-05-13 08:34:52 -0600 received badge  Self-Learner (source)
2013-04-23 09:19:24 -0600 received badge  Great Answer (source)
2013-04-23 09:19:24 -0600 received badge  Guru (source)
2013-02-11 10:00:10 -0600 commented question How to control Dense SIFT

Yes, I actually took a look at the code, and it is not really clear how dense keypoint detection is performed. The documentation should be completed to keep users from going down the wrong path.

It seems that by default the scale parameter of the keypoints starts at 1 and the scaling factor is lower than 1. Some questions remain, one important one being: depending on the descriptor that is used, how does it react? For example, with SIFT at scale=1, does OpenCV SIFT use the native SIFT patch size or does it force a smaller one?

2013-02-09 08:48:40 -0600 answered a question SIFT and OpponentSIFT normalisation

This was actually a false question ;o)

SIFT is already L2 normalized using the cited normalization method. However, for optimization purposes the SIFT signature is converted to unsigned char format (you lose precision but gain computational efficiency in the next processing stages). To do so, each descriptor bin is multiplied by 512.f (check the constant SIFT_INT_DESCR_FCTR in opencv/modules/nonfree/src/sift.cpp).
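To illustrate (this snippet is mine, not part of the original answer), here is a minimal sketch that takes the descriptor matrix returned by OpenCV's SIFT (CV_32F rows holding the 512-scaled, saturated values) and renormalizes each row to unit L2 norm; simply dividing by 512.f only approximately restores the original normalization because of the saturation to [0, 255]:

#include <opencv2/core/core.hpp>

// Renormalize each SIFT descriptor row to unit L2 norm.
// 'descriptors' is the CV_32F matrix returned by SIFT (values are the original
// bins scaled by SIFT_INT_DESCR_FCTR = 512 and clipped to [0, 255]).
void renormalizeSiftL2(cv::Mat &descriptors, float epsilon = 1e-7f)
{
    for (int i = 0; i < descriptors.rows; ++i)
    {
        cv::Mat row = descriptors.row(i);
        double l2 = cv::norm(row, cv::NORM_L2);
        row /= (l2 + epsilon);   // 'row' is a header on 'descriptors', so this is in-place
    }
}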

Have a nice coding and experimentation !

2013-02-09 03:15:11 -0600 asked a question SIFT and OpponentSIFT normalisation

Hi all, when using SIFT or SURF descriptors, their output is normalized in different ways.

This can impact the post-processing that is applied afterwards. SURF seems to be L2 normalized; for SIFT this is not the case.

So, when using SIFT or OpponentSIFT, what normalization should be applied to obtain an L2-normalized version? One can use a simple sift/(||sift||_L2 + e), but regarding the current OpenCV SIFT implementation, is there a more convenient way to do it?

Regards

2012-12-04 02:00:46 -0600 received badge  Supporter (source)
2012-11-29 05:28:59 -0600 received badge  Good Answer (source)
2012-11-13 03:29:32 -0600 received badge  Good Answer (source)
2012-11-13 03:29:32 -0600 received badge  Enlightened (source)
2012-11-08 16:11:37 -0600 answered a question Python Load OpenCV FileStorage

Hi, the solution seems easy once found ;o) I hope the documentation will be extended soon!

Here is how to load and use a matrix stored in an OpenCV-generated XML file:

# load the xml file
import cv    # legacy OpenCV Python bindings (also available as cv2.cv in OpenCV 2.x)
fileToLoad = "myFile.xml"
myLoadedData = cv.Load(fileToLoad)

#check data properties
print myLoadedData # this print shows matrix size, datatype, cool !

#access a cell of the matrix
rowIdx=1
colIdx=10
print "Accessing matrix at col:"+str(colIdx)+", row:"+str(rowIdx)
print myLoadedData[rowIdx, colIdx]

That's done ! Hope it helps

Regards

2012-11-07 10:10:51 -0600 received badge  Necromancer (source)
2012-11-07 08:58:40 -0600 answered a question Red Eye detection

Hi, regarding the illumination problem, you should either make your pipeline robust to it or use detectors that are robust to it. A simple solution is to use the Retina model available in the contrib module as a preprocessing tool. Check the related tutorial to experiment with the retina model on your videos. However, take care with the retina configuration: depending on it, you may have to retrain the face detectors on its output. The critical parameter is the mean luminance pass-through setup; you should let it pass at least some luminance information (parameter 1 > hcellsgain > 0).
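For reference, here is a minimal sketch of such retina-based preprocessing (my own illustration, assuming the cv::Retina class from the OpenCV 2.4 contrib module; the parameter file name is a hypothetical placeholder):

#include <opencv2/core/core.hpp>
#include <opencv2/contrib/contrib.hpp>   // cv::Retina lives in the contrib module in OpenCV 2.4

// frame: input BGR image; returns the luminance/detail-enhanced output to feed the detector
cv::Mat preprocessWithRetina(const cv::Mat &frame)
{
    static cv::Retina retina(frame.size());   // allocated once, for a fixed frame size
    // optionally load a tuned setup, e.g. with a reduced horizontal cells gain
    // retina.setup("retinaParams.xml");       // hypothetical parameter file
    retina.run(frame);                         // process the current frame
    cv::Mat parvoOut;
    retina.getParvo(parvoOut);                 // parvocellular channel: details and luminance
    return parvoOut;
}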

Now, regarding the specific "Faces with Helmets" detection problem, something you can test, depending on the data you have, is to train a face detector on this specific kind of faces. If you have a large dataset in which you can label many faces, this can work well. But before this "hard" solution, did you try all the provided xml files describing the numerous already trained face detectors (there are frontal, profile, etc., with various degrees of precision)?

Anyway, face detection is always difficult in uncontrolled datasets where faces do not perfectly face the camera... the cameraman cannot always ask actors to face him and smile ;o)

2012-11-07 08:23:03 -0600 received badge  Nice Answer (source)
2012-11-07 07:49:43 -0600 commented answer How to categorize the images based on Illumination / shadow ?

Hi, thank you for the remark, I updated the DOI links. It should work now. Regards

2012-11-07 03:36:33 -0600 commented answer How to use parallel_for?

Yes, this is the way: for matrices with more than one dimension, "manually" run 'for' loops over the first dimensions and, finally, let the last dimension be processed in parallel. This approach is efficient on multicore systems, but other solutions are also possible.

2012-11-06 14:14:11 -0600 answered a question How to categorize the images based on Illumination / shadow ?

Hi, actually, measuring the illumination amplitude may not be correlated with your application context. Your interest lies in facial features; luminance is only a distractor. So your feature extraction should be robust against luminance variations rather than relying on thresholds, etc., to adapt to them in a risky way. In previous research, we used as a preprocessing tool the retina model recently added to OpenCV (check the contrib module). The aim was to limit the impact of lighting and enhance local face features. It can also enable face motion extraction. Have a look at it; it can improve your algorithm's robustness and generalization potential.

Regards

Some references:

- Benoit A., Caplier A., Durette B., Herault J., "Using Human Visual System Modeling for Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773: http://dx.doi.org/10.1016/j.cviu.2010.01.011
- Benoit A., Caplier A., "Fusing Bio-Inspired Vision Data for Simplified High Level Scene Interpretation: Application to Face Motion Analysis", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 774-789: http://dx.doi.org/10.1016/j.cviu.2010.01.010

2012-11-04 08:46:34 -0600 received badge  Nice Answer (source)
2012-11-04 05:45:10 -0600 answered a question 3d models from 2d image slices

Hi, as previously answered, OpenCV is more targeted at 2D(+t) image processing. If you need 3D reconstruction, you can check the efficient Point Cloud Library (http://www.pointclouds.org/).

I used it to display 2D OpenCV image "slices" with the third dimension mapped to time. It is really easy to make the link between the two libs.

Once your 2D slices are transferred to PCL, there are many state-of-the-art methods to segment 3D objects, extract 3D envelopes, etc.

Check my question/self-answer that shows a basic OpenCV cv::Mat transfer to PCL: http://answers.opencv.org/question/3098/display-in-3d-at-set-of-binary-labelled-cvmat/

3D rendering is really impressive; it's great even for demos.

Regarding 3D object segmentation, here is a link to related PCL doc :

http://docs.pointclouds.org/trunk/group__segmentation.html

Good luck !

2012-11-04 04:32:02 -0600 answered a question How to use parallel_for?

Hi,

As shown by Vladislav, you only need to derive from the cv::ParallelLoopBody class to make your own.

To complete the picture, and to answer Q3 (the previous questions may be related to includes; you should give more details about the errors you encounter): if you need to process local memory buffers or other data in parallel, you need a constructor that stores pointers to the buffers to process when operator() is called.

Here is a sample code I use that may help you. It is a simple loop that clips buffer values to min and max values. I consider here classical arrays of any type using templates. You can change this to std::vector, cv::Mat or anything else; just keep in mind that you have to create private members that point to the beginning of the data buffers you want to manage.

In the constructor, you indicate which buffer to process and, possibly, which constants to take into account. Once that is done, everything is ready to run in parallel. In the operator() method, create new local pointers to the target block range.

Hope it helps.

Regards

Alex

template <class type>
class Parallel_clipBufferValues: public cv::ParallelLoopBody
{
private:
  type *bufferToClip;
  type minValue, maxValue;

public:
  Parallel_clipBufferValues(type* bufferToProcess, const type min, const type max)
    : bufferToClip(bufferToProcess), minValue(min), maxValue(max){}

  // called by parallel_for_ on sub-ranges of the global range
  virtual void operator()( const cv::Range &r ) const {
    type *inputOutputBufferPTR = bufferToClip + r.start;
    for (int jf = r.start; jf != r.end; ++jf, ++inputOutputBufferPTR)
    {
        if (*inputOutputBufferPTR > maxValue)
            *inputOutputBufferPTR = maxValue;
        else if (*inputOutputBufferPTR < minValue)
            *inputOutputBufferPTR = minValue;
    }
  }
};

Finally, how to use it :

const int SIZE=10;
int myTab[SIZE];
int minVal=0, maxVal=255;
// cv::Range end is exclusive: use SIZE (not SIZE-1) to process all SIZE elements
parallel_for_(cv::Range(0, SIZE), Parallel_clipBufferValues<int>(myTab, minVal, maxVal));
2012-10-31 07:01:11 -0600 received badge  Nice Answer (source)
2012-10-23 04:57:08 -0600 commented answer Image Processing Filter

Well, I think we misunderstood each other, but no problem. Actually, you already did this hand labelling on the image you provided at the beginning: you showed where your target is. To go further, once feature points are detected in your hand-circled areas, just store the corresponding image descriptor you chose (SIFT or another) at each keypoint. Store these in a "positive samples" descriptor matrix. Consider them as reference target points that you can try to match later against new image samples. You should also take the feature points that are NOT detected in your hand-labelled areas and store their descriptor signatures in a "non-target" matrix. This would allow better classification methods to work. Well, this was a simplified description of feature matching methods; also consider Bag of Words.

2012-10-21 12:04:53 -0600 commented answer Which feature descriptor should I use with Harris corner detector?

Hi, actually not. Regarding the detector aspect, Harris and SIFT differ in how the detector response is computed (Hessian matrix vs. second-moment matrix criterion). Regarding the SIFT description: the descriptor is generally extracted at the same scale as its SIFT keypoint. It takes into account the magnitude and orientation of local gradients computed on a grid of small patches around the keypoint. You should take a look at Lowe's paper (1999) <http://dx.doi.org/10.1109%2FICCV.1999.790410> and also compare with other descriptors (GLOH, etc.).

Regards

2012-10-21 10:01:28 -0600 answered a question Which feature descriptor should I use with Harris corner detector?

Hi,

A classical combination is the Harris detector + SIFT descriptor; you can also consider OpponentSIFT if color information is an important/selective criterion.
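For illustration, here is a minimal sketch of that combination using the OpenCV 2.4 factory methods (my own example, not taken from the original answer); it assumes the nonfree module is available for SIFT:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>   // SIFT lives in the nonfree module in OpenCV 2.4
#include <vector>

// image: an input CV_8U image loaded elsewhere
void detectAndDescribe(const cv::Mat &image)
{
    cv::initModule_nonfree();   // register SIFT/SURF with the factories

    cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("HARRIS");
    cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("SIFT");
    // use "OpponentSIFT" instead if color is a selective criterion

    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    detector->detect(image, keypoints);
    extractor->compute(image, keypoints, descriptors);
}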

Regards

2012-10-21 09:04:46 -0600 received badge  Self-Learner (source)
2012-10-21 08:44:28 -0600 answered a question Display in 3D at set of binary labelled cv::Mat images... somekind of slices of a 3D object

Hi all, thank you Martins for the recommendations. I answer my own question for those who are interested in the topic:

You can create a 3D viewer in a few lines, to which you can add point clouds and shapes. Please refer to the official tutorials for details: http://pointclouds.org/documentation/tutorials/pcl_visualizer.php

/* generic 3D viewer display, similar to the official tutorials. Changes added, camera setup:

_ use viewer->resetCameraViewpoint("Considered blobs"); to point to the middle of your point cloud

_ use viewer->setCameraPosition(cameraPositionX, cameraPositionY, cameraPositionZ, viewX, viewY, viewZ) // to set the camera position and its direction

*/

Consider the following includes... first check your system configuration (add the PCL library and its dependencies):

#include <pcl/common/common_headers.h>
#include <pcl/point_cloud.h>
#include <pcl/impl/point_types.hpp>
#include <pcl/io/pcd_io.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <boost/thread/thread.hpp>

// This method displays 3D COLOURED points

boost::shared_ptr<pcl::visualization::PCLVisualizer> createVisualizer (pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr cloud)
{

    boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer (new pcl::visualization::PCLVisualizer ("3D Viewer"));
    viewer->setBackgroundColor (0, 0, 0);
    pcl::visualization::PointCloudColorHandlerRGBField<pcl::PointXYZRGB> rgb(cloud);
    viewer->addPointCloud<pcl::PointXYZRGB> (cloud, rgb, "myPoints");
    viewer->setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, "myPoints");
    viewer->addCoordinateSystem ( 1.0 );
    viewer->initCameraParameters ();
    viewer->resetCameraViewpoint("myPoints"); // camera points to the center of the point cloud
    viewer->setCameraPosition (camXPos, camYPos, camZPos, // camera position (declare/fill these coordinates yourself)
            0, 1, 0); // camera direction to the cloud center... then the camera faces the point cloud
    return (viewer);
}

Once this is prepared, and considering a cv::Mat image "colorImage" and a binary mask "areas", add points to your point cloud and display it. Take care of the X&Y inversions (take into account the top-left (0,0) origin of the cv::Mat). HINT: the Z axis can be used to carry temporal information like in this example (time is referred to as "frameIndex" here).

Allocate your point cloud (use a non-const Ptr here, so that points can be pushed into it below):

pcl::PointCloud<pcl::PointXYZRGB>::Ptr point_cloud_ptr(new pcl::PointCloud<pcl::PointXYZRGB>);

Fill the point cloud with the localized and coloured segmented blob points:

        for(int y=0;y<colorImage.rows;++y)
            for(int x=0;x<colorImage.cols;++x)
            {
                // get pixel data
                cv::Scalar maskPoint = areas.at<unsigned char>(cv::Point2d(x,y));
                cv::Vec3b colorPoint = colorImage.at<cv::Vec3b>(cv::Point2d(x,y));

                if (maskPoint[0])
                {
                    //std::cout<<"New point (x,y,z) = "<<x<<", "<<y<<", "<<frameIndex
                    //      <<" // (r,g,b) = "<<(int)pr<<", "<<(int)pg<<", "<<(int)pb<<std::endl;

                    //Insert info into point cloud structure
                    pcl::PointXYZRGB point;
                    point.x = -x;
                    point.y = -y;
                    point.z = frameIndex;
                    uint32_t rgb = (static_cast<uint32_t>(colorPoint.val[2]) << 16 |
                            static_cast<uint32_t>(colorPoint.val[1]) << 8 | static_cast<uint32_t>(colorPoint.val[0]));
                    point.rgb = *reinterpret_cast<float*>(&rgb);
                    point_cloud_ptr->points.push_back (point);
                }
            }

Finally, when the cloud is ready, display it by calling the first procedure.

Check the doc to add any supplementary shape; here, a cube is added (its dimensions are image size * timeLength):

    std::cout<<"Pointcould size : "<<point_cloud_ptr->points.size()<<std::endl;
    boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer = createVisualizer( point_cloud_ptr );
    viewer->addCube (-colorImage.cols, 0, -colorImage.rows, 0, 0, timeLength);
    //Main user ...
2012-10-21 08:12:52 -0600 commented answer Image Processing Filter

Hi, actually, step 3 is mainly done by hand: you introduce human knowledge into the system by manually labelling a set of features to tell the system whether each one is a target or not. Typically, considering a dataset, you (and collaborators) create a report file (xml?) that indicates, for each potential target, whether it should be considered a target. This is a long and boring (but required) manual step. Once it is done, in general high-level feature detection, we apply a classification stage where a classifier (SVM, KNN, etc.) is trained on the labelled dataset. The classifier learns how to distinguish the detected features of the dataset, taking the hand-made ground-truth labels into account. In your case, this classification may be simplified if the feature signatures are easy to distinguish.
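As an illustration of that classification stage, here is a minimal sketch using the OpenCV 2.4 CvSVM API (my own example; the matrices are assumed to be filled elsewhere, descriptors as CV_32F rows, labels as +1/-1):

#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>

// trainData: one descriptor per row (CV_32F); labels: +1 (target) / -1 (non-target), CV_32F
void trainAndPredict(const cv::Mat &trainData, const cv::Mat &labels, const cv::Mat &newDescriptor)
{
    CvSVMParams params;
    params.svm_type    = CvSVM::C_SVC;
    params.kernel_type = CvSVM::RBF;   // a common default kernel choice

    CvSVM svm;
    svm.train(trainData, labels, cv::Mat(), cv::Mat(), params);

    float response = svm.predict(newDescriptor);   // +1 -> target, -1 -> non-target
    // use 'response' to keep or discard the corresponding keypoint
}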

Hope it helps

2012-10-19 09:29:59 -0600 answered a question Which matcher is best for SURF?

Hi,

We experimented with various matchers with SURF.

FLANN is fast but... gives low performance in difficult contexts (heterogeneous/varied datasets).

Brute-force matchers based on the L1 or L2 distance give good results.

If you consider L2-based brute-force matchers, use the L2 distance without the square-root computation: it does not introduce any error in this matching case and requires less processing.

Typical use :

// Allocate your image descriptor and your matcher with an OpenCV smart pointer
// (no need to care about the object delete step):

// -> 1. descriptor extractor:
cv::Ptr<cv::DescriptorExtractor> _descExtractor = cv::DescriptorExtractor::create("SURF");

// -> 2. matcher: pass a string keyword that selects which matcher to use:
//    * "BruteForce"            (uses the real L2 distance)
//    * "BruteForce-SL2"        (not in the documentation, BUT this is the one that skips the square root!)
//    * "BruteForce-L1"
//    * "BruteForce-Hamming"
//    * "BruteForce-Hamming(2)"
//    * "FlannBased"
cv::Ptr<cv::DescriptorMatcher> _descMatcher = cv::DescriptorMatcher::create(keyword);

Finally, regarding the selection of good matches, you should take a look at the RANSAC method, which identifies a global displacement and lets outlier matches pop out. Have a nice coding ;o)
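As an illustration of that RANSAC filtering step, here is a minimal sketch (mine, not part of the original answer) using cv::findHomography on already matched keypoints; it assumes the keypoints and matches were computed with the objects created above:

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// keypoints1/keypoints2: detected keypoints; matches: output of _descMatcher->match(...)
std::vector<cv::DMatch> filterMatchesWithRansac(const std::vector<cv::KeyPoint> &keypoints1,
                                                const std::vector<cv::KeyPoint> &keypoints2,
                                                const std::vector<cv::DMatch> &matches)
{
    std::vector<cv::Point2f> pts1, pts2;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        pts1.push_back(keypoints1[matches[i].queryIdx].pt);
        pts2.push_back(keypoints2[matches[i].trainIdx].pt);
    }

    // estimate a global homography; 'inlierMask' flags the matches consistent with it
    std::vector<unsigned char> inlierMask;
    cv::findHomography(pts1, pts2, CV_RANSAC, 3.0, inlierMask);

    // keep only the inlier matches
    std::vector<cv::DMatch> goodMatches;
    for (size_t i = 0; i < inlierMask.size(); ++i)
        if (inlierMask[i])
            goodMatches.push_back(matches[i]);
    return goodMatches;
}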

2012-10-17 07:31:13 -0600 received badge  Scholar (source)
2012-10-17 07:30:56 -0600 commented answer Display in 3D at set of binary labelled cv::Mat images... somekind of slices of a 3D object

Hi, thank you for your answer. Yes, OpenGL would definitely be a solution, but I was curious about the PCL links with OpenCV as shown on your blog (nice job & presentation). Thanks, Alex

2012-10-12 11:26:54 -0600 received badge  Student (source)
2012-10-12 07:50:46 -0600 asked a question Display in 3D at set of binary labelled cv::Mat images... somekind of slices of a 3D object

Hi all,

I would like to display, for demonstration purposes, a set of cv::Mat that are actually the slices of a 3D shape. Each slice is a labelled blob image of the same size. The Point Cloud Library sounds nice; however, can we use it from OpenCV? I also saw this discussion here http://opencv.willowgarage.com/wiki/OpenCVandPCL but no other information. Also, an example from Martin Peris seems interesting http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html but it is a hand-made data transfer from one lib to the other. So, is there an "official" compatibility/porting tool to display such 3D volumes in a simple way for free visualisation & object observation?

Thanks people

2012-09-25 06:01:26 -0600 answered a question Error: msvcr90d.dll can not be find

Hi, your error seems to be Windows DLL related, but it can also come from a library install problem; is your DLL corrupted?
Also, when dealing with video processing, you need a video file decoder: FFMPEG is required. So check your installation/configuration. If you recompile from sources, check whether FFMPEG is found when configuring with the cmake tool.
Have a look at the install guide for more details.

Hope it helps.

2012-09-20 03:00:14 -0600 received badge  Nice Answer (source)
2012-09-09 04:10:42 -0600 answered a question Image Processing Filter

Hi, since the circled areas you want to catch have specific spatial features, and if these features appear to be similar over the whole dataset you use, then a feature detection + description process can be applied, using a classical feature detector/descriptor like SIFT.

A simple way to do this can be as follows. Considering a dataset of image samples where these artifacts are visible, apply the SIFT detector (check the FeatureDetector::create interface: http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html ).

  1. First check whether the detector is able to provide some keypoints on these features (and do not care about the other detected features: they will be distinguished later on!). Use the drawKeypoints method for a visual check ( http://docs.opencv.org/modules/features2d/doc/drawing_function_of_keypoints_and_matches.html ). You can run tests with various keypoint detectors and choose the one that detects your features most often (whatever the other detected keypoints are!).

  2. Second step: describe the features in order to distinguish them. For that, use a feature descriptor; choose the most appropriate one by running tests with the flexible DescriptorExtractor::create method ( http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_extractors.html?highlight=features%20descriptor#descriptorextractor-create ). Use the DescriptorExtractor::compute method to describe each previously detected keypoint.

  3. Then, manually label the features that the detector finds and store them... this will help you distinguish your targets from the other detected keypoints.

  4. Finally, considering a new image dataset, apply the same steps 1 and 2 (detection + description) and match your stored hand-labelled target features against the keypoints detected and described in each new image. You can use a descriptor matcher obtained from the flexible DescriptorMatcher::create method ( http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html?highlight=descriptor%20match#descriptormatcher-create ).

This is a simple kind of spatial feature matching that can be really efficient... IF your features are reproducible from one image to the other!

For more advanced feature matching, check out research papers on "bag of words" to push things even further!
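To give an idea of that Bag of Words direction, here is a minimal sketch using OpenCV's BOWKMeansTrainer / BOWImgDescriptorExtractor (my own illustration built on the steps above; the vocabulary size is a placeholder and the detector/extractor/matcher objects are assumed to be created as in steps 1, 2 and 4):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// allDescriptors: descriptors gathered from the training images (one CV_32F row per keypoint)
cv::Mat computeBowSignature(const cv::Mat &allDescriptors,
                            const cv::Ptr<cv::FeatureDetector> &detector,
                            const cv::Ptr<cv::DescriptorExtractor> &extractor,
                            const cv::Ptr<cv::DescriptorMatcher> &matcher,
                            const cv::Mat &newImage)
{
    // 1. build a visual vocabulary by clustering all training descriptors
    const int vocabularySize = 100;                 // placeholder value
    cv::BOWKMeansTrainer bowTrainer(vocabularySize);
    bowTrainer.add(allDescriptors);
    cv::Mat vocabulary = bowTrainer.cluster();

    // 2. describe a new image as a histogram of visual words
    cv::BOWImgDescriptorExtractor bowExtractor(extractor, matcher);
    bowExtractor.setVocabulary(vocabulary);

    std::vector<cv::KeyPoint> keypoints;
    detector->detect(newImage, keypoints);
    cv::Mat bowSignature;
    bowExtractor.compute(newImage, keypoints, bowSignature);
    return bowSignature;                            // feed this to a classifier (SVM, KNN, ...)
}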

Have a nice coding and experimentation !