lightalchemist's profile - activity

2020-10-04 13:50:30 -0500 received badge  Good Question (source)
2013-04-24 06:15:18 -0500 received badge  Good Answer (source)
2013-03-08 03:52:49 -0500 received badge  Nice Question (source)
2013-01-08 10:23:43 -0500 received badge  Nice Answer (source)
2012-11-09 02:58:27 -0500 received badge  Teacher (source)
2012-10-19 01:01:12 -0500 received badge  Student (source)
2012-10-18 21:50:37 -0500 asked a question How to write python wrapper for OpenCV C++ code

Hi, I'm trying to write a Python wrapper for some C++ code that makes use of OpenCV, but I'm having difficulty returning the result, an OpenCV C++ Mat object, to the Python interpreter.

At the moment, I'm using Boost.Python to interface between Python and C++. While I'm able to pass my image from Python to the C++ function (using some conversion code taken from OpenCV's legacy bindings that converts to CvMat/IplImage), I'm unable to return a Boost.Python-wrapped object to the Python interpreter.

I know that I'm supposed to write a Boost.Python converter, but the resources I've found online (specifically the Boost.Python website) only provide bits and pieces of code and do not clearly indicate how those bits should fit together.

I've also looked at the OpenCV source. I found some conversion functions in the file "cv2.cpp". In particular, the following seems relevant:

    static PyObject* pyopencv_from(const Mat& m)
    {
        if( !m.data )
            Py_RETURN_NONE;
        Mat temp, *p = (Mat*)&m;
        if(!p->refcount || p->allocator != &g_numpyAllocator)
        {
            temp.allocator = &g_numpyAllocator;
            m.copyTo(temp); // Segmentation fault here.
            p = &temp;
        }
        return pyObjectFromRefcount(p->refcount);
    }

However, while I'm able to run the code, it gives a segmentation fault at the commented line above. Can anyone point me to resources on how to wrap OpenCV code for Python?

2012-10-03 01:59:12 -0500 received badge  Necromancer (source)
2012-10-03 01:41:21 -0500 answered a question Facial feature detection

The Flandmark Facial point detector (with code) can be found here:

It will return the four corner points of the eyes, the corners of the mouth, the center of the nose, and the center of the face. It does, however, require you to give it a bounding box of the face, so you will probably have to use the Viola-Jones face detector in OpenCV (or any other method) to locate the face first, which you are already doing.

I've compiled the code on Ubuntu and it works very well, provided the bounding box you give it is "just right". If it is too tightly cropped, it might miss feature points near the border of the image. For such cases, you can try extending the border and specifying the bounding box as the "inner" image (excluding the border); sometimes that works. On the other hand, when the bounding box is too large, it might converge to nonsense points. On the whole, though, it works really well, even on rotated faces, faces with glasses, and faces that are close to side profile.
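The border-extension workaround can be sketched as follows. This is a minimal sketch assuming the image is a numpy array; it uses np.pad in place of OpenCV's cv2.copyMakeBorder (which does the same job, e.g. with BORDER_REPLICATE), and the padding size is made up for illustration:

```python
import numpy as np

def expand_box_with_border(img, box, pad=20):
    """Pad the image on all sides, then shift the bounding box so it
    refers to the same face region inside the padded image.

    img: HxW (or HxWxC) numpy array; box: (x1, y1, x2, y2).
    np.pad(..., mode='edge') replicates the border pixels, like
    cv2.copyMakeBorder with cv2.BORDER_REPLICATE would in OpenCV.
    """
    if img.ndim == 2:
        padded = np.pad(img, pad, mode='edge')
    else:
        padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    x1, y1, x2, y2 = box
    return padded, (x1 + pad, y1 + pad, x2 + pad, y2 + pad)

# Toy example: a 10x10 "face image" with a box that hugs the image edge.
img = np.arange(100, dtype=np.uint8).reshape(10, 10)
padded, shifted = expand_box_with_border(img, (0, 0, 9, 9), pad=5)
print(padded.shape)  # (20, 20)
print(shifted)       # (5, 5, 14, 14)
```

The shifted box is the "inner" image: the detector now has replicated pixels around the face instead of a hard image boundary.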

2012-08-02 20:46:15 -0500 answered a question Python Face Recognition with OpenCV

Here is a rough sketch of a system that you can consider:

1) First you should try to decide which features to use to represent each face e.g. Local Binary Pattern (LBP), Fisherface, etc.

2) Detect faces in all the images in your database if they are not already face images.

3) Preprocess the face images appropriately. This involves resizing the images to the same size, converting them to the same colorspace (e.g. grayscale), face alignment etc.

4) For each of the preprocessed face images, compute the feature vector that represents it. In this step, you might want to use a data structure (e.g. a dictionary) to map each feature vector (likely via its index in the array it is stored in) to the image the face is found in.

5) At this stage, each of the faces found in the images in your database is represented by a vector. A direct approach would be to convert the faces in your input image to feature vectors and compare them to every feature vector of the faces in your database and look for the K nearest ones using Euclidean distance (or any other distance measure). This might be what Philipp has in mind in his answer. You can then map those vectors to the image they appear in using the data structure in step 4.

6) There are a number of plausible methods to "improve" on the matching process outlined in step 5. One direct way would be to use Approximate Nearest Neighbor (ANN) matching instead of direct matching. OpenCV has an interface to an ANN implementation (FLANN) which you can use.

Another approach is to use Locality Sensitive Hashing (LSH) where you "hash" each of your input face vectors to find its nearest neighbors. I'm a bit fuzzy about the details for this one so I cannot help much but you can probably find tutorials on LSH easily.
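For a flavor of the idea, here is a minimal sketch of one common variant, random-hyperplane LSH: each face vector is hashed to a short bit string by the signs of its projections onto random hyperplanes, and only vectors that land in the same bucket are compared directly. The dimensions, number of hyperplanes, and random "face vectors" below are all made up for illustration:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

dim, n_planes = 64, 8
planes = rng.standard_normal((n_planes, dim))  # random hyperplane normals

def lsh_key(v):
    """Hash a vector to an 8-bit bucket key via the signs of its projections."""
    bits = (planes @ v) > 0
    return tuple(bits)

# Index a small database of (made-up) face vectors into buckets.
db = rng.standard_normal((100, dim))
buckets = defaultdict(list)
for i, v in enumerate(db):
    buckets[lsh_key(v)].append(i)

# Query with an exact database vector for simplicity; a real query would
# be a new face's feature vector. Only same-bucket candidates are compared.
query = db[42]
candidates = buckets[lsh_key(query)]
best = min(candidates, key=lambda i: np.linalg.norm(db[i] - query))
print(best)  # 42
```

The point is that the exact distance computation only runs over the (small) candidate bucket rather than the whole database.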

A reason for using methods such as ANN and LSH is that they will speed up your matching process considerably. They also help alleviate the "Curse of Dimensionality" problem, which arises when you try to compare high-dimensional vectors directly, and they might give better matches. In case you find it counter-intuitive that approximate methods might be more accurate than direct comparisons, you are in good company :).
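Steps 4) and 5) above can be sketched as follows; this is only a sketch, with random vectors standing in for real features (in practice you would compute LBP or Fisherface features from the preprocessed face images) and hypothetical image filenames:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 4: one feature vector per database face, plus a dictionary mapping
# each row index of the feature array back to the image the face came from.
# (Random vectors and filenames are stand-ins here.)
features = rng.standard_normal((50, 32))
index_to_image = {i: "img_%03d.jpg" % (i // 5) for i in range(50)}

def k_nearest(query, features, k=3):
    """Step 5: brute-force K nearest neighbors by Euclidean distance."""
    dists = np.linalg.norm(features - query, axis=1)
    return np.argsort(dists)[:k]

# Query with a database vector itself for simplicity; its own row must
# come back first (distance zero), and the dictionary recovers the image.
nearest = k_nearest(features[7], features, k=3)
print(index_to_image[nearest[0]])  # img_001.jpg
```

Swapping the brute-force loop for an ANN index or LSH buckets, as in step 6, changes only how the candidate rows are found, not the mapping back to images.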

I have only outlined a rough sketch of one possible system. No doubt there are many other possible approaches. Hope the above is useful to you :).