Ask Your Question

xaffeine's profile - activity

2020-10-04 09:31:00 -0600 received badge  Nice Question (source)
2020-03-25 03:01:45 -0600 received badge  Notable Question (source)
2017-07-20 06:25:48 -0600 received badge  Famous Question (source)
2017-05-31 15:16:15 -0600 received badge  Notable Question (source)
2017-05-31 15:16:15 -0600 received badge  Popular Question (source)
2016-11-29 12:00:18 -0600 received badge  Good Answer (source)
2016-06-28 07:55:33 -0600 received badge  Nice Answer (source)
2016-04-19 19:12:39 -0600 received badge  Notable Question (source)
2016-02-28 08:32:34 -0600 received badge  Nice Answer (source)
2016-02-12 04:21:53 -0600 received badge  Famous Question (source)
2016-01-11 03:04:32 -0600 received badge  Good Answer (source)
2015-08-24 15:25:23 -0600 received badge  Popular Question (source)
2015-06-16 05:10:43 -0600 received badge  Favorite Question (source)
2015-06-16 01:06:21 -0600 received badge  Notable Question (source)
2015-01-29 15:21:43 -0600 received badge  Popular Question (source)
2014-12-29 09:55:16 -0600 received badge  Notable Question (source)
2014-12-09 14:00:47 -0600 marked best answer CSV File for Yale Faces

I am working through the example for face recognition and I would like to use the Yale database for training and comparing various recognizers. Has anyone got a CSV file to use with that database? Or would I have to create it myself?

I guess I would do it by moving the files into subject-specific folders and then running the create_csv.py script provided in contrib/doc/facerec/src

2014-12-09 14:00:47 -0600 marked best answer how can I align Face Images

I see that there's a nice little script in the facerec tutorial that aligns faces in images given positions for the eyes.

I need to get the coordinates for the eyes, though, somehow. Would it work to use the CascadeClassifier with the haarcascade_mcs_eyepair_big model?

I understand that there are many ways to do it (including the patented ASEF technique). I'm just wondering if this is a good way to get started.

2014-12-09 13:55:46 -0600 marked best answer Is Toe-in fatal for Stereo Correspondence?

I have had some success using pairs of webcams for stereo vision. More recently, I have tried using consumer 3D cameras for the same application, and I am having trouble with them.

It appears that the lenses are not exactly parallel; they are "cross-eyed" enough that more-distant features sometimes have less disparity than closer features. In other words, the two views converge, so something in the middle distance has zero disparity.

This would seem to be an insurmountable problem. Are there any known-good ways to deal with it?

2014-12-09 13:55:01 -0600 marked best answer Is CascadeClassifier_GPU really thread-unsafe?

In my multithreaded C++ program, I get malfunctions unless I keep CascadeClassifier_GPU::detectMultiScale inside a critical section. This is true even though each of my calling threads has a separate instance of CascadeClassifier_GPU.

Why is this? Is it a bug?

In the code below, all I have to do to make it break is to remove the scoped lock. This seems to prove the thread-unsafety.

void OcvGpuFaceFinder::detectMultiScale( const cv::Mat & img, std::vector< cv::Rect > & faceRects
            , double scaleFactor
            , int minNeighbors, int flagsIgnored
            , cv::Size minFaceSize
            , cv::Size maxFaceSizeIgnored
            )
{
    static MJCCritSect critter;

    cv::gpu::GpuMat d_img;
    d_img.upload( img );

    int numFound = 0;
    cv::Mat rectMat;
    cv::gpu::GpuMat d_objBuf;

    {
        MJCCritSect::ScopedLocker locker( critter );
        numFound = m_impl.detectMultiScale( d_img, d_objBuf
            , scaleFactor
            , minNeighbors
            , minFaceSize
            );
    }

    // download the part of the GPU dest Mat that contains found face rectangles
    d_objBuf.colRange( 0, numFound ).download( rectMat );
    cv::Rect * faces = rectMat.ptr< cv::Rect >();

    // copy face rects to final destination
    faceRects.clear();
    for( int ii = 0; ii < numFound; ++ii )
    {
        faceRects.push_back( faces[ ii ] );
    }
}
2014-12-09 13:47:54 -0600 marked best answer How should I enable SSE2 in Visual Studio 2008 builds?

Even though I set ENABLE_SSE and ENABLE_SSE2 to true in CMake, I see that CV_SSE2 is not defined in the generated projects. In internal.hpp, the only place where CV_SSE2 can be defined to 1 is inside an "#if defined __SSE2__ || defined _M_X64 || (defined _M_IX86_FP && _M_IX86_FP >= 2)" block.

It seems that the author(s) expected the compiler to predefine __SSE2__ whenever sse2 instructions are available, but this is not the case with Visual Studio 2008.

What is the best workaround for this bug in OpenCV 2.4.3? Should I manually change internal.hpp, or is there a configuration option that I should know about?
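If the missing predefine is the only problem, one possible workaround (a sketch, assuming MSVC's documented behavior that /arch:SSE2 sets _M_IX86_FP to 2, which satisfies the existing #if) is to pass the flag explicitly for 32-bit builds:

```cmake
# Hypothetical CMake fragment: force /arch:SSE2 for 32-bit MSVC builds
# so that _M_IX86_FP is predefined to 2 and CV_SSE2 gets enabled.
if(MSVC AND NOT CMAKE_CL_64)
  add_definitions(/arch:SSE2)
endif()
```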

2014-12-09 13:43:54 -0600 marked best answer Can I open 2 android cameras concurrently

When using a personal computer, the code below works well for getting frames concurrently from two video cameras. On Android, so far, it is not working for me.

    std::vector< cv::VideoCapture > readers( 2 );

    for( int camID = 0; camID < 2; ++camID )
    {
        TRACE("opening camera %d", CV_CAP_ANDROID + camID );
        readers[camID].open( CV_CAP_ANDROID + camID );
    }

    while( !stopRequested() )
    {
        std::vector< cv::Mat > frames( 2 );
        TRACE("reading frames");
        readers[0].grab();
        readers[1].grab();
        readers[0].retrieve( frames[0] );
        readers[1].retrieve( frames[1] );
        // do something with the frames
    }

The first VideoCapture opens fine, but the second open causes log output as shown below. Frames from the first VideoCapture object are good, whereas those from the second are not. Both cameras work fine when accessed individually; only when opened for simultaneous access do they fail. Is this supposed to work? This is all happening on a Nexus 10.

03-01 15:11:58.275: W/ADNC(1216): opening camera 1001
03-01 15:11:58.275: D/OpenCV::camera(1216): CvCapture_Android::CvCapture_Android(1)
03-01 15:11:58.275: D/OpenCV_NativeCamera(1216): CameraHandler::initCameraConnect(0x7288a8c9, 1, 0x70c9fdf8, 0x0)
03-01 15:11:58.275: D/OpenCV_NativeCamera(1216): Connecting to CameraService v 2.3
03-01 15:11:58.275: E/OpenCV_NativeCamera(1216): initCameraConnect: Unable to connect to CameraService
03-01 15:11:58.275: E/OpenCV::camera(1216): CameraWrapperConnector::connectWrapper ERROR: the initializing function returned false
03-01 15:11:58.275: E/OpenCV::camera(1216): Native_camera returned opening error: 6
2014-11-19 15:04:02 -0600 received badge  Popular Question (source)
2014-11-14 13:12:29 -0600 received badge  Nice Question (source)
2014-06-10 13:28:57 -0600 received badge  Citizen Patrol (source)
2014-05-05 12:30:59 -0600 commented answer How to build with opencv and native_app_glue

I did static linking. It was a long time ago, though, so things may have changed.

2014-04-24 23:45:06 -0600 received badge  Popular Question (source)
2014-04-20 16:18:55 -0600 received badge  Necromancer (source)
2014-03-28 13:51:02 -0600 commented answer FaceRecognizer Confidence

Something like that :-)

2014-03-28 13:46:32 -0600 answered a question Efficient matrix operator

There is a lot more memory traffic in the second version of the code. Your arithmetic operations are simple enough that they take almost no time compared to the memory operations. Specifically, your calls to multiply() and sum() cause extra memory writes and reads, proportional to the size of your matrix.

In any situation where speed is most important, you need to be careful how much memory access you do.
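To illustrate the point with plain standalone C++ (not OpenCV code): computing a dot product through a temporary buffer, the way a multiply() call followed by sum() effectively does, versus one fused loop. Both return the same value, but the first version writes and then re-reads a full extra array.

```cpp
#include <vector>
#include <cstddef>

// Two-pass version: analogous to multiply() followed by sum().
// The temporary `prod` costs a full extra write and read of N elements.
double dotTwoPass(const std::vector<double>& a, const std::vector<double>& b) {
    std::vector<double> prod(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) prod[i] = a[i] * b[i]; // write N
    double s = 0.0;
    for (std::size_t i = 0; i < prod.size(); ++i) s += prod[i];      // read N
    return s;
}

// Fused version: same arithmetic, no temporary buffer.
double dotFused(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}
```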

2014-03-27 16:19:05 -0600 commented answer FaceRecognizer Confidence

Yes, I think you need to determine it empirically using data that is relevant to your application. There is no built-in way of converting the distance metric to a percent. If you feed large numbers of relevant examples into the recognizer and keep all the returned metrics, you can do statistics on that.
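As a rough sketch of what I mean (the helper name and the percentile scheme here are purely illustrative, not anything built into FaceRecognizer): collect the distances the recognizer returns on matches you know are correct, then score a new distance by how many of those known-good distances were at least as large.

```cpp
#include <algorithm>
#include <vector>

// Illustrative helper: given distances the recognizer returned for matches
// known to be correct, estimate a "confidence percent" for a new distance
// as the fraction of correct-match distances at least this large.
double confidencePercent(std::vector<double> correctDistances, double newDistance) {
    std::sort(correctDistances.begin(), correctDistances.end());
    auto it = std::lower_bound(correctDistances.begin(), correctDistances.end(),
                               newDistance);
    double atLeast = static_cast<double>(correctDistances.end() - it);
    return 100.0 * atLeast / static_cast<double>(correctDistances.size());
}
```

A small new distance then maps to a high percentage, and the mapping is grounded in your own application's data rather than an arbitrary formula.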

2014-03-27 14:43:12 -0600 answered a question FaceRecognizer Confidence

Similar questions are frequently asked. For example, see this answer found via Google.

2014-03-27 14:37:02 -0600 answered a question WaitKey without waiting?

OpenCV does not prioritize a high-efficiency GUI implementation. If you don't like calling waitKey(1), you may need to use another toolkit for image display. Some OpenCV users use Qt for that; others use lower-level APIs.

2014-03-27 14:29:37 -0600 commented answer Unusual behavior of Matx

In MATLAB, indices start at 1.

2014-03-26 12:40:05 -0600 answered a question VideoCapture::get(CV_CAP_PROP_POS_MSEC) returns -1

On question 2, if you want to use cv::getTickCount() to estimate a time stamp, you should call it immediately after a grab, not before, since grab() may wait for a frame to be available.

I'll leave question 1 to others, other than to say I wouldn't be surprised if CV_CAP_PROP_POS_MSEC doesn't work for live streams.
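To illustrate the timing point with plain C++ (a standalone mock, not OpenCV code): a timestamp taken before a blocking grab is off by the entire time spent waiting for the frame, which is exactly what you measure by stamping on both sides of the call.

```cpp
#include <chrono>
#include <thread>

// Stand-in for a blocking grab(): waits until the "frame" arrives.
void mockGrab(std::chrono::milliseconds frameWait) {
    std::this_thread::sleep_for(frameWait);
}

// The gap between a before-grab stamp and an after-grab stamp is the
// whole frame wait -- i.e., the error you would bake into a timestamp
// taken before the grab instead of immediately after it.
long long beforeGrabStampErrorMs(std::chrono::milliseconds frameWait) {
    using clock = std::chrono::steady_clock;
    auto before = clock::now();   // stamping here is too early...
    mockGrab(frameWait);
    auto after = clock::now();    // ...this is much closer to exposure time
    return std::chrono::duration_cast<std::chrono::milliseconds>(after - before)
        .count();
}
```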

2014-03-25 14:05:35 -0600 commented answer Up and Down Sampling wihtout Gaussian Blur

I usually use INTER_LINEAR, but I thought nearest would work for that. Does it not?

2014-03-25 13:01:52 -0600 answered a question Up and Down Sampling wihtout Gaussian Blur

Just use resize() with interpolation=INTER_NEAREST.

2014-03-25 12:55:57 -0600 answered a question error in findStereoCorrespondenceBM

Convert all your images to single-channel 8-bit form using cvtColor and, if necessary, convertTo, threshold, or normalize.

2014-03-25 12:53:38 -0600 answered a question Expectation Maximization using HSV

I would expect trouble if a lot of your data has hue near zero. If part of hue space is unpopulated, you can work around this by adding a "magic" offset to every hue. Otherwise, you might need to use a more appropriate color space.
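To show what I mean with standalone C++ (a sketch; OpenCV's 8-bit hue range is [0, 180), and the helper names are mine): hues 2 and 178 are both nearly the same red, but their naive average is cyan. Shifting every hue by an offset so the populated region no longer straddles the 0/180 seam fixes the average.

```cpp
#include <vector>

// Naive average: hues 2 and 178 are both nearly red, but this returns 90 (cyan).
double naiveMeanHue(const std::vector<double>& hues) {
    double s = 0.0;
    for (double h : hues) s += h;
    return s / static_cast<double>(hues.size());
}

// The "magic offset" workaround: shift each hue (wrapping within [0, 180)),
// average in the shifted space, then shift back.
double offsetMeanHue(const std::vector<double>& hues, double offset) {
    double s = 0.0;
    for (double h : hues) {
        double shifted = h + offset;
        if (shifted >= 180.0) shifted -= 180.0;
        s += shifted;
    }
    double mean = s / static_cast<double>(hues.size()) - offset;
    if (mean < 0.0) mean += 180.0;
    return mean;
}
```

With hues {2, 178} and an offset of 90, the shifted mean maps back to hue 0, i.e. red, instead of the naive answer of 90.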

2014-03-25 12:40:07 -0600 commented answer Matlab sub2ind / ind2sub in OpenCV /c++

This is a good answer if the input is 2D and doesn't have padding on each row.

2014-03-21 14:16:13 -0600 commented answer Use VideoWriter class to save video in mp4 container.

I'm not sure it's 100% possible. You will probably have to use one of the Microsoft APIs. I've had trouble getting accurate time stamps; I suspect most people create them based on a system clock.

2014-03-19 16:59:18 -0600 commented answer Use VideoWriter class to save video in mp4 container.

You might need to use an actual video-capture toolkit; OpenCV does not prioritize detailed capture-device support. What platform and type of camera are you using?