
Lamar Latrell's profile - activity

2020-06-17 08:23:14 -0500 received badge  Nice Question (source)
2016-12-16 02:48:27 -0500 received badge  Famous Question (source)
2016-03-17 15:14:21 -0500 received badge  Notable Question (source)
2016-01-05 18:17:34 -0500 asked a question Python reference documentation for 3.x ?


Coming from 2.4, where the reference documentation also included all the Python info:

[screenshot]

Where is the equivalent for 3.x?

[screenshot]

If it hasn't been completed, where can we find information on a possible ETA? Or contribute?

2016-01-05 18:11:55 -0500 commented question How to install openCV 3.x for use with python 3.x in Windows 10

installed it with python 2.7 after (finally) finding: https://opencv-python-tutroals.readth...

2016-01-04 02:54:00 -0500 commented question How to install openCV 3.x for use with python 3.x in Windows 10

Rebuild? Bindings? (I'm a dummy) ... This is a fresh install on a new machine. I can get Visual Studio again, but I wasn't aware it could be used with Python. I wasn't attempting to use MinGW - I have no idea what that is, it was just the first option (of 4) in the Sublime Text 2 options in cmake ... So, ok, I'll just keep bashing away and see what happens...

2016-01-03 00:15:31 -0500 asked a question How to install openCV 3.x for use with python 3.x in Windows 10

Is there a 'for dummies' list of instructions to install openCV 3.x for use with Python 3.x on Windows 10?

A Hello-World of sorts? The OpenCV site doesn't appear to have one.

I am happy to use whatever IDE but as I have used Sublime Text 2 I would prefer to keep doing so.

I have spent quite a bit of time developing a reasonably complex and successful C++ OpenCV 2.x application in Visual Studio, and managed (with some effort) to get that development environment working for me. But when it comes to cmake/builds/github/source/compilers and all the configuration required for Python, I am facing too many free variables and unknowns (and my own ignorance regarding these things).

All the tutorials, SE Q&As and other internet discussion I've found are jargon-heavy, with missed steps and assumed knowledge.

Potentially irrelevant info (??) follows: everything looks good, but cmake is complaining:

"CMake Error: CMake was unable to find a build program corresponding to "MinGW Makefiles". CMAKE_MAKE_PROGRAM is not set."

Which is probably something to do with my selection of 'Sublime Text 2 - MinGW' in 'configure'. Maybe not - no idea; the answer given doesn't suggest which option I'm meant to choose, or why...

2015-12-08 17:56:46 -0500 received badge  Popular Question (source)
2015-03-27 19:30:12 -0500 commented question Using custom kernel in opencv 2DFilter - causing crash … convolution how?

Yes, it's true! The kernel needs to be 1 channel only ... (and I even wrote that in the comments). The error gave me no indication of this - I need to look into exception handling settings (are there any in VS?). Put it up as an answer and I'll upvote and accept :)

2015-03-27 06:43:11 -0500 asked a question Using custom kernel in opencv 2DFilter - causing crash … convolution how?

Hello all,

Thought I'd try my hand at a little (auto)correlation/convolution today in openCV and make my own 2D filter kernel.

Following OpenCV's 2D Filter Tutorial I discovered that making your own kernels for OpenCV's filter2D might not be that hard. However, I'm getting unhandled exceptions when I try to use one.

Code with comments relating to the issue here:

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {

    //Loading the source image
    Mat src;
    src = imread( "1.png" );

    //Output image of the same size and the same number of channels as src.
    Mat dst;
    //Mat dst = src.clone();   //didn't help...

    //desired depth of the destination image
    //negative so dst will be the same as src.depth()
    int ddepth = -1;        

    //the convolution kernel, a single-channel floating point matrix:
    Mat kernel = imread( "kernel.png" );
    kernel.convertTo(kernel, CV_32F);     //<<not working
    //normalize(kernel, kernel, 1.0, 0.0, 4, -1, noArray());  //doesn't help

    //cout << kernel.size() << endl;  // ... gives 11, 11

    //however, the example from tutorial that does work:
    //kernel = Mat::ones( 11, 11, CV_32F )/ (float)(11*11);

    //default value (-1,-1) here means that the anchor is at the kernel center.
    Point anchor = Point(-1,-1);

    //value added to the filtered pixels before storing them in dst.
    double delta = 0;

    //alright, let's do this...
    filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );

    imshow("Source", src);     //<< unhandled exception here
    imshow("Kernel", kernel);
    imshow("Destination", dst);

    waitKey(0);
    return 0;
}

As you can see, using the tutorial's kernel works fine, but my image crashes the program. I've tried changing the bit-depth, normalizing, checking the size, and lots of commenting out blocks to see where it fails, but I haven't cracked it yet.

The image is '1.png':

[image]

And the kernel I want, 'kernel.png':

[image]
I'm trying to see if I can get a hotspot in dst at the point where the eye catchlight is (the kernel I've chosen is the catchlight). I know there are other ways to do this, but I'm interested to see how effective convolving the catchlight over itself is. (Autocorrelation, I think that's called?)

Direct questions:

  • why the crash?
  • is the crash indicating a fundamental conceptual mistake?
  • or (hopefully) is it just some (silly) fault in the code?

Thanks in advance for any help :)

2015-03-13 05:11:33 -0500 commented answer OpenCV C++ contours - keeping results contiguous over frames

Done this now and it's successful... A position-and-velocity state Kalman filter predicts position from the last iteration, current measurements (with false positives included!) are then Hungarian-associated, and the associations are fed to the Kalman update - rinse and repeat :)

2015-03-13 02:28:23 -0500 asked a question Query re. how to set up an SVM, which SVM variation … and how to define a metric


I’d like to learn how best to set up an SVM in OpenCV (or another C++ library) for my particular problem (or find out if there is a more appropriate algorithm).

My goal is to receive a weighting of how well an input set of labeled points on a 2D plane compares or fits with each of a set of ‘ideal’ sets of labeled 2D points.

I hope my illustrations make this clear – the first three boxes, labeled A through C, indicate different ideal placements of 3 points; in my illustrations the labelling is managed by colour:

[illustration]

The second graphic gives examples of possible inputs:

[illustration]

If I then pass for instance example input set 1 to the algorithm it will compare that input set with each ideal set, illustrated here:

[illustration]

I would suggest that most observers would agree that the example input 1 is most similar to ideal set A, then B, then C.

My problem is to get not only this ordering out of an algorithm, but ideally also a weighting of how much the input is like A with respect to B and C.

For the example given it might be something like:

A:60%, B:30%, C:10%

Example input 3 might yield something such as:

A:33%, B:32%, C:35% (i.e. different order, and a less 'determined' result)

My end goal is to interpolate between the ideal settings using these weights.

To get the ordering, I’m guessing the ‘cost’ of fitting the input to each set may simply have been compared anyway (?) … if so, could this cost be used to find the weighting? Or maybe it's non-linear and some kind of transformation needs to happen? (But still, relative comparisons were evidently OK for determining the order.)

Am I on track?

Direct question>> is the OpenCV SVM appropriate? Or, more specifically:

  • A series of separate binary SVM classifiers, one per ideal state, and then a final ordering somehow? (i.e. what is the metric?)
  • A version of an SVM such as multiclass, structured and so on from another library? (...that I still find hard to conceptually grasp as the examples seem so unrelated)

Another critical component I’m not fully grasping yet is how to define what makes a good fit between an example input set and an ideal set. I was thinking Euclidean distance - do I simply sum the distances? What about outliers? My vector calc needs a brush-up, but maybe dot products could nose in there somewhere?

Direct question>> How best to define a metric that describes a fit in this case?

The real case would have 10-20 points per set and, time permitting, as many 'ideal' sets of points as possible - let's go with 30 for now. Could I expect to get away with ~2ms per iteration on a reasonable machine (a MacBook Pro)? Or does this kind of thing blow up?

2015-03-01 00:02:14 -0500 received badge  Critic (source)
2015-02-25 01:49:01 -0500 asked a question OpenCV matrix multiplication assertion fail inside class, but not outside


Trying to avoid a C-style struct, I'm making my first C++ class. An issue, though...

Ok, so using OpenCV I define a minimal class to show the issue I'm having. MatrixMathTest.cpp:

#include "MatrixMathTest.h"


float temp_A[] = {1.0, 1.0, 0.0, 1.0};
Mat A = Mat(2, 2, CV_32F, temp_A);
float temp_x[] = {3.0, 2.0};
Mat x = Mat(2, 1, CV_32F, temp_x);

void MatrixMathTest::doSomeMatrixCalcs() {
    x = A * x;    //expecting matrix mults, not element wise mults
    A = A.inv();  //proper matrix inversion
}

Then MatrixMathTest.h:

#include "opencv2/opencv.hpp"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace std;
using namespace cv;

class MatrixMathTest {
public:
    void doSomeMatrixCalcs();

    Mat x;
    Mat A;
};

And then run this:

#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include "MatrixMathTest.h"

using namespace cv;
using namespace std;

int main(int argc, char** argv) {

    float temp_A[] = {1.0, 1.0, 0.0, 1.0};
    Mat A = Mat(2,2, CV_32F , temp_A);
    A = A.inv();
    cout << A << endl;

    float temp_x[] = {3, 2};
    Mat x = Mat(2,1, CV_32F , temp_x);
    x = A * x;
    cout << x << endl;

    MatrixMathTest tester;
    tester.doSomeMatrixCalcs();   //the call that triggers the assertion

    return 0;
}

The same code works as expected in main, but once it's in the class it fails:

[screenshot: assertion error]

If I place the A.inv() line first I get a slightly different error:

[screenshot: inverse error]

There were no assertion failures when I ran the same code directly in main with CV_32F as the type. Searching for the error message shows people solving it by changing the variable types, but I have tried all the number types the assertion(s) mention and more; it's simply left at CV_32F because that was the last one I tried.

I figure it's something to do with being in the class? But what? Or something else? Something (terribly basic?) I am yet to learn?

And if it is related to the type: I eventually want to do both mults and inversions on the same matrices - do the different types in those assertions exclude that?

2015-02-14 20:28:30 -0500 commented answer OpenCV C++ contours - keeping results contiguous over frames

Just read the wikipedia page on the Hungarian method. I grasp the basics of it: I develop a metric (or group of metrics) using my knowledge of the situation, then the algorithm optimises the assignment according to that metric? Hrrrm, coming up with an 'optimal' metric isn't exactly a trivial task! :) EDIT: oh, so the Kalman filter prediction is the metric?

2015-02-14 19:39:57 -0500 commented answer OpenCV C++ contours - keeping results contiguous over frames

googling at the moment - "multi object tracking" is a great search term so thanks for the heads up, very relevant and I've discovered 'data association' also :)

2015-02-14 17:00:08 -0500 commented answer OpenCV C++ contours - keeping results contiguous over frames

Thanks for your interest :) It is helmet mounted facial motion capture with markers - e.g.

The markers currently are black dots and the camera is monochrome - I could use IR LEDs, retro-reflective markers and IR filter the lens for very clean data. However retroreflective markers are either poisonous around the eye (paint), or too large (adhesive balls). That is why I'm keen to just try a dark marker, but something like a nostril or otherwise might give a false positive.

I was also thinking of using the moments capability available in contours - i.e. make the markers have very specific orientations and aspect ratios, also markers within markers (bullseyes) ...

Keen to learn though

2015-02-14 07:00:45 -0500 asked a question OpenCV C++ contours - keeping results contiguous over frames


I have a real time application in OpenCV where I need to take a current video frame, and analyse it for contours then work with the centroids of those contours. So far the basics are all good and working.

The issue I foresee is that my input frames are 'noisy' to the extent that I may see different amounts of centroids for each frame, it's how to deal with this that is my interest.

The objects I'm interested in give positive hits in every frame, so if I'm expecting 14, I'll get at least 14 in every frame. I'm also aware a priori of the spatial relationships between the objects in frame (for instance, they'll never cross, there is symmetry, and - among other rules - the top-left one is exactly that: top-left, always). So I can trust that whatever order the OpenCV findContours function finds them in will remain constant (it might turn out the third contour found is always the top-left one, for instance).

It's the false positives that are the issue: they mess with the ordering such that the 'logic' I just outlined falls apart, unkindly. False positives can appear anywhere - I could end up with 15 centroids, and suddenly '3' might be the false positive, '4' is top-left, and so on.

Question: It looks like just a one-hit function - clear your vector and start again kind of thing - but does OpenCV have a way around this built in?

If I have to roll my own - what are the usual tactics?

As the objects in frame move only 'within a bound' between successive frames, I can play with that I guess - and combined with the aforementioned known spatial relationships (again, 'within a bound') I could be cooking with gas :) Thing is, I'd be doing this between every frame...

I'm up against a very short loop execution time - if I can rely on already-solved problems with optimal algorithmic complexity vs. my own kludge, I'm all for it.

Anyone got anything to teach me? (did I make sense?)

2015-01-19 04:48:02 -0500 received badge  Enthusiast
2015-01-15 06:18:18 -0500 received badge  Nice Answer (source)
2015-01-14 22:33:05 -0500 commented answer waitKey(1) timing issues causing frame rate slow down - fix?

Thanks! Also, just edited out a lot of my camera specifics and made it slightly more general ...

2015-01-14 22:31:59 -0500 commented answer waitKey(1) timing issues causing frame rate slow down - fix?

Well, using both DLIB and openGL gave similar draw periods - although I must admit I was pretty much in copy-and-paste mode for most of that 'development' period :) Maybe I was neglecting some part of the process that weighed down the loop ...

2015-01-14 22:27:56 -0500 received badge  Scholar (source)
2015-01-14 02:12:57 -0500 received badge  Self-Learner (source)
2015-01-14 02:12:57 -0500 received badge  Teacher (source)
2015-01-13 21:41:00 -0500 commented question Is there a faster way to display video than NamedWindow and WaitKey? (Linux)(Python)

Might be a bit late, but in case anyone else is here - have a look:

2015-01-13 20:25:58 -0500 commented answer waitKey(1) timing issues causing frame rate slow down - fix?

Maybe it's not the done thing accepting your own answer - but it is... the answer :) I need 20+ points or something though ??

2015-01-13 02:12:50 -0500 received badge  Student (source)
2015-01-12 22:46:20 -0500 commented answer waitKey(1) timing issues causing frame rate slow down - fix?

Forum/browser software deleted a more detailed answer - but the following code is working with threads :)

I get 120fps in fastLoop and 63fps in the viewLoop - adding processing to fastLoop shows an eventual slowdown; once this reaches 63 they are both 63, and the threads I guess are then redundant.

It's pretty bare - but worked straight away, which was surprising. Maybe someone can point out issues with it?

Quick and hopefully correct explanation: mtx.lock() means the shared memory touched between lock() and unlock() is only accessible to the code holding the lock, for the duration of the lock - this avoids concurrent reads/writes of the Mat memory by the two threads. (Note a mutex only provides mutual exclusion; it doesn't grant the fastLoop thread any priority over the viewLoop thread.)

//#include something specific to your camera ...

#include "opencv2/opencv.hpp"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

#include <thread>
#include <mutex>

using namespace cv;
using namespace std;

mutex mtx;   // might not actually need to be global ...

void fastLoopCode(Mat& frameCV) {

    //initialize your camera etc. ('camera' below comes from your SDK)

    Frame *frame;      //direct data from camera...

    while (1) {
        frame = camera->GetLatestFrame();

        mtx.lock();
        //whatever code required to turn the camera data into an openCV Mat
        mtx.unlock();

        frame->Release();
    }
}

void viewLoopCode(Mat& frameCV) {

    namedWindow("frame", WINDOW_AUTOSIZE);

    while (1) {
        mtx.lock();
        imshow("frame", frameCV);
        mtx.unlock();

        waitKey(1);
    }
}

int main(int argc, char** argv){

    Mat frameCV(Size(CAMERA_WIDTH, CAMERA_HEIGHT), CV_8UC1);

    thread fastLoop (fastLoopCode, frameCV);
    thread viewLoop (viewLoopCode, frameCV);

    fastLoop.join();
    viewLoop.join();

    return 0;
}
2015-01-12 22:27:26 -0500 received badge  Editor (source)
2015-01-12 22:27:26 -0500 edited question waitKey(1) timing issues causing frame rate slow down - fix?


I am using a camera that can run at 120fps. I am accessing its frames via its SDK and then formatting them into OpenCV Mat data types for display.

At 120fps I am hoping that the while(1) loop will achieve a period of 8.3ms - i.e. the camera fps will be the bottleneck. However, currently I am only achieving around a 13ms loop (~70fps).

Using a timer that gives microsecond resolution I have timed the components of the while(1) loop and see that the bottleneck is actually the 'waitKey(1);' line.

If I'm not mistaken I should expect a 1ms delay here ? Instead I see around 13ms spent on this line.

i.e. waitKey(1); is the bottleneck

Also of note: if I try waitKey(200); I see a 200ms delay, but anything lower than around waitKey(20); does not give a delay that reflects the waitKey input parameter:

waitKey(1) = 12ms

waitKey(5), waitKey(10) and waitKey(15) all give a 12ms delay

waitKey(20) gives 26ms

waitKey(40) gives 42ms


It would seem that everything else but the waitKey is taking around 2ms, which would give me 6ms or so for other OpenCV processing while still keeping up with 120fps - this is my goal.

How can I either 'fix' or avoid waitKey? Or perhaps at least understand where I am going wrong conceptually :)

The code follows - hopefully it's clear considering the camera's SDK:

#include "cameralibrary.h"
#include "supportcode.h"  

#include "opencv2/opencv.hpp"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;
using namespace CameraLibrary;

int main(int argc, char** argv){ 

    Camera *camera = CameraManager::X().GetCamera();

    int cameraWidth  = camera->Width();
    int cameraHeight = camera->Height();

    Mat imageCV(cv::Size(cameraWidth, cameraHeight), CV_8UC1);
    const int BITSPERPIXEL = 8;

    Frame *frame;

    namedWindow("frame", WINDOW_AUTOSIZE);



    while (1) {

        //maximum of 2 micro seconds:
        frame = camera->GetLatestFrame();

        //maximum of 60 micro seconds:
        //(last argument - the destination buffer - reconstructed; check against your SDK)
        frame->Rasterize(cameraWidth, cameraHeight, imageCV.step, BITSPERPIXEL, imageCV.data);

        //maximum of 0.75 ms
        imshow("frame", imageCV);

        // **PROBLEM HERE**  12 ms on average
        waitKey(1);

        //maximum of 2 micro seconds:
        frame->Release();
    }

    return 0;
}

2015-01-12 15:15:42 -0500 commented answer waitKey(1) timing issues causing frame rate slow down - fix?

I ran out of space... Also wanted to say thanks for the detailed answer ! It makes more sense now.

Although it's not the greatest of 'news', I guess, it means I can move on knowing that there wasn't some hidden and perhaps more direct solution :)

2015-01-12 15:12:09 -0500 commented answer waitKey(1) timing issues causing frame rate slow down - fix?

Thanks, it's something I had considered. I alluded to this solution in my comment above, except it would have been %4. It would mean a hiccup in loop timing precision every n frames, which wouldn't be too bad for my application; still, it doesn't sit well with me - what if it did affect my application... :)

I'm thinking a solution might be to thread the processing loop to run at 120 fps with no imshow/waitKey, and every n frames signal to main (or vice versa) that a frame is ready for display. Got to read more about mutexes and/or the timing hit for copying a frame to another memory allocation for the display function ... never done threads :)

Yes, waitKey is a misnomer (!) and not only that, but the input parameter isn't linearly mapped to the behaviour it ostensibly represents.

2015-01-12 01:03:24 -0500 commented question waitKey(1) timing issues causing frame rate slow down - fix?

Is it anything to do with the refresh rate of my screen?? (60Hz)

I don't need the image on screen to update at 120fps - quarter rate/30fps would be fine. But I do need the while(1) loop to run at 120Hz - i.e. so the full 120 images are accessible to the code.

I don't really fancy every 4th frame forcing a 13ms period, so I'm still keen to find out how to get waitKey() to happen as fast as it indicates it should (?)