
synthnassizer's profile - activity

2020-11-30 15:19:07 -0600 received badge  Notable Question (source)
2019-09-08 08:02:18 -0600 received badge  Popular Question (source)
2018-07-26 14:24:54 -0600 received badge  Notable Question (source)
2016-06-23 23:18:52 -0600 received badge  Popular Question (source)
2014-05-08 02:31:52 -0600 received badge  Necromancer (source)
2014-04-11 16:51:06 -0600 commented question How processing-hungry are opencv backgroundsubtractor mog, mog2 , gmg?

Hi there. Indeed, the camera I use outputs 1280x960 frames, but I was hoping the Core i7 was a fast enough processor. Probably I should harness the GPU for the subtractors...
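For reference, the 2.4.x gpu module ships GPU versions of MOG and MOG2. A minimal sketch of what the CUDA path could look like, assuming OpenCV was built with the gpu module (the gpuSubtract() wrapper is just for illustration):

#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

void gpuSubtract(const cv::Mat& frame, cv::Mat& fgmask)
{
    static cv::gpu::MOG2_GPU mog2;            // model persists across calls
    static cv::gpu::GpuMat d_frame, d_fgmask;

    d_frame.upload(frame);                    // host -> device copy
    mog2(d_frame, d_fgmask);                  // default learning rate
    d_fgmask.download(fgmask);                // device -> host copy
}

The upload/download copies are not free, so the gain depends on how much of the rest of the pipeline also stays on the GPU.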

2014-04-10 19:53:07 -0600 asked a question How processing-hungry are opencv backgroundsubtractor mog, mog2 , gmg?

Hi all, I am testing OpenCV's background subtractors MOG, MOG2, and GMG in my application. All three reduce the FPS of my application a lot: with MOG and MOG2 I get around 17 fps, and with GMG I fall down to 5!

I have re-examined my code and I know I pass the images around as references. There is one deep copy involved when grabbing the image from the camera, but other than that I work with pointers.

Are the algorithms THAT power-hungry? Note that I run only one subtraction method at a time, on the CPU (not the GPU). The processor is a Core i7, though, and it seems reasonable to assume it should be able to cope. Or am I wrong?
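One thing worth checking before reaching for the GPU: these per-pixel models scale linearly with pixel count, so halving the resolution roughly quarters the work. A sketch with MOG2 (the subtractDownscaled() wrapper is hypothetical, just for illustration):

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/background_segm.hpp>

void subtractDownscaled(const cv::Mat& frame, cv::Mat& fgmask)
{
    static cv::BackgroundSubtractorMOG2 mog2;   // model persists across frames
    cv::Mat half, fgmaskHalf;

    // 1280x960 -> 640x480: ~4x fewer pixels for the model to update
    cv::resize(frame, half, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
    mog2(half, fgmaskHalf);                     // default learning rate

    // scale the mask back up if later stages need full resolution
    cv::resize(fgmaskHalf, fgmask, frame.size(), 0, 0, cv::INTER_NEAREST);
}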

2014-04-09 21:27:51 -0600 asked a question using opencv backgroundSubtractorGMG to track moving and stationary objects

Hi there, I am using BackgroundSubtractorGMG from OpenCV 2.4.8, with C++ on Linux. I am also working only with grayscale images.

This specific subtractor is aimed at variable-light, video-art installations, which is what I want to do too.

I am tweaking the values "decisionThreshold" and "learningRate" in an attempt to find optimal settings for what I am trying to do.

What I am trying to implement is a human-tracking mechanism. The human will not be moving all the time, though. This affects me because GMG seems to incorporate into the background anything that stands still for more than 3-4 seconds. How can I increase this time? Additionally, tweaking decisionThreshold and learningRate does not have a significant effect on this.

What is more, once the tracked object (one blob) has come to rest, after those 3-4 seconds it starts to fade out (dark spots appear and grow quickly). As these grow, the contour finder starts detecting more than just one object. This is also unwanted behaviour, and I wonder if there are settings I could use to fix it.

What can I do to avoid this?
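For what it's worth, in 2.4.8 the GMG parameters are public members, and there is also an updateBackgroundModel flag that can freeze the model outright. A sketch of the idea, assuming the scene is empty during the initialization frames (trackWithGmg() is an illustrative helper):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/background_segm.hpp>

void trackWithGmg(cv::VideoCapture& cap)
{
    cv::BackgroundSubtractorGMG gmg;
    gmg.numInitializationFrames = 120;  // frames used to learn the empty scene
    gmg.learningRate = 0.01;            // below the 0.025 default: slower absorption

    cv::Mat frame, gray, fgmask;
    for (int n = 0; cap.read(frame); ++n) {
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        gmg(gray, fgmask);

        // once the empty scene is learned, freeze the model so a person
        // standing still is never absorbed into the background
        if (n == gmg.numInitializationFrames)
            gmg.updateBackgroundModel = false;
    }
}

Freezing the model sidesteps the 3-4 second absorption entirely, at the cost of no longer adapting to light changes; a compromise is to re-enable updates only while no blob is being tracked.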

2014-04-09 05:38:01 -0600 answered a question Background removal with changing light

This is a bit late, but you could try BackgroundSubtractorGMG, which is supposedly tuned for light variations. See here.

The OpenCV 3.0.0-beta documentation has info on each of the three implemented algorithms here.

2014-03-25 10:17:48 -0600 received badge  Self-Learner (source)
2014-03-25 10:12:29 -0600 answered a question accessing element in a homography matrix

findHomography() returns a matrix of double (CV_64F) elements, not float.
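A corrected version of the loop from the question below, reading the elements as double (printElements() is just an illustrative wrapper):

#include <iostream>
#include <opencv2/core/core.hpp>

void printElements(const cv::Mat& H)
{
    for (int i = 0; i < H.rows; i++) {
        const double* Mi = H.ptr<double>(i);   // rows of a CV_64F Mat are double*
        for (int j = 0; j < H.cols; j++)
            std::cout << Mi[j] << std::endl;
    }
}

For a single element, H.at<double>(i, j) works as well.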

2014-03-25 08:58:35 -0600 asked a question accessing element in a homography matrix

Hi all, I have a 3x3 homography matrix, which I computed using the findHomography() function. I store it in a cv::Mat.

I am trying to access its elements using the following code:

float cvHomography::accessElements(const cv::Mat& aCvMat)
{
    //cout << aCvMat << endl;
    const float* Mi;
    for (int i = 0; i < aCvMat.rows; i++) {
        Mi = aCvMat.ptr<float>(i);
        for (int j = 0; j < aCvMat.cols; j++) {
            cout << Mi[j] << endl;
        }
    }
}

The above does not print the correct values from the homography matrix. I have searched through documentation, tutorials, and Google, and I honestly cannot see what I am doing wrong.

2014-03-23 20:06:55 -0600 asked a question how can I reproject points once I have the homography matrix?

Hi all! I have calculated a homography matrix using cvFindHomography() and now I would like to use this matrix to do some point reprojection.

Originally, I thought I could simply do

p' = H * p, where H is my obtained (3x3) homography matrix, p is my original point, a Vec3f (with z=0), and p' is my reprojected point (again a Vec3f).

Apparently the compiler complains, though, that the "*" operator has not been defined for such a multiplication.

What could be wrong? Am I using the "*" operator in some wrong way?

Thank you for your help.
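In case it helps a future reader, a minimal sketch, assuming H came back from findHomography() as a 3x3 CV_64F Mat: the "*" operator needs both operands to be Mats of the same type, and the homogeneous coordinate must be 1, not 0 (the reproject() helper is just for illustration):

#include <opencv2/core/core.hpp>

cv::Point2f reproject(const cv::Mat& H, cv::Point2f p)
{
    // both operands CV_64F, point in homogeneous form (x, y, 1)
    cv::Mat ph = (cv::Mat_<double>(3, 1) << p.x, p.y, 1.0);
    cv::Mat q  = H * ph;
    q /= q.at<double>(2);   // divide by w to get pixel coordinates
    return cv::Point2f((float)q.at<double>(0), (float)q.at<double>(1));
}

Alternatively, cv::perspectiveTransform(src, dst, H) does the multiplication and the homogeneous divide for a whole std::vector<cv::Point2f> in one call.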

2014-03-10 05:02:37 -0600 received badge  Editor (source)
2014-03-09 19:12:53 -0600 asked a question how do I re-project points in a camera - projector system (after calibration)

Hi all, I have seen many blog entries, videos, and source code on the internet about how to carry out camera + projector calibration using OpenCV, in order to produce the camera.yml, projector.yml and projectorExtrinsics.yml files.

I have yet to see anyone discussing what to do with these files afterwards. Indeed, I have done a calibration myself, but I don't know what the next step is in my own application.

Say I write an application that now uses the calibrated camera-projector system to track objects and project something on them. I will use findContours() to grab some points of interest from the moving objects, and now I want to project these points (from the projector!) onto the objects!

What I want to do is (for example) track the centre of mass (COM) of an object and show a point on the camera view of the tracked object (at its COM). Then a point should be projected onto the COM of the object in real time.

It seems that projectPoints() is the OpenCV function I should use after loading the yml files, but I am not sure how I will account for all the intrinsic and extrinsic calibration values of both camera and projector. Namely, projectPoints() requires as parameters:

  • the vector of points to re-project (duh!)
  • rotation + translation matrices. I think I can use the projectorExtrinsics here, or I can use the composeRT() function to generate a final rotation and a final translation from the projectorExtrinsics (which I have in the yml file) and the cameraExtrinsics (which I don't have; side question: should I not save them too in a file?).
  • the intrinsics matrix. This is tricky: should I use the camera or the projector intrinsics matrix here?
  • the distortion coefficients. Again, should I use the projector or the camera coefficients here?
  • other params...

So if I use either the projector's or the camera's (which one??) intrinsics + coefficients in projectPoints(), then I will only be 'correcting' for one of the two instruments. Where / how will I use the other instrument's intrinsics?

What else do I need, apart from loading the yml files and projectPoints()? (Perhaps undistortion?)

ANY help on the matter is greatly appreciated. If there is a tutorial or a book (no, O'Reilly's "Learning OpenCV" does not talk about how to use the calibration yml files either - only about how to do the actual calibration), please point me in that direction. I don't necessarily need an exact answer :)
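To make the question concrete, here is a sketch of what I imagine the projection step looks like, under two assumptions: the 3D points are expressed in the camera's coordinate frame, and projectorExtrinsics.yml holds the rotation/translation taking camera coordinates to projector coordinates. The yml node names below are guesses and would need to match whatever the calibration tool actually wrote:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>

void projectOntoObject(const std::vector<cv::Point3f>& ptsInCameraFrame,
                       std::vector<cv::Point2f>& projectorPixels)
{
    cv::Mat projK, projDist, R, T;
    cv::FileStorage fsi("projector.yml", cv::FileStorage::READ);
    fsi["camera_matrix"] >> projK;                  // projector intrinsics (node names assumed)
    fsi["distortion_coefficients"] >> projDist;
    cv::FileStorage fse("projectorExtrinsics.yml", cv::FileStorage::READ);
    fse["R"] >> R;                                  // camera -> projector rotation
    fse["T"] >> T;                                  // camera -> projector translation

    cv::Mat rvec;
    cv::Rodrigues(R, rvec);   // projectPoints() wants a rotation vector

    // the projector's intrinsics/distortion apply here, because we are
    // rendering through the projector; the camera's intrinsics were
    // already consumed when lifting the tracked points to 3D
    cv::projectPoints(ptsInCameraFrame, rvec, T, projK, projDist,
                      projectorPixels);
}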

2014-02-24 17:49:03 -0600 received badge  Scholar (source)
2014-02-21 10:05:07 -0600 asked a question chessboard not found while calibrating camera

Hi all, I am attempting to run chessboard calibration. The image of the chessboard can be found here:

http://postimg.org/image/w0k0ymgo3/

Unfortunately, the findChessboardCorners() function does not identify the chessboard, as seen in this image.

I don't know what could be the problem.

Do you have any pointers / suggestions as to what I can try? Thank you.
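For anyone hitting the same wall, two usual suspects: the pattern size must count inner corners (a board of 10x7 squares is 9x6), and plain thresholding can fail under uneven light. A sketch, where the 9x6 size is an assumption about the printed board (findBoard() is an illustrative helper):

#include <vector>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>

bool findBoard(const cv::Mat& gray, std::vector<cv::Point2f>& corners)
{
    cv::Size patternSize(9, 6);   // INNER corners, not squares
    bool found = cv::findChessboardCorners(
        gray, patternSize, corners,
        cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);

    if (found)   // refine the hits to sub-pixel accuracy for calibration
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
            cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
    return found;
}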

2013-11-23 17:28:20 -0600 asked a question copying two images side by side to create a wider image

Hi all, I am trying to get frames from two Kinect devices and merge them into a unified, wider image.

So basically I need to take the two 640x480 frames (images) and place them in another image of resolution 1280x480.

As I am very new to OpenCV, I am not sure what search terms would find even a remotely relevant result. I tried setting the ROI to the left half of the image and assigning pixel values to it, then setting the right half as the ROI and assigning the other frame to that region. In the end I draw the 1280x480 wide image, but I only see the frame data that I assigned last.

What am I doing wrong? What is the proper way to do it?

thank you
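For the record, one common cause of this symptom: assigning a frame to a ROI variable (roi = frame) only re-points the ROI header at the frame's data and copies no pixels into the wide image. copyTo() into the ROI is what actually writes them. A minimal sketch (makeWide() is just for illustration):

#include <opencv2/core/core.hpp>

cv::Mat makeWide(const cv::Mat& left, const cv::Mat& right)
{
    cv::Mat wide(480, 1280, left.type());
    left.copyTo(wide(cv::Rect(0, 0, 640, 480)));      // left half
    right.copyTo(wide(cv::Rect(640, 0, 640, 480)));   // right half
    return wide;
}

cv::hconcat(left, right, wide) does the same thing in one call.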

2013-11-21 15:01:53 -0600 received badge  Supporter (source)
2013-11-14 09:39:04 -0600 received badge  Self-Learner (source)
2013-11-14 09:27:38 -0600 answered a question how do I save/load a cv::Rect ?

The OpenCV cheat sheet came to the rescue.

It appears saving a Rect is easy, but loading it requires a few more steps:

// read back the node that was written as "boundRect" and pull the
// four numbers out of the sequence by index
FileNode mr = fs["boundRect"];
Rect r;

r.x = (int)mr[0];
r.y = (int)mr[1];
r.width = (int)mr[2];
r.height = (int)mr[3];

so good enough :)

2013-11-07 18:38:34 -0600 received badge  Student (source)
2013-11-07 18:33:11 -0600 asked a question how do I save/load a cv::Rect ?

Hi all, newbie with OpenCV here. I am trying to save/load a cv::Rect and was hoping it would happen in an easy way - just as is the case with cv::Mat.

OK, saving works as expected:

FileStorage fs(ofToDataPath(filename.c_str()), FileStorage::APPEND);
fs << "boundRect" << aCvRect;

Opening the file, I'll see appended:

boundRect: [ 271, 100, 56, 72 ]

But then I fail when using the "expected" way to load this variable. Ideally, this should work (as it does with cv::Mat):

FileStorage fs(ofToDataPath(filename.c_str()), FileStorage::READ);
fs["boundRect"] >> loadedCvRect;

Well, the above does not compile, saying:

../../../addons/ofxOpenCv/libs/opencv/include/opencv2/core/operations.hpp|2825|error: no matching function for call to ‘read(const cv::FileNode&, cv::Rect_<int>&, cv::Rect_<int>)’|

If saving didn't work as expected, I would simply choose another way to save the rect; but since saving works, it seems odd that loading has not been implemented. Am I missing something here? And how can I load the Rect values?

If this is not implemented, what is the suggested way? To store x,y,w,h in a 2x2 Mat and then store that?

thank you for your help