Ask Your Question

Martin Peris's profile - activity

2019-01-25 13:02:03 -0500 received badge  Nice Answer (source)
2018-03-23 12:43:06 -0500 received badge  Good Answer (source)
2018-01-10 15:08:24 -0500 received badge  Good Answer (source)
2016-03-19 12:41:12 -0500 received badge  Good Answer (source)
2015-03-18 00:49:57 -0500 answered a question Does camera rotation help with a stereo depth map?

OpenCV assumes fixed cameras like this:

[image: two cameras mounted side by side with parallel optical axes]

When the cameras are set up in this fashion, the epipolar lines become parallel, which makes the stereo correspondence problem much simpler.

For the setup that you mention, you would need to redo the camera calibration every time you rotate the cameras.

2014-11-29 16:55:44 -0500 received badge  Nice Answer (source)
2014-11-16 23:51:38 -0500 answered a question how can I make imwrite work with a Mat video frame?

According to your error, the problem is that OpenCV doesn't know how to save files with the jpg extension. Do you have libjpeg installed?

2014-10-29 13:02:03 -0500 received badge  Good Answer (source)
2014-10-18 14:36:30 -0500 received badge  Nice Answer (source)
2014-10-01 21:45:21 -0500 received badge  Good Answer (source)
2014-09-02 21:07:52 -0500 commented question Unable to open video from C++ in opencv. It works from python

Are you certain that the path to the video file in your C++ code is correct?

2014-08-07 20:09:04 -0500 commented answer Background substraction using OpenCV MOG from live camera feed.

I edited my answer; please check it.

2014-08-05 20:58:42 -0500 answered a question Understanding OpenCV internally

If you want to know how OpenCV works internally, the best approach is to take a look at the source code. It is usually well documented.

Also, if you want to know about the algorithms implemented, I would recommend reading the papers referenced in the documentation of each algorithm (you have an example of what I mean in the documentation of the BackgroundSubtractorMOG class).

2014-08-05 20:02:51 -0500 answered a question Background substraction using OpenCV MOG from live camera feed.

You might be having memory reference problems; try cloning the frame into its own memory space.

Try this:

cv::Mat frame_clone = frame.clone(); // deep copy, so the capture cannot overwrite it
// process the cloned image to obtain a mask image.
pMOG->operator()(frame_clone, fgMaskMOG);


I tried your code with a video taken with a crappy webcam and with the live feed of the same webcam. I get the same result as you: good with the video, bad with the live feed.

But I observed something: the images on the recorded video are far more stable (do not change much from frame to frame) than the ones from the live feed (there is a lot of noise from frame to frame).

I think that by requesting the live stream of the webcam you get a highly compressed, low-quality image, and that is causing the MOG algorithm to give you crappy results with the default parameters.

Possible solutions: find a way to increase the quality of the live feed or tweak the parameters of the MOG to deal with the frame-to-frame variability of the live feed.

2014-07-23 03:41:57 -0500 received badge  Nice Answer (source)
2014-07-19 09:19:17 -0500 answered a question how to insert a small size image on to a big image

Hi there,

Let me illustrate this with an example: let's say you have a small image and you want to insert it at the point (x,y) of your "big image". You can do something like this:

cv::Mat small_image;
cv::Mat big_image;
//Somehow fill small_image and big_image with your data
small_image.copyTo(big_image(cv::Rect(x,y,small_image.cols, small_image.rows)));

With this, what you are doing is creating a ROI (region of interest) on your big image, located at point (x,y) and of the same size as small_image. Then you copy small_image into that ROI.

2014-07-13 21:28:24 -0500 received badge  Citizen Patrol (source)
2014-07-11 03:32:12 -0500 commented question Why does detectMultiScale detect faces only when they are close to the centre of the frame?

Don't worry, it happened to all of us at some point ;)

2014-07-11 03:12:23 -0500 commented question Why does detectMultiScale detect faces only when they are close to the centre of the frame?

Could be helpful if you post a couple of example images

2014-05-25 19:52:09 -0500 edited question Read frame by frame VC++

How do I read a particular frame from a capture file?

Current Code:

cvSetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES, trackBar2); // set to value of trackBar
frame =  cvQueryFrame(capture); 
label8->Text = "Frame No.: " + (int)cvGetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES) + " trackBar2->value: " + trackBar2->Value ;

It seems that (int)cvGetCaptureProperty(capture, CV_CAP_PROP_POS_FRAMES) doesn't give the next or prior frame when I adjust trackBar2, which steps back or forward by 1. trackBar2->Value increments correctly by 1 or -1 (depending on the scroll direction), but CV_CAP_PROP_POS_FRAMES returns a random value.

2014-05-21 04:02:03 -0500 commented answer Image Background Transparency - OpenCV

You are losing the image because you are defining colorScalar as:

Scalar colorScalar = new Scalar(125,125,200,0.6)

The alpha channel of a CV_8UC4 image is represented by a uchar ([0-255]) and you are giving it a value of 0.6, which will truncate to 0 (fully transparent, hence making the whole image fully transparent).

Try this instead:

Scalar colorScalar = new Scalar(125, 125,200, 154);

This should make your watermark appear about 60% opaque.

2014-05-21 02:10:19 -0500 commented answer Image Background Transparency - OpenCV

You are setting targetMat with the same type as scaledImage which is probably not CV_8UC4. Try:

targetMat = new Mat(targetSize, CV_8UC4, colorScalar);
2014-05-21 01:30:38 -0500 commented answer Image Background Transparency - OpenCV

In C++ this would be done like this when you declare targetMat:

cv::Mat targetMat(rows, cols, CV_8UC4);
2014-05-21 00:45:26 -0500 answered a question Image Background Transparency - OpenCV

warpAffine sets the destination image to have the same type as the source. You need targetMat to be CV_8UC4

2014-05-21 00:42:30 -0500 edited question Image Background Transparency - OpenCV

Hi, I'm using this code to make the image background transparent, but I'm not getting a transparent background.

Imgproc.warpAffine(targetMat, resultMat, rotImage, targetSize, Imgproc.INTER_CUBIC,        
Imgproc.BORDER_TRANSPARENT,new Scalar(255,255,255,0));

[image: result with the background still opaque]

2014-05-21 00:38:08 -0500 answered a question reading multiple images

You can always load the images one by one manually:

//Assuming that the images live in the same directory where you executed your OpenCV program
cv::Mat image_0 = cv::imread("0.jpg");
cv::Mat image_1 = cv::imread("1.jpg");
cv::Mat image_2 = cv::imread("2.jpg");
cv::Mat image_3 = cv::imread("3.jpg");
cv::Mat image_4 = cv::imread("4.jpg");
cv::Mat image_5 = cv::imread("5.jpg");
cv::Mat image_6 = cv::imread("6.jpg");

You should check that image_*.empty() is false (which means the image was loaded successfully) and then do whatever you want with them.

2014-05-20 23:58:28 -0500 commented answer How to display text on the windows (webcam windows) openCV?

Hi norzanbzura, could you please update your question with the code that you just posted? The comment section has a limited number of characters, so we can not see it all. Thanks

2014-05-20 23:21:07 -0500 answered a question How to detect multiple rectangles and rotate to vertical position

This is what I would roughly do:

I would first use findContours with the parameter CV_RETR_EXTERNAL to detect only the external contours.

Then I would make sure that each contour contains exactly 4 corners (i.e. it is a rectangle).

I would calculate the angle of each box with respect to a vertical line (using, for example, the top-left and bottom-left corners of each detected contour) and use that angle to create a transformation matrix with getRotationMatrix2D.

And finally, use warpAffine with the transformation matrix you just got to get each detected box aligned.

Here are a couple of tutorials that might be useful to you:

finding contours in images

affine transformations

2014-05-20 20:48:23 -0500 answered a question How to display text on the windows (webcam windows) openCV?

The functions that you need to check out are getTextSize and putText

You have a nice code example under the documentation of getTextSize

I hope this helps you.

2014-04-23 04:27:05 -0500 received badge  Nice Answer (source)
2014-04-09 02:45:23 -0500 commented answer Unexpected result when comparing sample image with database of images opencv c++

You probably need to build the image_db path as follows (this is pseudo code): image_db = "/home/srikbaba/images/"+ent->d_name

2014-04-06 19:52:30 -0500 answered a question Unexpected result when comparing sample image with database of images opencv c++

Well, you have declared filename3 as follows:

filename3 = (char *)malloc(sizeof(char));

What you are doing there is declaring a pointer to a single char (and only one char). I guess that what you want is to allocate memory for a certain number of characters, not only one. Try something like this:

filename3 = (char *)malloc(sizeof(char)*APPROPRIATE_SIZE_FOR_FILENAME3);

Where APPROPRIATE_SIZE_FOR_FILENAME3 may be, for example, 1024. By doing this, your file names will be allowed to be up to 1023 characters long.

Also, you didn't include the definition of img_db, but you might have the same problem there.

I hope this helps.

2014-03-30 23:02:40 -0500 commented question IMREAD Not working with windows form

We could help you better if you post some example code.

2014-03-30 19:57:25 -0500 edited question failed to make opencv with tbb42 on macosx Mavericks

I installed the mac version of tbb42_20140122oss on my MacBook Pro, which is running on Mavericks. I tried to install opencv-2.4.8 with TBB enabled (after issuing cmake with -DWITH_TBB=ON, the report showed "with TBB YES"). PS: I'm using gcc 4.8

mkdir debug

However, I was unable to make OpenCV; it died at:

In file included from /usr/local/include/tbb/combinable.h:32:0,
                 from /usr/local/include/tbb/tbb.h:49,
                 from /Users/benzene/works/OpenCV/opencv-2.4.8/modules/core/include/opencv2/core/internal.hpp:179,
                 from /Users/benzene/works/OpenCV/opencv-2.4.8/modules/highgui/src/precomp.hpp:50,
                 from /Users/benzene/works/OpenCV/opencv-2.4.8/modules/highgui/src/
/usr/local/include/tbb/enumerable_thread_specific.h: In instantiation of 'static _opaque_pthread_t* tbb::interface6::internal::ets_base<ETS_key_type>::key_of_current_thread() [with tbb::ets_key_usage_type ETS_key_type = (tbb::ets_key_usage_type)1u; tbb::interface6::internal::ets_base<ETS_key_type>::key_type = _opaque_pthread_t*]':
/usr/local/include/tbb/enumerable_thread_specific.h:173:54:   required from 'void* tbb::interface6::internal::ets_base<ETS_key_type>::table_lookup(bool&) [with tbb::ets_key_usage_type ETS_key_type = (tbb::ets_key_usage_type)1u]'
/usr/local/include/tbb/enumerable_thread_specific.h:281:36:   required from here
/usr/local/include/tbb/enumerable_thread_specific.h:98:70: warning: declaration of 'id' shadows a global declaration [-Wshadow]
                tbb::tbb_thread::id id = tbb::this_tbb_thread::get_id();
make[2]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/src/] Error 1
make[1]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/all] Error 2
make: *** [all] Error 2

Could somebody help me out?

2014-03-27 01:21:22 -0500 answered a question Blurring issues (only 1/3 of the image is blurred)

I agree with Haris Moonamkunnu, you can use OpenCV built-in functionality to do what you want.

Besides, you are getting only 1/3 of the image blurred with your function because you are probably providing it a color image (3 channels - RGB) as input while looping over it as a grayscale image (1 channel - GRAY).

2014-03-26 20:47:49 -0500 commented answer imagen processing

Hi Kuki, yes, OpenCV can do what you want, but it will take you some work :) I would recommend using OpenCV to locate and segment the license plate, and a specialized library for the OCR (although with enough effort you can get OpenCV to do that too)

2014-03-26 20:29:19 -0500 commented answer imshow() not working in pthread MacOSX 10.9

I have updated my answer, please check it out

2014-03-24 21:22:46 -0500 answered a question imshow() not working in pthread MacOSX 10.9

You might need to initialize the visualization window (e.g. with cv::namedWindow) before creating the thread.

Also, inside the thread function you should do:

pthread_exit(NULL);

Instead of

return NULL;

To comply with the POSIX threads specification.

2014-03-24 20:13:40 -0500 answered a question imagen processing

It depends on many variables:

  • Define "speeding cars". Depending on the speed of the cars and the characteristics of your camera, the license plate numbers might appear blurred, making recognition impractical.

  • Is the camera static? For non-static cameras the segmentation of the license plate might be more challenging.

  • Are there illumination changes? Day/night, clouds passing by, reflections... illumination changes are the nemesis of any automatic computer vision system.

  • Do you need your system to work in real time? Real time is always challenging, as you cannot use time-consuming, highly accurate methods.

Given the right conditions and using the appropriate recipe of algorithms, OpenCV would help you locate and segment the license plate. Using an OCR library would be helpful for the character recognition, but if you don't have one you can still get OpenCV to do it for you (if you know how).

2014-03-24 19:56:36 -0500 answered a question VideoCapture serial id

Well, as far as I know this is not really an OpenCV issue. The device name of the cameras that you connect to your computer depends on the operating system. So if, for example, you are using ubuntu/debian, you should write udev rules so that every time you connect a camera with a certain serial number, it gets the same device name.

I hope this points you in the right direction.

2014-03-24 02:09:03 -0500 answered a question Is there a danger with using the same Mat object for both source and destination?

Hi there,

There are OpenCV methods for which it is safe to use the same source and destination image, but you should check the documentation and make sure that you know what you are doing.

When in doubt, use separate images. IMHO it is always better to use a bit more memory than to deal with unexpected behavior.

2014-03-24 01:52:53 -0500 answered a question Eroding Text to from blobs

Hi Luek,

By using Mat() as the third argument to erode you are using the default 3x3 structuring element, which might be too small for your case. You should experiment with larger sizes/different shapes of structuring element until you achieve satisfying results.

You can try something like this:

int size = 6; //Play with this size until you get the results you want
cv::erode(quad, quad, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(size, size)), cv::Point(-1, -1), 2);
2014-03-24 01:34:00 -0500 answered a question What is `kernel`?

Hi Luek,

Don't let esoteric names, such as kernel, intimidate you :) The concept behind it is really simple.

That kernel (also known as a structuring element) is nothing more than a binary matrix, that is, a matrix composed of 0's and 1's. The arrangement of those 0's and 1's determines the neighborhood of pixels over which the minimum (in the case of erode) or maximum (in the case of dilate) is taken.

The erode function will slide the kernel over the original image and all the pixel values on the original image that "fall under" a 1 value on the kernel will be considered for the calculation of the minimum value.

To make things easy, OpenCV provides the function getStructuringElement.

You can also take a look at this wikipedia article: Erosion

I hope this helps