
pyro's profile - activity

2020-09-22 02:37:37 -0600 received badge  Enlightened (source)
2020-09-22 02:37:37 -0600 received badge  Good Answer (source)
2020-09-16 16:25:13 -0600 received badge  Nice Answer (source)
2015-04-02 13:47:41 -0600 received badge  Nice Answer (source)
2014-06-30 02:08:22 -0600 received badge  Student (source)
2014-06-30 02:01:03 -0600 asked a question Retrieve non-zero elements from a SparseMat while iterating through another

Suppose I have two SparseMats: sparse1 = { 1, 1, 0, 1 }, sparse2 = { 1, 1, 1, 1 }

I iterate through sparse1 and simultaneously retrieve the corresponding elements from sparse2 as follows:

const SparseMat *a = &sparse1;
const SparseMat *b = &sparse2;

SparseMatConstIterator_<double> it = a->begin<double>(),
                                it_end = a->end<double>();
for(; it != it_end; ++it)
{
    double p = *it;
    const cv::SparseMat::Node* anode = it.node();
    double q = b->value<double>(anode->idx,(size_t*)&anode->hashval);
    cout << p << " " << q << endl;
}

This gives me the following output:

1 1
1 1
1 1

Note that the third element of both matrices is missing from the output, since sparse1 stores a 0 at that position. Is there a way to NOT ignore the non-zero elements of sparse2 at such positions while iterating through sparse1?

Or do I have to stick to dense matrices for this scenario?

2014-02-12 21:17:24 -0600 asked a question Distance types for FLANN in OpenCV-Python

Are distance types such as histogram intersection and chi-square supported for FLANN in the Python bindings? If yes, how should one specify them?

I couldn't find any information about this in the 2.4.8/3.0 docs/tutorials.
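
For reference, the only pattern I have seen in the Python tutorials passes FLANN parameters as plain dicts, with nothing that obviously selects a distance type. A minimal sketch of that standard usage (the descriptor arrays below are random placeholders):

import cv2
import numpy as np

# Placeholder descriptors for two images; FLANN expects float32 data.
des1 = np.random.rand(100, 32).astype(np.float32)
des2 = np.random.rand(100, 32).astype(np.float32)

# Index and search parameters are passed as plain dicts in the Python bindings.
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)

flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)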

2013-12-30 18:00:20 -0600 received badge  Nice Answer (source)
2013-12-20 01:07:12 -0600 commented question How to use PCA SIFT in opencv ?

PCA-SIFT isn't available out of the box in OpenCV. If you are OK with other robust keypoint-based descriptors, you may refer to SURF, ORB, and FREAK (http://docs.opencv.org/modules/features2d/doc/feature_detection_and_description.html).

2013-12-20 01:00:38 -0600 commented answer adding several images

In that case, you may add the n frames and take an average (assuming this is your logic) using the above answer. cv::addWeighted() lets you assign a different weight to each image while adding. For example, assigning equal weights of 0.5 to two frames results in a plain average, whereas weighing one frame higher than the other (say 0.3 for frame 1 and 0.7 for frame 2) gives you a weighted average in which 70% of the contribution comes from frame 2. (http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#addweighted)
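
A minimal sketch of that weighted average, assuming two frames of the same size and type (the file names are placeholders):

import cv2

frame1 = cv2.imread('frame1.png')
frame2 = cv2.imread('frame2.png')

# Plain average: equal weights of 0.5, no scalar offset.
average = cv2.addWeighted(frame1, 0.5, frame2, 0.5, 0)

# Weighted average: frame2 contributes 70% of the result.
weighted = cv2.addWeighted(frame1, 0.3, frame2, 0.7, 0)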

2013-12-19 22:08:02 -0600 answered a question How to get the means OTSU threshold level in openCV?

The documentation (http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#threshold) states that the function cv::threshold() returns the computed threshold value.

So you can simply retrieve the value with something like: threshold_value = cv::threshold(img1, img2, 0, 255, CV_THRESH_OTSU);
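
For completeness, the same idea in the Python bindings, as a minimal sketch (the file name is a placeholder; the input must be a single-channel 8-bit image):

import cv2

gray = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
# The first return value is the Otsu threshold that was computed.
otsu_value, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(otsu_value)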

2013-12-19 21:51:36 -0600 commented answer adding several images

You don't need to add frames before applying the medianBlur function. It works on each frame by assigning each pixel the median of its neighboring pixel values. The number of neighboring pixels considered depends on the kernel size that you pass as a parameter. More details and example code here: http://docs.opencv.org/doc/tutorials/imgproc/gausian_median_blur_bilateral_filter/gausian_median_blur_bilateral_filter.html
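
A minimal sketch of applying it to a single frame (the file name is a placeholder; the kernel size must be odd):

import cv2

frame = cv2.imread('frame.png')
# Each pixel is replaced by the median of its 5x5 neighborhood.
denoised = cv2.medianBlur(frame, 5)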

2013-12-19 21:37:03 -0600 answered a question Face threshold in various light

An excellent yet simple illumination normalization technique, applied to face images under difficult lighting, is presented in this paper: http://lear.inrialpes.fr/pubs/2007/TT07/Tan-amfg07a.pdf

An existing C++ implementation of the above paper's algorithm: https://github.com/bytefish/opencv/blob/master/misc/tan_triggs.cpp

Note that this is only for illumination normalization, and you may have to experiment a bit with the normalized image to obtain a binary image as per your requirement.
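
For a quick experiment, here is a rough Python sketch of the preprocessing chain described in the paper (gamma correction, difference-of-Gaussians filtering, contrast equalization). The parameter defaults follow the paper, but this code is only illustrative; refer to the linked C++ implementation for a tested version.

import cv2
import numpy as np

def illumination_normalize(img, gamma=0.2, sigma0=1.0, sigma1=2.0, alpha=0.1, tau=10.0):
    x = img.astype(np.float32) / 255.0
    # 1. Gamma correction compresses the dynamic range.
    x = np.power(x, gamma)
    # 2. Difference of Gaussians removes low-frequency illumination gradients.
    x = cv2.GaussianBlur(x, (0, 0), sigma0) - cv2.GaussianBlur(x, (0, 0), sigma1)
    # 3. Two-stage contrast equalization.
    x = x / np.power(np.mean(np.power(np.abs(x), alpha)), 1.0 / alpha)
    x = x / np.power(np.mean(np.power(np.minimum(np.abs(x), tau), alpha)), 1.0 / alpha)
    # 4. Compress extreme values and rescale to 8-bit for display or thresholding.
    x = tau * np.tanh(x / tau)
    return cv2.normalize(x, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

face = cv2.imread('face.png', cv2.IMREAD_GRAYSCALE)  # placeholder file name
normalized = illumination_normalize(face)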

2013-12-18 00:53:28 -0600 answered a question Failing Building OpenCV from source for Python2.7

My solution file is missing the INSTALL target too; I am not sure of the reason for this.

But you can easily complete the installation manually: go to \opencv\build\lib\Release, find the cv2.pyd file, and copy it to C:\Python27\Lib\site-packages (or the equivalent path for your Python installation).

Note that you must also add the path of your OpenCV DLL files to the PATH environment variable if you built OpenCV as a dynamic library.
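
A quick way to verify the copy and the PATH setup is to import the module from a fresh interpreter (a minimal check):

import cv2
print(cv2.__version__)  # should print the version you just built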

2013-12-17 07:10:31 -0600 received badge  Teacher (source)
2013-12-16 23:34:02 -0600 answered a question Extract an ellipse form from an image instead of drawing it inside

There isn't a straightforward way to extract the ellipse. But you may use a mask image to extract the elliptical patch with the rest of the region blacked out.

First, define a mask image which has a white ellipse on a black background. Then use a bitwise AND operation to extract the patch.

Python code:

import cv2
import numpy as np
import matplotlib.pyplot as plt

image = cv2.imread('baboon.jpg')
# create a mask image of the same shape as input image, filled with 0s (black color)
mask = np.zeros_like(image)
rows, cols, _ = mask.shape
# create a white filled ellipse; note that the center is given as (x, y), i.e. (cols/2, rows/2)
cv2.ellipse(mask, center=(cols // 2, rows // 2), axes=(50, 100), angle=0, startAngle=0, endAngle=360, color=(255, 255, 255), thickness=-1)
# Bitwise AND operation to black out regions outside the mask
result = np.bitwise_and(image, mask)
# Convert from BGR to RGB for displaying correctly in matplotlib
# Note that you needn't do this for displaying using OpenCV's imshow()
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
mask_rgb = cv2.cvtColor(mask, cv2.COLOR_BGR2RGB)
result_rgb = cv2.cvtColor(result, cv2.COLOR_BGR2RGB)
# Plotting the results
plt.subplot(131)
plt.imshow(image_rgb)
plt.subplot(132)
plt.imshow(mask_rgb)
plt.subplot(133)
plt.imshow(result_rgb)
plt.show()

Plot:

(figure: the original image, the elliptical mask, and the masked result, plotted side by side)

2013-12-16 23:08:14 -0600 commented question calcOpticalFlowPyrLK losing single user-defined tracking point

You may try a median-flow tracker. It works by tracking a grid of points (not a single point) in a defined image patch. Each grid point is tracked using Lucas-Kanade optical flow, and the median of this vector field determines the location of the patch in the next frame. More details and Python code here: http://jayrambhia.wordpress.com/2012/06/03/median-flow-tracker-using-simplecvopencv-gsoc-week-1-and-2/. If you still need something more advanced you may consider OpenTLD (http://gnebehay.github.io/OpenTLD/), which combines tracking, online learning and detection to robustly track objects. In fact, the median-flow tracker is a part of this framework. Hope this helps!
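
For a feel of the core idea, here is a rough sketch (not the full median-flow algorithm, which also adds forward-backward error checking and scale estimation); the patch coordinates and frame variables are placeholders:

import cv2
import numpy as np

def track_patch(prev_gray, curr_gray, x, y, w, h, grid=10):
    # Cover the patch with a regular grid of points.
    xs = np.linspace(x, x + w, grid)
    ys = np.linspace(y, y + h, grid)
    pts = np.array([[px, py] for py in ys for px in xs], dtype=np.float32).reshape(-1, 1, 2)
    # Track every grid point with pyramidal Lucas-Kanade optical flow.
    nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    # The median displacement of the surviving points moves the whole patch.
    dx, dy = np.median(nxt[ok].reshape(-1, 2) - pts[ok].reshape(-1, 2), axis=0)
    return x + dx, y + dy, w, h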

2013-12-15 19:53:39 -0600 commented question calcOpticalFlowPyrLK losing single user-defined tracking point

Does your tracker lose track of points if they are corners detected by cv2.goodFeaturesToTrack? I ask this because Lucas-Kanade optical flow generally works well with corner points. The point that you specified might be featureless (i.e. flat, with uniform texture and no strong gradients). See here for more details: http://docs.opencv.org/trunk/doc/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.html.
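
A minimal sketch of checking this with corner points (the frame file names are placeholders):

import cv2

prev_gray = cv2.imread('frame1.png', cv2.IMREAD_GRAYSCALE)
curr_gray = cv2.imread('frame2.png', cv2.IMREAD_GRAYSCALE)

# Detect corners that are well suited to Lucas-Kanade tracking.
corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.3, minDistance=7)

# status is 1 for every point that was successfully found in the next frame.
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, corners, None)
print('tracked', int(status.sum()), 'of', len(corners), 'corners')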

2013-12-14 08:48:00 -0600 commented answer how to select a specific bounding box

@akoo regarding your previous question, since you already have the coordinates for the bounding box, you can extract the patch from the image using cv::Rect. There's an example here: http://stackoverflow.com/questions/16621983/how-opencv-c-interface-manage-roi
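
In C++ that crop is image(cv::Rect(x, y, width, height)); the Python equivalent is plain NumPy slicing. A minimal sketch with placeholder coordinates and file name:

import cv2

image = cv2.imread('scene.png')    # placeholder file name
x, y, w, h = 100, 50, 200, 60      # placeholder bounding-box coordinates
patch = image[y:y + h, x:x + w]    # rows are indexed by y, columns by x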

2013-12-14 08:44:31 -0600 commented answer how to select a specific bounding box

@akoo you can consider accepting the answer if it helped.

2013-12-10 21:24:46 -0600 commented question Displaying multiple windows on a screen

Please provide more details, such as: (1) sample code showing what you have tried, (2) screenshots illustrating your current results, and (3) a clearly stated description of your expected output.

For example, it is not very clear what you mean by: "display multiple windows on a single screen".

You may also refer to the forum's FAQ at http://answers.opencv.org/faq/ to understand how to formulate a good question. A well-formulated question will help you obtain meaningful answers from many people quickly.

2013-12-10 21:01:27 -0600 commented answer how to select a specific bounding box

@akoo I don't think the aspect ratio is an elegant solution; you may get a lot of false positives. Assuming that the license plate primarily contains black and white colors, you can consider using the color feature first, followed by OCR to narrow down your selection.

2013-12-10 02:31:20 -0600 answered a question how to select a specific bounding box

It is not easy to give a generic solution to this problem, but for the given image and bounding boxes I can think of two approaches:

  1. Find features which can distinguish between the bounding boxes, such as color. You could calculate a color histogram (assuming you can also use a color image) for the patch in each bounding box, and compare the histograms to select the one you need (a minimal sketch follows at the end of this answer). Example code: http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_calculation/histogram_calculation.html

  2. Assuming that you only want to detect the bounding box with some alphanumerics inside, you could apply Optical Character Recognition to each patch and select the one which contains letters and numbers. Note, however, that you may get false positives if there are other patches satisfying this criterion. Example Python code to get started: http://stackoverflow.com/questions/9413216/simple-digit-recognition-ocr-in-opencv-python

This is really a very specific problem, and you need to use your domain knowledge to think of features that help select the required patch.
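
A minimal sketch of the first approach, comparing a candidate patch against a reference patch by their hue-saturation histograms (file names and bin counts are placeholders; the constant name cv2.HISTCMP_CORREL assumes OpenCV 3.x):

import cv2

def hs_histogram(patch):
    # Normalized hue-saturation histogram, so patch size does not affect the comparison.
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, None)

reference = cv2.imread('reference_plate.png')   # placeholder file names
candidate = cv2.imread('candidate_patch.png')

# Correlation close to 1 means the candidate's colors resemble the reference.
score = cv2.compareHist(hs_histogram(reference), hs_histogram(candidate), cv2.HISTCMP_CORREL)
print(score)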

2013-12-09 07:16:19 -0600 commented question alternate to opencv cv.waitkey in python or pygtk

This really isn't related to OpenCV, is it? The official docs for event handling in PyGTK are here: http://www.pygtk.org/pygtk2tutorial/s... There is an example at the end of the page.

2013-11-28 02:44:40 -0600 commented question Creating Python bindings for missing functions

@berak yes that's right.

2013-11-28 00:39:55 -0600 asked a question Creating Python bindings for missing functions

What is the correct way to create Python bindings for missing functions in OpenCV? Take an already-wrapped simple function (say cv::calcHist() in opencv/modules/imgproc/src/histogram.cpp) as an example:

  1. Could someone explain which code changes are needed, and in which files? I just expect some simple examples and further references that I can look up myself.
  2. Are there any best practices to follow when writing Python bindings for OpenCV?

Thanks!

2013-09-25 05:58:36 -0600 commented question Building opencv-master for Python

@Abid it's strange that I couldn't find "INSTALL" in my OpenCv.sln

2013-09-25 05:55:59 -0600 commented question Building opencv-master for Python

@Abid Thanks, but I had already referred to the link. I was able to build, but it was dynamically linked and I hadn't added the DLL path to the PATH environment variable. I fixed the problem with @berak's answer. BTW, is this the official OpenCV-Python site for the next release?

2013-09-25 05:52:02 -0600 received badge  Critic (source)
2013-09-24 22:59:37 -0600 commented answer Building opencv-master for Python

Thanks for the extra tip on building a shared library.

2013-09-24 22:58:50 -0600 received badge  Scholar (source)
2013-09-24 05:30:53 -0600 received badge  Organizer (source)
2013-09-24 04:40:37 -0600 received badge  Editor (source)
2013-09-24 04:39:37 -0600 asked a question Building opencv-master for Python

I want to build the latest version of OpenCV (dev 3.0.0) from source. I followed the instructions in the official tutorial and was able to successfully complete the build.

But importing cv2.pyd in Python (after copying to \Python27\Lib\site-packages) throws an error "ImportError: DLL load failed: The specified module could not be found."

I found 5 files named cv2.* after the build:

  1. cv2.obj in \build\modules\python\opencv_python.dir\Release

  2. cv2.pyd, cv2.obj, cv2.exp, cv2.pdb in \build\lib\Release

What are these files (especially the .pyd file), and why aren't they usable like the 2.4.6 prebuilt version, which can simply be dropped into the \Python27\Lib\site-packages folder?

Thanks!