
blue's profile - activity

2015-03-09 15:50:21 -0600 received badge  Good Answer
2015-03-09 15:50:21 -0600 received badge  Enlightened
2014-10-27 00:44:11 -0600 received badge  Nice Answer
2013-02-04 03:00:46 -0600 received badge  Nice Answer
2012-08-04 14:10:30 -0600 received badge  Nice Answer
2012-08-02 20:56:15 -0600 commented question What format does cv2.solvePnP use for points in Python?

If you run the ipython example with N=1420, does it work (it does for me)? Are you sure ipython and your application have the same PYTHONPATH? Perhaps the difference in behaviour is due to a different version of cv2.

2012-08-02 11:36:30 -0600 answered a question What format does cv2.solvePnP use for points in Python?

This seems to work:

In [1]: import cv2

In [2]: import numpy as np

In [3]: objectPoints = np.random.random((10,3,1))

In [4]: imagePoints = np.random.random((10,2,1))

In [5]: cameraMatrix = np.eye(3)

In [6]: distCoeffs = np.zeros((5,1))

In [7]: cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs)

Out[7]:

(True, array([[ 0.15797798], [ 0.48352235], [-0.25566527]]), array([[ 0.36711979], [ 1.02124737], [ 1.83962556]]))

Image points of shape (N, 1, 2) and object points of shape (N, 1, 3) also work for me (your case number 2). Perhaps you had a bad shape for distCoeffs?
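
If your points start out as flat (N, 3) and (N, 2) arrays, a quick numpy-only sketch of massaging them into that layout (the variable names are just illustrative):

```python
import numpy as np

N = 10
# flat per-point arrays, e.g. straight from a detector
obj_flat = np.random.random((N, 3))
img_flat = np.random.random((N, 2))

# reshape to the (N, 1, 3) / (N, 1, 2) layout and cast to float32,
# a layout the cv2 point wrappers accept
objectPoints = obj_flat.reshape(-1, 1, 3).astype(np.float32)
imagePoints = img_flat.reshape(-1, 1, 2).astype(np.float32)
```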

2012-07-19 08:58:29 -0600 commented question Collision Avoidance using OpenCV on iPad

This paper describes one algorithm that might be useful: Animal-Inspired Agile Flight Using Optical Flow Sensing. Good luck; this sounds like a fun project.

2012-07-18 16:06:46 -0600 commented question Tricky image segmentation in Python

Try applying adaptiveThreshold to binarize the image before passing it to findContours.

2012-07-18 13:32:47 -0600 answered a question Extract a RotatedRect area

You could rotate the image first and then extract the rectangle, but if the rectangle is small compared to the full image, it might be better to use warpAffine.

Allocate the destination image with the size of the rectangle.

I think the affine transformation matrix is

T=[ 1 0 x0 ]
  [ 0 1 y0 ]
  [ 0 0  1 ]
R=[ cos(theta) -sin(theta) 0 ]
  [ sin(theta)  cos(theta) 0 ]
  [      0           0     1 ]
M=T(x0,y0)*R(theta)
 =[ cos(theta) -sin(theta) x0 ]
  [ sin(theta)  cos(theta) y0 ]

where (x0, y0) is the coordinate of the upper-left corner of the rectangle and theta is the rotation angle. I haven't tested this, so there may be an error in the definition, but it should be a good start.
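
The composition above is easy to sanity-check in numpy (the angle and corner are just example values):

```python
import numpy as np

theta = np.deg2rad(30.0)   # example rotation angle
x0, y0 = 40.0, 25.0        # example upper-left corner of the rectangle

T = np.array([[1.0, 0.0, x0],
              [0.0, 1.0, y0],
              [0.0, 0.0, 1.0]])
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# drop the homogeneous row to get the 2x3 matrix cv2.warpAffine expects
M = (T @ R)[:2]
```

The top two rows of T*R come out as [cos -sin x0; sin cos y0], matching the closed form given above.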

2012-07-18 10:42:00 -0600 answered a question Weird result while finding angle

I'm not sure I understand the question, but here is an example which might help you. Usually when I see a nonsense numerical result like the one you describe, it is due to a data type error. For example, if you are somehow casting to an integer before dividing or before passing to arctan, the fractional part would be discarded, resulting in a zero angle.

#!/usr/bin/env python
import cv2
import numpy as np

# draw a rectangle
image = np.zeros((100,100), dtype='f4')
image[25:75,30:70] = 50

# take the x and y sobel derivatives
dx = cv2.Sobel(image, -1, 1, 0)
dy = cv2.Sobel(image, -1, 0, 1)

# display the results with nice scaling
cv2.imshow("original", image)
cv2.imshow("dx", cv2.convertScaleAbs(dx, None, 1./2, 100))
cv2.imshow("dy", cv2.convertScaleAbs(dy, None, 1./2, 100))

# convert dx, dy to magnitude and angle
# angle is in radians
mag, theta = cv2.cartToPolar(dx, dy)

# display in HSV so angle has a simple mapping
theta_hsv = np.zeros((100,100,3), dtype='f4')
# Hue is angle in degrees
theta_hsv[...,0] = np.degrees(theta)
# S and V are 1.0
theta_hsv[...,1:] = 1.0
# perform the colorspace conversion
theta_bgr = cv2.cvtColor(theta_hsv, cv2.COLOR_HSV2BGR)

# and show the angles
cv2.imshow("theta", theta_bgr)

# press 'q' to exit
while cv2.waitKey(10) != ord('q'):
    pass

2012-07-16 15:38:53 -0600 answered a question Assertion Error in Kalman Filter python OpenCV 2.4.0

The python wrapper for this class is incomplete. Nowhere in your example did you specify any of the system matrices or noise covariances, and indeed the python wrapper does not provide a method to do this.

If you look at the sample kalman.cpp, you can see that the C++ API requires the transition, measurement, control and noise covariance matrices to be initialized after the filter object is constructed.

You might want to write up a feature request on the bug tracker for this.

2012-07-16 14:41:09 -0600 received badge  Teacher
2012-07-16 14:01:04 -0600 answered a question cv2 bindings incompatible with numpy.dstack function?

The problem is the way dstack constructs the array:

In [1]: import cv2

In [2]: import numpy as np

In [3]: channel = np.arange(240 * 320, dtype=np.uint8).reshape(240, 320, 1)

In [4]: merged = cv2.merge([channel for i in range(3)])

In [5]: stacked = np.dstack([channel for i in range(3)])

In [6]: stacked.strides
Out[6]: (1, 240, 76800)

In [7]: merged.strides
Out[7]: (960, 3, 1)

In order to reshape and stack the arrays in the manner you requested, numpy only modifies the strides through the array; it does not move the data around. merge appears to actually move the data around.

I think OpenCV can represent arrays stored in this fashion, but the python wrappers for OpenCV do not perform this translation.

See the numpy documentation for more information about numpy's strided array storage.
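
If you need to feed such a strided array to a cv2 function, one numpy-side workaround is to force a contiguous copy first. A small sketch using a transposed view, which is likewise non-contiguous:

```python
import numpy as np

a = np.arange(12, dtype=np.uint8).reshape(3, 4)
view = a.T                          # a strided view; no data is moved

fixed = np.ascontiguousarray(view)  # copies into C-contiguous storage
# fixed has the same values as view, but with contiguous strides
```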

2012-07-12 16:49:17 -0600 answered a question videofacerec.py example help

Looks to me like you have an empty face database. The exception is probably thrown because the argument to np.bincount has length 0, either because k == 0 (unlikely) or because len(predictor.y) == 0. That value is assigned at videofacerec.py:52 in self.predictor.compute(). Check that dataset_fn (command line argument 2) is valid.
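
A numpy-only illustration of how an empty label array produces an exception in that kind of majority vote (depending on the numpy version, the error surfaces inside np.bincount itself or in the argmax over its result):

```python
import numpy as np

labels = np.array([], dtype=np.intp)   # what an empty face database yields
counts = np.bincount(labels)           # empty histogram on recent numpy

try:
    winner = np.argmax(counts)         # argmax of an empty array raises
except ValueError:
    winner = None                      # no data, no prediction
```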