user6789's profile - activity

2020-11-06 15:20:38 -0500 received badge  Self-Learner (source)
2020-11-01 06:04:02 -0500 received badge  Popular Question (source)
2015-03-18 12:35:37 -0500 received badge  Critic (source)
2014-08-26 13:38:54 -0500 asked a question Grid Intersection points detection

I have the following problem statement for my project. I'm using OpenCV and Python. I need to find and return the pixel coordinates of all intersection points of any arbitrary grid that the user gives as input. The grid will be square in shape.

You can take a look at the grid here.

Please help me.

Thanks in advance
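
A minimal sketch of one possible approach (not necessarily the one used in the end), assuming the grid lines have good contrast: grid crossings are strong corners, so a corner detector such as cv2.goodFeaturesToTrack can return their pixel coordinates. The file name 'grid.png' and the detector parameters below are placeholders.

    import cv2

    # 'grid.png' is a placeholder file name for the grid image linked above.
    img = cv2.imread('grid.png')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Grid crossings are strong corners, so a corner detector picks them up;
    # maxCorners, qualityLevel and minDistance need tuning for the actual grid.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                      qualityLevel=0.1, minDistance=10)
    if corners is not None:
        for corner in corners:
            x, y = corner.ravel()
            print(int(x), int(y))    # pixel coordinates of one intersection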

2014-07-31 09:10:28 -0500 commented question Occlusion handling with Camshift algorithm

Oh, thank you! I just stumbled upon it too. Any references for implementing it? I found it pretty confusing. Thanks!

2014-07-30 13:08:11 -0500 asked a question Occlusion handling with Camshift algorithm

I am working on object tracking with the CamShift algorithm. For the time being I am using the built-in OpenCV sample code, but I have trouble dealing with occlusion.

 hsv = cv2.cvtColor(self.frame, cv2.COLOR_BGR2HSV)
 # keep only pixels with reasonable saturation/value so the hue back-projection is reliable
 mask = cv2.inRange(hsv, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
 # back-project the tracked object's hue histogram onto the current frame
 prob = cv2.calcBackProject([hsv], [0], self.hist, [0, 180], 1)
 cv2.imshow('prob_0', prob)
 prob &= mask
 cv2.imshow('prob', prob)
 # stop CamShift after 10 iterations or when the window moves by less than 1 pixel
 term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
 track_box, self.track_window = cv2.CamShift(prob, self.track_window, term_crit)

My problem with this code is that when my object, a red ball, leaves the camera's field of view, or when I cover part of the ball with my hand, it crashes with the following error:

track_box, self.track_window = cv2.CamShift(prob, self.track_window, term_crit)
error: ..\..\..\..\opencv\modules\video\src\camshift.cpp:80: error: (-5) Input window has non-positive sizes in function cvMeanShift

This happens because the parameter I pass to cv2.CamShift, "prob", no longer contains any values corresponding to my ball (prob is the back-projection image in which the thresholded ball appears).

I have one idea for dealing with occlusion in this scenario: store the ball's last known position in a global variable, and if the current frame cannot find the ball, fall back to that stored value until the ball is found and tracked again. How can I apply this logic in the given code?

Can anyone help me deal with occlusion in this situation? Thanks!
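
A minimal sketch of the fallback idea described above, assuming the same self.track_window, prob and term_crit variables as the OpenCV camshift sample; self.last_good_window is a hypothetical new attribute introduced here to remember the last window in which the ball was actually visible.

    # Sketch only: run CamShift while the ball is visible, otherwise reuse the
    # last window that worked so tracking can resume when the ball reappears.
    x, y, w, h = self.track_window
    ball_visible = w > 0 and h > 0 and prob[y:y+h, x:x+w].any()

    if ball_visible:
        track_box, self.track_window = cv2.CamShift(prob, self.track_window, term_crit)
        self.last_good_window = self.track_window      # remember where the ball was
    elif getattr(self, 'last_good_window', None) is not None:
        # Ball occluded or out of view: keep the last good window so CamShift
        # can pick the ball up again once it reappears near that position.
        self.track_window = self.last_good_window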

2014-07-30 13:04:20 -0500 commented question Depth from reprojectImageTo3D

Do you happen to know any good resource for real-time point cloud generation?

2014-06-26 15:54:15 -0500 asked a question Depth from reprojectImageTo3D

Hi, I'm working with OpenCV and I have a doubt. I've run stereoRectify() and reprojectImageTo3D() on images taken from a stereo pair of cameras. The output of reprojectImageTo3D() is a matrix with three channels containing the x, y and z coordinates. It would be a great help if the following doubts were resolved:

  1. In which unit of distance is the output of reprojectImageTo3D()? Does it give results in meters/cm (because of the calibration) or in pixels?
  2. How do I locate the origin of the stereo camera system?
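
Not part of the original question, but a small hedged sketch of the usual interpretation, assuming rectified images and the Q matrix from cv2.stereoRectify(): the 3-D output is expressed in whatever units the calibration target was measured in (e.g. chessboard square size given in centimeters gives coordinates in centimeters), and the origin sits at the optical center of the left (reference) camera, with z pointing away from it. File names and matcher parameters below are placeholders.

    import cv2
    import numpy as np

    # Placeholder inputs: rectified left/right grayscale images and the Q matrix
    # returned by cv2.stereoRectify().
    left = cv2.imread('rect_left.png', cv2.IMREAD_GRAYSCALE)
    right = cv2.imread('rect_right.png', cv2.IMREAD_GRAYSCALE)
    Q = np.load('Q.npy')

    # Block matching returns fixed-point disparity scaled by 16.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    # Per-pixel (x, y, z) in the units of the calibration target, origin at the
    # left camera's optical center; zero disparity maps to an invalid/huge depth.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    z = points_3d[:, :, 2]
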
2014-05-30 07:15:30 -0500 commented question Disparity Map

I've found the answer at this link

2014-05-30 07:14:29 -0500 commented question Writing camera matrix to xml/yaml file

Since I cannot answer my own question, I am posting the solution as a comment:

    import yaml
    data = {"<name>": var_name, "<name2>": var_name2}  # ...and so on
    fname = "abc.yaml"
    with open(fname, "w") as f:
        yaml.dump(data, f)

2014-05-28 21:35:05 -0500 received badge  Student (source)
2014-05-28 16:23:43 -0500 asked a question Disparity Map

Can somebody explain to me what exactly a disparity map returns? There isn't much in the documentation and I have a few questions related to it.

  1. Does it return the difference values of pixels with respect to both images?
  2. How do I use disparity values in the formula for depth estimation, i.e. Depth = FocalLength * Baseline / disparity?
  3. I have read somewhere that a disparity map gives a function of depth f(z). Please explain what this means. If depth is purely an absolute value, how can it be generated as a function, or is it a function with respect to the pixels?

Help required. Thanks in advance.
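
For reference, a small worked sketch of the formula in item 2, assuming the focal length is expressed in pixels and the baseline in the unit you want the depth in; all numbers are placeholders.

    # Converting one disparity value to depth with Z = f * B / d.
    focal_length_px = 700.0   # placeholder: fx from the camera matrix, in pixels
    baseline_m = 0.12         # placeholder: distance between the two cameras, meters
    disparity_px = 35.0       # placeholder: disparity of one pixel, in pixels

    depth_m = focal_length_px * baseline_m / disparity_px
    print(depth_m)            # 2.4 meters for these example numbers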

2014-05-27 04:28:59 -0500 commented question What to do after camera calibration and correcting image?

Thank you so much! @GilLevi

2014-05-23 17:17:48 -0500 asked a question Finding depth

Hi, I've been stuck on this problem for quite a few days. I have calculated the intrinsics and extrinsics of my cameras and also obtained the disparity map of two images captured with them. However, I don't understand how the disparity map helps with depth perception. How can we obtain the z-coordinate (with respect to the camera) from the disparity map, or do I need some further steps to obtain the depth? Please help.

2014-05-23 17:09:12 -0500 commented question Writing camera matrix to xml/yaml file

Thanks. Eventually I did write it in YAML.

2014-05-23 17:06:46 -0500 commented question What to do after camera calibration and correcting image?

Sure, I'd be happy to learn more. @GilLevi

2014-05-21 15:12:47 -0500 received badge  Supporter (source)
2014-05-16 07:19:41 -0500 asked a question Undistortion of image returns nothing

I am using OpenCV and Python. I have found the camera matrix and the distortion coefficients, but when I run the undistort method on the images, nothing is returned for one of them (all the other images come out perfectly fine). The image file that is created is blank. I have used the code provided in the documentation. Please tell me where I am going wrong. Here is the code:

#######################################################################################
# Undistorting the images:
# removing tangential and radial distortion from the images using the
# camera matrix and distortion coefficients

import cv2
import yaml
import numpy as np

#Extracting values from Left.yaml
f = open("Left.yaml")
dataLeft = yaml.safe_load(f)
f.close()
camera_matrix_left = np.asarray(dataLeft["camera_matrix"])
dist_coefs_left = np.asarray(dataLeft["distortion_coefficients"])
print type(camera_matrix_left)
print type(dist_coefs_left)

#Extracting values from Right.yaml
f = open("Right.yaml")
dataLeft = yaml.safe_load(f)
f.close()
camera_matrix_right = np.asarray(dataLeft["camera_matrix"]) 
dist_coefs_right = np.asarray(dataLeft["distortion_coefficients"])
print type(camera_matrix_right)
print type(dist_coefs_right)

#undistorting Left images
mtx=camera_matrix_left
dist=dist_coefs_left
img = cv2.imread('C:\Users\Administrator\Desktop\IP_Proj\snaps\Left1.jpg')
h,  w = img.shape[:2]
newcameramtx, roi=cv2.getOptimalNewCameraMatrix(mtx,dist,(w,h),1,(w,h))
# undistort
dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
# crop the image
x,y,w,h = roi
dst = dst[y:y+h, x:x+w]
cv2.imwrite('CalibresultLeft.png',dst)


# undistorting Right images
mtx1=camera_matrix_right
dist1=dist_coefs_right
img1 = cv2.imread('C:\Users\Administrator\Desktop\IP_Proj\snaps\Right1.jpg')
h1,  w1 = img.shape[:2]
newcameramtx1, roi1=cv2.getOptimalNewCameraMatrix(mtx1,dist1,(w1,h1),1,(w1,h1))
# undistort
dst1 = cv2.undistort(img1, mtx1, dist1, None, newcameramtx1)
# crop the image
x1,y1,w1,h1 = roi1
dst1 = dst[y1:y1+h1, x1:x1+w1]
cv2.imwrite('CalibresultRight.png',dst1)

The left image is undistorted correctly, whereas the undistortion of the right image returns nothing.

2014-05-08 14:20:49 -0500 asked a question What to do after camera calibration and correcting image?

I'm attempting to recreate a 3-D image by merging two images from webcams. I'm done with calibrating both webcams and have also removed the tangential and radial distortion from the images. However, I'm confused about what I should do next to get the desired output. Please help; I'm new to OpenCV. (P.S. I'm working with opencv-python.)
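
Not an answer from the thread, just a hedged sketch of one common next step toward depth from two calibrated webcams: rectify the pair so that a stereo matcher can later compute disparity. It assumes the per-camera intrinsics/distortion plus the rotation R and translation T between the cameras (e.g. from cv2.stereoCalibrate) are already available; variable names follow the other snippets on this page.

    import cv2

    # Assumed inputs (not defined here): camera_matrix_left/right, dist_coefs_left/right,
    # the inter-camera rotation R and translation T, image size (w, h), and left_img.
    R1, R2, P1, P2, Q, roi_left, roi_right = cv2.stereoRectify(
        camera_matrix_left, dist_coefs_left,
        camera_matrix_right, dist_coefs_right,
        (w, h), R, T)

    # Build the rectification map and warp one image; after remapping, matching
    # points lie on the same image row, which stereo block matchers rely on.
    map_lx, map_ly = cv2.initUndistortRectifyMap(camera_matrix_left, dist_coefs_left,
                                                 R1, P1, (w, h), cv2.CV_32FC1)
    rect_left = cv2.remap(left_img, map_lx, map_ly, cv2.INTER_LINEAR)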

2014-05-01 04:39:47 -0500 received badge  Editor (source)
2014-05-01 02:01:02 -0500 asked a question Image width zero in IHDR

I am using OpenCV and Python. I am running code to undistort an image using the calibrated camera parameters (camera_matrix and distortion_coefficients). However, of the two images that I captured using two webcams, one is undistorted and returns a result, whereas the other throws the following error:

Invalid IHDR Data

libpng warning: Image width is zero in IHDR

The code for both images is the same. Can someone tell me what is going wrong? Here is the code that I have used:

    mtx = camera_matrix
    dist = dist_coefs
    img = cv2.imread('../snaps/Right1.jpg')
    h, w = img.shape[:2]
    newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
    dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
    x, y, w, h = roi
    dst = dst[y:y+h, x:x+w]
    cv2.imwrite('CalibresultR.png', dst)

2014-04-30 15:13:51 -0500 asked a question Writing camera matrix to xml/yaml file

I have calibrated my camera and obtained the following parameters:

    camera_matrix = [[532.80990646, 0.0, 342.49522219],
                     [0.0, 532.93344713, 233.88792491],
                     [0.0, 0.0, 1.0]]
    dist_coeff = [-2.81325798e-01, 2.91150014e-02, 1.21234399e-03,
                  -1.40823665e-04, 1.54861424e-01]

I am working in Python. I wrote the following code to save the above into a file, but the result was just a plain text file.

f = open("C:\Users\Administrator\Desktop\calibration_camera.xml","w")

f.write('Camera Matrix:\n'+str(camera_matrix))

f.write('\n')

f.write('Distortion Coefficients:\n'+str(dist_coefs))

f.write('\n')

f.close()

How can I save this data into an XML/YAML file using Python? Please help. Thanks in advance.
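
For reference, a hedged sketch along the lines of the yaml.dump approach the author later mentioned in a comment; NumPy arrays are converted to plain lists first so PyYAML serializes them cleanly. The output file name is illustrative, and the key names match those read back in the other snippets on this page.

    import yaml
    import numpy as np

    camera_matrix = np.array([[532.80990646, 0.0, 342.49522219],
                              [0.0, 532.93344713, 233.88792491],
                              [0.0, 0.0, 1.0]])
    dist_coefs = np.array([-2.81325798e-01, 2.91150014e-02, 1.21234399e-03,
                           -1.40823665e-04, 1.54861424e-01])

    # tolist() turns the NumPy arrays into plain Python lists, which PyYAML
    # writes as readable YAML and can load back without custom handlers.
    data = {"camera_matrix": camera_matrix.tolist(),
            "distortion_coefficients": dist_coefs.tolist()}

    with open("calibration_camera.yaml", "w") as f:
        yaml.dump(data, f)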