Ask Your Question

TAXfromDK's profile - activity

2019-10-04 22:50:35 -0600 received badge  Notable Question (source)
2017-08-25 00:18:29 -0600 received badge  Popular Question (source)
2017-01-09 15:28:48 -0600 commented answer Calibrating a bunch of cameras - Can I use a linear rail robot?

Thank you for your replies.

I will attempt calibrating and report back when I know more. Kind regards

Jesper

2017-01-06 11:07:45 -0600 received badge  Editor (source)
2017-01-06 11:06:24 -0600 asked a question Calibrating a bunch of cameras - Can I use a linear rail robot?

Hi Guys,

I need to calibrate a bunch of cameras, and I was wondering if the opencv camera calibration could work if I build a simple robot with a screen and a linear rail to move the camera. Something like this: image description

I want to use a robot so I can reproduce the same calibration action across multiple cameras.

My question is whether it is enough to move the camera only linearly relative to the checkerboard, or whether rotation is required as well?

Kind regards

Jesper

2016-06-15 18:09:21 -0600 commented answer Find direction from cameraMatrix and distCoeff

Much appreciated 😀

2016-06-15 01:34:17 -0600 commented answer Find direction from cameraMatrix and distCoeff

And it works even more smoothly if I apply the scaling inside the atan() function.

  # previous linear scaling, commented out:
  #x *= (2 * math.atan(w/(2*camera_matrix[0][0])))/w
  #y *= (2 * math.atan(h/(2*camera_matrix[1][1])))/h
  # map pixel offsets to angles directly:
  x = (2 * math.atan(x/(2*camera_matrix[0][0])))
  y = (2 * math.atan(y/(2*camera_matrix[1][1])))

Then my worst dot product between the LOS and my original world point (normalized) is 0.999900007954.

Kind regards

Jesper

2016-06-15 01:23:19 -0600 commented answer Find direction from cameraMatrix and distCoeff

It works smoothly!!!

My dot products for a wide range of test points are above 0.98 :)

2016-06-15 01:22:24 -0600 received badge  Supporter (source)
2016-06-14 16:22:47 -0600 received badge  Critic (source)
2016-06-14 16:02:59 -0600 commented answer Find direction from cameraMatrix and distCoeff

The more I read the x *= ... line, the more I suspect there is a mistake. Atan computes the horizontal FOV, which is then applied linearly to x. I would think that needed to happen inside the atan() function, as the mapping is not linear. (One degree near the center of the image covers fewer pixels than one degree at the edge.) I'm really unsure, so please comment. 😀
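To illustrate the difference numerically, here is a minimal sketch (with a hypothetical focal length and image width, not the actual calibration values) comparing the linear FOV scaling against a per-pixel atan:

```python
import math

fx, w = 800.0, 1280  # hypothetical focal length and image width in pixels

# linear scaling: spread the total horizontal FOV evenly over the pixels
fov = 2 * math.atan(w / (2 * fx))
def angle_linear(x):
    return x * fov / w

# per-pixel atan: exact angle of the ray through pixel offset x
def angle_atan(x):
    return math.atan(x / fx)

# both are zero at the center and agree again at the half-width the FOV
# was computed from, but they diverge in between, since atan is not linear
for x in (10, 320, 640):
    print(x, angle_linear(x), angle_atan(x))
```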

2016-06-14 14:54:11 -0600 commented answer Find direction from cameraMatrix and distCoeff

This looks interesting. I will try it out when I get access to my OpenCV box tomorrow.

Meanwhile, could you add some comments to the code, as I have a hard time following the atan() parts? I assume az and el are azimuth and elevation, but I can't follow the atan with the focal lengths. I also assume LOS is line of sight?

Kind regards

Jesper

2016-06-13 15:54:11 -0600 asked a question Find direction from cameraMatrix and distCoeff

Hi Guys,

I have calibrated my camera by detecting checkerboard patterns and running calibrateCamera, retrieving the cameraMatrix and distortion coefficients. These I can plug into projectPoints alongside 3D positions in the camera's space to retrieve the UV where each point is projected in the imperfect camera.

I'm using a 3D point that projects near my top-left image corner.

This is all fine, but now I want to go the other way: convert a point in my distorted U,V coordinates into a directional vector pointing at all the 3D points that would project into this UV coordinate.

I have tried playing around with the undistortPoints function to find the ideal points U,V, and from those used values from the cameraMatrix to find a point somewhere along the line:

X = (U - C_x)/f_x, Y = (V - C_y)/f_y, Z = 1

But I can't seem to hit a direction that points very close to the 3D point I started from.

Any idea what I might be doing wrong?

Kind regards

Jesper Taxbøl

2015-08-11 10:53:53 -0600 answered a question Open Gopro hero wifi live streaming with VS C++

I have been using GoPro cameras with grabbers and by recording to the SD cards. They are fine cameras.

If you are calibrating them, you need to set a flag that enables more calibration parameters than the default setup has. That made my calibration of the GoPro 4+ much better.

I have no experience with the web streaming. I would guess the latency is high and the quality is low.

2015-08-09 13:43:36 -0600 asked a question What precision do you get with stereoCalibrate?

Hi Guys,

I have a stereo rig that I am trying to calibrate, but I am having trouble with precision, so I would like to ask what level of precision is normal when working with stereo calibration. My results are consistently about 10 degrees off in direction and a couple of centimetres off in translation. I'm using a 6x8 checkerboard with squares of around 22 mm. The dataset I am working from is around 50 pictures.

What results have you achieved using stereo calibrate?

I am using an external model for the lens distortion (Ocam_calib), so I am passing in idealised coordinates and fixing the intrinsic matrix. This might be the source of my problem, but I am considering other sources as well.

Any ideas?

Kind regards

Jesper

2015-07-29 15:49:52 -0600 commented answer Assertion while trying to calibrate stereo system

Could you describe how you did the conversion?

2015-07-29 14:06:25 -0600 asked a question Is there a solvePnP function for the fisheye Camera model?

I did an exercise recently where I used solvePnP to estimate the position of a camera. That used the normal camera model.

Does a solvePnP() function exist for the fisheye model as well?

Kind regards

Jesper

2015-07-28 15:01:27 -0600 commented answer Find center and radius of fisheye image

Thank you very much!!! This is awesome :)

2015-07-28 15:01:00 -0600 received badge  Scholar (source)
2015-07-28 01:37:19 -0600 commented question opencv 3.0 fisheye calibration

Did you figure it out?

2015-07-28 01:35:28 -0600 asked a question Find center and radius of fisheye image

Hi Guys,

I have a video from a fisheye camera with an image like this. Don't mind the cutout of the person. :)

image description

I would like to find the center and radius of the circular fisheye image. I have tried simply fitting a circle to the image, but the edge is very warped and reflections tend to bleed in from the sides of the image, so centering is difficult.

I therefore need something better.

I was therefore wondering if there is a way to measure the contrast in the image, and perhaps accumulate it over several video frames, to detect which pixels contain world information and which contain bleeding.

Kind regards

Jesper

2015-07-22 05:16:35 -0600 commented question Unable to calibrate fisheye image from Python

Perhaps someone has made it work with C++?

2015-07-17 10:03:29 -0600 asked a question Unable to calibrate fisheye image from Python

Hi Guys,

I am trying to calibrate a fisheye camera using the cv2.fisheye.calibrate function, but I am unable to succeed.

I have succeeded using the regular cv2.calibrateCamera() function in the past, so I was hoping the fisheye version would work similarly.

I keep getting errors about the dimensions of the object and image points being passed into the function. Some magic happens between Python and the C++ layer, so I have a hard time controlling how the point lists are passed to the function.

The error is this:

OpenCV Error: Assertion failed (objectPoints.type() == CV_32FC3 || objectPoints.type() == CV_64FC3) in calibrate, file /Users/jesper/opencv3/opencv/modules/calib3d/src/fisheye.cpp, line 695
Traceback (most recent call last):
  File "extract.py", line 71, in <module>
    print cv2.fisheye.calibrate(obj_points, img_points, (w, h), None, None)
cv2.error: /Users/jesper/opencv3/opencv/modules/calib3d/src/fisheye.cpp:695: error: (-215) objectPoints.type() == CV_32FC3 || objectPoints.type() == CV_64FC3 in function calibrate

I feel that it has something to do with how the arrays are cast before being passed to C++. But I also suspect there is something fishy about the function requiring CV_32FC3/CV_64FC3 types; my object points are of the format [[x,y,z],[x,y,z],[x,y,z],...]

The file I am trying to calibrate against is uploaded here: https://youtu.be/uFvRySgiXpY

I am on version 3.0.0-dev

A working example would be highly appreciated.

My code is:

#!/usr/local/bin/python

import cv2
import sys
import numpy as np
import glob
import pickle

print "OpenCV version: ", cv2.__version__

video_file = sys.argv[1]

pattern_size = (8, 6)
square_size = float(1.0)
pattern_points = np.zeros( (np.prod(pattern_size), 3), np.float32 )
pattern_points[:,:2] = np.indices(pattern_size).T.reshape(-1, 2)
pattern_points *= square_size

obj_points = []
img_points = []
h, w = 0, 0

cap = cv2.VideoCapture(video_file)
interleave = int(cap.get(cv2.CAP_PROP_FRAME_COUNT) / 65.0)

cv2.waitKey(1000)
while not cap.isOpened():
    cap = cv2.VideoCapture(video_file)
    cv2.waitKey(1000)
    print "Wait for the header"

pos_frame = cap.get(cv2.CAP_PROP_POS_FRAMES)
print "analyzing video"
nextframe = 0
while True:
    flag, frame = cap.read()
    if flag:
        # The frame is ready and already captured
        pos_frame = cap.get(cv2.CAP_PROP_POS_FRAMES)
        if pos_frame >= nextframe:
            cv2.waitKey(10)
            img = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
            print "frame: ", pos_frame
            h, w = img.shape[:2]
            found, corners = cv2.findChessboardCorners(img, pattern_size)
            nextframe += interleave
            if found:
                cv2.drawChessboardCorners(frame, pattern_size, corners, found) 
                sml = cv2.resize(frame, (w/2,h/2))
                cv2.imshow('video', sml)
                term = ( cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.1 )
                cv2.cornerSubPix(img, corners, (5, 5), (-1, -1), term)

            if not found:
                print 'chessboard not found'
                continue
            img_points.append(corners.reshape(-1, 2))
            obj_points.append(pattern_points)
    else:
        cap.set(cv2.CAP_PROP_POS_FRAMES, pos_frame-1)
        print "frame is not ready"
        cv2.waitKey(3000)


    if cap.get(cv2.CAP_PROP_POS_FRAMES) == cap.get(cv2.CAP_PROP_FRAME_COUNT):
        break

cv2.destroyAllWindows()


print cv2.fisheye.calibrate(obj_points, img_points, (w, h), None, None)
2015-07-17 08:59:17 -0600 commented question Fisheye undistortion with python error

I tried adding a bracket around my image_points and object_points to change their dimensions. That gave another error.

I would really like to see a working python-opencv3.0 example of fisheye calibration.

2015-07-16 14:00:49 -0600 received badge  Enthusiast
2015-07-11 10:25:21 -0600 asked a question Convert pixel position to world direction?

I have calibrated my camera with a checkerboard and obtained the distortion parameters and intrinsic matrix of my camera.

Using these I have estimated the camera position and orientation using solvePnP against a known set of reference points.

Now I want to find the world direction, from the camera toward a blob I am detecting inside my image. So I want to convert a pixel position to a 3D vector in that direction.

I want the direction so I can determine where in the world a ball is located. I have two cameras. If both of them can see the blob, I will find the point nearest both lines; if only one can observe the ball, I will use the direction's intersection with the ground plane.
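My current understanding of the pixel-to-world-ray step, sketched with made-up calibration numbers (R and t in the form solvePnP returns them, mapping world points into the camera frame):

```python
import numpy as np

# hypothetical intrinsics and pose
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.5, -0.2, 4.0])

def pixel_to_world_ray(u, v):
    """Back-project an (undistorted) pixel to a unit direction in the world."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R.T @ ray_cam                           # rotate into world frame
    return ray_world / np.linalg.norm(ray_world)

# sanity check: project a known world point, then back-project its pixel
pw = np.array([1.0, 0.5, 2.0])
pc = R @ pw + t
u, v = (K @ (pc / pc[2]))[:2]

cam_center = -R.T @ t                 # camera position in world coordinates
expected = pw - cam_center
expected /= np.linalg.norm(expected)
d = pixel_to_world_ray(u, v)
print(d @ expected)  # ≈ 1.0
```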

I am using blob detection in HSV colorspace to find the ball.

Any ideas on how to continue?

Kind regards

Jesper

2015-07-02 23:42:40 -0600 received badge  Student (source)
2015-07-02 22:37:31 -0600 asked a question Calibrating fisheye lenses above 180 degrees

Hi Guys,

I am working on a fisheye lens that has an extreme FOV of around 220 degrees. An example can be seen here:

image description

I have been looking at lens correction in OpenCV before, but the models as I know them do not make sense when the FOV crosses 180 degrees, as the image can no longer be represented on a plane.

I have currently made a radial model, where I measure pixel distance from the center while following a point on the horizon. This is a tedious task, and I hope a better method is around, hopefully something with a checkerboard.
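For comparison, the kind of radial model I mean, written here as the ideal equidistant mapping r = f·θ with a made-up focal length. The point is only that θ may pass 90 degrees, which a planar (pinhole) projection cannot express:

```python
import numpy as np

# equidistant fisheye model: image radius r = f * theta, so theta = r / f
f = 300.0  # hypothetical focal length in pixels

def radius_to_direction(r_px, phi):
    """Map a pixel at radius r_px and polar angle phi to a unit direction."""
    theta = r_px / f  # angle from the optical axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# a pixel at the radius corresponding to 110 degrees off-axis points
# behind the image plane (negative z), i.e. past the 180-degree boundary
d = radius_to_direction(f * np.deg2rad(110), 0.0)
print(d)
```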

I am therefore looking to see if OpenCV has any tools in that direction.

Kind regards

Jesper