OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright OpenCV foundation (http://www.opencv.org), 2012-2018. Fri, 27 Nov 2020 07:16:16 -0600

Converting Mat of point cloud mesh to vector<cv::Point3d>
http://answers.opencv.org/question/238355/converting-mat-of-point-cloud-mesh-to-vectorcvpoint3d/
My task is to project a point cloud of a scene to a plane image. I have a 3D point cloud of the scene represented as a PLY mesh.
Here is the header and a few data lines from this PLY file:
ply
format ascii 1.0
comment PCL generated
element vertex 180768
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
property float nx
property float ny
property float nz
property float curvature
element camera 1
property float view_px
property float view_py
property float view_pz
property float x_axisx
property float x_axisy
property float x_axisz
property float y_axisx
property float y_axisy
property float y_axisz
property float z_axisx
property float z_axisy
property float z_axisz
property float focal
property float scalex
property float scaley
property float centerx
property float centery
property int viewportx
property int viewporty
property float k1
property float k2
end_header
0.021613657 0.60601699 -1.5027865 120 89 71 -0.92790836 -0.26353598 0.26369458 0.00016434079
-1.1746287 -1.7522405 -1.4859273 193 128 72 0.093781963 -0.043701902 0.99463314 0.048953384
I have found a way to project point cloud to image [here](https://stackoverflow.com/questions/40677116/conversion-of-cloud-data-into-2d-image-using-opencv).
So far I load the PLY model into a cv::Mat using the function loadPLYSimple from ppf_match_3d.
Mat scene_mat = loadPLYSimple("a_cloud.ply", 1);
I need to convert the Mat representing the scene cloud to vector<cv::Point3d>. My Mat scene_mat has size [6 x 180768]. How can I do it?
sigmoid90 (Fri, 27 Nov 2020 07:16:16 -0600)
http://answers.opencv.org/question/238355/

3D to 2D Points using cv::projectPoints
http://answers.opencv.org/question/237672/3d-to-2d-points-using-cvprojectpoints/
I have an issue using cv::projectPoints in OpenCV 4.5 to project 3D lidar points into a 2D image.
* There is no roll/pitch/yaw so the rvec is 0.
* Points are already in world space and only have to be transformed to camera space with tvec.
* There is no camera lens distortion
I tested and ran the code below for an image resolution of 785x785 which works fine.
Projected Points are on the correct position in the image.
After I changed the resolution to 1600x1200 the code below no longer works correctly. The projected 2D points are approximately 30 px off (shifted about 30 px towards the top).
I don't really understand what the issue is. Does anyone have an idea? I also checked with a resolution of 1200x1200, which works correctly again.
So the issue comes from the width and height not being equal.
My guess is that there might be an issue with cmat.
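For comparison, a minimal numpy sketch of the pinhole projection u = fx*X/Z + cx (the test point below is an assumption, not a value from the question). Setting fx = W/2 and fy = H/2 bakes a 90-degree field of view into each axis, so the two axes stop agreeing as soon as W != H, which would produce exactly this kind of constant vertical shift:

```python
import numpy as np

def project(point, fx, fy, cx, cy):
    # Basic pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    X, Y, Z = point
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

# fx = W/2 corresponds to a 90-degree horizontal FOV; with a square
# image fy = H/2 gives the same vertical FOV, so everything lines up.
p = (0.0, 1.0, 10.0)  # assumed test point in camera coordinates
square = project(p, 785 / 2, 785 / 2, 785 / 2, 785 / 2)

# With 1600x1200, fy = H/2 = 600 no longer matches fx = W/2 = 800, so
# the same physical point lands at a different vertical offset from
# the principal point, which shows up as a roughly constant shift.
wide = project(p, 1600 / 2, 1200 / 2, 1600 / 2, 1200 / 2)

offset_square = square[1] - 785 / 2  # vertical offset from image center
offset_wide = wide[1] - 1200 / 2
print(offset_square, offset_wide)  # the offsets differ (about 39 px vs 60 px)
```

If the real camera has square pixels, fx and fy should be equal (both derived from one FOV), rather than each being tied to its own image dimension.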
cv::Mat rvec, tvec, cmat;
rvec.create(1, 3, cv::DataType<float>::type);
rvec.at<float>(0) = 0;
rvec.at<float>(1) = 0;
rvec.at<float>(2) = 0;
tvec.create(3, 1, cv::DataType<float>::type);
tvec.at<float>(0) = camera.opencv_origin.x;
tvec.at<float>(1) = -camera.opencv_origin.y; // In coordinate system y and z axis are inverted
tvec.at<float>(2) = -camera.opencv_origin.z; // In coordinate system y and z axis are inverted
cmat.create(3, 3, cv::DataType<float>::type);
cmat.at<float>(0, 0) = camera.image_width / 2;
cmat.at<float>(1, 1) = camera.image_height / 2;
cmat.at<float>(0, 2) = camera.image_width / 2;
cmat.at<float>(1, 2) = camera.image_height / 2;
cmat.at<float>(2, 2) = 1;
std::vector<cv::Point2f> points_image;
cv::projectPoints(points_world, rvec, tvec, cmat, cv::noArray(), points_image);
for (const auto& p : points_image) {
cv::circle(image, p, 2, cv::Scalar(0, 0, 255), -1);
}
Nextar (Wed, 11 Nov 2020 05:27:07 -0600)
http://answers.opencv.org/question/237672/

How to get real world projection coordinate
http://answers.opencv.org/question/225422/how-to-get-real-world-projection-coordinate/
I have a TV screen (the TV's dimensions are known, say width w and height h) and a camera somewhere nearby; the physical distance between the camera and the TV screen's center is known, say (Δx, Δy, Δz). The camera and TV screen might face in different directions; the vertical and horizontal angles they make with each other, say θv and θh, are also known.
Now the camera has recorded the gaze of a person in terms of yaw and pitch (and roll too, but roll is not needed in this case). Also, the person's real-world distance from the camera is known: z-dist, x-dist and y-dist.
How do I project this person's gaze onto the TV's plane and determine whether the gaze intersects the TV screen, given the TV's physical dimensions, and if so, find the relative position of the intersection on the plane?
Kafan (Tue, 28 Jan 2020 08:25:40 -0600)
http://answers.opencv.org/question/225422/

Confusing 3d to 2d point projection
http://answers.opencv.org/question/214720/confusing-3d-to-2d-point-projection/
I'm getting confused with a case of 3D to 2D projection.
Say I have a camera located at position x=0, y=-50, z=100 with rotation (PI, 0, 0), some intrinsic camera matrix, and distortion coefficients.
If I project the point (0,0,0) onto the camera plane I should see it somewhere on the top side of the screen, as the camera is shifted away from the origin. If I now change the orientation of the camera to (PI, 0, PI) I would expect to see the point projected to the other side of the screen, as I have now rotated 180 degrees. However, using cv::projectPoints this does not seem to be the case. In fact, no matter what Z angle I select, I get the point at the same 2D position.
Am I doing something wrong or is this how it's supposed to work? Why does the point not rotate around when I rotate my camera (by rotating the extrinsics)?
![C:\fakepath\rotation.png](/upfiles/15614549278473608.png)
EDIT:
This is not working as expected because the R/t convention of the extrinsics corresponds to a rotation followed by a translation: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
This is different from applying a homogeneous transform which first translates, then rotates.
To get the effect I desire I must first apply a 4x4 matrix transform T to the point (0,0,0,1), transforming it into the camera's coordinate system. Then I can use cv::projectPoints with the intrinsic matrix, an rvec of (0,0,0), and a tvec of (0,0,0) to project the point into pixel coordinates.
My tests seem to work, but can someone confirm this for me?
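A numpy sketch of the confirmation (camera values taken from the question): with OpenCV's convention x_cam = R @ X + t, the world origin always maps to t no matter what rvec is, while a pose-style transform R @ (X - C) does respond to rotation:

```python
import numpy as np

def rot_z(theta):
    # Rotation about the z-axis by theta radians.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

X = np.array([0.0, 0.0, 0.0])     # the world point being projected
t = np.array([0.0, 50.0, 100.0])  # assumed tvec

# OpenCV's extrinsic convention: x_cam = R @ X + t (rotate, THEN translate).
# For X = (0,0,0) the rotation contributes nothing, so any rvec that only
# spins the camera about z leaves x_cam, and hence the pixel, unchanged.
for theta in (0.0, np.pi / 2, np.pi):
    assert np.allclose(rot_z(theta) @ X + t, t)

# A homogeneous camera-pose transform behaves differently: with camera
# center C, x_cam = R @ (X - C), and rotating DOES move the projection.
C = np.array([0.0, -50.0, 100.0])  # camera center from the question
x_cam_rot0 = rot_z(0.0) @ (X - C)
x_cam_rot180 = rot_z(np.pi) @ (X - C)
print(x_cam_rot0, x_cam_rot180)  # the y component flips sign
```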
Regards
Claude
claude (Tue, 25 Jun 2019 04:30:41 -0500)
http://answers.opencv.org/question/214720/

Calculate robot coordinates from measured chessboard corners (hand-eye calibration)
http://answers.opencv.org/question/204910/calculate-robot-coordinates-from-measured-chessbaord-corners-hand-eye-calibration/
Hello guys,
for my project I am trying to calibrate my depth camera with my robot arm (hand-eye calibration). Therefore, I measured 8 corners of the chessboard and got 8 pixel vectors and their corresponding robot 3D coordinates (using the gripper to point exactly at each corner). Then I tried to calculate the transformation matrix. But the result is way off. Where's the mistake? Thank you very much!
Edit: Added the camera matrix
----------
img_pts = np.array(((237., 325.),
(242., 245.),
(248., 164.),
(318., 330.),
(322., 250.),
(328., 170.),
(398., 336.),
(403., 255.)), dtype=np.float32)
objectPoints = np.array(((-437., -491., -259.),
(-402., -457., -259.),
(-360., -421., -259.),
(-393., -534., -259.),
(-362., -504., -259.),
(-332., -476., -259.),
(-298., -511., -259.),
(-334., -546., -259.)), dtype=np.float32)
camera_matrix = np.array(((542.5860286838929, 0., 310.4749867256299),
(0., 544.2396851407029, 246.7577402755397),
(0., 0., 1.)), dtype=np.float32)
distortion = np.array([0.03056762860315778,
-0.1824835906443329,
-0.0007936359893356936,
-0.001735795279032343,
0])
ret, rvec, tvec = cv2.solvePnP(objectPoints, img_pts, camera_matrix, distortion)
rmtx, _ = cv2.Rodrigues(rvec)
T_gripper_cam = np.array([[rmtx[0][0], rmtx[0][1], rmtx[0][2], tvec[0]],
[rmtx[1][0], rmtx[1][1], rmtx[1][2], tvec[1]],
[rmtx[2][0], rmtx[2][1], rmtx[2][2], tvec[2]],
[0.0, 0.0, 0.0, 1.0]])
T_cam_gripper = np.linalg.inv(T_gripper_cam)
print(T_cam_gripper)
p = np.array((237., 325., 0.), dtype=np.float32)
p_new = np.array([p[0], p[1], p[2], 1])
objectPoint = np.dot(p_new, T_gripper_cam)[:3]
print(objectPoint)
# should be (-437., -491., -259.)
# --> actual = [ 33.57 -395.62 -64.46]
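Two things stand out in the snippet above: `np.dot(p_new, T_gripper_cam)` multiplies the row vector from the left, which applies the transpose of T rather than T itself, and `p` is a pixel coordinate, not a 3D point, so a rigid transform alone cannot map it into robot coordinates. A toy numpy check of the row-versus-column issue (the transform T below is an assumed example, not the one computed here):

```python
import numpy as np

# Assumed toy transform: rotate 90 degrees about z, then shift along x.
T = np.array([[0.0, -1.0, 0.0, 10.0],
              [1.0,  0.0, 0.0,  0.0],
              [0.0,  0.0, 1.0,  0.0],
              [0.0,  0.0, 0.0,  1.0]])

p = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous 3D point

# A 4x4 rigid transform acts on COLUMN vectors: p' = T @ p.
correct = (T @ p)[:3]

# np.dot(p, T) multiplies the row vector from the left, which is
# equivalent to applying T transposed, i.e. a different transform.
wrong = np.dot(p, T)[:3]
print(correct, wrong)  # (10, 1, 0) vs (0, -1, 0)
```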
------------------------------------- Edit ----------------------------------------------------------
I think I might be on to something. According to this:
[Stackoverflow](https://stackoverflow.com/questions/12299870/computing-x-y-coordinate-3d-from-image-point)
I have to calculate the scaling factor. So I tried to do this in Python:
uv = np.array(([237.], [325.], [1.]), dtype=np.float32)
rotInv = np.linalg.inv(rmtx)
camMatInv = np.linalg.inv(camera_matrix)
leftSideMat = np.dot(rotInv, np.dot(camMatInv, uv))
rightSideMat = np.dot(rotInv, tvec)
s = (295 + leftSideMat[2:] / rightSideMat[2:])
temp = (s * np.dot(camMatInv, uv))
tempSub = np.array(temp - tvec)
print(np.dot(rotInv, tempSub))
# should be (-437., -491., -259.)
# --> actual = [ 437.2 -501.3 -266.2]
Looks like I am pretty close. But it's still way off in the Y direction. Is this the correct approach? Thank you!
Mysterion46 (Fri, 07 Dec 2018 06:35:46 -0600)
http://answers.opencv.org/question/204910/

Erroneous projection matrix due to stereoRectify()?
http://answers.opencv.org/question/202800/erroneous-projection-matrix-due-to-stereorectify/
I have the following situation:
- 2x cameras with 2048x1024 px resolution of the same type
- Some hardware black box calculating a depth image from the input with a resolution of 960x440
I have a valid calibration for the camera system with given cameraMatrix1, cameraMatrix2, distCoeffs1 and distCoeffs2. Moreover I have a rectification matrix rect, a rotation matrix R and a translation vector T. The box works very well and the disparities are very good, hence I don't doubt this data. (It was originally calculated with Matlab and obtained from a stereoParams object.)
To reproject disparity image points into 3D world points I wanted to use the ROS `stereo_image_proc/point_cloud2` node, which needs the projection matrices of both cameras. Hence I created a Python script to calculate them:
import cv2
import numpy as np
if __name__ == '__main__':
print('Rectify with opencv2')
cameraMatrix1 = np.matrix([[2565.031253, 0, 919.413174],
[0, 2544.873175, 697.398311],
[0,0,1]])
cameraMatrix2 = np.matrix([[2531.349645, 0, 869.425085],
[0, 2515.573530, 632.055025],
[0,0,1]])
distCoeffs1 = np.matrix([0.009251, -0.015378, -0.004205, -0.020355, -0.624842]).transpose()
distCoeffs2 = np.matrix([0.024439, -0.317383, -0.001931, -0.022877, 0.316663]).transpose()
imageSize = (2048, 1024)
R = np.matrix([[0.999618864,-0.007622217,-0.026533520],
[0.006909877,0.999615975,-0.026835735],
[0.026727879,0.026642164,0.999287654]])
T = np.matrix([497.838267224, -33.349483489, 23.913797009]).transpose()
camera_name = "zynq_101"
alpha = -1.0;
R1 = np.zeros([3,3])
R2 = np.zeros([3,3])
P1 = np.zeros([3,4])
P2 = np.zeros([3,4])
Q = np.zeros ([4,4])
newImageSize = (2048, 1024)
R1, R2, P1, P2, Q, validPixROI1, validPixROI2 = cv2.stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imageSize, R, T, R1, R2, P1, P2, Q, 0, alpha, newImageSize)
This results in the following projection matrices
left: [2515.57353, 0.0, 807.6421051025391, 0.0,
0.0, 2515.57353, 648.6131038665771, 0.0,
0.0, 0.0, 1.0, 0.0]
right: [2515.57353, 0.0, 681.3331298828125, 1256596.330467894,
0.0, 2515.57353, 648.6131038665771, 0.0,
0.0, 0.0, 1.0, 0.0]
So where is the problem: I cannot explain the right[1,4] value of 1256596.330467894. In my eyes this value makes no sense. Does anyone see my mistake, or can someone explain why this value might be correct anyway?
Using these projection matrices with the referenced point_cloud2 node of ROS shows that there must be something wrong with my values; the resulting PCL is obviously wrong, the X, Y, Z values are way too large.
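For reference, the entry in question is P2[0,3] in zero-based indexing, i.e. the fourth-column term of the right projection matrix, not a principal point. In OpenCV's rectified convention it encodes fx * Tx, where Tx is the baseline after rectification, so a value near fx * |T| is expected rather than suspicious. A quick numpy check with the numbers above:

```python
import numpy as np

# Values from the question.
fx = 2515.57353
P2_03 = 1256596.330467894
T = np.array([497.838267224, -33.349483489, 23.913797009])

# After rectification the baseline lies along x with magnitude |T|,
# and OpenCV stores P2[0,3] = fx * Tx, so the large number should be
# the focal length times the baseline (same units as T).
baseline = P2_03 / fx
print(baseline, np.linalg.norm(T))  # both ~499.5
```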
mh.herrmann (Fri, 09 Nov 2018 06:58:36 -0600)
http://answers.opencv.org/question/202800/

How to count peaks in a Binary image? - Vertical Projection
http://answers.opencv.org/question/189022/how-to-count-peaks-in-a-binary-image-vertical-projection/
I am trying to count the number of riders on a motorbike as shown here -> **https://youtu.be/f_wImzqho9s?t=53**
![image description](/upfiles/15233026357374758.png)
I have been trying for a week now, and I don't understand what the **vertical projection** of a binary image is.
Is there any OpenCV API for doing something like that? Any other methods?
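For what it's worth, the vertical projection of a binary image is just the per-column sum of pixel values; a small numpy sketch (toy image and threshold are assumptions) of the profile and a simple peak count:

```python
import numpy as np

# Toy binary image (0/1), e.g. a thresholded silhouette.
img = np.array([[0, 1, 0, 0, 1],
                [0, 1, 0, 0, 1],
                [0, 1, 0, 0, 0],
                [0, 0, 0, 0, 0]], dtype=np.uint8)

# The "vertical projection" is just the per-column sum: a 1D profile
# whose peaks mark columns with many foreground pixels.
profile = img.sum(axis=0)
print(profile)  # [0 3 0 0 2]

# Count peaks as runs of columns above a threshold (threshold assumed).
thresh = 1
above = profile > thresh
peaks = int(np.count_nonzero(np.diff(above.astype(int)) == 1) + above[0])
print(peaks)  # 2
```

In OpenCV the same profile can be computed with cv2.reduce(img, 0, cv2.REDUCE_SUM, dtype=cv2.CV_32S).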
Thanks in advance :)
vishwaprakash (Mon, 09 Apr 2018 14:44:10 -0500)
http://answers.opencv.org/question/189022/

projectPoints fails with points behind the camera
http://answers.opencv.org/question/20138/projectpoints-fails-with-points-behind-the-camera/
I'm using the projectPoints OpenCV function to get the projection of a 3D point in a camera image plane.
cv::projectPoints(inputPoint, rvec, tvec, fundamentalMatrix, distCoeffsMat, outputPoint);
The problem I'm facing is that when Z (in the camera's local frame) is negative, instead of returning a point outside the image boundaries, it returns the symmetric point (as if Z were positive) instead. I was expecting the function to check for positive Z values...
I can check this manually by myself, but is there a better way?
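One compact way to do the manual check is to transform the points into the camera frame yourself and mask out non-positive depths before calling projectPoints; a numpy sketch (identity pose assumed purely for illustration):

```python
import numpy as np

def in_front_mask(points_world, R, t):
    # Transform into the camera frame (x_cam = R @ X + t) and keep only
    # points with positive depth; projectPoints itself does not check.
    cam = points_world @ R.T + t
    return cam[:, 2] > 0

# Assumed identity pose purely for illustration.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])
pts = np.array([[0.0, 0.0, 5.0],    # in front of the camera
                [0.0, 0.0, -5.0]])  # behind the camera
mask = in_front_mask(pts, R, t)
print(mask)  # [ True False]
```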
Thanks!
Josep Bosch (Wed, 04 Sep 2013 02:34:51 -0500)
http://answers.opencv.org/question/20138/

Can spherical ball get distorted to ellipse on image plane in a pin hole camera model
http://answers.opencv.org/question/182684/can-spherical-ball-get-distorted-to-ellipse-on-image-plane-in-a-pin-hole-camera-model/
I have a ball captured in an image. The ball is detected as a circle when it is at the center of the image. When it moves to the corner of the image it is detected as an ellipse.
We use a fish-eye/wide-angle lens and we are not correcting the image. We do the circle and ellipse detection on the original image.
I want to know whether this is a phenomenon of **perspective distortion** or due to the **fish-eye/lens distortion**, or anything else.
I did some reading around it and things are confusing me.
https://books.google.com.sg/books?id=SFgfgFrdB_oC&pg=PA35&lpg=PA35&dq=sphere+becomes+eliptic+camera&source=bl&ots=dRUkJecnyW&sig=YItExPSKQOGa0TkNRFO332Mh-iU&hl=en&sa=X&ved=0ahUKEwjsluWRwODYAhWMwI8KHXZeD6YQ6AEIOzAG#v=onepage&q=sphere%20becomes%20eliptic%20camera&f=false
Any help or knowledge would be appreciated.
Sriram Kumar (Wed, 17 Jan 2018 21:35:15 -0600)
http://answers.opencv.org/question/182684/

Convert Cubemap pixel coordinates to equivalents in Equirectangular
http://answers.opencv.org/question/180430/convert-cubemap-pixel-coordinates-to-equivalents-in-equirectangular/
I have a set of coordinates of a 6-image cubemap (Front, Back, Left, Right, Top, Bottom) as follows:
[ [160, 314], Front; [253, 231], Front; [345, 273], Left; [347, 92], Bottom; ... ]
Each image is 500x500 px, with [0, 0] being the top-left corner.
I want to convert these coordinates to their equivalents in equirectangular, for a 2500x1250p image.
I don't need to convert the whole image, just the set of coordinates. Is there any straightforward conversion for a specific pixel?
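For a single pixel the conversion is closed-form: map the face pixel to a 3D direction, convert to longitude/latitude, then scale to equirectangular pixels. A numpy sketch under one common axis convention (a given skybox's face orientations may need sign flips):

```python
import numpy as np

FACE_SIZE = 500
PANO_W, PANO_H = 2500, 1250

def cubemap_to_equirect(u, v, face):
    # Map the face pixel to [-1, 1] face coordinates.
    a = 2.0 * (u + 0.5) / FACE_SIZE - 1.0
    b = 2.0 * (v + 0.5) / FACE_SIZE - 1.0
    # One common axis convention; a given skybox may need sign flips.
    dirs = {
        'front':  (a, -b, 1.0),
        'back':   (-a, -b, -1.0),
        'left':   (-1.0, -b, a),
        'right':  (1.0, -b, -a),
        'top':    (a, 1.0, b),
        'bottom': (a, -1.0, -b),
    }
    x, y, z = dirs[face]
    lon = np.arctan2(x, z)                         # longitude in [-pi, pi]
    lat = np.arcsin(y / np.sqrt(x*x + y*y + z*z))  # latitude in [-pi/2, pi/2]
    px = (lon / (2.0 * np.pi) + 0.5) * PANO_W
    py = (0.5 - lat / np.pi) * PANO_H
    return px, py

# The center of the front face lands at the pano's horizontal middle.
px, py = cubemap_to_equirect(249.5, 249.5, 'front')
print(px, py)  # 1250.0 625.0
```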
**EDIT:**
First of all, I want to insist on the fact that **I don't have any image available**, neither the 6 input images from the cubemap, nor the output equirectangular pano. What I have available is a set of pixel coordinates from 6 images that shape a skybox [cubemap](https://upload.wikimedia.org/wikipedia/commons/b/b4/Skybox_example.png). I add a graphic example using 250x250p images:
Front image:
![image description](/upfiles/15133311532718516.png)
Right image:
![image description](/upfiles/15133311587454492.png)
And so on for the other 4 images (Back, Left, Top, Bottom).
I have set in red some points, these points will be **my input**. Those points would have their equivalents in an equirectangular panorama:
![image description](/upfiles/1513331258676679.png)
I have used a 1000x500p pano in this case.
So the **input** is the pixel coordinates [x,y] of the red points in the cubemap, together with the image they belong to. The [0,0] pixel is the top-left corner of each image:
{ Lamp1: [194,175], front; Chair: [151,234], front; TV: [31,81], right; Door: [107,152], back; Lamp2: [125,190], left }
And the **output** I want to obtain is the pixel coordinates of the red points in the equirectangular panorama:
{ Lamp1: [831,304]; Chair: [784,362]; TV: [898,206]; Door: [228,283]; Lamp2: [500,326] }
I would like to know how to map from one set of coordinates to the other:
CUBEMAP [194,175], front -> ? -> [831,304] EQUIRECTANGULAR
Finfa811 (Thu, 14 Dec 2017 12:17:40 -0600)
http://answers.opencv.org/question/180430/

How to compute the distance between two elements if we know the extrinsic and intrinsic properties of the camera?
http://answers.opencv.org/question/177157/how-to-compute-the-distance-between-two-elements-if-we-know-the-extrinsic-and-intrinsic-properties-of-the-camera/
![image description](/upfiles/15092677437317681.jpg)
Let's assume we know the value of K (intrinsic parameters) and the values of R (rotation) and t (translation between the world and camera frames). How would we compute the distance between two contiguous pieces of wood (receding towards the horizon)? It is assumed that the spacing between them is constant.
danoc93 (Sun, 29 Oct 2017 04:04:56 -0500)
http://answers.opencv.org/question/177157/

OpenGL Camera to OpenCV Projection matrix
http://answers.opencv.org/question/148781/opengl-camera-to-opencv-projection-matrix/
Hi!
I am trying to retrieve an OpenGL camera's parameters and use them for image processing.
I have my 4x4 Modelview from which I extract the camera's position and orientation as such:
Matrix3 R = glModelview.to3x3()
R.rotate(1, 0, 0, M_PI) // flip z axis
Vector3 camera_pos(glModelview[0][3], glModelview[1][3], glModelview[2][3]); // world position in camera's coordinates
I retrieve the focal length and principal point from the projection matrix as follows:
glP = glProjectionMatrix;
double w = d_imageSize.x();
double h = d_imageSize.y();
double fx = 0.5 * w * glP[0][0];
double fy = 0.5 * h * (1.0 + glP[1][1]);
double cx = 0.5 * w * (1.0 - glP[0][3]);
double cy = 0.5 * h * (1.0 + glP[1][3]);
I now have my intrinsic and extrinsic parameters, and just compose the projection matrix.
Though it seems that I am missing something, as depending on the size of my OpenGL window (viewport) I get different results.
How could I integrate the viewport in my projection matrix?
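As a sanity check, here is a round-trip numpy sketch under one common convention (all camera values below are assumptions): build a GL-style projection from known intrinsics and recover them with formulas like those above. Note that the principal-point terms normally sit in the third column (glP[0][2], glP[1][2]) rather than the fourth, and that w and h must be the viewport size the projection matrix was built for, which would explain the viewport dependence:

```python
import numpy as np

def intrinsics_from_gl(P_gl, w, h):
    # One common mapping (assumes a standard y-up OpenGL projection and
    # row-major indexing); w, h must be the viewport the matrix targets.
    fx = 0.5 * w * P_gl[0][0]
    fy = 0.5 * h * P_gl[1][1]
    cx = 0.5 * w * (1.0 - P_gl[0][2])
    cy = 0.5 * h * (1.0 + P_gl[1][2])
    return fx, fy, cx, cy

# Round-trip check: build P_gl from assumed intrinsics, then recover.
w, h = 800, 600
fx, fy, cx, cy = 700.0, 700.0, 400.0, 300.0
P_gl = np.zeros((4, 4))
P_gl[0][0] = 2 * fx / w
P_gl[1][1] = 2 * fy / h
P_gl[0][2] = 1.0 - 2 * cx / w
P_gl[1][2] = 2 * cy / h - 1.0
print(np.round(intrinsics_from_gl(P_gl, w, h), 6))  # [700. 700. 400. 300.]
```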
Thanks in advance :)
lagarkane (Tue, 16 May 2017 05:50:38 -0500)
http://answers.opencv.org/question/148781/

Using cv reduce in Python
http://answers.opencv.org/question/117498/using-cv-reduce-in-python/
I'm trying to use cv reduce to get the projection of an image onto the x and y axes.
I used:
x_sum = cv2.reduce(img, 0, cv2.cv.CV_REDUCE_SUM, cv2.CV_32S)
I get this error:
OpenCV Error: Unsupported format or combination of formats (Unsupported combination of input and output array formats) in reduce.
I can't find any more detailed documentation on how to use reduce in Python. Does anyone know where I've gone wrong?
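The error usually means the requested output depth cannot hold the result for the input type; in cv2 the fourth positional argument of reduce is dst, not dtype, so the dtype likely needs to be passed by keyword, e.g. cv2.reduce(img, 0, cv2.REDUCE_SUM, dtype=cv2.CV_32S). The numpy equivalent makes the dtype issue visible:

```python
import numpy as np

# Numpy equivalent of a column-wise SUM reduce on an 8-bit image.
img = np.array([[10, 200, 30],
                [10, 200, 30]], dtype=np.uint8)

# Summing uint8 overflows at 255, which is why reduce demands a wider
# output type; ask for int32 explicitly, just as cv2 needs CV_32S.
x_sum = img.sum(axis=0, dtype=np.int32)  # projection onto the x axis
y_sum = img.sum(axis=1, dtype=np.int32)  # projection onto the y axis
print(x_sum, y_sum)  # column sums 20, 400, 60; row sums 240, 240
```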
Alternatively, is there another method I could use? calcHist() seems to only find the colour histogram of the image.
tamina (Wed, 07 Dec 2016 18:08:29 -0600)
http://answers.opencv.org/question/117498/

Camera projection matrix from fundamental
http://answers.opencv.org/question/89418/camera-projection-matrix-from-fundamental/
I'm pretty new to OpenCV and trying to puzzle together a monocular AR application, **getting structure from motion**. I've got a tracker up and running which tracks points pretty well, as the optical flow looks good. It needs to work with uncalibrated cameras.
From the point correspondences I get the fundamental matrix from findFundamentalMat, but I'm lost as to how to get the camera projection matrix. Matrix math is not my strong suit, and for all my google-fu, all I can find are examples using pre-calibrated cameras.
1. Find fundamental matrix using findFundamentalMat (check!)
2. Find epilines with computeCorrespondEpilines (check!)
3. **Extract projection matrix P and P1** (????)
P is the identity matrix for the uncalibrated case, but **how do I get P1**?
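For the uncalibrated case there is a standard canonical answer (Hartley & Zisserman): P = [I | 0] and P' = [[e']x F | e'], where e' is the left epipole of F. A numpy sketch (the F used for the sanity check is an assumed synthetic example):

```python
import numpy as np

def skew(v):
    # Cross-product matrix [v]x.
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def projections_from_F(F):
    # Canonical (projective) camera pair for an uncalibrated rig:
    # P = [I | 0],  P' = [ [e']x F | e' ],  where e' is the left
    # null vector of F (F^T e' = 0). This recovers structure only up
    # to a projective ambiguity; metric shape needs calibration.
    U, S, Vt = np.linalg.svd(F)
    e2 = U[:, -1]  # left null vector of F
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])
    return P1, P2

# Assumed synthetic F built from a known epipole, as a sanity check.
e2_true = np.array([1.0, 2.0, 1.0])
e2_true /= np.linalg.norm(e2_true)
F = skew(e2_true) @ np.diag([1.0, 2.0, 3.0])  # rank-2, with F^T e2 = 0
P1, P2 = projections_from_F(F)
print(np.allclose(np.abs(P2[:, 3]), np.abs(e2_true)))  # True (epipole recovered up to sign)
```

Keep in mind the resulting reconstruction is only defined up to a projective transformation without calibration.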
menneske (Sat, 05 Mar 2016 05:03:55 -0600)
http://answers.opencv.org/question/89418/

360 imaging techniques book
http://answers.opencv.org/question/85650/360-imaging-techniques-book/
Hi,
are there any online resources for the mathematical side of 360 photography and imaging?
I need to know:
- used projections (a 360x360 image is not rectangular; what projections are used to store it)?
- problems that arise
- file formats
- math & stuff
Regards,
Peter
pietrko (Wed, 27 Jan 2016 08:01:36 -0600)
http://answers.opencv.org/question/85650/

compose a homography matrix from euler angles
http://answers.opencv.org/question/63394/compose-a-homography-matrix-from-euler-angels/
I'm trying to do panorama stitching. Instead of computing the homography matrix via feature matching, I want to use the known camera rotations (yaw, pitch, roll between photos).
I've already tried the rotation matrices for R^3:
![sfsdf](http://upload.wikimedia.org/math/2/8/5/2851c9dc2031127e6dacfb84b96446d8.png)
which is somehow not what I'm looking for.
Maybe someone can help me on that.
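If the goal is a pure-rotation panorama, the missing piece is probably the conjugation by the intrinsics: for a rotating camera the inter-image homography is H = K R K^-1, not R alone. A numpy sketch (the intrinsics and Euler order below are assumptions):

```python
import numpy as np

def rot_ypr(yaw, pitch, roll):
    # Z-Y-X Euler composition (one common order; match your rig's).
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# For a purely rotating camera the image-to-image homography is
# H = K @ R @ inv(K); the bare rotation matrix alone lacks the
# conjugation by the intrinsics.
K = np.array([[800.0, 0, 320.0],  # assumed intrinsics
              [0, 800.0, 240.0],
              [0, 0, 1.0]])
R = rot_ypr(np.deg2rad(10), 0.0, 0.0)
H = K @ R @ np.linalg.inv(K)

# Map a pixel through H (homogeneous normalization).
p = H @ np.array([320.0, 240.0, 1.0])
print(p[:2] / p[2])
```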
thengineer (Sat, 06 Jun 2015 11:44:00 -0500)
http://answers.opencv.org/question/63394/

Calculating distance to an unknown object with single camera
http://answers.opencv.org/question/62788/calculating-distance-to-an-unknown-object-with-single-camera/
Hi all,
The title says it all, I have a camera mounted on a moving boat and I want to know the distance of the detected targets. Here is the scenario:
- I have the GPS information.
- I do know my velocity and direction.
- I have a single camera and an algorithm to detect the targets.
- I am moving, changing my location all the time. However, the camera is rigidly mounted.
- Targets are moving, they are mostly boats sailing around.
- My camera is calibrated already.
- Edit: Targets have different sizes. They could be kayaks, boats, huge container ships, sailing boats, etc.
Give these, is there a way to find how far is the detected target from me?
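One possibility that fits these constraints (single calibrated camera, targets of unknown size, known camera mounting height) is the flat-sea, angle-below-horizon method; a hedged numpy sketch with assumed numbers:

```python
import numpy as np

def distance_from_horizon_angle(cam_height_m, pixel_row, horizon_row, fy):
    # Flat-sea approximation (a sketch under strong assumptions): the
    # angle below the horizon of the target's waterline, together with
    # the known camera height, gives range = h / tan(angle). Needs the
    # horizon row in the image and a calibrated focal length fy (pixels).
    angle = np.arctan((pixel_row - horizon_row) / fy)
    return cam_height_m / np.tan(angle)

# Assumed numbers: camera 10 m above the water, fy = 1000 px, waterline
# of the target 50 px below the horizon line.
d = distance_from_horizon_angle(10.0, 550, 500, 1000.0)
print(round(d, 1))  # 200.0
```

The horizon row can be estimated from the image itself or from an IMU pitch reading; accuracy degrades quickly for targets near the horizon.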
Any help is appreciated.
Thanks
frageDE (Thu, 28 May 2015 06:46:47 -0500)
http://answers.opencv.org/question/62788/

Capability of OpenCV
http://answers.opencv.org/question/62474/capability-of-opencv/
Say you have a camera pointing at an object in real life. The computer knows the location and orientation of the camera relative to the object.
Then say you have a 3D model of that object in memory.
Is it possible with OpenCV to project the camera image onto the 3D model of the object, returning a texture/image for each face of the object?
If OpenCV can't do this, does anyone know any other program or package that can, in a way I can call with code or a script and an input file?
ajs138 (Sat, 23 May 2015 04:52:38 -0500)
http://answers.opencv.org/question/62474/

Advice about finding the pointing direction of a bottle with OpenCV
http://answers.opencv.org/question/60173/advices-about-finding-pointing-of-a-bottle-with-opencv/
Hello everyone, I am relatively new to OpenCV (I have already checked some tutorials and am learning from some additional programs how it works) and I know some things; however, I have an assignment to work on in OpenCV (a spin-the-bottle game) which goes somewhat outside the scope of what I know.
Based on the direction that a bottle (yes, a common bottle like a beer bottle) points in a 2D plane (a photo of a bottle), I need to create an OpenCV method to draw a line where the bottle points. I have heard about HoughLines and I am currently working with that; however, if you have any advice or know of other OpenCV library functions that I can use, I would be very thankful.
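Besides HoughLines, one common trick is PCA on the foreground (or edge) pixel coordinates: the eigenvector with the largest eigenvalue is the bottle's long axis. A numpy sketch on a synthetic blob (note PCA gives the axis only, not which end is the neck; cv2.fitLine does something similar):

```python
import numpy as np

# PCA on foreground (edge) pixel coordinates gives the principal axis;
# a toy elongated blob stands in for the real edge image here.
ys, xs = np.mgrid[0:100, 0:100]
mask = np.abs(ys - xs) < 3  # a thin 45-degree stripe
pts = np.column_stack([xs[mask], ys[mask]]).astype(float)

mean = pts.mean(axis=0)
cov = np.cov((pts - mean).T)
eigvals, eigvecs = np.linalg.eigh(cov)
axis = eigvecs[:, np.argmax(eigvals)]  # direction of largest variance
print(axis)  # roughly +/-(0.707, 0.707), the stripe's 45-degree direction
```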
PS: I have already decomposed the image into its edges (in case this can help give any idea).
![image description](/upfiles/14295209032340436.png)
dialgop (Mon, 20 Apr 2015 04:08:58 -0500)
http://answers.opencv.org/question/60173/

OpenCV Assertion failed when using perspective transform
http://answers.opencv.org/question/59555/opencv-assertion-failed-when-using-perspective-transform/
Hey all!
I am using Aruco AR library to make a project.
My project is based on the following concept: I would like to determine the camera position in the world coordinate system using AR markers. The marker positions are known in the world coordinate system. If I could determine the distance of each marker relative to my camera, I could determine the world coordinates of the camera. For this purpose I would need the coordinates of at least two markers; that would yield two possible camera coordinates, so for a proper solution I would need three detected markers with distance estimates.
I have successfully done the marker detection, and I can also draw e.g. a cube on them, so the camera calibration is valid and everything works fine.
I have the following problem: when I try to use the cv::projectPoints function, I get the following errors:
OpenCV Error: Assertion failed (npoints >= 0 && (depth == CV_32F || depth == CV_64F)) in projectPoints, file /home/mirind4/DiplomaMSC/OpenCV/opencv-2.4.10/modules/calib3d/src/calibration.cpp, line 3349
Exception :/home/mirind4/DiplomaMSC/OpenCV/opencv-2.4.10/modules/calib3d/src/calibration.cpp:3349: error: (-215) npoints >= 0 && (depth == CV_32F || depth == CV_64F) in function projectPoints
I tried to find a solution for this error, and I found a seemingly good one [Link to the question](http://answers.opencv.org/question/18252/opencv-assertion-failed-for-perspective-transform/), but in my code I do not know how to define the type of the data as "CvType.CV_32FC2".
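In Python/numpy terms the assertion is about dtype and shape: projectPoints wants an Nx3 float32/float64 array as objectPoints. In the C++ code below, the likely culprit is that TheMarkers[i] (a Marker object) is passed as objectPoints instead of a vector<cv::Point3f> of the marker's 3D corner coordinates. A minimal sketch of the expected data layout (the corner values are an assumption for illustration, based on TheMarkerSize = 45):

```python
import numpy as np

# projectPoints wants an Nx3 float array as objectPoints; the 3D corners
# of a 45 mm marker centered at the origin might look like this
# (corner layout is an assumption for illustration):
object_points = np.array([[-22.5,  22.5, 0.0],
                          [ 22.5,  22.5, 0.0],
                          [ 22.5, -22.5, 0.0],
                          [-22.5, -22.5, 0.0]], dtype=np.float32)  # CV_32F
print(object_points.dtype, object_points.shape)  # float32 (4, 3)
```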
I would like to ask two questions:
What would you advise for this error?
Do you think that my concept of the camera pose estimation is good?
Thanks in advance!
My source code is the following (I am using Aruco with Python, which is why you can see the "BOOST_PYTHON_MODULE" part at the end):
#include <iostream>
#include <aruco/aruco.h>
#include <aruco/cvdrawingutils.h>
#include <opencv2/highgui/highgui.hpp>
#include <fstream>
#include <sstream>
#include <boost/python.hpp>
using namespace cv;
using namespace aruco;
string TheInputVideo;
string TheIntrinsicFile;
float TheMarkerSize=45;
int ThePyrDownLevel;
MarkerDetector MDetector;
VideoCapture TheVideoCapturer;
vector<Marker> TheMarkers;
//projected points vector
std::vector<cv::Point2f> projectedPoints;
Mat TheInputImage,TheInputImageCopy;
CameraParameters TheCameraParameters;
void cvTackBarEvents(int pos,void*);
bool readCameraParameters(string TheIntrinsicFile,CameraParameters &CP,Size size);
pair<double,double> AvrgTime(0,0) ;//determines the average time required for detection
double ThresParam1,ThresParam2;
int iThresParam1,iThresParam2;
int waitTime=0;
void Init(){
int vIdx = 1;
cout<<"Opening camera index "<<vIdx<<endl;
TheVideoCapturer.open(vIdx);
waitTime=10;
TheVideoCapturer>>TheInputImage;
//Camera paramteres
TheCameraParameters.readFromXMLFile("camera.xml");
TheCameraParameters.resize(TheInputImage.size());
cout<<"Camera paramteres valid? : " << TheCameraParameters.isValid()<<endl<<std::flush;
//Create gui
cv::namedWindow("thres",1);
cv::namedWindow("in",1);
MDetector.getThresholdParams( ThresParam1,ThresParam2);
MDetector.setCornerRefinementMethod(MarkerDetector::SUBPIX);
iThresParam1=ThresParam1;
iThresParam2=ThresParam2;
cv::createTrackbar("ThresParam1", "in",&iThresParam1, 13, cvTackBarEvents);
cv::createTrackbar("ThresParam2", "in",&iThresParam2, 13, cvTackBarEvents);
}
int yay()
{
try{
char key=0;
//capture until press ESC or until the end of the video
do{
TheVideoCapturer.retrieve( TheInputImage);
//Detection of markers in the image passed
MDetector.detect(TheInputImage,TheMarkers,TheCameraParameters,TheMarkerSize);
//print marker info and draw the markers in image
TheInputImage.copyTo(TheInputImageCopy);
for (unsigned int i=0;i<TheMarkers.size();i++) {
TheMarkers[i].draw(TheInputImageCopy,Scalar(0,0,255),1);
//cout<<TheMarkers[i].id<<std::flush;
cout<<"Tvec : "<<TheMarkers[i].Tvec<<endl;
cout<<"Rvec : "<<TheMarkers[i].Rvec<<endl;
//CvDrawingUtils::draw3dCube(TheInputImageCopy,TheMarkers[i],TheCameraParameters);
cv::projectPoints(TheMarkers[i],TheMarkers[i].Rvec,TheMarkers[i].Tvec,TheCameraParameters.CameraMatrix,TheCameraParameters.Distorsion,projectedPoints);
}
;
cv::imshow("in",TheInputImageCopy);
cv::imshow("thres",MDetector.getThresholdedImage());
key=cv::waitKey(waitTime);//wait for key to be pressed
}while(key!=27 && TheVideoCapturer.grab());
cv::destroyAllWindows();
return TheMarkers.size();
}
catch (std::exception &ex){
cout<<"Exception :"<<ex.what()<<endl;
cv::destroyAllWindows();
return 0;
}
}
void cvTackBarEvents(int pos,void*)
{
if (iThresParam1<3) iThresParam1=3;
if (iThresParam1%2!=1) iThresParam1++;
if (ThresParam2<1) ThresParam2=1;
ThresParam1=iThresParam1;
ThresParam2=iThresParam2;
MDetector.setThresholdParams(ThresParam1,ThresParam2);
//recompute
MDetector.detect(TheInputImage,TheMarkers,TheCameraParameters);
TheInputImage.copyTo(TheInputImageCopy);
for (unsigned int i=0;i<TheMarkers.size();i++) TheMarkers[i].draw(TheInputImageCopy,Scalar(0,0,255),1);
cv::imshow("in",TheInputImageCopy);
cv::imshow("thres",MDetector.getThresholdedImage());
}
BOOST_PYTHON_MODULE(libTestPyAruco)
{
using namespace boost::python;
def("yay", yay);
def("Init", Init);
}
The "running" python file:
import libTestPyAruco
libTestPyAruco.Init()
print libTestPyAruco.yay()
mirind4 (Sat, 11 Apr 2015 08:51:31 -0500)
http://answers.opencv.org/question/59555/

Fit a cylinder given a set of points and an easy visualization for 3D axes
http://answers.opencv.org/question/55513/fit-a-cylinder-given-a-set-of-points-and-a-easy-visualization-for-3d-axis/
Hi, I have a set of points that approximate a cylinder (from a feature detection):
std::vector<cv::Point3f> objectPoints;
and I would like to know how I can get the representation of this cylinder in space.
What I would really like is the projection of the axis vectors in simulated 3D, like this:
I have the vertices of a generic three-dimensional frame:
std::vector<cv::Point3f> verts(4);
verts[0] = cvPoint3D32f(0, 0, 0);
verts[1] = cvPoint3D32f(0, 1, 0);
verts[2] = cvPoint3D32f(1, 0, 0);
verts[3] = cvPoint3D32f(0, 0, 1);
and edges connecting the verts
std::vector<cv::Point2d> edges(3);
edges[0] = cvPoint2D32f(0, 1);
edges[1] = cvPoint2D32f(0, 2);
edges[2] = cvPoint2D32f(0, 3);
cv::projectPoints(verts, rvec, tvec, cameraMatrix, distCoeffs, projectedVertPoints);
for (int i = 0; i<edges.size(); i++) {
cv::Point2d vertA,vertB;
vertA = projectedVertPoints[edges[i].x];
vertB = projectedVertPoints[edges[i].y];
// Here you can play with the colors and give each axis its classic color, red green and blue
cv::line(src, vertA, vertB,cv::Scalar(0,255,255));
}
I took inspiration from the demo project in python plane_ar.py
LivingSparks (Tue, 17 Feb 2015 10:39:57 -0600)
http://answers.opencv.org/question/55513/

georectifying oblique digital images can be done in opencv ?
http://answers.opencv.org/question/54377/georectifying-oblique-digital-images-can-be-done-in-opencv/
Can georectifying oblique digital images be done in OpenCV?
I don't know the theory or OpenCV, but I need image rectification as in the ARGUS imagery video technique for monitoring coasts (or other coastal monitoring software like http://ci.wrl.unsw.edu.au/about-coastal-imaging/image-analysis/: "merge and rectify the images from multiple cameras to create a single 180 degree view of the coastline."). I have just started to read about OpenCV but I do not have enough time to study... help? Thanks.
I have an image from a camera, the camera position (height 30 m plus GPS), and a few ground control points (n points, not arranged in a rectangle). How can I use OpenCV to generate the final orthorectified image (the view of the coastline)?
edit 01: installed OpenCV and Eclipse on Ubuntu 12.04, compiled a few examples, read introductory material about OpenCV, did the camera calibration (I hope) using a chessboard (20 images, 12x12)
edit 02: trying to read "Multiple View Geometry in Computer Vision"... (I have to go back to math courses ;-( )
edit 03: oblique image to orthorectified image using OpenCV? (I know, I need more math...) How do I proceed?
edit 04: I apologize for the lack of knowledge in the field...
edit06:
source image ( image to be georectified )
![image description](/upfiles/14236632538315468.jpg)
top view from Yahoo Maps
![image description](/upfiles/1423665856134212.jpg)
yellow dots = approximate position (not the real GCPs)elfMon, 02 Feb 2015 16:06:59 -0600http://answers.opencv.org/question/54377/Compute P from K,D,R,T using StereoRectifyhttp://answers.opencv.org/question/52604/compute-p-from-kdrt-using-stereorectify/ Hello,
I have the following stereo calibration data, which is provided in a file:
Left camera:
K1 = [500.68330 0 318.57250
0 500.25257 247.11452
0 0 1]
D1 = [ -0.22225 0.10381 0.0000 0 ] --> Distortion parameters
R1 = [ 1 0 0
0 1 0
0 0 1]
T1 = [ 0 0 0 ]
Right camera:
K2 = [ 500.80696 0 307.20573
0 500.39406 233.52064
0 0 1]
D2 = [ -0.22738 0.11823 0 0 ]
R2 = [ 0.994584465424537 0.008137584213299 0.103612358622704
-0.008733820371460 0.999947802720680 0.005302095416238
-0.103563804091524 -0.006178313463662 0.994603623020164]
T2 = [ -399.96195 -22.45476 0.66913 ]
I use the following code in order to get the P1 and P2 matrices:
cv.StereoRectify(K1, K2, D1, D2, (height, width), R2, T2, R1, R2, P1, P2, np.zeros((4, 4), float), 0)
The output is the following, but when I apply this rectification I can see that the result is not good:
R1 [[ 0.99270276 0.06418606 0.10208517]
[-0.06433084 0.99792686 -0.00187686]
[-0.10199401 -0.00470406 0.99477389]]
R2 [[ 0.99842635 0.05605389 -0.00167035]
[-0.0560562 0.99842668 -0.00136989]
[ 0.00159094 0.00146137 0.99999767]]
P1 [[ 427.68975971 0. 255.76326752 0. ]
[ 0. 427.68975971 237.85540771 0. ]
[ 0. 0. 1. 0. ]]
P2 [[ 4.27689760e+02 0.00000000e+00 3.00858452e+02 -1.71329243e+05]
[ 0.00000000e+00 4.27689760e+02 2.37855408e+02 0.00000000e+00]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00]]
Can anyone see what I am doing wrong? Thanks in advance.
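One quick sanity check on stereoRectify output, sketched below using only the numbers printed above: after horizontal rectification both cameras share the rectified focal length fx, and (sign convention aside) the last column of P2 encodes the baseline as fx times the length of the translation vector. The printed matrices are actually consistent with that:

```python
import numpy as np

fx = 427.68975971                                 # rectified focal length from P1/P2 above
T2 = np.array([-399.96195, -22.45476, 0.66913])   # translation from the calibration file
baseline = np.linalg.norm(T2)                     # ~400.59, in the same units as T2
print(fx * baseline)                              # ~1.71329e+05, matching |P2[0, 3]| above
```

So the P matrices themselves look numerically plausible; the problem is more likely elsewhere (for instance, the argument order passed to cv.StereoRectify, or the image size tuple).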
NéstorThu, 08 Jan 2015 08:51:35 -0600http://answers.opencv.org/question/52604/ring projectionhttp://answers.opencv.org/question/39749/ring-projection/is there any function in openCV that help me calculate ring projection of an image or i have to write it myself? If i have to write it, can some one help me on writing the code that take less computation-intensive.And the algorithm to calculate ring projection is in here : http://machinevision.iem.yzu.edu.tw/vision/tech/Pattern2-color.pdfAcanusTue, 19 Aug 2014 07:20:08 -0500http://answers.opencv.org/question/39749/How can I detect the mid-field line over the soccer field images ?http://answers.opencv.org/question/36068/how-can-i-detect-the-mid-field-line-over-the-soccer-field-images/![image description](/upfiles/14042387571439594.jpg)
Given the soccer field image above, I first need to detect the midfield line. Better still, I would like to learn the image projection needed to make the midfield line orthogonal, removing the perspective distortion, as if I were looking at the midfield directly.
How can I do this with the OpenCV library?erogolTue, 01 Jul 2014 13:25:57 -0500http://answers.opencv.org/question/36068/relation between A, [R|t] and Qhttp://answers.opencv.org/question/35363/relation-between-a-rt-and-q/In the [documentation](http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html) a 3D point is multiplied by A and [R|t] to obtain the coordinates on the image plane.
From stereoRectify one can obtain a matrix Q. This matrix allows one to recover 3D points from image coordinates. How are Q and A, [R|t] related to each other? (I can't figure it out from the description of Q.)alejiThu, 19 Jun 2014 14:39:25 -0500http://answers.opencv.org/question/35363/Texture projection, compute UV coordinateshttp://answers.opencv.org/question/34829/texture-projection-compute-uv-coordinates/Hi everybody, I'm trying to project an image onto a plane. I was using this approach [from opencv](http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#void%20reprojectImageTo3D(InputArray%20disparity,%20OutputArray%20_3dImage,%20InputArray%20Q,%20bool%20handleMissingValues,%20int%20ddepth)
and also read this [post](http://math.stackexchange.com/questions/681376/texture-mapping-from-a-camera-image-knowing-the-camera-pose).
The question is: how do I get UV coordinates for the image?
Here is my scenario: a 16x12 plane (blue) and an image with translation (0, 4, 0) (red cross) and the identity rotation matrix:
1 0 0
0 1 0
0 0 1
![image description](/upfiles/14024302011330159.png)
I was using these formulas [from opencv](http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#void%20reprojectImageTo3D(InputArray%20disparity,%20OutputArray%20_3dImage,%20InputArray%20Q,%20bool%20handleMissingValues,%20int%20ddepth), but I get really wrong UV coordinates:
![image description](/upfiles/14024304695694968.png)
For example, for the point (-8, 0, -6) the result is (1.33, -0.66),
but I expect it to be (0, 0).
Do you have any idea what I am doing wrong and how it should be done?
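A common recipe for this is: project each plane point into the image with the full pinhole model, then normalize the pixel coordinates by the image size to get UV in [0, 1]. A minimal NumPy sketch, where the intrinsics and image size are illustrative assumptions (not taken from this setup) and only the identity rotation and a translation along Y/Z mirror the scenario above:

```python
import numpy as np

K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])       # assumed intrinsics
img_w, img_h = 640, 480                     # assumed image size
R = np.eye(3)                               # identity rotation, as in the scenario
t = np.array([0.0, 4.0, 10.0])              # translation; 10 on Z so points sit in front

def uv_of(point3d):
    cam = R @ point3d + t                   # world -> camera coordinates
    pix = K @ (cam / cam[2])                # perspective divide, then intrinsics
    return pix[0] / img_w, pix[1] / img_h   # normalize pixels to [0, 1] UV

print(uv_of(np.array([0.0, -4.0, 0.0])))    # lands on the principal point -> (0.5, 0.5)
```

A point like (1.33, -0.66) usually means the perspective divide was skipped or the point is outside the camera frustum, so UV falls outside [0, 1].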
Thanks!AlexeyISTue, 10 Jun 2014 15:07:50 -0500http://answers.opencv.org/question/34829/Find point coordinate in a projected rectanglehttp://answers.opencv.org/question/23443/find-point-coordinate-in-a-projected-rectangle/Hello OpenCV users.
I'm facing a thought-it-was-simple problem, and as I am quite rusty on geometry and mathematics, I would like to request your help.
The image below represents the problem:![image description](/upfiles/13833106476560783.png)
This entire image represent the picture I get from my camera.
In this image I succeed in detecting 4 points (A, B, C, D) forming a rectangle.
- I know the coordinates (in pixels) of these 4 points in my camera view.
- I also know the real dimensions of the rectangle I'm trying to detect.
Considering a point F in my camera view, I would like to calculate its coordinates in the frame formed by A, B, C, D.
How can I do this with OpenCV?
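One way, sketched below with plain NumPy: estimate the homography that maps the four detected pixels A, B, C, D to the rectangle's known metric corners (this is what cv2.getPerspectiveTransform computes from exactly four correspondences), then push F through it. The pixel coordinates and rectangle size here are made-up examples:

```python
import numpy as np

def homography(src, dst):
    # Direct Linear Transform: solve dst ~ H @ src for the 3x3 matrix H
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null vector of A, reshaped
    return H / H[2, 2]

def map_point(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]            # back from homogeneous coordinates

# detected corners A, B, C, D in the image (example pixels) -> a 20 x 10 rectangle
corners_px = [(100.0, 80.0), (500.0, 120.0), (520.0, 380.0), (90.0, 350.0)]
rect = [(0.0, 0.0), (20.0, 0.0), (20.0, 10.0), (0.0, 10.0)]
H = homography(corners_px, rect)

print(map_point(H, corners_px[1]))  # corner B lands at (20, 0); use any F the same way
```

With OpenCV itself you would call cv2.getPerspectiveTransform on the two 4-point arrays and cv2.perspectiveTransform on F, which does exactly this.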
ArnoGFri, 01 Nov 2013 08:40:09 -0500http://answers.opencv.org/question/23443/heuristic algorithm to find the correct peak after vertical projectionhttp://answers.opencv.org/question/21013/heurisc-algorithm-to-find-the-correct-peak-after-vertical-projection/Hi People,
I am working on developing ANPR software to recognize license plate numbers using OpenCV. I have studied the JavaANPR project; you can read more at this link:
http://javaanpr.sourceforge.net/anpr.pdf
This project uses vertical and horizontal projections, after converting the image to grayscale, to find the band where the plate is.
However, when using it on my car images I sometimes get more than one "peak" in the vertical projection. Is there a way to use some heuristic algorithm to find out which "peak" is the correct one?
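One simple heuristic, sketched below in NumPy: threshold the projection profile, split it into contiguous runs, and score each run by its total energy weighted by how close its width is to a plausible plate-band width, keeping the best-scoring run instead of the raw global maximum. The threshold fraction and expected width are made-up parameters you would tune on your images:

```python
import numpy as np

def best_band(projection, expected_width=20, rel_thresh=0.5):
    """Pick the most plate-like run [start, end) in a 1-D projection profile."""
    proj = np.asarray(projection, dtype=float)
    mask = proj >= rel_thresh * proj.max()
    # locate [start, end) indices of contiguous runs above the threshold
    d = np.diff(mask.astype(int))
    starts = np.where(d == 1)[0] + 1
    ends = np.where(d == -1)[0] + 1
    if mask[0]:
        starts = np.r_[0, starts]
    if mask[-1]:
        ends = np.r_[ends, mask.size]
    best, best_score = None, -1.0
    for s, e in zip(starts, ends):
        # penalize runs much narrower or wider than the expected band width
        width_score = min((e - s) / expected_width, expected_width / (e - s))
        score = proj[s:e].sum() * width_score
        if score > best_score:
            best, best_score = (s, e), score
    return best

proj = np.zeros(100)
proj[10:15] = 10.0      # tall but narrow spurious peak
proj[40:60] = 8.0       # band with roughly the expected width
print(best_band(proj))  # picks the wide band (40, 60), not the tallest peak
```

Other useful cues to fold into the score are the band's vertical position in the frame and the variance inside the run (plates have many dark-light transitions).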
Can someone help me? Thank you.gabrielpasvThu, 19 Sep 2013 13:20:00 -0500http://answers.opencv.org/question/21013/Inverse Perspective Mapping -> When to undistort?http://answers.opencv.org/question/15526/inverse-perspective-mapping-when-to-undistort/BACKGROUND:
I have a camera mounted on a car facing forward and I want to find the road markings. Hence I'm trying to transform the image into a bird's-eye-view image, as viewed from a virtual camera placed 15m in front of the camera and 20m above the ground. I implemented a prototype that uses OpenCV's warpPerspective function. The perspective transformation matrix is obtained by defining a region of interest on the road and calculating where the 4 corners of the ROI are projected in both the front and the bird's-eye-view cameras. I then pass these two sets of 4 points to the getPerspectiveTransform function to compute the matrix. This successfully transforms the image into a top view.
QUESTION:
When should I undistort the front-facing camera image? Should I first undistort and then apply this transform, or first transform and then undistort?
If you are suggesting the first case, then what camera matrix should I use to project the points onto the bird's-eye-view camera? Currently I use the same raw camera matrix for both projections.
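Undistorting first is the usual choice, because getPerspectiveTransform/warpPerspective model a pure pinhole camera: a homography is linear in homogeneous coordinates, while lens distortion is a nonlinear, radius-dependent warp, so the two do not commute. A small NumPy sketch of the simplified, radial-only Brown distortion model makes the nonlinearity visible (the k1 value is illustrative):

```python
import numpy as np

def distort(pts, k1, k2):
    """Apply radial distortion to normalized image coordinates (Brown model)."""
    pts = np.asarray(pts, dtype=float)
    r2 = (pts ** 2).sum(axis=1, keepdims=True)   # squared radius per point
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)

k1, k2 = -0.2, 0.0                    # illustrative barrel distortion
a = np.array([[0.5, 0.0]])
print(distort(a, k1, k2))             # [[0.475, 0.]] -- pulled toward the center
print(distort(2 * a, k1, k2))         # [[0.8, 0.]], NOT 2x the previous result
```

Because scaling the input does not scale the output proportionally, applying a homography to still-distorted pixels produces a warped bird's-eye view; undistort first, then compute and apply the perspective transform.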
Please ask for more details if my description is confusing!Ashok ElluswamyThu, 20 Jun 2013 19:33:57 -0500http://answers.opencv.org/question/15526/