OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018. Thu, 05 Sep 2019 10:47:25 -0500

Get Rotation and Translation Matrix
http://answers.opencv.org/question/217954/get-rotation-and-translation-matrix/
I'm programming an Asus Xtion depth camera, which gives me depth information instead of an RGB image.
I already have the camera and distortion matrices of the depth camera, but now I want to calibrate the vision system by finding the rotation and translation matrices.
I already have the 3D local coordinates of the points from the camera's perspective, but now I need to convert them to world/global coordinates. Since this camera only provides depth information, I was thinking: is it possible to calibrate this vision system by specifying where the ground plane is? How should I proceed to make the blue plane the ground plane of my vision system?
![image description](/upfiles/15676982772248991.jpg)
(Note: in addition to the ground plane, there's also an object on the plane.)
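Since the depth camera already delivers 3D points, one possible approach (a sketch of the idea, not the only way) is to take the points belonging to the ground plane, fit a plane to them, and build a rotation and translation that map the plane normal to the world Z axis with the world origin on the plane. A minimal numpy sketch, with synthetic points standing in for the camera's point cloud:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic depth points lying on a tilted ground plane (camera coords) ---
theta = np.deg2rad(30.0)
R_tilt = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(theta), -np.sin(theta)],
                   [0.0, np.sin(theta),  np.cos(theta)]])
plane_xy = np.column_stack([rng.uniform(-1, 1, 200),
                            rng.uniform(-1, 1, 200),
                            np.zeros(200)])
pts_cam = plane_xy @ R_tilt.T + np.array([0.0, 0.5, 2.0])

# --- fit the plane: centroid + normal from the smallest singular vector ---
centroid = pts_cam.mean(axis=0)
_, _, vt = np.linalg.svd(pts_cam - centroid)
normal = vt[-1]
if normal[2] > 0:                       # orient the normal toward the camera
    normal = -normal

# --- build a world frame: Z axis = plane normal, origin on the plane ---
z = normal
a = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
x = np.cross(a, z); x /= np.linalg.norm(x)
y = np.cross(z, x)
R = np.vstack([x, y, z])                # rotation: camera -> world
t = -R @ centroid                       # world origin at the plane centroid

pts_world = pts_cam @ R.T + t
print(float(np.abs(pts_world[:, 2]).max()))   # ground points now have Z ~ 0
```

Because there is also an object sitting on the plane, a robust fit (e.g. RANSAC over plane hypotheses instead of the plain least-squares fit above) would be needed so the object's points don't bias the plane.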
I already tried using solvePnP to get the rotation and translation matrices, but with no luck. Thanks in advance.
dbots94 (Thu, 05 Sep 2019 10:47:25 -0500) http://answers.opencv.org/question/217954/

3D reconstruction (SFM) with a multi-lens camera system (instead of the pinhole camera model)
http://answers.opencv.org/question/173406/3d-reconstruction-sfm-with-multi-lens-camera-system-instead-of-pinhole-camera-model/
3D reconstruction techniques (especially SFM algorithms) usually assume a pinhole camera model.
The state of the art in these SFM techniques is to find where the rays of 2D-3D correspondences from two different cameras intersect in object space.
This assumes a pinhole camera model (where the 2D-3D ray is simply a straight line).
But real-world systems often use multiple lenses, in which case you can't straightforwardly determine the ray of a 2D-3D correspondence.
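One common workaround (not the only one) is a generalized, ray-based camera model: calibration assigns each pixel its own ray origin and direction, and triangulation then intersects those rays directly instead of assuming they all pass through a single projection center. A minimal numpy sketch of such a ray intersection via the midpoint method (the function name is illustrative, not an OpenCV API):

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays o + s*d.
    Works for any camera model that maps a pixel to a ray,
    central (pinhole) or non-central (multi-lens)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for s1, s2 minimising |(o1 + s1*d1) - (o2 + s2*d2)|^2
    A = np.column_stack([d1, -d2])
    s = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    p1 = o1 + s[0] * d1
    p2 = o2 + s[1] * d2
    return (p1 + p2) / 2.0

# Two rays through a common 3D point, e.g. from two lenses of a rig
X = np.array([0.3, -0.2, 4.0])
o1 = np.array([0.0, 0.0, 0.0])
o2 = np.array([0.5, 0.1, 0.0])
p = triangulate_rays(o1, X - o1, o2, X - o2)
print(p)   # recovers [0.3, -0.2, 4.0]
```

With noisy correspondences the rays are skew rather than intersecting, and the midpoint (or a bundle-adjustment refinement over ray residuals) is the usual estimate.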
**My question is:** *How does the SFM technique work with such multi-lens camera systems?*
mirnyy (Fri, 01 Sep 2017 06:24:13 -0500) http://answers.opencv.org/question/173406/

Estimation of ball position (3D reconstruction)
http://answers.opencv.org/question/157326/estimation-of-ball-position-3d-reconstruction/
I have a small white ball in the camera view. The ball is in front of the camera but higher than the camera (Yball > Ycam). The view is compensated by undistort(). I am trying to calculate the X and Y position of the ball relative to the camera position. What I know:
- the exact distance from the optical center of the camera to the ball,
- the shift dX, dY of the ball in pixels from the center of the camera view,
- the focal length in pixels (fx and fy from the camera matrix determined by camera calibration).
I try to calculate the physical X and Y positions relative to the camera using dx, dy, fx, fy, and the distance. If the ball is near the center of the image, I can calculate its position accurately using fx and fy. But I noticed that if the ball is off to the side, near the edge of the view (with the same physical Y position and the same distance from the camera as before), then the ball appears higher in the image. Here's an example:
1) Ball X coordinate in the center:
<a href='http://wstaw.org/w/4vQf/'><img src='http://wstaw.org/m/2017/06/07/point_x_center_jpg_300x300_q85.jpg'></a>
<a href='http://wstaw.org/w/4vQk/'><img src='http://wstaw.org/m/2017/06/07/point_x_center_top_1_jpg_300x300_q85.jpg'></a>

2) Ball X coordinate large (near the right edge of the image):
<a href='http://wstaw.org/w/4vQh/'><img src='http://wstaw.org/m/2017/06/07/point_x_side_jpg_300x300_q85.jpg'></a>
<a href='http://wstaw.org/w/4vQl/'><img src='http://wstaw.org/m/2017/06/07/point_x_side_top_1_jpg_300x300_q85.jpg'></a>
In the second example, the 'top' coordinate changed from 433px to 396px, but the ball is at the same physical Y and the same distance from the optical center as in the first example. So if I use 'fy' (focal length y) to calculate the Y position, my estimate will be different.
Could you please help me: what am I doing wrong? How should I calculate the position of the ball, and what parameter have I not taken into account?
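A likely explanation (an assumption about the setup, not a diagnosis of the exact code): if the known "distance" is the Euclidean range from the optical center rather than the perpendicular depth Z, then a formula like Y = dy * distance / fy only holds near the image center. Off-center, Z is smaller than the range, so dy = fy * Y / Z grows and the ball appears displaced, which matches the 433px-to-396px observation. Scaling the unit ray through the pixel by the range handles both cases. A numpy sketch with illustrative numbers:

```python
import numpy as np

def ball_position(dx, dy, fx, fy, distance):
    """Position from pixel offsets (dx, dy) off the principal point and the
    Euclidean range to the optical center: scale the *unit* ray by the range.
    Off-center, depth Z < range, so Y = dy * distance / fy would overshoot."""
    ray = np.array([dx / fx, dy / fy, 1.0])
    return distance * ray / np.linalg.norm(ray)

# Synthetic check: project a known point, then recover it from (dx, dy, range)
fx = fy = 600.0                                # illustrative focal length, px
X_true = np.array([0.8, -0.3, 2.0])            # metres, camera frame
r = np.linalg.norm(X_true)                     # range to the optical center
dx = fx * X_true[0] / X_true[2]                # projected pixel offsets
dy = fy * X_true[1] / X_true[2]
print(ball_position(dx, dy, fx, fy, r))        # recovers [0.8, -0.3, 2.0]
```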
Marcin (Wed, 07 Jun 2017 01:56:07 -0500) http://answers.opencv.org/question/157326/

How do I calculate disparity and pointcloud
http://answers.opencv.org/question/147741/how-do-i-calculate-disparity-and-pointcloud/
Hello.
I tried different ways to calculate the disparity map, but can't get a good result.
My images can be found here:
http://imgur.com/ViixWKN
http://imgur.com/8NrA61K
my calibration : https://pastebin.com/LY73x9NA
Unfortunately I can't get a good point cloud with my code:
https://pastebin.com/xT7KXVig
Maybe my calibration is not good enough, or the angle between the cameras is too small.
My focal length is 35 mm and the pixel size is 5.5 x 5.5 μm², so the focal length in pixels should be around 6363; I get 6700, which means my lens is not exactly 35 mm, but I still hope my calibration is correct. The RMS error is 0.9.
During calibration I use the "chessboard rectangle size"; what does it influence? My calibration seems the same whether I use a rectangleSize of 0.032 or 32.
Do I need to run the following code?
Mat img1r, img2r;
remap(img1, img1r, map11, map12, INTER_LINEAR);
remap(img2, img2r, map21, map22, INTER_LINEAR);
img1 = img1r;
img2 = img2r;
It seems to kill all the disparity.
Do I need to normalize the disparity map?
How does stereoRectify affect my pre-calibrated system?
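For reference, the geometry behind the rectified disparity-to-depth step (the math that reprojectImageTo3D's Q matrix encodes) is Z = fx * B / d. On the square-size question: the chessboard square size only fixes the metric scale of the calibration, so 0.032 (metres) versus 32 (millimetres) changes the units of the translation and baseline, not the geometry. A pure-numpy sketch with a toy disparity map; fx matches the 6700 px from the question, but the baseline value is illustrative, not from the actual rig:

```python
import numpy as np

def disparity_to_points(disp, fx, cx, cy, baseline):
    """Pure-numpy version of the math behind reprojectImageTo3D for a
    rectified pair with square pixels: Z = fx * B / d, then
    X = (u - cx) * Z / fx and Y = (v - cy) * Z / fx."""
    h, w = disp.shape
    u, v = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    valid = disp > 0
    safe = np.where(valid, disp, 1.0)           # avoid dividing by zero
    Z = np.where(valid, fx * baseline / safe, np.nan)
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fx
    return np.dstack([X, Y, Z])

# Toy 2x2 disparity map; 0 marks pixels with no stereo match
disp = np.array([[67.0, 134.0],
                 [0.0,  67.0]])
pts = disparity_to_points(disp, fx=6700.0, cx=1.0, cy=1.0, baseline=0.1)
print(pts[0, 0, 2], pts[0, 1, 2])               # depths 10.0 and 5.0 metres
```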
I was looking at this problem, http://answers.opencv.org/question/60134/getting-point-cloud-from-disparity/, but still can't get a good point cloud.
hagor (Thu, 11 May 2017 10:06:15 -0500) http://answers.opencv.org/question/147741/

Object detection using a 3D-TOF camera
http://answers.opencv.org/question/102301/object-detection-using-a-3d-tof-camera/
Hello everyone,
I am trying to connect a 3D TOF camera using OpenCV in Visual Studio.
This camera generates four output streams as mentioned below.
- Intensity
- Confidence
- Range/Depth data
- 3D PCL
1) How can I verify whether this camera is supported by OpenCV?
I tried to connect using the default, but it didn't work:
*VideoCapture cap(0); // open the default camera*
2) Which of the streams is suitable for detecting an object like the one shown at the link below?
http://www.logistik-xtra.de/klt-rahmenwagen#
3) Which object detection technique available in OpenCV is suitable for detecting the object shown at the link above?
Helpful comments would be highly appreciated.
Thanks in advance.
Jack16 (Wed, 14 Sep 2016 09:14:59 -0500) http://answers.opencv.org/question/102301/

Shape from motion or shape from silhouette?
http://answers.opencv.org/question/72786/shape-from-motion-or-shape-from-silhouette/
Hi everyone,
I would like to make an application where I can build a 3D model of an object and take measurements on it. I have done quite a lot of research on both shape from motion and shape from silhouette, but I am not sure which way to go. Which approach would become cumbersome after a while, and which would be the most effective?
Also if you know some great implementations for these methods, I would appreciate that too.
Thanks for your help in advance!
bendesign55 (Sat, 10 Oct 2015 04:52:38 -0500) http://answers.opencv.org/question/72786/

Building a simple 3D model: Using build3dmodel.cpp
http://answers.opencv.org/question/30713/building-a-simple-3d-model-using-build3dmodelcpp/
I have just started working with CV. I am working on a project that creates a 3D model from a series of 2D images. Before going deep into this subject, I just want to see what outputs these code samples give.
But the problem is that I don't understand what is meant by the model_name argument in this sample. I obtained the camera intrinsic parameter XML file using the calibration sample.
I really need help with this. Can someone explain what this parameter really means?
Thank you in advance.
adhilhazari (Thu, 27 Mar 2014 05:48:31 -0500) http://answers.opencv.org/question/30713/

Camera with auto-focus and 3D reconstruction
http://answers.opencv.org/question/7278/camera-with-auto-focus-and-3d-reconstruction/
Hi,
I'm using a very simple webcam. During chessboard calibration I get a very **different intrinsic matrix** each time (especially the focal lengths); is that because the camera has auto-focus? If I take pictures of multiple chessboard positions, the undistorted image afterward is **more distorted than the original**; how can that be? Is it possible that the auto-focus somehow disturbs the calculation of the distortion parameters? When I want to calculate the projection matrix, I need a **fixed focal length**, don't I?
But I don't understand how such a camera can have auto-focus when you have to screw the lens to make the picture sharp. I thought auto-focus moved some lens element to focus?
My second question concerns building a laser scanner. I need to somehow calculate the homography to the laser plane, is that right? I could probably find the laser line directly on the chessboard during calibration. But do I need to measure the distance to the chessboard, or can I calculate it from the chessboard somehow? **Do I need the chessboard's 3D coordinates to calculate the extrinsic matrix?**
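On the laser-scanner question: no manual distance measurement should be needed. The chessboard's known square size supplies 3D board coordinates (on its Z = 0 plane), solvePnP then gives the board's pose in camera coordinates, and detecting the laser line on the board at a few poses yields 3D points on the laser plane, from which the plane is fit. Once the laser plane is known in camera coordinates, each laser pixel is recovered by intersecting its viewing ray with that plane. A numpy sketch of that last step, with illustrative intrinsics and plane values (not measured ones):

```python
import numpy as np

def laser_point(u, v, K, plane_n, plane_d):
    """Intersect the viewing ray through pixel (u, v) with the laser plane
    n . X = d, everything expressed in camera coordinates."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction from origin
    s = plane_d / (plane_n @ ray)                    # solve n . (s * ray) = d
    return s * ray

# Illustrative camera matrix and laser plane
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
n = np.array([0.0, np.sin(np.deg2rad(30)), np.cos(np.deg2rad(30))])
d = 1.0                                              # laser plane: n . X = 1
p = laser_point(320.0, 240.0, K, n, d)               # ray along the optical axis
print(p)
```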
Thanks for your time
Regards
Martin
Martin (Tue, 12 Feb 2013 02:02:17 -0600) http://answers.opencv.org/question/7278/

Open a 3D image in OpenCV
http://answers.opencv.org/question/5668/open-a-3d-image-in-opencv/
Project: 3D Data Processing
Input: wrl file
Detailed Description: I have the CASIA 3D face database (link). I want to manually mark facial feature points such as the nose tip, eye centers, lip contour, et cetera.
Can anyone suggest a GUI such that if I place the cursor on, say, the nose, it shows the x, y, z coordinates of that point, like what GIMP provides for 2D images?
How can I load, save, and manipulate the WRL 3D file data the way 2D images are handled in OpenCV (imread, imshow, imwrite, ...)? A C/C++ platform is preferred.
UserOpenCV (Thu, 03 Jan 2013 00:17:29 -0600) http://answers.opencv.org/question/5668/

Algorithms for 3D face reconstruction
http://answers.opencv.org/question/5508/algorithms-for-3d-face-reconstruction/
Project: 3D face reconstruction
Input: 2D frontal face Image
Output: 3D face Reconstruction and expression simulation
Platform: Matlab or OpenCV C++.
After some study I found that the 3D Morphable Models (3DMM) algorithm is a good starting point for my project. But I don't have the Basel Face Model (a 3D face database) to implement the algorithm.
However, I have downloaded GavabDB from http://gavab.escet.urjc.es/recursos_en.html.
Can I develop a 3DMM using GavabDB for 3D face reconstruction from a frontal image?
After reading the dataset description document, I noticed that GavabDB doesn't provide texture data for the 3D scans; is texture data compulsory?
Does the output quality depend on the 3D database used for modeling?
UserOpenCV (Thu, 27 Dec 2012 03:49:26 -0600) http://answers.opencv.org/question/5508/

Help Recovering Structure From Motion
http://answers.opencv.org/question/5248/help-recovering-structure-from-motion/
Afternoon, all!
I have been banging my head against the problem of building a 3D structure from a set of sequential images intently for the past week or so and cannot seem to get a decent result out of it. I would greatly appreciate someone taking the time to go over my steps and let me know if they seem correct. I feel like I am missing something small but fundamental.
1. Build camera calibration matrix K and distortion coefficients from the calibration data of the chessboard provided (using findChessboardCorners(), cornerSubPix(), and calibrateCamera()).
2. Pull in the first and third images from the sequence and undistort them using K and the distortion coefficients.
3. Find features to track in the first image (using goodFeaturesToTrack() with a mask to mask off the sides of the image).
4. Track the features in the new image (using calcOpticalFlowPyrLK()).
At this point, I have a set of point correspondences in image i0 and image i2.
5. Generate the fundamental matrix F from the point correspondences (using the RANSAC flag in findFundamentalMat()).
6. Correct the matches of the point correspondences I found earlier using the new F (using correctMatches()).
From here, I can generate the essential matrix from F and K and extract candidate projection matrices for the second camera.
7. Generate the essential matrix E using E = K^T * F * K per HZ
8. Use SVD on E to get U, S, and V, which then allow me to build the two candidate rotations and two candidate translations.
9. For each candidate rotation, check to ensure the rotation is right-handed by checking sign of determinant. If <0, multiply through by -1.
Now that I have the 4 candidate projection matrices, I want to figure out which one is the correct one.
10. Normalize the corrected matches for images i0 and i2
11. For each candidate matrix:<pre>
11.1. Triangulate the normalized correspondences using P1 = [ I | 0 ]
and P2 = candidate matrix using triangulatePoints().
11.2. Convert the triangulated 3D points out of homogeneous coordinates.
11.3. Select a test 3D point from the list and apply a perspective
transformation to it using P2 (converted to a 4x4 matrix instead of 3x4 where
the last row is [0,0,0,1]) using perspectiveTransform().
11.4. Check if the depth of the 3D point and the Z-component of the
perspectively transformed homogeneous point are both positive. If so,
use this candidate matrix as P2. Else, continue.</pre>
12. If none of the candidate matrices generate a good P2, go back to step 5.
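Steps 7-9 plus the candidate test are what cv2.recoverPose performs internally (cv2.decomposeEssentialMat exposes just the decomposition). As a sanity check on that part of the pipeline, here is a numpy sketch of decomposing E into its four (R, t) candidates per Hartley & Zisserman (t only up to scale), verified against a synthetic pose:

```python
import numpy as np

def decompose_essential(E):
    """The four (R, t) candidates from an essential matrix."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (det = +1), mirroring step 9 above.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                      # translation is recovered up to scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Synthetic check: build E = [t]x R and confirm the true pose is a candidate
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_hat = np.array([1.0, 0.2, 0.1])
t_hat = t_hat / np.linalg.norm(t_hat)
tx = np.array([[0.0, -t_hat[2], t_hat[1]],
               [t_hat[2], 0.0, -t_hat[0]],
               [-t_hat[1], t_hat[0], 0.0]])
E = tx @ R_true
found = any(np.allclose(R, R_true, atol=1e-6) and np.allclose(t, t_hat, atol=1e-6)
            for R, t in decompose_essential(E))
print(found)   # the genuine pose is among the four candidates
```

The cheirality test of step 11 then picks the one candidate that puts the triangulated points in front of both cameras.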
Now I should have two valid projection matrices P1 = [ I | 0 ] and P2 derived from E. I want to then use these matrices to triangulate the point correspondences I found back in step 4.
13. Triangulate the normalized correspondence points using P1 and P2
14. Convert from homogeneous coordinates to get the real 3D points.
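Steps 13-14 can be sketched as a linear (DLT) triangulation of a single normalized correspondence, the same math cv2.triangulatePoints applies per point, followed by the homogeneous-to-Euclidean division:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear triangulation of one normalized correspondence (x1, x2):
    stack the cross-product constraints and take the SVD null vector."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                  # step 14: out of homogeneous coords

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])      # [ I | 0 ]
R = np.eye(3)
t = np.array([[-1.0], [0.0], [0.0]])               # baseline along x
P2 = np.hstack([R, t])
X_true = np.array([0.2, -0.1, 3.0])
x1 = X_true[:2] / X_true[2]                        # normalized projections
Xc2 = R @ X_true + t.ravel()
x2 = Xc2[:2] / Xc2[2]
print(triangulate_dlt(P1, P2, x1, x2))             # ~ [0.2, -0.1, 3.0]
```

One thing worth checking against the scattered-points symptom: DLT expects both inputs in the same normalized (K-removed) coordinates that match P1 = [ I | 0 ], so mixing pixel coordinates with normalized projection matrices produces exactly the kind of degenerate, stretched clouds described above.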
I already have encountered a problem here in that the 3D points I triangulate NEVER seem to correspond to the original structure. From the mug, they don't seem to form a clear surface, and from the statue, they're either scattered or on some line that goes off towards [-∞, -∞, 0] or similar. I am using Matplotlib's Axes3D scatter() method to plot them and see the same results with Matlab, so I assume it's not an issue with the visualization so much as the points. Any advice or insight just at this point alone would be hugely appreciated.
Moving forward though, it gets a little fuzzy in that I am not completely sure how to go about adding the additional frames. Below is my algorithm so far:
1. Store image i2 as the previous image, the image points from i2 as the previous image points, the triangulated 3D points as the corresponding real points, and the projection matrix P2 as the previous P for the loop below.
2. For each next frame iNext:<pre>
2.1. Undistort iNext using K and the distortion coefficients
2.2. Track the points from the previous image
(in the first loop iteration, I use the points from i2)
in the new image to get correspondences.
2.3. Normalize the newly tracked points.
2.4. Use the Perspective-n-Point (PnP) algorithm from OpenCV
(solvePnPRansac()) with the previous 3D points I found before
and the normalized points I tracked in the new frame to get
the rotation and translation vector of the new camera position
relative to the previous one along with a set of inliers.
2.5. Store the inlier 3D points and image points from iNext
2.6. Find new features to track in the previous image
2.7. Track the new features into the current image to get a
new set of correspondences
2.8. Correct and normalize the correspondences
2.9. Triangulate the corrected and normalized correspondences
to get a new set of 3D points (I do this to account for issues where
the original 3D points from the first triangulation in step 14 become
occluded).
2.10. Add the list of new 3D and 2D points to the inlier 3D and
2D points from step 2.5.
2.11. Repeat</pre>
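The core of step 2.4, stripped of the RANSAC loop and the rotation/translation factorization that solvePnPRansac performs, is camera resectioning: recovering the 3x4 projection matrix from 3D-2D correspondences by DLT. A numpy sketch on synthetic data, useful for checking that the tracked 2D points and stored 3D points are in consistent coordinates before blaming the pipeline:

```python
import numpy as np

def resection_dlt(X, x):
    """Direct linear transform: recover the 3x4 projection matrix from
    >= 6 3D-2D correspondences (X in 3D, x in image/normalized coords)."""
    rows = []
    for Xw, xi in zip(X, x):
        Xh = np.append(Xw, 1.0)
        rows.append(np.concatenate([Xh, np.zeros(4), -xi[0] * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -xi[1] * Xh]))
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)          # defined up to scale

rng = np.random.default_rng(1)
P_true = np.hstack([np.eye(3), np.array([[0.1], [0.0], [1.0]])])
X = rng.uniform(-1, 1, (8, 3)) + np.array([0.0, 0.0, 5.0])
Xh = np.hstack([X, np.ones((8, 1))])
proj = Xh @ P_true.T
x = proj[:, :2] / proj[:, 2:]
P = resection_dlt(X, x)
P = P / P[2, 3]                          # fix scale/sign (P_true[2, 3] = 1)
print(np.abs(P - P_true).max())          # essentially zero
```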
3. After all of this, I will have built up a listing of 3D points found from the first triangulation between i0 and i2 and from the inliers of solvePnPRansac().
Unfortunately, the 3D points show nothing in the way of any structure, so I feel like this process of adding new images is wrong...
Any insight would be greatly appreciated, but thanks for taking the time to look over this email either way.
-Cody
cbuntain (Sun, 16 Dec 2012 13:08:47 -0600) http://answers.opencv.org/question/5248/

What is the price of 3D face dataset? Can anyone suggest a good 3D face dataset?
http://answers.opencv.org/question/5031/what-is-the-price-of-3d-face-dataset-can-anyone-suggest-a-good-3d-face-dataset/
Project Description:
Input:
Frontal Face Image
Expression
Angle
Size
Details:
I have to convert the input frontal face image into 3D face and simulate the given expression (smile, cry, neutral, ..); rotate by the given angle and re-size to the given size. Again convert back into 2D face after the changes.
After googling, I found that Morphable models is a good algorithm to start with; I want a 3D face database to start & complete the implementation.
I found a list of 3D databases on www.face-rec.org. Can anyone help me with buying a database (pricing, ...), and with how to choose one?
Thank you in advance..
UserOpenCV (Mon, 10 Dec 2012 04:19:45 -0600) http://answers.opencv.org/question/5031/