OpenCV Q&A Forum - RSS feed (http://answers.opencv.org/questions/), Copyright OpenCV foundation (http://www.opencv.org), 2012-2018.

What is solvePnP() exactly for? (while I already have the projection matrices P)
http://answers.opencv.org/question/175301/what-is-solvepnp-exactly-for-while-i-already-have-the-projection-matrices-p/

I already know that solvePnP() finds the position (rotation and translation) of the camera from 2D point coordinates and the corresponding 3D point coordinates, but I don't really understand why I have to use it after I have triangulated some 3D points with 2 cameras and their corresponding 2D points.
While triangulating a new 3D point, I already have (and need) the projection matrices P1 and P2 of the two cameras, which contain the rotations R1, R2 and translations t1, t2, and therefore already encode the poses of the cameras with respect to the newly triangulated 3D point.
*My workflow is:*
1. Get 2D-correspondences from 2 images.
2. Get Essential Matrix E using these 2D-correspondences.
3. Get relative orientation (R, t) of the 2 images from the Essential Matrix E.
4. Set the 3x4 Projection Matrix P1 of camera 1 to
       P1 = (1, 0, 0, 0,
             0, 1, 0, 0,
             0, 0, 1, 0);
   and set the Projection Matrix P2 of camera 2 to
       P2 = (R.at<double>(0, 0), R.at<double>(0, 1), R.at<double>(0, 2), t.at<double>(0),
             R.at<double>(1, 0), R.at<double>(1, 1), R.at<double>(1, 2), t.at<double>(1),
             R.at<double>(2, 0), R.at<double>(2, 1), R.at<double>(2, 2), t.at<double>(2));
5. Solve the least-squares problem
       P1 * X = x1
       P2 * X = x2
   for X, the 3D point. And so on...
After that I get a triangulated 3D point X from the projection matrices P1 and P2 and the 2D point correspondences x1 and x2.
***My question is, again:
Why do I need to use solvePnP() now to get the camera location?
I already have P1 and P2, which should already be the locations of the cameras (w.r.t. the triangulated 3D points).***

mirnyy, Thu, 28 Sep 2017 12:27:55 -0500
http://answers.opencv.org/question/175301/

Unexpected result with RGB Histogram Backprojection in Python
http://answers.opencv.org/question/120583/unexpected-result-with-rgb-histogram-backprojection-in-python/

I am using OpenCV 2.4.13 in Python 2.7.12, and I noticed that when I apply cv2.calcBackProject on a pixel [b, g, r], it returns the backprojection for the pixel [b, g, 0]: the third channel's value is ignored, while my histogram looks fine.
Here is my code:

    channels = [0, 1, 2]
    histSize = [8, 8, 8]
    ranges = [0, 256, 0, 256, 0, 256]
    # image is in BGR color
    bgr_split = cv2.split(roi_img)
    # Compute the image's BGR histogram
    hist = cv2.calcHist(bgr_split, channels, mask, histSize, ranges)
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    # Compute the histogram backprojection
    dst = cv2.calcBackProject([img], channels, hist, ranges, 1)
Can someone confirm whether this is a bug?

Mai Kar, Thu, 29 Dec 2016 10:46:12 -0600
http://answers.opencv.org/question/120583/

calcBackProject implementation
http://answers.opencv.org/question/111551/calbackproject-implement/

Hi everyone,
I want to implement my own calcBackProject function in C++. Can you give me some instructions on how to do this?
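For reference, here is a sketch of calcBackProject's core logic in Python for a single uint8 channel and a uniform histogram over [0, 256); it is a starting point for a C++ port, not OpenCV's actual implementation:

```python
import numpy as np

def back_project(channel, hist, scale=1.0):
    """Backproject one uint8 channel through a 1-D histogram (uniform bins)."""
    nbins = len(hist)
    bin_width = 256.0 / nbins
    out = np.empty(channel.shape, np.uint8)
    for i in range(channel.shape[0]):
        for j in range(channel.shape[1]):
            b = int(channel[i, j] / bin_width)          # which bin the pixel falls into
            out[i, j] = min(255, int(hist[b] * scale))  # saturate to the uchar range
    return out
```

The two nested loops translate directly to C++ with `cv::Mat::at<uchar>`; the multi-channel case just computes one bin index per channel and indexes a multi-dimensional histogram.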
Thank you!

greenworld, Tue, 08 Nov 2016 17:07:26 -0600
http://answers.opencv.org/question/111551/

What mathematically is back projection?
http://answers.opencv.org/question/59021/what-mathematically-is-back-projection/

I'm able to use OpenCV backprojection and I'm also able to implement it myself. However, I don't really understand why it works.
On the obvious side, it just builds a histogram of a target image, turns it into a probability distribution, and then applies that pdf to a new image. I believe this is done in the hope that the back-projected image will show, with high probability, only the regions matching the target.
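One way to write down the procedure just described (a sketch, not taken from the OpenCV docs): let H be the model histogram, N the total number of pixels counted into it, and b(.) the map from a pixel value to its histogram bin. Then

```latex
\mathrm{BackProj}(x, y) \;=\; \frac{H\big(b(I(x, y))\big)}{N} \;\approx\; P\big(I(x, y) \mid \text{model}\big)
```

i.e. each output pixel is the normalized frequency with which its value occurred in the model image.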
However, the docs page (http://docs.opencv.org/doc/tutorials/imgproc/histograms/back_projection/back_projection.html) says:
"In terms of statistics, the values stored in BackProjection represent the probability that a pixel in Test Image belongs to a skin area, based on the model histogram that we use."
I'm really struggling to interpret this, in particular the phrase "represent the probability". There must be some formula that specifies it, like:
prob("Pixel is from test image" | "New image pixel") = ?????
I just can't get my head around it though. Does anybody have any links or a good explanation of what the terms in the equation are?
Many thanks

ricor29, Thu, 02 Apr 2015 13:42:04 -0500
http://answers.opencv.org/question/59021/

calcBackProjectPatch not supported?
http://answers.opencv.org/question/10603/calcbackprojectpatch-not-supported/

I grabbed the code from the server and wanted to use calcBackProjectPatch, but it was commented out in the imgproc.hpp file. Is there a reason this function is commented out? Is there a version of OpenCV I need to grab so that I can use this function? Has this function been deprecated?

hamid, Tue, 02 Apr 2013 23:09:46 -0500
http://answers.opencv.org/question/10603/

How can I get a Back Projection matrix containing numbers over 255, or decimals, with calcBackProject?
http://answers.opencv.org/question/6546/how-can-i-get-back-projection-matrix-including-numbers-over-255-or-decimal-by-calcbackproject/

Hi, I need to use *calcBackProject* and then display the exact numbers.
    for ( int i = 0; i < backProj.rows; ++i )
    {
        for ( int j = 0; j < backProj.cols; ++j )
        {
            cout << int(backProj.at< uchar >( i, j )) << " ";
        }
        cout << endl;
    }
But its max value is 255 because of "uchar".
I tried declaring

    Mat backProj( slid_.rows, slid_.cols, CV_64FC1 );

and, after calling *calcBackProject*, displaying it with

    cout << backProj.at< double >( i, j );

but it does not work.
I really need the exact numbers, which are bigger than 255, and I don't want to apply *normalize* first. Can I get them from *calcBackProject*?
If I try to **scale it down**, can this Back Projection matrix contain decimals? Because I don't want any 0 values in the matrix.
Thank you.

Hongbo Miao, Sun, 27 Jan 2013 08:30:08 -0600
http://answers.opencv.org/question/6546/

How can I do back-projection?
http://answers.opencv.org/question/4862/how-can-i-do-back-projection/

I calibrated my single camera using the Camera Calibration Toolbox for Matlab (http://www.vision.caltech.edu/bouguetj/calib_doc/), and I already have the intrinsic and extrinsic parameters.
Now, given a pixel in one image, how can I back-project from the 2D pixel to a 3D ray? That is, how can I compute the equation of the ray connecting the camera center and the pixel on the image sensor plane? And how can I find the equation of the physical image sensor plane in the world reference frame?
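The pixel-to-ray step can be sketched as follows, assuming a pinhole model with known intrinsics K and extrinsics R, t (world-to-camera); all the numeric values below are hypothetical examples, not the asker's calibration:

```python
import numpy as np

# Hypothetical calibration values.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)        # rotation, world -> camera
t = np.zeros(3)      # translation, world -> camera

u, v = 400.0, 300.0  # the pixel to back-project

# Camera center in world coordinates: C = -R^T t
C = -R.T @ t

# Ray direction in world coordinates: d ~ R^T K^{-1} [u, v, 1]^T
d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
d /= np.linalg.norm(d)

# Every world point on the ray is X(s) = C + s * d, s >= 0. For example,
# the ray point at depth z = 5 along the optical axis (valid since R = I):
X = C + (5.0 / d[2]) * d
```

The physical sensor plane in world coordinates is the plane through the principal point at focal distance from C, with normal along the camera's optical axis, i.e. the third row of R.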
Thanks very much.

gslshbs, Mon, 03 Dec 2012 20:56:50 -0600
http://answers.opencv.org/question/4862/