OpenCV Q&A Forum
Copyright OpenCV foundation, 2012-2018

Aruco: Z-Axis flipping perspective
http://answers.opencv.org/question/123375/aruco-z-axis-flipping-perspective/

I am trying to do some simple AR with Aruco tags and I am having trouble determining the correct perspective.
The problem occurs when it is unclear which side of the tag is closer to the camera.
For example, in the image, the two tags lie on the same plane and point in the same direction, but their z-axes point in different directions (the tag on the bottom shows the correct orientation):
**Image is posted in comments, I don't have high enough karma for links yet.**
I am not doing anything fancy, just a simple `detectMarkers` with `drawAxis` call for the results.
What can be done to ensure I don't get these false perspective reads?

MrZander, Fri, 20 Jan 2017 17:52:21 -0600
http://answers.opencv.org/question/123375/

Calculate surface normals from depth image using neighboring pixels cross product
http://answers.opencv.org/question/82453/calculate-surface-normals-from-depth-image-using-neighboring-pixels-cross-product/

As the title says, I want to calculate the surface normals of a given depth image by using the cross product of neighboring pixels. However, I do not really understand the procedure. Does anyone have any experience with this?

Let's say that we have the following image:
![image description](/upfiles/14520928396087819.png)
what are the steps to follow?
---------------------------
**Update:**
I am trying to translate the following pseudocode from [this answer](http://stackoverflow.com/a/34644939/1476932) to opencv.
    dzdx = (z(x+1, y) - z(x-1, y)) / 2.0
    dzdy = (z(x, y+1) - z(x, y-1)) / 2.0
    direction = (-dzdx, -dzdy, 1.0)
    magnitude = sqrt(direction.x**2 + direction.y**2 + direction.z**2)
    normal = direction / magnitude
where z(x,y) is my depth image. However, the output of the following does not seem correct to me:
    for (int x = 0; x < depth.rows; ++x)
    {
        for (int y = 0; y < depth.cols; ++y)
        {
            double dzdx = (depth.at<double>(x + 1, y) - depth.at<double>(x - 1, y)) / 2.0;
            double dzdy = (depth.at<double>(x, y + 1) - depth.at<double>(x, y - 1)) / 2.0;
            Vec3d d = (dzdx, dzdy, 1.0);
            Vec3d n = normalize(d);
        }
    }
-------------------------------------------
**Update2:**
OK, I think I am close:

    Mat normals(depth.size(), CV_64FC3);
    for (int x = 1; x < depth.rows - 1; ++x)
    {
        for (int y = 1; y < depth.cols - 1; ++y)
        {
            double dzdx = (depth.at<double>(x + 1, y) - depth.at<double>(x - 1, y)) / 2.0;
            double dzdy = (depth.at<double>(x, y + 1) - depth.at<double>(x, y - 1)) / 2.0;
            Vec3d d(-dzdx, -dzdy, 1.0);
            Vec3d n = normalize(d);
            normals.at<Vec3d>(x, y) = n;
        }
    }
which gives me the following image:
![image description](/upfiles/14521779877493766.png)
----------------------------------------------------------
**Update 3:**
Following @berak's approach:

    depth.convertTo(depth, CV_64FC1); // not sure why the conversion to a 64-bit image is needed; my input is 32-bit
    Mat nor(depth.size(), CV_64FC3);
    for (int x = 1; x < depth.cols - 1; ++x)
    {
        for (int y = 1; y < depth.rows - 1; ++y)
        {
            Vec3d t(x, y - 1, depth.at<double>(y - 1, x));
            Vec3d l(x - 1, y, depth.at<double>(y, x - 1));
            Vec3d c(x, y, depth.at<double>(y, x));
            Vec3d d = (l - c).cross(t - c);
            Vec3d n = normalize(d);
            nor.at<Vec3d>(y, x) = n;
        }
    }
    imshow("normals", nor);
I get this one:
![image description](/upfiles/14521783318279546.png)
which seems quite OK. However, if I use a 32-bit image instead of a 64-bit one, the image is corrupted:
![image description](/upfiles/14521784289883825.png)

theodore, Wed, 06 Jan 2016 09:07:32 -0600
http://answers.opencv.org/question/82453/