Hey guys,
I want to project 3D object points into an image, and for that I use projectPoints. I think I've done it the right way, but somehow there's a small deviation. I want to draw a point at a given distance from the camera (which is mounted in a car) into the image.
The way I've done it: the intrinsic and extrinsic parameters relative to the rear axle are given. Because the coordinate axes are named differently in the car coordinate system and in the image coordinate system, I translate between them. For example, in the car the x-axis is depth, the y-axis is horizontal and the z-axis is vertical; in the image coordinate system the x-axis is horizontal, the y-axis is vertical and the z-axis is depth. Short example: angle_camera_depthZ = angle_car_depthX. I do the same with all the other angles and translations. After that I build the rotation matrices, multiply them and use Rodrigues to get the rotation vector. Then I build the translation vector and finally use projectPoints to get the pixel coordinates.
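To make clear what I mean by that axis translation, here is a minimal sketch of the remapping (carToCamera and the variable names are just for illustration, and I'm ignoring sign conventions here):

#include <opencv2/core.hpp>

// Car frame:    X = depth, Y = horizontal, Z = vertical
// Camera frame: X = horizontal, Y = vertical, Z = depth
cv::Point3d carToCamera(const cv::Point3d &carPoint)
{
    return cv::Point3d(carPoint.y,  // camera X <- car horizontal (Y)
                       carPoint.z,  // camera Y <- car vertical   (Z)
                       carPoint.x); // camera Z <- car depth      (X)
}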
For some reason there is a small difference between the calculated pixel and the real one: the projected point lands at roughly 10-11 metres in the image when it should mark the 9 metre distance.
For better understanding I'm adding the code as an attachment (there is also a stripped-down, self-contained sketch after the code). Info: first the setters get called, then msgRcv, which calls the transformCoord method.
Please help me, I'm about to go crazy.
Bryan
void cTransformation::transformCoord(){
    crtRotationMatr();
    crtRotationVec();
    crtTranslVec();
    crtCamMat();
    crtDistVec();
    calculate2D();
}
// As described above: translate between the differently named coordinate axes
void cTransformation::crtRotationMatr(){
    rotationX = (cv::Mat_<double>(3, 3) <<
        1, 0, 0,
        0, cos(this->pitch + this->pitchOffset), -sin(this->pitch + this->pitchOffset),
        0, sin(this->pitch + this->pitchOffset),  cos(this->pitch + this->pitchOffset));
    rotationY = (cv::Mat_<double>(3, 3) <<
         cos(this->yaw + this->yawOffset), 0, sin(this->yaw + this->yawOffset),
         0, 1, 0,
        -sin(this->yaw + this->yawOffset), 0, cos(this->yaw + this->yawOffset));
    rotationZ = (cv::Mat_<double>(3, 3) <<
        cos(this->roll + this->rollOffset), -sin(this->roll + this->rollOffset), 0,
        sin(this->roll + this->rollOffset),  cos(this->roll + this->rollOffset), 0,
        0, 0, 1);
    rotation = rotationX * rotationY * rotationZ;
}
void cTransformation::crtRotationVec(){
    // Creates a rotation vector from the rotation matrix
    cv::Rodrigues(rotation, rotationVec);
}
// Camera vector = (X horizontal, Y vertical, Z depth)
void cTransformation::crtTranslVec()
{
    // 1.87 m offset to the rear axle
    translVec = (cv::Mat_<double>(3, 1) << this->posX, this->posZ, this->posY - 1.87);
}
void cTransformation::crtCamMat()
{
    camMat = (cv::Mat_<double>(3, 3) <<
        this->focalLengthX, 0, this->mainPointX,
        0, this->focalLengthY, this->mainPointY,
        0, 0, 1);
}
void cTransformation::crtDistVec()
{
    // Distortion vector; OpenCV expects the order (k1, k2, p1, p2, k3)
    this->DistVec = (cv::Mat_<double>(5, 1) << this->DistoArr[0], this->DistoArr[1], this->DistoArr[2], 0, 0);
}
void cTransformation::calculate2D()
{
    cv::Mat worldAxes(1, 1, CV_64FC3);
    cv::Mat imagePoints(1, 1, CV_64FC2);

    // Point x1 metres in front of the camera (along the camera z-axis / depth)
    worldAxes.at<cv::Point3d>(0, 0) = cv::Point3d(0, 0, this->x1);
    cv::projectPoints(worldAxes, this->rotationVec, this->translVec, this->camMat, this->DistVec, imagePoints);
    cv::Point2d pxlTTC1 = imagePoints.at<cv::Point2d>(0, 0);
    this->pxlTTC1X = pxlTTC1.y; // PIXEL RESULT (vertical image coordinate)

    // Point 9 metres in front of the camera
    worldAxes.at<cv::Point3d>(0, 0) = cv::Point3d(0, 0, 9);
    cv::projectPoints(worldAxes, this->rotationVec, this->translVec, this->camMat, this->DistVec, imagePoints);
    cv::Point2d pxlTTC2 = imagePoints.at<cv::Point2d>(0, 0);
    this->pxlTTC2X = pxlTTC2.y; // PIXEL RESULT (vertical image coordinate)
}
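If it helps, here is a stripped-down, self-contained sketch of the same call chain with made-up example numbers (the intrinsics, angles and camera position below are NOT my real values, they are only there so the snippet compiles and runs):

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
    // Made-up example values, just to show the call chain end to end
    double pitch = 0.02, yaw = 0.0, roll = 0.0;        // camera angles [rad]
    double fx = 1400, fy = 1400, cx = 640, cy = 480;   // intrinsics [px]
    double tx = 0.0, ty = 1.3, tz = 1.87;              // translation [m]

    // Rotation matrix from the three angles, then Rodrigues to get the rotation vector
    cv::Mat rotX = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, std::cos(pitch), -std::sin(pitch), 0, std::sin(pitch), std::cos(pitch));
    cv::Mat rotY = (cv::Mat_<double>(3, 3) << std::cos(yaw), 0, std::sin(yaw), 0, 1, 0, -std::sin(yaw), 0, std::cos(yaw));
    cv::Mat rotZ = (cv::Mat_<double>(3, 3) << std::cos(roll), -std::sin(roll), 0, std::sin(roll), std::cos(roll), 0, 0, 0, 1);
    cv::Mat rvec;
    cv::Rodrigues(rotX * rotY * rotZ, rvec);

    cv::Mat tvec   = (cv::Mat_<double>(3, 1) << tx, ty, tz);
    cv::Mat camMat = (cv::Mat_<double>(3, 3) << fx, 0, cx, 0, fy, cy, 0, 0, 1);
    cv::Mat dist   = cv::Mat::zeros(5, 1, CV_64F);     // no distortion in this sketch

    // Point 9 m in front of the camera, projected into pixel coordinates
    std::vector<cv::Point3d> objectPoints{ cv::Point3d(0, 0, 9) };
    std::vector<cv::Point2d> imagePoints;
    cv::projectPoints(objectPoints, rvec, tvec, camMat, dist, imagePoints);

    std::cout << "9 m point projects to pixel " << imagePoints[0] << std::endl;
    return 0;
}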