OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright OpenCV foundation (http://www.opencv.org), 2012-2018. Thu, 30 May 2019 00:10:21 -0500

How to calculate the actual length of the black portion in image attached after getting actual contour of that
http://answers.opencv.org/question/213692/how-to-calculate-the-actual-length-of-the-black-portion-in-image-attached-after-getting-actual-contour-of-that/
I have the middle portion of a laser line captured by an angled camera. How can I calculate the actual length of the black portion in the attached image after getting its actual contour?
import cv2
import numpy as np
from skimage import morphology, color
import matplotlib.pyplot as plt
from scipy.spatial import distance as dist
from imutils import perspective
from imutils import contours
import argparse
import imutils
def midpoint(ptA, ptB):
    return ((ptA[0] + ptB[0]) * 0.5, (ptA[1] + ptB[1]) * 0.5)
img = cv2.imread('F:\\Pycode\\ADAP_ANALYZER\\kk.jpg')
lowerb = np.array([0, 0, 120])
upperb = np.array([200, 100, 255])
red_line = cv2.inRange(img, lowerb, upperb)
red_line = cv2.GaussianBlur(red_line, (5, 5), 0)
ret, red_line = cv2.threshold(red_line, 45, 255, cv2.THRESH_BINARY)
red_line = cv2.dilate(red_line, None, iterations=1)
kernel = np.ones((10,10),np.uint8)
red_line = cv2.erode(red_line, kernel, iterations=1)
cv2.imwrite("F:\\Pycode\\ADAP_ANALYZER\\yy.jpg",red_line)
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(red_line, connectivity=8)
sizes = stats[1:, -1]; nb_components = nb_components - 1
min_size = 1800
img2 = np.zeros((output.shape))
for i in range(0, nb_components):
    if sizes[i] >= min_size:
        img2[output == i + 1] = 255
cv2.imwrite("F:\\Pycode\\ADAP_ANALYZER\\xx.jpg",img2)
cv2.imshow('red', img2)
cv2.waitKey(0)
image = cv2.imread("F:\\Pycode\\ADAP_ANALYZER\\xx.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
ret, thresh = cv2.threshold(gray, 70, 255, cv2.THRESH_BINARY)
thresh = cv2.erode(thresh, None, iterations=1)
thresh = cv2.dilate(thresh, None, iterations=1)
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
c = max(cnts, key=cv2.contourArea)
for var in c:
    with open('c:\\your_file.txt', 'a') as f:
        f.write(str(var) + "\n")
    print(var)
for contour in cnts:
    perimeter = cv2.arcLength(contour, True)
    print(perimeter / 2)
# determine the most extreme points along the contour
extLeft = tuple(c[c[:, :, 0].argmin()][0])
extRight = tuple(c[c[:, :, 0].argmax()][0])
extTop = tuple(c[c[:, :, 1].argmin()][0])
extBot = tuple(c[c[:, :, 1].argmax()][0])
# (left-most point is red, right-most is green, top-most is blue, and bottom-most is teal)
cv2.drawContours(image, [c], -1, (0, 255, 255), 2)
cv2.imshow("Image", image)
cv2.waitKey(0)

Raghunath, Thu, 30 May 2019 00:10:21 -0500
http://answers.opencv.org/question/213692/

solvePnP and problems with perspective
http://answers.opencv.org/question/201261/solvepnp-and-problems-with-perspective/
I feed the 8 corners of a cube to solvePnP, but after reprojection, sometimes (not for all images) the nearest point becomes the farthest and vice versa.
![image description](/upfiles/15397170586303682.jpg)
Can anybody explain the reason and how to cope with it?
An example of correct case:
![image description](/upfiles/15397753139440974.jpg)
The code:
float box_side_x = 6; //Centimetres
float box_side_y = 6;
float box_side_z = 6;
vector<Point3f> boxPoints;
//Fill the array of corners in object coordinates. x to right(view from camera), y down, z from camera.
vector<Vec3d> boxCorners(8);
Vec3d boxCorner;
float x, y, z;
for (int h = 0; h < 2; ++h) {
    for (int j = 0; j < 2; ++j) {
        for (int i = 0; i < 2; ++i) {
            x = box_side_x * i;
            y = box_side_y * j;
            z = box_side_z * h;
            boxPoints.push_back(Point3f(x, y, z));     //For solvePnP()
            boxCorners[i + 2 * j + 4 * h] = {x, y, z}; //For calculating output
        }
    }
}
solvePnP(boxPoints, pointBuf, cameraMatrix, distCoeffs, rvec, tvec, false);
Mat rmat;
Rodrigues(rvec, rmat);
Mat Result;
float S = 1;
for (int h = 0; h < 2; ++h) {
    for (int j = 0; j < 2; ++j) {
        for (int i = 0; i < 2; ++i) {
            boxCorner = boxCorners[i + 2*j + 4*h];        //In centimetres
            Result = S * (rmat * Mat(boxCorner) + tvec);
            ObjPoints[i + 2 * j + 4 * h] = (Vec3f)Result; //In centimetres
        }
    }
}
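A quick sanity check for the transform above (editor's sketch, with invented pose values): every corner expressed in camera coordinates, R·X + t, must have a positive z component, and when the pose is flipped the "near" corners come out with larger z than the "far" ones, which is exactly the symptom described.

```python
import numpy as np

# Hypothetical pose: camera looking straight at the cube from 30 cm away.
R = np.eye(3)
t = np.array([0.0, 0.0, 30.0])

# The same 8 corners the C++ loop generates (6 cm cube, z outermost).
corners = np.array([[x, y, z]
                    for z in (0.0, 6.0)
                    for y in (0.0, 6.0)
                    for x in (0.0, 6.0)])

cam_pts = (R @ corners.T).T + t   # object frame -> camera frame

# All corners must lie in front of the camera (cheirality) ...
in_front = bool(np.all(cam_pts[:, 2] > 0))
# ... and the z=0 face must be nearer to the camera than the z=6 face.
near_z = cam_pts[:4, 2].mean()
far_z = cam_pts[4:, 2].mean()
```

When this check fails for a solvePnP result, the pose is the mirrored solution of the planar/near-planar ambiguity, and it can help to compare the reprojection error of the alternative solutions.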
In a different file, ObjPoints is renamed to BoxCorners:
vector<float> X(8), Y(8), Z(8);
for (int i = 0; i < 8; i++) {
BoxCorner = BoxCorners[i]; //In centimetres
X[i] = K * (BoxCorner[0] + Lx);
Y[i] = K * (BoxCorner[1] + Ly);
Z[i] = K * (BoxCorner[2] + Lz);
}
Scalar color = CV_RGB(0, 0, 200), back_color = CV_RGB(0, 0, 100);
int thickness = 2;
namedWindow("Projection", WINDOW_AUTOSIZE);
Point pt1 = Point(0, 0), pt2 = Point(0, 0);
pt1 = Point(X[7], Y[7]), pt2 = Point(X[6], Y[6]);
line(drawing, pt1, pt2, back_color, thickness);
pt1 = Point(X[7], Y[7]); pt2 = Point(X[5], Y[5]);
line(drawing, pt1, pt2, back_color, thickness);
pt1 = Point(X[7], Y[7]); pt2 = Point(X[3], Y[3]);
line(drawing, pt1, pt2, back_color, thickness);
pt1 = Point(X[0], Y[0]), pt2 = Point(X[1], Y[1]);
line(drawing, pt1, pt2, color, thickness);
pt1 = Point(X[0], Y[0]); pt2 = Point(X[2], Y[2]);
line(drawing, pt1, pt2, color, thickness);
pt1 = Point(X[0], Y[0]), pt2 = Point(X[4], Y[4]);
line(drawing, pt1, pt2, color, thickness);
pt1 = Point(X[1], Y[1]); pt2 = Point(X[3], Y[3]);
line(drawing, pt1, pt2, color, thickness);
pt1 = Point(X[3], Y[3]); pt2 = Point(X[2], Y[2]);
line(drawing, pt1, pt2, color, thickness);
pt1 = Point(X[2], Y[2]); pt2 = Point(X[6], Y[6]);
line(drawing, pt1, pt2, color, thickness);
pt1 = Point(X[6], Y[6]); pt2 = Point(X[4], Y[4]);
line(drawing, pt1, pt2, color, thickness);
pt1 = Point(X[4], Y[4]); pt2 = Point(X[5], Y[5]);
line(drawing, pt1, pt2, color, thickness);
pt1 = Point(X[5], Y[5]); pt2 = Point(X[1], Y[1]);
line(drawing, pt1, pt2, color, thickness);
imshow("Projection", drawing);

ya_ocv_user, Tue, 16 Oct 2018 14:13:19 -0500
http://answers.opencv.org/question/201261/

point outside image with getPerspectiveTransform
http://answers.opencv.org/question/196359/point-outside-image-with-getperspectivetransform/
Hi all,
I am trying to make a perspective transform. Everything works fine if I use 4 points and all 4 points are inside the picture, but I have a question: is it possible to give one or two points outside the image?
for example with this image :
![image description](/upfiles/15325117269331368.jpg)
I would like to do something like this when I pick the coordinate points:
![image description](/upfiles/1532511905957457.jpg)
As you can see in the above image, P1 and P4 are outside the image if I want to get a good quadrilateral. Is there a solution to this problem?
And I have another question: what is the maximum angle (between the camera and the plane; I'm not sure whether that's called the FOV) before OpenCV produces a bad image?

simon884, Wed, 25 Jul 2018 04:54:27 -0500
http://answers.opencv.org/question/196359/

derivation for perspective transformation matrix (Q)
http://answers.opencv.org/question/187734/derivation-for-perspective-transformation-matrix-q/
Hi,
Opencv uses a perpective transformation matrix `Q` to convert pixels with disparity value into the corresponding `[x, y, z]` using the [reprojectImageTo3D](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#reprojectimageto3d) function. After searching on this site for a bit I found out that the matrix Q is as follows:
Q = | 1   0    0      -Cx           |
    | 0   1    0      -Cy           |
    | 0   0    0       f            |
    | 0   0   -1/Tx   (Cx - Cx')/Tx |
I looked for equations to derive this but couldn't find any. I know about these matrix equations:
![image description](/upfiles/15220802288666572.png)
Is there a way to work back/invert this to get the matrix form of `Q` or am I missing something?
edit:
The projection matrices are as follows:

Pright = | F   skew   Cx   F*Tx |
         | 0   Fy     Cy   0    |
         | 0   0      1    0    |
and a similar one for Pleft without the Tx factor. I guess what I'm looking for is a derivation from the projection matrix `Pright` to the reprojection matrix `Q`. I would assume there's an inversion or something to get from one to the other.
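One way to convince yourself of Q's form without a full symbolic derivation is to check numerically that it inverts the stereo projection: for a pixel (x, y) with disparity d, dividing Q·[x, y, d, 1]ᵀ by its homogeneous coordinate must return the textbook depth Z = f·B/d when Cx = Cx′. A sketch with invented calibration values (the sign convention for Tx follows whatever the calibration produced; here the right camera's x-translation is taken as −B):

```python
import numpy as np

# Invented rectified-stereo calibration values.
f = 700.0                      # focal length in pixels
B = 0.10                       # baseline in metres
Cx, Cy = 320.0, 240.0          # left principal point
Cx2 = 320.0                    # right principal point Cx' (equal here)
Tx = -B                        # x-translation of the right camera

Q = np.array([[1.0, 0.0,  0.0,     -Cx],
              [0.0, 1.0,  0.0,     -Cy],
              [0.0, 0.0,  0.0,      f],
              [0.0, 0.0, -1.0/Tx,  (Cx - Cx2)/Tx]])

x, y, d = 400.0, 260.0, 35.0   # a pixel and its disparity
X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
X, Y, Z = X/W, Y/W, Z/W        # homogeneous divide, as reprojectImageTo3D does
```

Working this backwards row by row (W encodes 1/Z up to the baseline, the third row stores f, the first two rows de-center the pixel) is essentially the inversion of Pright that the question asks about.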
Thank you

2ros0, Mon, 26 Mar 2018 11:05:49 -0500
http://answers.opencv.org/question/187734/

bounding box around the detected Object in real time video
http://answers.opencv.org/question/186851/bounding-box-around-the-detected-object-in-real-time-video/
link: [followed the same code]
(https://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html)
During real-time video, the bounding box collapses to a point at the junction of the montage, as marked by the red shape, even though the matches are good.
![image description](/upfiles/15211527795478438.png)

komms, Thu, 15 Mar 2018 17:28:53 -0500
http://answers.opencv.org/question/186851/

Finding the real-world distance of object from pixel coordinates
http://answers.opencv.org/question/187033/finding-the-real-world-distance-of-object-from-pixel-coordinates/
I have a picture of a supermarket shelf, with its top-most and bottom-most rows detected (as blue lines).
I know the height (say 2.5 meters) of the shelf, and that it is a fixed value throughout the entire shelf (this also implies that the 2 blue lines are always parallel in the real world). The pixel coordinates of the blue lines are known.
I have marked a point (in green), with pixel coordinates only. This point will always be between the top-most and bottom-most rows.
![image description](/upfiles/15213686129903905.jpg)
In this case, given the above information, is there a way to calculate the distance (in meters) of the green point (in the real environment) to the top-most shelf?
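Not a full answer to the perspective problem, but if the shelf is viewed roughly head-on, a first-order estimate is plain linear interpolation between the two blue lines at the point's column; this ignores perspective foreshortening, which a vanishing-point or cross-ratio construction would correct. A sketch with invented pixel values:

```python
shelf_height_m = 2.5      # known real-world height of the shelf (given)

# Hypothetical pixel data: y of the top and bottom lines at the
# green point's x-column, and the green point's own y.
y_top, y_bottom = 120.0, 840.0
y_point = 300.0

frac = (y_point - y_top) / (y_bottom - y_top)   # 0 at top line, 1 at bottom
dist_from_top_m = frac * shelf_height_m         # approximate real distance
```

The stronger the camera tilt, the more this linear estimate drifts; the projectively correct version replaces `frac` with a cross-ratio using a fourth reference height on the shelf.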
I am thinking of using information such as the vanishing point, but I can't figure out how to do that.

charles1208, Sun, 18 Mar 2018 05:26:34 -0500
http://answers.opencv.org/question/187033/

Different results of code language conversion
http://answers.opencv.org/question/184784/different-results-of-code-language-conversion/
Hello everyone. I'm developing an OMR application for Java/Android. I found a sample for this, but it was in C++, so I started converting the code to Java. When I finished, the result was completely different from what the sample presented, and I don't know why.
**Expected result:**
![image description](https://i0.wp.com/1.bp.blogspot.com/-khEZ4vZp5mU/UUpfK9RfLLI/AAAAAAAAAWY/CRxwyy6DbgA/s1600/circles.jpg)
**The received result:**
![image description](/upfiles/15187393975112017.png)
**The C++ code:**
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>
#include <algorithm>
//g++ main.cpp -o main -I /usr/local/include/opencv -lopencv_core -lopencv_imgproc -lopencv_highgui
using namespace cv;
using namespace std;
cv::Point2f computeIntersect(cv::Vec4i a, cv::Vec4i b)
{
    int x1 = a[0], y1 = a[1], x2 = a[2], y2 = a[3];
    int x3 = b[0], y3 = b[1], x4 = b[2], y4 = b[3];
    if (float d = ((float)(x1-x2) * (y3-y4)) - ((y1-y2) * (x3-x4)))
    {
        cv::Point2f pt;
        pt.x = ((x1*y2 - y1*x2) * (x3-x4) - (x1-x2) * (x3*y4 - y3*x4)) / d;
        pt.y = ((x1*y2 - y1*x2) * (y3-y4) - (y1-y2) * (x3*y4 - y3*x4)) / d;
        return pt;
    }
    else
        return cv::Point2f(-1, -1);
}
bool comparator2(double a, double b){
    return a < b;
}
bool comparator3(Vec3f a, Vec3f b){
    return a[0] < b[0];
}
bool comparator(Point2f a, Point2f b){
    return a.x < b.x;
}
void sortCorners(std::vector<cv::Point2f>& corners, cv::Point2f center)
{
    std::vector<cv::Point2f> top, bot;
    for (int i = 0; i < corners.size(); i++)
    {
        if (corners[i].y < center.y)
            top.push_back(corners[i]);
        else
            bot.push_back(corners[i]);
    }
    sort(top.begin(), top.end(), comparator);
    sort(bot.begin(), bot.end(), comparator);
    cv::Point2f tl = top[0];
    cv::Point2f tr = top[top.size()-1];
    cv::Point2f bl = bot[0];
    cv::Point2f br = bot[bot.size()-1];
    corners.clear();
    corners.push_back(tl);
    corners.push_back(tr);
    corners.push_back(br);
    corners.push_back(bl);
}
int main(int argc, char* argv[]){
    Mat img = imread("example.jpg", 0);
    cv::Size size(3,3);
    cv::GaussianBlur(img, img, size, 0);
    adaptiveThreshold(img, img, 255, CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, 75, 10);
    cv::bitwise_not(img, img);
    cv::Mat img2;
    cvtColor(img, img2, CV_GRAY2RGB);
    cv::Mat img3;
    cvtColor(img, img3, CV_GRAY2RGB);
    vector<Vec4i> lines;
    HoughLinesP(img, lines, 1, CV_PI/180, 80, 400, 10);
    for (size_t i = 0; i < lines.size(); i++)
    {
        Vec4i l = lines[i];
        line(img2, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,0,255), 3, CV_AA);
    }
    imshow("example", img2);
    std::vector<cv::Point2f> corners;
    for (int i = 0; i < lines.size(); i++)
    {
        for (int j = i+1; j < lines.size(); j++)
        {
            cv::Point2f pt = computeIntersect(lines[i], lines[j]);
            if (pt.x >= 0 && pt.y >= 0 && pt.x < img.cols && pt.y < img.rows)
                corners.push_back(pt);
        }
    }
    // Get mass center
    cv::Point2f center(0,0);
    for (int i = 0; i < corners.size(); i++)
        center += corners[i];
    center *= (1. / corners.size());
    sortCorners(corners, center);
    Rect r = boundingRect(corners);
    cout << r << endl;
    cv::Mat quad = cv::Mat::zeros(r.height, r.width, CV_8UC3);
    // Corners of the destination image
    std::vector<cv::Point2f> quad_pts;
    quad_pts.push_back(cv::Point2f(0, 0));
    quad_pts.push_back(cv::Point2f(quad.cols, 0));
    quad_pts.push_back(cv::Point2f(quad.cols, quad.rows));
    quad_pts.push_back(cv::Point2f(0, quad.rows));
    // Get transformation matrix
    cv::Mat transmtx = cv::getPerspectiveTransform(corners, quad_pts);
    // Apply perspective transformation
    cv::warpPerspective(img3, quad, transmtx, quad.size());
    imshow("example2", quad);
    Mat cimg;
    cvtColor(quad, cimg, CV_BGR2GRAY);
    vector<Vec3f> circles;
    HoughCircles(cimg, circles, CV_HOUGH_GRADIENT, 1, img.rows/8, 100, 75, 0, 0);
    for (size_t i = 0; i < circles.size(); i++){
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        // circle center
        circle(quad, center, 3, Scalar(0,255,0), -1, 8, 0);
    }
    imshow("example4", quad);
    waitKey();
    double averR = 0;
    vector<double> row;
    vector<double> col;
    // Find rows and columns of circles for interpolation
    for (int i = 0; i < circles.size(); i++){
        bool found = false;
        int r = cvRound(circles[i][2]);
        averR += r;
        int x = cvRound(circles[i][0]);
        int y = cvRound(circles[i][1]);
        for (int j = 0; j < row.size(); j++){
            double y2 = row[j];
            if (y - r < y2 && y + r > y2){
                found = true;
                break;
            }
        }
        if (!found){
            row.push_back(y);
        }
        found = false;
        for (int j = 0; j < col.size(); j++){
            double x2 = col[j];
            if (x - r < x2 && x + r > x2){
                found = true;
                break;
            }
        }
        if (!found){
            col.push_back(x);
        }
    }
    averR /= circles.size();
    sort(row.begin(), row.end(), comparator2);
    sort(col.begin(), col.end(), comparator2);
    for (int i = 0; i < row.size(); i++){
        double max = 0;
        double y = row[i];
        int ind = -1;
        for (int j = 0; j < col.size(); j++){
            double x = col[j];
            Point c(x, y);
            // Use an actual circle if it exists
            for (int k = 0; k < circles.size(); k++){
                double x2 = circles[k][0];
                double y2 = circles[k][1];
                if (abs(y2-y) < averR && abs(x2-x) < averR){
                    x = x2;
                    y = y2;
                }
            }
            // circle outline
            circle(quad, c, averR, Scalar(0,0,255), 3, 8, 0);
            Rect rect(x-averR, y-averR, 2*averR, 2*averR);
            Mat submat = cimg(rect);
            double p = (double)countNonZero(submat) / (submat.size().width * submat.size().height);
            if (p >= 0.3 && p > max){
                max = p;
                ind = j;
            }
        }
        if (ind == -1) printf("%d:-", i+1);
        else printf("%d:%c", i+1, 'A'+ind);
        cout << endl;
    }
    imshow("example3", quad);
    waitKey();
    return 0;
}
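The computeIntersect helper is the standard determinant form of line-line intersection, and the Java port below reuses the same formula, so it is worth checking in isolation. A quick Python transcription of the identical arithmetic (editor's sketch):

```python
def compute_intersect(a, b):
    """Intersection of the infinite lines through (a[0],a[1])-(a[2],a[3])
    and (b[0],b[1])-(b[2],b[3]); (-1, -1) sentinel for parallel lines,
    exactly as in the C++/Java versions."""
    x1, y1, x2, y2 = a
    x3, y3, x4, y4 = b
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return (-1.0, -1.0)
    px = ((x1*y2 - y1*x2) * (x3 - x4) - (x1 - x2) * (x3*y4 - y3*x4)) / d
    py = ((x1*y2 - y1*x2) * (y3 - y4) - (y1 - y2) * (x3*y4 - y3*x4)) / d
    return (px, py)

# The two diagonals of the unit-ish square cross at (1, 1).
pt = compute_intersect((0, 0, 2, 2), (0, 2, 2, 0))
```

Since the formula is identical in both languages, a behavioral difference between the C++ and Java programs is more likely to come from the changed Hough parameters (minLineLength 400 vs 200, HoughCircles minDist rows/8 vs rows/16, threshold 75 vs 60) than from this helper.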
**My translated code in Java:**
private void scanImage(){
    Mat img = Imgcodecs.imread(mediaStorageDir().getPath() + "/" + "test2.jpg", Imgcodecs.CV_LOAD_IMAGE_GRAYSCALE);
    Log.e("[CANAIS]", String.valueOf(img.channels()));
    Size sz = new Size(3,3);
    Imgproc.GaussianBlur(img, img, sz, 0);
    Imgproc.adaptiveThreshold(img, img, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 75, 10);
    Core.bitwise_not(img, img);
    Mat img2 = new Mat();
    Imgproc.cvtColor(img, img2, Imgproc.COLOR_GRAY2RGB);
    Mat img3 = new Mat();
    Imgproc.cvtColor(img, img3, Imgproc.COLOR_GRAY2RGB);
    MatOfInt4 lines = new MatOfInt4();
    Log.e("[ORIGINAL IMAGE]", "" + img.total() + "||" + img.rows() + "||" + img.cols());
    Imgproc.HoughLinesP(img, lines, 1, Math.PI/180, 80, 200, 10);
    Log.e("[LINES IMAGE]", "" + lines.total() + "||" + lines.rows() + "||" + lines.cols());
    for(int i = 0; i < lines.total(); i++){
        MatOfInt4 l = new MatOfInt4();
        l.put(i, 0, lines.get(i, 0));
        Imgproc.line(img2, new Point(l.get(0, 0)), new Point(l.get(1, 0)), new Scalar(0,0,255), 3, Imgproc.LINE_AA, 0);
        /*Point pr = new Point();
        Point ps = new Point();
        pr.x = Double.valueOf(l.get(0, 0).toString());
        pr.y = Double.valueOf(l.get(1, 0).toString());
        ps.x = Double.valueOf(l.get(2, 0).toString());
        ps.y = Double.valueOf(l.get(3, 0).toString());
        Scalar scalar = new Scalar(0,0,255);
        Imgproc.line(img2, pr, ps, scalar, 3, Imgproc.LINE_AA, 0);*/
    }
    showImage(img2);
    MatOfInt4 mt4 = new MatOfInt4(lines);
    LinkedList<Point> corners = new LinkedList<>();
    for(int i = 0; i < lines.total(); i++){
        for(int x = i + 1; x < lines.total(); x++){
            MatOfInt4 gen = new MatOfInt4();
            MatOfInt4 gen1 = new MatOfInt4();
            gen1.put(x, 0, lines.get(x, 0));
            gen.put(i, 0, lines.get(i, 0));
            Log.e("[MAT4]", "" + gen.total() + "||" + gen1.total());
            Point pt = computeIntersect(lines.get(i, 0), lines.get(x, 0));
            Log.e("[PT]", "" + pt.x + "||" + pt.y);
            if(pt.x >= 0 && pt.y >= 0 && pt.x < img.cols() && pt.y < img.rows()){
                corners.addLast(pt);
            }
        }
    }
    Point center = new Point(0,0);
    //MatOfPoint mtp = new MatOfPoint(center);
    for(int i = 0; i < corners.size(); i++){
        center.x += corners.get(i).x;
        center.y += corners.get(i).y;
    }
    center.x *= 1. / corners.size();
    center.y *= 1. / corners.size();
    sortCorners(corners, center);
    MatOfPoint mtp = new MatOfPoint(Converters.vector_Point_to_Mat(corners));
    Rect r = Imgproc.boundingRect(mtp);
    Log.e("[RECT]", r.toString());
    Mat quad = Mat.zeros(r.height, r.width, CvType.CV_8UC3);
    LinkedList<Point> quad_pts = new LinkedList<>();
    quad_pts.addLast(new Point(0, 0));
    quad_pts.addLast(new Point(quad.cols(), 0));
    quad_pts.addLast(new Point(quad.cols(), quad.rows()));
    quad_pts.addLast(new Point(0, quad.rows()));
    Mat transmtx = Imgproc.getPerspectiveTransform(Converters.vector_Point2f_to_Mat(corners), Converters.vector_Point2f_to_Mat(quad_pts));
    Imgproc.warpPerspective(img3, quad, transmtx, quad.size());
    Mat cimg = new Mat();
    Imgproc.cvtColor(quad, cimg, Imgproc.COLOR_BGR2GRAY);
    Mat circles = new Mat();
    Log.e("[CIMG]", "" + cimg.total() + "||" + cimg.rows() + "||" + cimg.cols());
    Imgproc.HoughCircles(cimg, circles, Imgproc.CV_HOUGH_GRADIENT, 1, img.rows()/16, 100, 60, 0, 0);
    Log.e("[CIRCLES]", "" + circles.total() + "||" + circles.rows() + "||" + circles.cols());
    Log.e("[CIRCLES ARRAY]", "" + circles.get(0, 1));
    for(int i = 0; i < circles.total(); i++){
        Point center1 = new Point(Math.round(circles.get(0, i)[0]), Math.round(circles.get(0, i)[1]));
        Imgproc.circle(quad, center1, 3, new Scalar(0,255,0), -1, 8, 0);
    }
    int averR = 0;
    LinkedList<Double> row = new LinkedList<>();
    LinkedList<Double> col = new LinkedList<>();
    for(int i = 0; i < circles.total(); i++){
        boolean found = false;
        int rrr = (int) Math.round(circles.get(0, i)[2]);
        averR += rrr;
        int xx = (int) Math.round(circles.get(0, i)[0]);
        int yy = (int) Math.round(circles.get(0, i)[1]);
        for(int j = 0; j < row.size(); j++){
            double y2 = row.get(j);
            if(yy - rrr < y2 && yy + rrr > y2){
                found = true;
                break;
            }
        }
        if(!found){
            row.addLast(Double.valueOf(yy));
        }
        found = false;
        for(int j = 0; j < col.size(); j++){
            double x2 = col.get(j);
            if(xx - rrr < x2 && xx + rrr > x2){
                found = true;
                break;
            }
        }
        if(!found){
            col.addLast(Double.valueOf(xx));
        }
    }
    averR /= circles.total();
    Collections.sort(row, new Comparator<Double>() {
        @Override
        public int compare(Double o1, Double o2) {
            return Double.compare(o1, o2);
        }
    });
    Collections.sort(col, new Comparator<Double>() {
        @Override
        public int compare(Double o1, Double o2) {
            return Double.compare(o1, o2);
        }
    });
    for(int i = 0; i < row.size(); i++){
        double max = 0;
        double y = row.get(i);
        int ind = -1;
        for(int j = 0; j < col.size(); j++){
            double x = col.get(i);
            Point c = new Point(x, y);
            // Use an actual circle if it exists
            for(int k = 0; k < circles.total(); k++){
                double x2 = circles.get(0, k)[0];
                double y2 = circles.get(0, k)[1];
                if(abs(y2-y) < averR && abs(x2-x) < averR){
                    x = x2;
                    y = y2;
                }
            }
            // circle outline
            //Imgproc.circle(quad, c, );
            Imgproc.circle(quad, c, averR, new Scalar(0,0,255), 3, 8, 0);
            Rect rect = new Rect((int) Math.round(x - averR), (int) Math.round(y - averR), 2 * averR, 2 * averR);
            Mat submat = cimg.adjustROI(rect.width, rect.width, rect.height, rect.height);
            double p = (double) countNonZero(submat) / (submat.size().width * submat.size().height);
            if(p >= 0.3 && p > max){
                max = p;
                ind = j;
            }
        }
        if(ind == -1)
            Log.e("[N SEI]", "" + (i+1));
        else
            Log.e("[NSEI]", "" + (i+1) + "A" + ind);
    }
    showImage(quad);
}
private Point computeIntersect(double[] a, double[] b){
    Point generc = new Point();
    generc.x = -1;
    generc.y = -1;
    double x1 = a[0], y1 = a[1], x2 = a[2], y2 = a[3];
    double x3 = b[0], y3 = b[1], x4 = b[2], y4 = b[3];
    double d = ((x1 - x2) * (y3 - y4)) - ((y1 - y2) * (x3 - x4));
    if(d != 0){
        Point pt = new Point();
        pt.x = ((x1*y2 - y1*x2) * (x3-x4) - (x1-x2) * (x3*y4 - y3*x4)) / d;
        pt.y = ((x1*y2 - y1*x2) * (y3-y4) - (y1-y2) * (x3*y4 - y3*x4)) / d;
        return pt;
    }
    else
        return generc;
}
private void sortCorners(LinkedList<Point> corners, Point center){
    LinkedList<Point> top = new LinkedList<>();
    LinkedList<Point> bot = new LinkedList<>();
    Log.e("[CORNERS SIZE]", String.valueOf(corners.size()));
    for(int i = 0; i < corners.size(); i++){
        if(corners.get(i).y < center.y)
            top.addLast(corners.get(i));
        else
            bot.addLast(corners.get(i));
    }
    Collections.sort(top, new Comparator<Point>() {
        public int compare(Point a, Point b) {
            int xComp = Double.compare(a.x, b.x);
            if(xComp == 0)
                return Double.compare(a.y, b.y);
            else
                return xComp;
        }
    });
    Collections.sort(bot, new Comparator<Point>() {
        public int compare(Point a, Point b) {
            int xComp = Double.compare(a.x, b.x);
            if(xComp == 0)
                return Double.compare(a.y, b.y);
            else
                return xComp;
        }
    });
    /*Collections.sort(bot, new Comparator<Point>() {
        @Override
        public int compare(Point o1, Point o2) {
            return Collator.getInstance().compare(o1, o2);
        }
    });*/
    Log.e("[TOP SIZE]", String.valueOf(top.size()));
    Point t1 = top.get(0);
    Point tr = top.get(top.size() - 1);
    Point b1 = bot.get(0);
    Point br = bot.get(bot.size() - 1);
    corners.clear();
    corners.addLast(t1);
    corners.addLast(tr);
    corners.addLast(br);
    corners.addLast(b1);
}
The original tutorial, with the C++ code, can be found [here.](http://blog.ayoungprogrammer.com/2013/03/tutorial-creating-multiple-choice.html/)
If someone could tell me where the problem is, I would be grateful.
Thanks.

Murilo Pereira, Thu, 15 Feb 2018 18:14:05 -0600
http://answers.opencv.org/question/184784/

Birds eye view perspectivetransform from camera calibration
http://answers.opencv.org/question/183753/birds-eye-view-perspectivetransform-from-camera-calibration/
I am trying to get the bird's-eye-view perspective transform from the camera intrinsic matrix, extrinsic matrix and distortion coefficients.
I tried using the answer from [this][1] question.
The image used is the sample image left02.jpg from the OpenCV official GitHub repo.
[![The image to be perspectively undistorted (left02.jpg from the OpenCV sample images), i.e. get the bird's eye view of the image][2]][2]
I calibrated the camera and found the intrinsic and extrinsic matrices and the distortion coefficients.
I undistorted the image and found the pose, to check that the parameters are right.
[![Image after un-distortion and visualising pose][3]][3]
The equations I used to find the perspective transformation matrix are (Refer the above link):
`Hr = K * R.inv() * K.inv()` where R is the rotation matrix (from cv2.Rodrigues()) and K is obtained from cv2.getOptimalNewCameraMatrix()
     [ 1  0 |         ]
Ht = [ 0  1 | -K*C/Cz ]
     [ 0  0 |         ]

(the third column of Ht is the 3-vector -K*C/Cz)
where `C = -R.inv()*T`, T is the translation vector from `cv2.solvePnP()`,
and Cz is the 3rd component of the C vector.
The required transformation is: `H = Ht * Hr`
The code I used to construct the above equation is:
K = newcameramtx  # from cv2.getOptimalNewCameraMatrix()
ret, rvec, tvec = cv2.solvePnP(world_points, corners2, K, dist)
R, _ = cv2.Rodrigues(rvec)
_, R_inv = cv2.invert(R)
_, K_inv = cv2.invert(K)
Hr = np.matmul(K, np.matmul(R_inv, K_inv))
C = np.matmul(-R_inv, tvec)
Cz = C[2]
temp_vector = np.matmul(-K, C/Cz)
Ht = np.identity(3)
for i, val in enumerate(temp_vector):
    Ht[i][2] = val
homography = np.matmul(Ht, Hr)
warped_img = cv2.warpPerspective(img, homography, (img.shape[1], img.shape[0]))
# where img is the above undistorted image with visualized pose
The resulting warped image is not correct.
[![With homographic matrix = Ht*Hr][4]][4]
If I remove the translation from the homography by using the below code
homography = Hr.copy()
warped_img =cv2.warpPerspective(img,homography,(img.shape[1],img.shape[0]))
I am getting the following image
[![With homographic matrix = Hr][5]][5]
I think the above image shows that my rotational part is correct but my translation is wrong.
Since the translational matrix (Ht) is an augmented matrix, I am unsure whether my construction of the above matrix is correct.
I specifically want to figure out the bird's eye perspective transformation from the camera calibration.
So, how do I correct the above equations so that I get the perfect bird's eye view of the chessboard image?
Could anyone also please explain the math on how the above equations for Ht and Hr are derived? I don't have much exposure to Linear algebra so these equations are not very obvious to me.
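For the math question: Hr = K·R⁻¹·K⁻¹ is the pure-rotation (infinite) homography, and Ht re-centers the warp; both are special cases of the general plane-induced homography between two views of a plane with normal n at distance d. A numeric sketch (all values invented by the editor) checking that such an H maps plane points consistently; note the sign of the t-term depends on the direction convention chosen for t (here X2 = R·X1 + t):

```python
import numpy as np

# All calibration/pose values below are invented for illustration.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
th = np.deg2rad(10.0)
R = np.array([[ np.cos(th), 0.0, np.sin(th)],
              [        0.0, 1.0,        0.0],
              [-np.sin(th), 0.0, np.cos(th)]])   # small yaw between the views
t = np.array([0.2, 0.0, 0.1])                    # view-2 translation
n = np.array([0.0, 0.0, 1.0])                    # plane normal in view-1 frame
d = 5.0                                          # plane distance in view 1

# Plane-induced homography; with X2 = R @ X1 + t the t-term enters with "+"
# (the textbook form K (R - t n^T / d) K^-1 assumes the opposite direction).
H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

X1 = np.array([1.0, 0.5, 5.0])                   # a point on the plane (n.X1 = d)
p1 = K @ X1; p1 /= p1[2]                         # its image in view 1
X2 = R @ X1 + t
p2_direct = K @ X2; p2_direct /= p2_direct[2]    # its image in view 2
p2_from_H = H @ p1; p2_from_H /= p2_from_H[2]    # same pixel mapped by H
```

Deriving a bird's eye view then amounts to choosing R as the rotation that makes the virtual camera look straight down at the chessboard plane and letting the t/d term supply the translation part that Ht approximates.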
**UPDATE:**
homography = np.matmul(Ht,Hr)
warped_img =cv2.warpPerspective(img,homography,(img.shape[1],img.shape[0]),flags=cv2.WARP_INVERSE_MAP)
The cv2.WARP_INVERSE_MAP flag gave me a different result:
[![][6]][6]
Still not the result I am looking for!
[1]: https://stackoverflow.com/questions/23275877/opencv-get-perspective-matrix-from-translation-rotation
[2]: https://i.stack.imgur.com/vUmcl.png
[3]: https://i.stack.imgur.com/mNLBy.png
[4]: https://i.stack.imgur.com/PnO3L.png
[5]: https://i.stack.imgur.com/bLlYD.png
[6]: https://i.stack.imgur.com/Y4GqK.png

abhijit, Thu, 01 Feb 2018 23:15:21 -0600
http://answers.opencv.org/question/183753/

Can spherical ball get distorted to ellipse on image plane in a pin hole camera model
http://answers.opencv.org/question/182684/can-spherical-ball-get-distorted-to-ellipse-on-image-plane-in-a-pin-hole-camera-model/
I have a ball captured in an image. The ball is detected as a circle when it is at the center of the image; when it moves to the corner of the image, it is detected as an ellipse.
We use a fisheye/wide-angle lens and we are not correcting the image; we do the circle and ellipse detection on the original image.
I want to know if this is a phenomenon of **perspective distortion**, of the **fisheye/lens distortion**, or of something else.
I did some reading around it and things are confusing me.
https://books.google.com.sg/books?id=SFgfgFrdB_oC&pg=PA35&lpg=PA35&dq=sphere+becomes+eliptic+camera&source=bl&ots=dRUkJecnyW&sig=YItExPSKQOGa0TkNRFO332Mh-iU&hl=en&sa=X&ved=0ahUKEwjsluWRwODYAhWMwI8KHXZeD6YQ6AEIOzAG#v=onepage&q=sphere%20becomes%20eliptic%20camera&f=false
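For what it's worth, this stretching is a genuine pinhole (perspective) effect that exists even with an ideal, distortion-free lens: the cone of rays tangent to a sphere cuts the image plane in a conic, which is a circle only when the sphere sits on the optical axis. A numeric sketch (unit focal length, invented sphere positions) showing that the off-axis silhouette is wider radially than tangentially:

```python
import numpy as np

rng = np.random.default_rng(0)

def projected_extents(center, radius, n=20000):
    """Project random sphere-surface points through a unit-focal pinhole
    and return the (x, y) extents of the projected blob."""
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    pts = center + radius * u             # points on the sphere surface
    proj = pts[:, :2] / pts[:, 2:3]       # pinhole projection: (X/Z, Y/Z)
    return np.ptp(proj[:, 0]), np.ptp(proj[:, 1])

# On-axis sphere: the silhouette is (numerically) a circle.
w0, h0 = projected_extents(np.array([0.0, 0.0, 10.0]), 1.0)
# Same sphere pushed far off-axis in x: the silhouette stretches along x.
w1, h1 = projected_extents(np.array([8.0, 0.0, 10.0]), 1.0)
```

A fisheye lens adds its own radial distortion on top of this, so with an uncorrected 180-degree lens both effects are present at once in the corners.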
Any help or knowledge would be appreciated.
Sriram Kumar, Wed, 17 Jan 2018 21:35:15 -0600
http://answers.opencv.org/question/182684/

OpenCV function to transform one image taken from a camera to the image from another camera's viewpoint
http://answers.opencv.org/question/176400/opencv-function-to-transform-one-image-taken-from-a-camera-to-the-image-from-another-cameras-viewpoint/
Is it possible to transform a 2D image from one camera (camera1) to the image from another camera's (camera2, a virtual camera) viewpoint, given that I know both cameras' poses? I looked up some techniques, including homography transformation, but they don't seem to help.
Here is the information I have and don't have.
- Known: camera1 pose, camera2 pose (= the transformation matrix between the two cameras), camera parameters for both cameras
- Unknown: object pose
If the object's 3D pose in the original image were known, the conversion would be easy. However, in my setting you can't assume you get the 3D pose (depth) information.
I believe there is a way, because it's already used in car navigation (www.mdpi.com/1424-8220/12/4/4431/pdf), but I'm curious about the general way to realize this transformation and how to do this type of image processing in OpenCV.

kangaroo, Mon, 16 Oct 2017 00:15:02 -0500
http://answers.opencv.org/question/176400/

I've a bitmap image which contain both transparent area and non-transparent area. I've to find out non-transparent co-ordinates and change its perspective in android.
http://answers.opencv.org/question/161085/ive-a-bitmap-image-which-contain-both-transparent-area-and-non-transparent-area-ive-to-find-out-non-transparent-co-ordinates-and-change-its/
![image description](/upfiles/1497855809554155.png)
**I have to find the angle of the image and change its perspective according to that angle.**
**I find the non-transparent coordinate pixels of the above image using the following:**
Bitmap CropBitmapTransparency(Bitmap sourceBitmap)
{
    sourceBitmap.setHasAlpha(true);
    startWidth = sourceBitmap.getWidth();   // int minX
    startHeight = sourceBitmap.getHeight(); // int minY
    endWidth = -1;  // int maxX
    endHeight = -1; // int maxY
    for(int y = 0; y < sourceBitmap.getHeight(); y++)
    {
        for(int x = 0; x < sourceBitmap.getWidth(); x++)
        {
            int alpha = ((sourceBitmap.getPixel(x, y) & 0xff000000) >> 24);
            if(alpha != 0) // pixel is not 100% transparent
            {
                // Log.d("Alpha", alpha + " ");
                if(x < startWidth)
                    startWidth = x;
                if(x > endWidth)
                    endWidth = x;
                if(y < startHeight)
                    startHeight = y;
                if(y > endHeight)
                    endHeight = y;
            }
        }
    }
    if((endWidth < startWidth) || (endHeight < startHeight))
        return null; // Bitmap is entirely transparent
    Log.w("Startwidth = ", startWidth + " ");
    Log.w("StartHeight = ", startHeight + " ");
    Log.w("Endwidth = ", endWidth + " ");
    Log.w("End Height = ", endHeight + " ");
    // angle = getAngle(startWidth, startHeight, endWidth, endHeight);
    // crop bitmap to non-transparent area and return:
    return Bitmap.createBitmap(sourceBitmap, startWidth, startHeight, (endWidth - startWidth) + 1, (endHeight - startHeight) + 1);
}
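The per-pixel alpha scan above computes an axis-aligned bounding box of the opaque region; for clarity about what is being computed, here is the same idea in a few NumPy lines (editor's sketch on a synthetic RGBA array):

```python
import numpy as np

# Synthetic RGBA image: fully transparent except a 4x6 opaque patch.
img = np.zeros((20, 30, 4), dtype=np.uint8)
img[5:9, 10:16, 3] = 255           # alpha channel of the opaque region

ys, xs = np.nonzero(img[:, :, 3])  # coordinates of non-transparent pixels
start_x, end_x = xs.min(), xs.max()
start_y, end_y = ys.min(), ys.max()
crop = img[start_y:end_y + 1, start_x:end_x + 1]
```

Note that a bounding box alone cannot recover a rotation angle: a tilted rectangle and an upright one can have the same bounding box, which is why a minimum-area rectangle (cv::minAreaRect on the opaque pixels' contour) is the usual tool for the angle step.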
**Based on the non-transparent part of the image, I change the perspective using the following code:**
public Bitmap perspectiveBitmap(Bitmap sourceBitmap)
{
    Bitmap temp = CropBitmapTransparency(sourceBitmap);
    Bitmap resultBitmap = Bitmap.createBitmap(sourceBitmap.getWidth(), sourceBitmap.getHeight(), Bitmap.Config.ARGB_8888);
    Mat inputMat = new Mat();
    Mat outputMat = new Mat();
    Mat outputMat1 = new Mat();
    Utils.bitmapToMat(sourceBitmap, inputMat);
    Mat src_mat = new Mat(4, 1, CvType.CV_32FC2);
    Mat dest_mat = new Mat(4, 1, CvType.CV_32FC2);
    src_mat.put(0, 0, startWidth, startHeight, endWidth, startHeight, startWidth, endHeight, endWidth, endHeight);
    dest_mat.put(0, 0, 0.0, 0.0, endWidth, 0.0, 0.0, endHeight, endWidth, endHeight);
    Mat perspectiveTransform = Imgproc.getPerspectiveTransform(src_mat, dest_mat);
    Mat dst = inputMat.clone();
    Size size = new Size(sourceBitmap.getWidth(), sourceBitmap.getHeight());
    Imgproc.warpPerspective(inputMat, dst, perspectiveTransform, size, Imgproc.INTER_CUBIC);
    Log.e("1=", "" + inputMat.cols() + " " + inputMat.rows());
    Log.e("outmat..", " " + outputMat.cols() + " " + outputMat.rows());
    Utils.matToBitmap(dst, resultBitmap);
    //Utils.matToBitmap(tmp, b);
    return resultBitmap;
}
**But I'm not getting the perspective change.**
**Please help me as soon as possible.
Thanks in advance.**
Aadhi, Mon, 19 Jun 2017 02:09:02 -0500
http://answers.opencv.org/question/161085/

Fisheye distortion correction using Omnidir namespace - change of perspective
http://answers.opencv.org/question/143514/fisheye-distortion-correction-using-omnidir-namespace-change-of-perspective/
I have been able to correctly calibrate a 180-degree FOV (fisheye) camera, i.e. I have been able to extract the distortion and camera matrices using the omnidirectional model: I used the `omnidir::calibrate()` function to extract the matrices and the `omnidir::undistortImage()` function to undistort the images.
Everything works well, but I would like to change the angle at which the undistort is done, i.e. the angle at which the camera is viewing when the image is undistorted. To get a better idea of what I am after please check this link: paulbourke.net/dome/fish2/ (4th image down - looking right by 40 degrees) something similar to that. I have tried changing the `cx` value in the `Knew` matrix but that doesn't have the desired effect.
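One avenue worth trying (a numpy sketch, not a verified recipe): `omnidir::undistortImage` accepts an optional rotation argument `R`, and rotating the rectification by the desired viewing angle, rather than shifting `cx` in `Knew`, is what changes the look-at direction. Building a 40-degree yaw rotation:

```python
import numpy as np

def yaw_rotation(deg):
    """Rotation about the camera's y axis (look left/right) by `deg` degrees."""
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

# Pass this as the R argument of omnidir::undistortImage, e.g. in Python:
# cv2.omnidir.undistortImage(img, K, D, xi, flags, Knew=Knew, new_size=size, R=R)
R = yaw_rotation(40.0)
```

The exact sign and axis convention may need flipping depending on how the omnidir model defines its camera frame.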
Any help would be greatly appreciated.JackGold1Mon, 24 Apr 2017 10:30:31 -0500http://answers.opencv.org/question/143514/Aruco: Z-Axis flipping perspectivehttp://answers.opencv.org/question/123375/aruco-z-axis-flipping-perspective/ I am trying to do some simple AR with Aruco tags and I am having trouble determining the correct perspective.
The problem occurs when it is unclear which side of the tag is closer to the camera.
For example, in the image, the two codes are on the same plane pointing the same direction, but the z-axes are pointed in different directions (The code on the bottom is showing the correct orientation):
**Image is posted in comments, I don't have high enough karma for links yet.**
I am not doing anything fancy, just a simple `detectMarkers` with `drawAxis` call for the results.
What can be done to ensure I don't get these false perspective reads?MrZanderFri, 20 Jan 2017 17:52:21 -0600http://answers.opencv.org/question/123375/Scale-rotation-skew invariant template matchinghttp://answers.opencv.org/question/99438/scale-rotation-skew-invariant-template-matching/ Hi all,
I need to find the target in an image. The target can assume any orientation, and can be scaled. Moreover, the image can be acquired from different camera angles.
Do you know if a template matching algorithm that is rotation, scale and skew invariant already exists in OpenCV?
In industrial robotics applications, those kinds of algorithms already exist, and they work pretty well.
Otherwise, what approach could work?
An example of the images that I would use is attached to the post.
Thanks everybody for the help.
Nicola
[C:\fakepath\img13.png](/upfiles/14701731106391941.png)
[C:\fakepath\target.png](/upfiles/14701731258379517.png)
P.S.: I'm sorry to have edited this question several times, I know it is a little confusing.nico_laudaTue, 02 Aug 2016 16:27:08 -0500http://answers.opencv.org/question/99438/Contour perspective warphttp://answers.opencv.org/question/91725/contour-perspective-warp/I want to automate a process where text is recognized from a specific card type:
![image description](/upfiles/14596150505143125.jpg)
I will first explain what I did so far. First I converted the image to a grayscale one:
![image description](/upfiles/1459615252838436.jpg)
Then I applied `Canny`:
![image description](/upfiles/14596152822749715.jpg)
By the result of this, it was easy to find the largest contour:
![image description](/upfiles/1459615331991507.jpg)
So far so good. The problem is: I have a shape here which fits in a rectangle. However, the perspective is not top-down, which means it is not possible to use `boundingRect`. Basically I'm in need of an algorithm to find the orientation of this contour.
I have no idea how to do this. Is there something I am missing? How would you do this?
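One common approach (a hedged sketch, not the poster's code): approximate the largest contour with four points, e.g. via `cv2.approxPolyDP`, then order those corners consistently, after which `cv2.getPerspectiveTransform` handles the rest regardless of the card's orientation. The ordering trick in plain numpy:

```python
import numpy as np

def order_corners(pts):
    """Order 4 quad corners as top-left, top-right, bottom-right, bottom-left.

    Classic trick: TL minimizes x+y, BR maximizes it; TR minimizes y-x, BL maximizes it.
    """
    pts = np.asarray(pts, dtype=np.float64).reshape(4, 2)
    s = pts.sum(axis=1)                 # x + y per corner
    d = np.diff(pts, axis=1).ravel()    # y - x per corner
    return np.array([pts[np.argmin(s)],   # top-left
                     pts[np.argmin(d)],   # top-right
                     pts[np.argmax(s)],   # bottom-right
                     pts[np.argmax(d)]])  # bottom-left

# Corners in arbitrary order, e.g. from cv2.approxPolyDP(...).reshape(4, 2):
corners = order_corners([[10, 200], [300, 10], [320, 220], [5, 15]])
```

With the corners ordered, the destination quad is just an axis-aligned rectangle of the card's aspect ratio.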
**Edit:**
Note that the above photo is not a good example, let me add another test case:
![image description](/upfiles/1459675889263133.jpg)TimKSat, 02 Apr 2016 11:46:22 -0500http://answers.opencv.org/question/91725/Calibration of images to obtain a top-view for points that lie on a same planehttp://answers.opencv.org/question/87297/calibration-of-images-to-obtain-a-top-view-for-points-that-lie-on-a-same-plane/ Same question:
http://stackoverflow.com/questions/34461821/calibration-of-images-to-obtain-a-top-view-for-points-that-lie-on-a-same-plane
The code below is Matlab, but I am open to any solution using OpenCV.
**Calibration:**
I have calibrated the camera using this vision toolbox in Matlab. I used checkerboard images to do so. After calibration I get the following:
>> cameraParams
cameraParams =
cameraParameters with properties:
Camera Intrinsics
IntrinsicMatrix: [3x3 double]
FocalLength: [1.0446e+03 1.0428e+03]
PrincipalPoint: [604.1474 359.7477]
Skew: 3.5436
Lens Distortion
RadialDistortion: [0.0397 0.0798 -0.2034]
TangentialDistortion: [-0.0063 -0.0165]
Camera Extrinsics
RotationMatrices: [3x3x18 double]
TranslationVectors: [18x3 double]
Accuracy of Estimation
MeanReprojectionError: 0.1269
ReprojectionErrors: [48x2x18 double]
ReprojectedPoints: [48x2x18 double]
Calibration Settings
NumPatterns: 18
WorldPoints: [48x2 double]
WorldUnits: 'mm'
EstimateSkew: 1
NumRadialDistortionCoefficients: 3
EstimateTangentialDistortion: 1
Extrinsic:
[![enter image description here][1]][1]
**Aim:**
I have recorded trajectories of some objects in motion using this camera. Each object corresponds to a single point in a frame. Now, I want to project the points such that I get a top-view.
**Data sample:**
K>> [xcor_i,ycor_i ]
ans =
-101.7000 -77.4040
-102.4200 -77.4040
-103.6600 -77.4040
-103.9300 -76.6720
-103.9900 -76.5130
-104.0000 -76.4780
-105.0800 -76.4710
-106.0400 -77.5660
-106.2500 -77.8050
-106.2900 -77.8570
-106.3000 -77.8680
-106.3000 -77.8710
-107.7500 -78.9680
-108.0600 -79.2070
-108.1200 -79.2590
-109.9500 -80.3680
-111.4200 -80.6090
-112.8200 -81.7590
-113.8500 -82.3750
-115.1500 -83.2410
-116.1500 -83.4290
-116.3700 -83.8360
-117.5000 -84.2910
-117.7400 -84.3890
-118.8800 -84.7770
-119.8400 -85.2270
-121.1400 -85.3250
-123.2200 -84.9800
-125.4700 -85.2710
-127.0400 -85.7000
-128.8200 -85.7930
-130.6500 -85.8130
-132.4900 -85.8180
-134.3300 -86.5500
-136.1700 -87.0760
-137.6500 -86.0920
-138.6900 -86.9760
-140.3600 -87.9000
-142.1600 -88.4660
-144.7200 -89.3210
Code (Ref: http://stackoverflow.com/a/27260492/3646408):
load('C:\Users\sony\Dropbox\calibration_images\matlab_calibration_data.mat');
R = cameraParams.RotationMatrices(:,:,1);
t = cameraParams.TranslationVectors(1, :);
% combine rotation and translation into one matrix:
R(3, :) = t;
%Now compute the homography between the checkerboard and the image plane:
H = R * cameraParams.IntrinsicMatrix;
%Transform the image using the inverse of the homography:
I=imread('C:\Users\sony\Dropbox\calibration_images\Images\exp_0.jpg');
J = imwarp(I, projective2d(inv(H)));
imshow(J);
How can I do the same for points?
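For the points themselves the homography can be applied directly: promote each (x, y) to homogeneous coordinates, multiply by H, and divide by the third component. A numpy sketch (note the Matlab code above uses the transposed, row-vector convention, so there the matrix would need transposing accordingly):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) point array (column-vector convention)."""
    pts = np.asarray(pts, dtype=np.float64)
    homog = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T   # rows are H @ [x, y, 1]
    return homog[:, :2] / homog[:, 2:3]                      # divide by w

# Sanity check: the identity homography leaves points untouched.
pts = np.array([[-101.7, -77.404], [-140.36, -87.9]])
same = warp_points(np.eye(3), pts)
```

With the H from the code above (inverted, to match the direction `imwarp` uses), running the trajectory points through this gives their top-view positions without warping any image.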
Edit 1:
Quoting text from OReilly Learning OpenCV Pg 412:
"Once we have the homography matrix and the height parameter set as we wish, we could
then remove the chessboard and drive the cart around, making a bird’s-eye view video
of the path..."
This what I essentially wish to achieve.
***New info:***
1. Note all these points (in the data sample) are on the same plane.
2. Also, this plane is perpendicular to one of the checkerboard images used for calibration. For that image (below), I know the height of the origin of the checkerboard from the ground (193.040 cm).
[![Figure 1][2]][2]
Edit 2:
Two questions where I am stuck right now:
1. Do I need to calibrate all the images, or just the image shown above in which the board is perpendicular to the plane?
2. Using the code given in http://stackoverflow.com/a/27260492/3646408 I can calibrate and get the bird's eye view if the points lie on the same plane. But how do I do it if they are perpendicular?
[1]: http://i.stack.imgur.com/Nef2L.jpg
[2]: http://i.stack.imgur.com/ZhnAG.jpgabhigenie92Wed, 10 Feb 2016 13:03:37 -0600http://answers.opencv.org/question/87297/Inverse Perspective Mapping (newbie)http://answers.opencv.org/question/77262/inverse-perspective-mapping-newbie/Hello all,
I have a picture containing two geometric shapes on the same plane. The picture is taken from some unknown point of view. One shape is a square of known size, the other is unknown. Is it possible to revert the perspective transform, and measure the size of the unknown shape? I am new to OpenCV, and I've only understood that this has to do with Inverse Perspective Mapping. What is the sequence of function calls?
![image description](/upfiles/14484690065764264.jpg)
Thank you
I've tried both affine and perspective transforms, but the result is not what I want. The arch is still distorted, even if the square is not.
ORIGINAL
![image description](/upfiles/144856172568003.jpg)
AFFINE
![image description](/upfiles/14485617768689178.jpg)
PERSPECTIVE
![image description](/upfiles/1448561791961170.jpg)
Any idea?
johnfulgorWed, 25 Nov 2015 02:44:21 -0600http://answers.opencv.org/question/77262/Perspective transformation between 3D pointshttp://answers.opencv.org/question/73555/perspective-transformation-between-3d-points/ Hello,
I need to find a transformation from one camera to another (stereo), so I think I need a perspective transformation in 3D. Am I right? How can I find such a transformation?
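A note on terminology, plus a sketch: between two calibrated cameras the 3D-to-3D map is not a perspective transformation but a rigid one, X2 = R X1 + t. Given matched 3D points it can be recovered with the Kabsch/Procrustes method (OpenCV's `estimateAffine3D` is a related route); a minimal numpy version, with an illustrative made-up example:

```python
import numpy as np

def rigid_transform_3d(A, B):
    """Find R, t with B = R @ A + t for matched 3xN point sets (Kabsch algorithm)."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Example: recover a known 30-degree rotation and a translation from 5 matched points.
ang = np.radians(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
A = np.array([[0.0, 1.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, 1.0, 0.0, 3.0],
              [0.0, 0.0, 0.0, 1.0, 1.0]])
R_est, t_est = rigid_transform_3d(A, R_true @ A + t_true)
```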
Thanks a lot,
OlegOleg_kSun, 18 Oct 2015 19:41:01 -0500http://answers.opencv.org/question/73555/perspective transformation with given camera posehttp://answers.opencv.org/question/72020/perspective-transformation-with-given-camera-pose/Hi everyone!
I'm trying to create a program that I will use to perform some tests.
In this program a 2D image is displayed in 3D space in the cv::viz window, so the user can change the camera (viewer) position and orientation.
![image description](/upfiles/1443709792833003.jpg)
After that, the program stores the camera pose and takes a snapshot of the current view (without the coordinate axes):
![image description](/upfiles/14437098062513117.jpg)
And here is the goal:
I have the **snapshot** (perspective view of an undetermined plane or part of the plane), the **camera pose** (especially its orientation) and the **camera parameters**. Using these given values I would like to **perform a perspective transformation to compute an orthographic view of this given image** (or its visible part).
I can get the camera object and compute its projection matrix:
camera.computeProjectionMatrix(projectionMatrix);
and then decompose projection matrix:
decomposeProjectionMatrix(subProjMatrix,cameraMatrix, rotMatrix, transVect, rotMatX, rotMatY, rotMatZ);
And what should I do next?
Note that I can't use chessboard corners because the image is undetermined (it may be any image), and I can't use the corner points of the image, because the user can zoom and translate the camera, so it is possible that no image corner point will be visible...
Thanks for any help in advance!pawsThu, 01 Oct 2015 09:41:43 -0500http://answers.opencv.org/question/72020/PNG transparent channel lost after perspective transformation.http://answers.opencv.org/question/71641/png-transparent-channel-lost-after-perspective-transformation/ I am applying a perspective transform to an RGBA PNG image (depth: 8, channels: 4, background: transparent), but the transparent background turns into black after the transformation. What's wrong with the following code?
I am a new guy to OpenCV, thank you!
void cv_test(void) {
    int i, t, ret;
    CvPoint2D32f srcQuad[4];
    CvPoint2D32f dstQuad[4];
    CvMat *warp_matrix;
    IplImage *src, *dst;
    src = cvLoadImage("c:\\aqxj.png", CV_LOAD_IMAGE_UNCHANGED);
    if (!src) {
        fprintf(stderr, "src:%p", src);
        return;
    }
    dst = cvCreateImage(cvSize(src->width, src->height), 8, 4);
    //dst->origin = src->origin;
    ret = cvSaveImage("/home/metalwood/s.png", dst);
    cvZero(dst);
    fprintf(stderr, "dst width:%d, height:%d.\n", dst->width, dst->height);
    srcQuad[0].x = 0; /* Top left. */
    srcQuad[0].y = 0;
    srcQuad[1].x = src->width - 1; /* Top right. */
    srcQuad[1].y = 0;
    srcQuad[2].x = 0; /* Bottom left. */
    srcQuad[2].y = src->height - 1;
    srcQuad[3].x = src->width - 1; /* Bottom right. */
    srcQuad[3].y = src->height - 1;
    dstQuad[0].x = src->width * 0.05;
    dstQuad[0].y = src->height * 0.33;
    dstQuad[1].x = src->width * 0.9;
    dstQuad[1].y = src->height * 0.25;
    dstQuad[2].x = src->width * 0.2;
    dstQuad[2].y = src->height * 0.7;
    dstQuad[3].x = src->width * 0.8;
    dstQuad[3].y = src->height * 0.9;
    warp_matrix = cvCreateMat(3, 3, CV_32FC1);
    cvGetPerspectiveTransform(srcQuad, dstQuad, warp_matrix);
    cvWarpPerspective(src, dst, warp_matrix);
    cvNamedWindow("Perspective_Warp", 1);
    cvShowImage("Perspective_Warp", dst);
    cvWaitKey();
    // Save to disk
    ret = cvSaveImage("c:\\save.png", src);
    fprintf(stderr, "ret:%d.\n", ret);
    cvReleaseImage(&dst);
    cvReleaseMat(&warp_matrix);
}
I'm just starting my little project in OpenCV and I need your help :)
I would like to calculate the rotation and translation of the camera based on two views of the same planar, square object.
I have already found functions such as: getPerspectiveTransform, decomposeEssentialMat, decomposeHomographyMat. Plenty of tools, but I'm not sure which of them to use in my case.
I have a square object of known real-world dimensions [meters]. After simple image processing I can extract pixel values of the vertices and the center of the square.
Now I would like to calculate the relative rotation and translation of the camera that produced the second of the two images:<br>
"Reference view" and "View #n"<br>
(please see below).
Any suggestions will be appreciated :)
1. Reference view:<br>
![image description](/upfiles/1438854857209.png)
<br>(center of the object is on the optical axis of camera, the camera-object distance is known)
2. View #1:<br>
![image description](/upfiles/14388548769288926.png)
3. View #2:<br>
![image description](/upfiles/14388548834324958.png)
4. View #3:<br>
![image description](/upfiles/1438854889587757.png)
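One possible route (a sketch under the assumption that the intrinsics K are known): compute the homography H between the square's metric corner coordinates (world plane Z = 0) and its pixels, then read the pose off the columns, since H ~ K [r1 r2 t]. In practice `cv2.solvePnP` on the four corners is the robust way; the column recipe in numpy:

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover R, t from a metric-plane-to-image homography, assuming H ~ K [r1 r2 t]."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])          # scale fixed by requiring ||r1|| = 1
    r1, r2, t = lam * A[:, 0], lam * A[:, 1], lam * A[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)                  # re-orthonormalize against noise
    return U @ Vt, t

# Illustrative round trip with made-up intrinsics and pose:
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
a = np.radians(20.0)
R_true = np.array([[np.cos(a), 0.0, np.sin(a)],
                   [0.0,       1.0, 0.0      ],
                   [-np.sin(a), 0.0, np.cos(a)]])
t_true = np.array([0.1, -0.2, 2.0])
H = K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])
R_est, t_est = pose_from_homography(H, K)
```

The relative motion between the reference view and view #n is then R_n R_ref^T and the corresponding translation difference.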
AliceThu, 06 Aug 2015 05:40:19 -0500http://answers.opencv.org/question/68023/Perspective Transform using Chessboardhttp://answers.opencv.org/question/62956/perspective-transform-using-chessboard/ Hey,
I need some help with this problem:
I have a camera that takes a picture of something on a horizontal plane at a specific angle.
That creates a perspective distortion of this "something", and I would like to get the picture as if I were looking down at it from the top.
What I did already and one thing that I don't know how to do:
1. I placed a chessboard there.
2. I find the corners of the chessboard.
3. ???
4. cvGetPerspectiveTransform
5. cvWarpPerspective
My problem is point 3.
I have to find out Source and Destination Points which depend on the corners of the chessboard and the width of the picture that was taken, because they show the transformation.
Source is easy: (0,0), (Width, 0), (0,Height) and (Width,Height), because I want the whole picture to be transformed.
Destination however is difficult for me. I don't know how to find those points.
I want the whole picture (not just the part with the chessboard) to be transformed in a single step.
Like in the picture below.
I would appreciate any help.
Greetings and my thanks in advance,
Phanta
![image description](/upfiles/14331120304698347.png)PhantaSun, 31 May 2015 07:20:48 -0500http://answers.opencv.org/question/62956/Problem with perspectivehttp://answers.opencv.org/question/55298/problem-with-perspective/Hello everybody,
I'm trying to apply a perspective transform to an image but it doesn't work like I would like. Since code and pictures are easier to understand than a long speech (and because my English is quite poor), here is the code (in C):
#include <stdio.h>
#include "opencv/highgui.h"
#include "opencv/cv.h"

int main() {
    int flags;
    CvScalar fillval;
    CvPoint2D32f P[4], Q[4]; /* 4 corners each; note P[3]/Q[3] are written below */
    IplImage *image, *resultat;
    CvMat* matrice = cvCreateMat(3, 3, CV_32FC1);
    cvNamedWindow("Entrée", CV_WINDOW_AUTOSIZE);
    cvNamedWindow("Résultat", CV_WINDOW_AUTOSIZE);
    image = cvLoadImage("C:\\Users\\Julien\\Pictures\\tennis-3d.jpg", CV_LOAD_IMAGE_COLOR);
    resultat = cvCloneImage(image);
    cvShowImage("Entrée", image);
    cvShowImage("Résultat", resultat);
    P[0].x = 266; P[0].y = 101;
    P[1].x = 534; P[1].y = 101;
    P[2].x = 667; P[2].y = 394;
    P[3].x = 133; P[3].y = 393;
    Q[0].x = 0;             Q[0].y = 0;
    Q[1].x = image->width;  Q[1].y = 0;
    Q[2].x = image->width;  Q[2].y = image->height;
    Q[3].x = 0;             Q[3].y = image->height;
    cvGetPerspectiveTransform(P, Q, matrice);
    cvWarpPerspective(image, resultat, matrice, flags = CV_WARP_FILL_OUTLIERS, fillval = cvScalarAll(0));
    cvShowImage("Résultat", resultat);
    cvWaitKey(1000000);
    return 0;
}
And the result :
![image description](/upfiles/1424004391362331.png)
The goal is to have a "top view" of the court
Thank you for your help
SparrowSparrowSun, 15 Feb 2015 06:50:22 -0600http://answers.opencv.org/question/55298/How to discover my coordinate (0,0) after to apply warpperspectivehttp://answers.opencv.org/question/54823/how-to-discover-my-coordinate-00-after-to-apply-warpperspective/Hi everyone,
How can I find where the coordinate (0,0) ends up after applying warpPerspective to my image?
I want to map back to my original image.
Help me!
Thank you!Diego MoreiraMon, 09 Feb 2015 20:50:25 -0600http://answers.opencv.org/question/54823/Meaning of perspective transformation matrix (Q) valueshttp://answers.opencv.org/question/38629/meaning-of-perspective-transformation-matrix-q-values/I did stereo camera calibration with the stereo_calib.cpp sample and got the intrinsics.yml and extrinsics.yml files, which also contain the Q matrix. What is the meaning of its values? [This answer](http://answers.opencv.org/question/4379/from-3d-point-cloud-to-disparity-map/?answer=4433#post-id-4433) shows the following items in the matrix: Cx, Cy, f, a and b. I guess f is the focal length but I am not sure about the other values.
Also, I need following data for my further coding:
- focal length in pixels
- principal point (u-coordinate) in pixels
- principal point (v-coordinate) in pixels
- baseline in meters
These are the parameters needed for libviso2 visual odometry.
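For reference, OpenCV's `stereoRectify` documents Q as [[1, 0, 0, -cx], [0, 1, 0, -cy], [0, 0, 0, f], [0, 0, -1/Tx, (cx - cx')/Tx]], so the libviso2-style parameters can be read off directly (the sign of Tx depends on which camera is the reference, hence the abs below; the sample Q values here are made up):

```python
import numpy as np

def stereo_params_from_q(Q):
    """Read f (px), principal point (px) and baseline (calibration units) off Q."""
    cx, cy = -Q[0, 3], -Q[1, 3]
    f = Q[2, 3]
    baseline = abs(1.0 / Q[3, 2])   # Q[3, 2] = -1/Tx, |Tx| = baseline
    return f, (cx, cy), baseline

# Illustrative Q with a 0.12 m baseline:
Q = np.array([[1.0, 0.0, 0.0, -604.1],
              [0.0, 1.0, 0.0, -359.7],
              [0.0, 0.0, 0.0, 1044.6],
              [0.0, 0.0, -1.0 / 0.12, 0.0]])
f, (cx, cy), baseline = stereo_params_from_q(Q)
```

If the calibration was done in millimeters, divide the baseline by 1000 to get meters for libviso2.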
Thanks for help in advance!KozuchSun, 03 Aug 2014 13:23:53 -0500http://answers.opencv.org/question/38629/How to build lookup table for inverse perspective mapping?http://answers.opencv.org/question/11379/how-to-build-lookup-table-for-inverse-perspective-mapping/Hi, I want to build a lookup table to use with inverse perspective mapping.
Instead of applying warpPerspective with the transform matrix on each frame, I want to use a lookup table (LUT).
Right now I use the following code to generate the transformation matrix
m = new Mat(3, 3, CvType.CV_32FC1);
m = Imgproc.getPerspectiveTransform(src, dst);
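One way to build such a LUT (a numpy sketch; the variable names are made up): evaluate the inverse homography once at every output pixel and store the source coordinates. This is exactly the map format `cv2.remap` consumes, so each frame then costs only a gather instead of a full `warpPerspective`:

```python
import numpy as np

def build_warp_lut(H, out_w, out_h):
    """For each output pixel, precompute the source pixel it samples (inverse mapping)."""
    Hinv = np.linalg.inv(H)
    xs, ys = np.meshgrid(np.arange(out_w), np.arange(out_h))
    homog = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = Hinv @ homog
    src = (src[:2] / src[2]).T.reshape(out_h, out_w, 2)
    return src.astype(np.float32)

# Per frame this reduces the warp to a single remap, e.g.:
# warped = cv2.remap(frame, lut, None, cv2.INTER_LINEAR)
lut = build_warp_lut(np.eye(3), 4, 3)
```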
In onCameraFrame() I apply the warpPerspective() function. How can I build a LUT knowing some input pixels on the original frame and their correspondences in the output frame, and knowing the transformation matrix? razvanThu, 11 Apr 2013 13:18:18 -0500http://answers.opencv.org/question/11379/Inverse Perspective Mapping with Known Rotation and Translationhttp://answers.opencv.org/question/33267/inverse-perspective-mapping-with-known-rotation-and-translation/Hi,
I need to obtain a new view of an image from a desired point of view (a general case of bird's eye view).
Imagine we change the camera's position with a **known rotation and transformation**. what would be the new image of the same scene?
We may put it in another way: how can we compute **homography matrix** by having the rotation and translation matrices?
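For a plane with unit normal n satisfying nᵀX = d in the first camera's frame, the induced homography is the standard H = K (R - t nᵀ / d) K⁻¹. A numpy sketch (the intrinsics K here are illustrative):

```python
import numpy as np

def homography_from_pose(K, R, t, n, d):
    """Homography induced by the plane n.X = d between two views: H = K (R - t n^T / d) K^-1."""
    t = np.asarray(t, dtype=np.float64).reshape(3, 1)
    n = np.asarray(n, dtype=np.float64).reshape(1, 3)
    return K @ (R - (t @ n) / d) @ np.linalg.inv(K)

# With no camera motion the plane-induced homography is the identity:
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
H = homography_from_pose(K, np.eye(3), [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 2.0)
```

The resulting H can be passed straight to `warpPerspective` to render the plane from the new viewpoint.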
I really appreciate any help!gozariTue, 13 May 2014 11:25:31 -0500http://answers.opencv.org/question/33267/non-linear transformationhttp://answers.opencv.org/question/30716/non-linear-transformation/
Hello, I'm still relatively new to OpenCV. I am trying to produce what is turning out to be a rather difficult transformation.
I have a 4096x4096 black and white image. The area of interest is a circular ring which has a width of 756 pixels, making it occupy the majority of the image.
What I need to do is cut through the ring at a certain radius and stretch the image straight, stretching everything inside the ring. The end result would be an image 756 pixels high and
2*PI*(outer radius of the ring) pixels wide.
Can this be done with an affine transformation? Or do I need to use something else?
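An affine transform cannot do this; it is a polar unwrap (modern OpenCV has `cv2.warpPolar`, and `cv2.remap` with precomputed maps is the general route). A nearest-neighbour numpy sketch of the mapping itself, with made-up geometry:

```python
import numpy as np

def unwrap_ring(img, cx, cy, r_in, r_out):
    """Unwrap an annulus into a rectangle: rows = radius (r_out at top), cols = angle."""
    height = int(round(r_out - r_in))
    width = int(round(2 * np.pi * r_out))           # circumference at the outer radius
    rows, cols = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    r = r_out - rows                                # radius for each output row
    theta = 2 * np.pi * cols / width                # angle for each output column
    x = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    y = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[y, x]                                # nearest-neighbour sampling

ring = np.zeros((200, 200), dtype=np.uint8)
flat = unwrap_ring(ring, 100, 100, 40, 90)
```

For the 4096x4096 case the same mapping with r_out - r_in = 756 gives the 756-pixel-high strip; bilinear interpolation (via `cv2.remap` on the float maps) would look better than this nearest-neighbour version.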
Thanks,
Danmreff555Thu, 27 Mar 2014 07:18:57 -0500http://answers.opencv.org/question/30716/Perspective Compensation when Measuring Distances on an Image with a known reference distancehttp://answers.opencv.org/question/29815/perspective-compensation-when-measuring-distances-on-an-image-with-a-known-reference-distance/I am trying to calculate the real world distance of an arbitrary line drawn along the field of view from a one point perspective, single camera setup.
I will have a known distance running parallel. How can I find the compensation factor I need to apply to the pixel length of the measuring line?
![][1]
Do I have to take into account the distance from the vanishing point, as the length per pixel increases the nearer you get to the vanishing point? Do I need to use the gradient of the known line to give me a rate of change?
I have been reading up on cross-ratios but I don't understand whether they are applicable in this scenario, as I seem to be measuring in the opposite direction with respect to the vanishing point.
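The cross-ratio is indeed the right tool here, because it is invariant under any projective map of the line, regardless of which side of the reference the measurement lies. A tiny pure-Python check of that invariance (the fractional map below is arbitrary):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given as 1-D coordinates."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

# A projective map of the line, x -> (2x + 1) / (x + 3), preserves the cross-ratio.
f = lambda x: (2 * x + 1) / (x + 3)
pts = (0.0, 1.0, 2.0, 4.0)
before = cross_ratio(*pts)
after = cross_ratio(*(f(x) for x in pts))
```

Taking the vanishing point, the two endpoints of the known reference, and one endpoint of the measuring line as the four points lets the unknown real-world position be solved from the pixel cross-ratio.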
[1]: http://i.stack.imgur.com/Wjp30.jpgDigbySwiftTue, 11 Mar 2014 12:24:13 -0500http://answers.opencv.org/question/29815/Math behind getPerspectiveTransformhttp://answers.opencv.org/question/26910/math-behind-getperspectivetransform/Where can I read about the math behind that function?
If I have 4 points, source and destination. How I can find the perspective matrix?Spas HristovTue, 21 Jan 2014 10:09:20 -0600http://answers.opencv.org/question/26910/