OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018. Tue, 25 Aug 2020 13:16:56 -0500

result coordinate type
http://answers.opencv.org/question/234253/result-coordinate-type/
I have been calculating intersection points between parametric line equations and planes in OpenCV, using raw image point data for the math below. What units of measurement are my results in? I haven't run cv::Rodrigues, a transformation, or any other OpenCV function on this data set yet, otherwise the question would be settled.
I've also tried "converting" to U, V coordinates but am unsure if it's correct.
    double numer = plane.get_numer(d, n, Ro); // origin point - p - see diagram
    double denom = plane.get_denom(n, Rd);
    double t = numer / denom;
    Vec3d IP = plane.get_IP(Ro, Rd, t); // IP = 'intersection point'
    double U = CV_FX * IP[0] + CV_CX;
    double V = CV_FY * IP[1] + CV_CY;
    std::cout << "U,V = " << U << "," << V << std::endl;

superfly, Tue, 25 Aug 2020 13:16:56 -0500
http://answers.opencv.org/question/234253/

Dot product on Vec2s returning incorrect results
http://answers.opencv.org/question/228744/dot-product-on-vec2s-returning-incorrect-results/
The dot product seems to be returning incorrect results for the case below (using Vec2s), and I'm not sure if it's a bug or I'm using it incorrectly:
    Vec2s a(-495, 584), b(101, 8);
    auto c = a.dot(b);
    cout << c << "\n";
    cout << a[0]*b[0] + a[1]*b[1];
In the output below, the first result is clearly incorrect:

    20213
    -45323

h4k1m, Wed, 08 Apr 2020 18:06:14 -0500
http://answers.opencv.org/question/228744/

How to use the output of cv2.fitLine()
http://answers.opencv.org/question/188415/how-to-use-the-output-of-cv2fitline/
I am basically trying to fit two lines to two sets of points (each has 100 points) and find the normal distance between the lines. I am using cv2.fitLine() to fit the lines in **python**.
From the [documentation](https://docs.opencv.org/3.4.1/d3/dc0/group__imgproc__shape.html#gaf849da1fdafa67ee84b1e9a23b93f91f), fitLine returns a vector containing (vx, vy, x0, y0), where (vx, vy) is a normalized vector collinear to the line and (x0, y0) is a point on the line. I am confused about how to get the equation of the line from these values so that I can find the normal distance between the two lines.
abhijit, Mon, 02 Apr 2018 23:14:07 -0500
http://answers.opencv.org/question/188415/

C++ OPENCV std::out_of_range error
http://answers.opencv.org/question/145383/c-opencv-stdout_of_range-error/
Hello everyone,
I have a project with OpenCV/C++ that does rectangle detection and warp transformation. So far I've had good results, but I have some problems with reliability.
First of all, I found contours with findContours, then found the corner points with approxPolyDP. I sorted each corner by amplitude and angle, determined the minimum and maximum amplitudes and the minimum and maximum angles, and so found the 4 corners of the image. Then I applied the warp transformation, so I had a proper rectangle like this:
Here are the original image, the thresholded image, and the warped result that I took from my webcam:
![OPENCV](https://i.stack.imgur.com/6aUN9.jpg)
**But when I put my hand in front of the pattern, I get a "std::out_of_range" error.**
> terminate called after throwing an instance of 'std::out_of_range'
>   what(): vector::_M_range_check: __n (which is 3) >= this->size() (which is 3)
Well, what could the problem be? Is it something related to adding data to vectors?
Here is the full code for this process:
    Mat gray, tresh, blur;
    medianBlur(input, blur, 5);
    cvtColor(blur, gray, COLOR_BGR2GRAY);
    threshold(gray, tresh, min, max, THRESH_BINARY);
    vector<vector<Point> > contours;
    vector<vector<Point> > approxPol;
    vector<Vec4i> hierarchy;
    findContours(tresh, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    int size = contours.size();
    vector<vector<vector<double> > > corns(size, vector<vector<double> >(4, vector<double>(4)));
    Mat out;
    double amplitude, angle;
    if (size == 16) {
        for (int k = 0; k < size; k++) {
            approxPolyDP(contours[k], contours[k], 10, true);
            // NOTE: contours[k] is not guaranteed to have 4 points after
            // approxPolyDP; .at(i) throws std::out_of_range when it has fewer.
            for (int i = 0; i < 4; i++) {
                amplitude = sqrt(pow(contours[k].at(i).x, 2) + pow(contours[k].at(i).y, 2));
                angle = atan2(contours[k].at(i).y, contours[k].at(i).x) * 180 / CV_PI;
                string coord_x = intToString(contours[k].at(i).x);
                string coord_y = intToString(contours[k].at(i).y);
                corns[k][i][0] = contours[k].at(i).x;
                corns[k][i][1] = contours[k].at(i).y;
                corns[k][i][2] = amplitude;
                corns[k][i][3] = angle;
            }
        }
        contours.clear();
        imshow("circles", input);
        int count = corns.size();
        if (count == 16) {
            vector<vector<double> > dots;
            dots.resize(64, vector<double>(4));
            int index = 0;
            for (int i = 0; i < count; i++) {
                for (int k = 0; k < 4; k++) {
                    for (int t = 0; t < 4; t++) {
                        dots[index][t] = corns[i][k][t];
                    }
                    index++;
                }
            }
            Point2f p[4];
            std::sort(dots.begin(), dots.end(), &Amplitude);
            int last = dots.size() - 1, first = 0;
            // min amplitude
            p[0].x = dots[first][0];
            p[0].y = dots[first][1];
            // max amplitude
            p[3].x = dots[last][0];
            p[3].y = dots[last][1];
            std::sort(dots.begin(), dots.end(), &Angles);
            // min angle
            p[1].x = dots[first][0];
            p[1].y = dots[first][1];
            // max angle
            p[2].x = dots[last][0];
            p[2].y = dots[last][1];
            dots.clear();
            Mat getted = input.clone();
            float sizes = 500;
            Mat transform_matrix;
            Point2f d[4] = { { 0,0 }, { sizes,0 }, { 0,sizes }, { sizes,sizes } };
            transform_matrix = getPerspectiveTransform(p, d);
            cv::warpPerspective(getted, out, transform_matrix, Size(sizes, sizes));
            cv::imshow("Wrapped", out);
            cout << size << endl << count << endl;
        }
        else {
            contours.clear();
            corns.clear();
            out.release();
        }
    } else {
        contours.clear();
        out.release();
    }
zatende, Tue, 02 May 2017 13:39:21 -0500
http://answers.opencv.org/question/145383/

Removing certain pixels with threshold
http://answers.opencv.org/question/67622/removing-certain-pixels-with-threshold/
For example, there are 9 regions of interest. I need to remove those that are bigger than 1000 and smaller than 65. What is the best way to do it? I was using this method, but it is not working.
![image description](/upfiles/14384567083460387.png)

active92, Sat, 01 Aug 2015 14:17:00 -0500
http://answers.opencv.org/question/67622/

vector of vectors
http://answers.opencv.org/question/30898/vector-of-vectors/
Hello,
I am looping through multiple files to read the images inside each one. I have the file paths and the number of images in each file:
    // here is part of the code:
    vector<Mat> Images;
    for (size_t i = 0; i < ImagesCounts.size(); i++)
    {
        Mat img = imread(filename, 1);
        Images.push_back(img);
    }
With this code I read the images of the first file, so Images[0] = img1, Images[1] = img2, and so on.
I am now in the second loop (which has a different filename and ImagesCounts), and I need to save the first vector of Images in a global vector. That means:

    FilesVector(vect[1], vect[2], ..., vect[N]); // N is the number of files

where vect[1] should hold the images of the first file, vect[2] the images of the second file, and so on.
So how can I define the global vector and push the images I have from the first loop into vect[1]?
I tried this code before going to the second loop, but it didn't work:
    vector<vector<Mat>> FilesVector;
    FilesVector[0] = Images;

yamanneameh, Mon, 31 Mar 2014 05:11:07 -0500
http://answers.opencv.org/question/30898/

mutiply scalar to a vector opencv
http://answers.opencv.org/question/24412/mutiply-scalar-to-a-vector-opencv/
I want to multiply each element of a Vec3 by 2 in OpenCV, the way Matlab does it with ".*". I searched a lot but didn't find any command for this. Is there such a command in OpenCV or not?
Thanks in advance for any help.

zulfiqar, Sat, 23 Nov 2013 06:19:19 -0600
http://answers.opencv.org/question/24412/

vector<vector<Point> > contours 'Unable to read memory error'
http://answers.opencv.org/question/14979/vectorvectorpoint-contours-unable-to-read-memory-error/
Hi,
I'm developing on Windows 7, 64-bit.
OpenCV version: 2.4.3
[C:\fakepath\contour.png](/upfiles/13709391653207315.png)
After running the findContours function, some of the vector points in my contours show an 'unable to read memory' error.
The code below is a sample obtained from the OpenCV samples.
Please help?
    Mat src; Mat src_gray;
    int thresh = 100;
    int max_thresh = 255;
    RNG rng(12345);

    /// Function header
    void thresh_callback(int, void*);

    int _tmain(int argc, _TCHAR* argv[])
    {
        /// Load source image and convert it to gray
        src = imread("D://Download - Software//1//1.2.840.113619.2.135.2025.1616282.5237.1221046207.476.jpg", 1);
        /// Convert image to gray and blur it
        cvtColor(src, src_gray, CV_BGR2GRAY);
        blur(src_gray, src_gray, Size(3,3));
        /// Create Window
        const char* source_window = "Source";
        namedWindow(source_window, CV_WINDOW_AUTOSIZE);
        imshow(source_window, src);
        createTrackbar(" Threshold:", "Source", &thresh, max_thresh, thresh_callback);
        thresh_callback(0, 0);
        waitKey(0);
        return 0;
    }

    /** @function thresh_callback */
    void thresh_callback(int, void*)
    {
        Mat src_copy = src.clone();
        Mat threshold_output;
        vector<vector<Point> > contours;
        vector<Vec4i> hierarchy;
        /// Detect edges using Threshold
        threshold(src_gray, threshold_output, thresh, 255, THRESH_BINARY);
        /// Find contours
        findContours(threshold_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
        /// Find the convex hull object for each contour
        vector<vector<Point> > hull(contours.size());
        for (size_t i = 0; i < contours.size(); i++)
        {
            convexHull(Mat(contours[i]), hull[i], false);
        }
        /// Draw contours + hull results
        Mat drawing = Mat::zeros(threshold_output.size(), CV_8UC3);
        for (size_t i = 0; i < contours.size(); i++)
        {
            Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
            drawContours(drawing, contours, (int)i, color, 1, 8, vector<Vec4i>(), 0, Point());
            drawContours(drawing, hull, (int)i, color, 1, 8, vector<Vec4i>(), 0, Point());
        }
        /// Show in a window
        namedWindow("Hull demo", CV_WINDOW_AUTOSIZE);
        imshow("Hull demo", drawing);
    }

Binny, Tue, 11 Jun 2013 03:31:03 -0500
http://answers.opencv.org/question/14979/

Reshaping matrices into column vectors??
http://answers.opencv.org/question/13658/reshaping-matrices-into-column-vectors/
Hi,
I have a question regarding reshaping matrices. I have two square 3x3 matrices, M1 and M2. I wish to turn both into 9-element column vectors c1 and c2, and then create a new 2x9 matrix [c1, c2]. I need to do this in order to solve a least-squares problem.
I tried using the reshape function as M1.reshape(0, 9), but it returns:

    OpenCV Error: Image step is wrong (The matrix is not continuous, thus its number of rows can not be changed) in reshape

napoleon, Tue, 21 May 2013 16:04:40 -0500
http://answers.opencv.org/question/13658/

Multiply a vector<Point2f>
http://answers.opencv.org/question/5530/multiply-a-vectorpoint2f/
All of the points in my vector<Point2f> are halved, so I'd like to double them. Is there a better way to do this than looping through all the points and multiplying each x, y value by 2?
Thanks!

gary, Thu, 27 Dec 2012 16:25:40 -0600
http://answers.opencv.org/question/5530/