OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018. Sat, 14 Sep 2019 08:09:27 -0500

Determine orientation of a product for pick and place
http://answers.opencv.org/question/218365/determine-orientation-of-an-product-for-pick-and-place/

I'm trying to detect the orientation of products, so I can use this orientation for a pick and place system.
What I have so far:

- I can detect the contour of the product
- I can calculate the center of the contour
- I can calculate the angle by fitting an ellipse over the contour, however the outcome is not stable

The problem is determining the angle, since the products are almost identical mass-wise on the upper and bottom sides. The angle calculated by fitting an ellipse is not stable: sometimes the vector points left and sometimes right.
As shown on the following picture, you can see that the drawn line of the angle is not always pointing in the same direction.
![image description](/upfiles/15684659369233735.png)
Does somebody have an idea how I can make sure the calculation of the angle (orientation) is 100% correct?
Attached you can find the sample picture.
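For context: this instability is expected, because an ellipse axis is only defined modulo 180 degrees; a near-symmetric product looks identical when rotated by half a turn, so the fitted direction can flip between frames. A minimal sketch of folding the angle into a stable range (plain Python, `normalize_axis_angle` is an illustrative helper name, not an OpenCV function):

```python
import math

def normalize_axis_angle(angle_deg):
    """Fold an ellipse/axis angle into [0, 180).

    fitEllipse returns an axis orientation, which is only defined
    modulo 180 degrees: a symmetric shape looks the same when rotated
    by half a turn, so the raw "direction" can flip between frames.
    Folding removes that flip; a full 0-360 orientation additionally
    needs some asymmetric feature of the product to disambiguate.
    """
    return angle_deg % 180.0

# These all describe the same axis, so they fold to the same value:
print(normalize_axis_angle(30.0))    # 30.0
print(normalize_axis_angle(210.0))   # 30.0
print(normalize_axis_angle(-150.0))  # 30.0
```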
Here is my code so far:
import cv2
import numpy as np
import math

# read the image
cap = cv2.imread("20190909_170137.jpg")

def nothing(x):
    pass

# create sliders
cv2.namedWindow("Trackbars")
hh = 'Max'
hl = 'Min'
wnd = 'Colorbars'
cv2.createTrackbar("threshold", "Trackbars", 150, 255, nothing)
cv2.createTrackbar("Houghlines", "Trackbars", 255, 255, nothing)

while True:
    frame = cv2.imread("20190909_170137.jpg", cv2.IMREAD_COLOR)
    scale_percent = 60  # percent of original size
    width = int(frame.shape[1] * scale_percent / 100)
    height = int(frame.shape[0] * scale_percent / 100)
    dim = (width, height)
    # resize image
    frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)
    # read slider values
    l_v = cv2.getTrackbarPos("threshold", "Trackbars")
    u_v = cv2.getTrackbarPos("Houghlines", "Trackbars")
    # convert frame to grayscale
    bw = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # convert grayscale to binary image
    ret, thresh4 = cv2.threshold(bw, l_v, 255, cv2.THRESH_BINARY)
    # find the contours in thresh4
    im2, contours, hierarchy = cv2.findContours(thresh4, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    # work with each contour
    for contour in contours:
        # calculate area and moments of each contour
        area = cv2.contourArea(contour)
        M = cv2.moments(contour)
        if M["m00"] > 0:
            cX = int(M["m10"] / M["m00"])
            cY = int(M["m01"] / M["m00"])
        # use contour if area is bigger than 1000 and smaller than 50000
        if 1000 < area < 50000:
            approx = cv2.approxPolyDP(contour, 0.001 * cv2.arcLength(contour, True), True)
            # draw contour
            cv2.drawContours(frame, contour, -1, (0, 255, 0), 3)
            # draw circle on center of contour
            cv2.circle(frame, (cX, cY), 7, (255, 255, 255), -1)
            perimeter = cv2.arcLength(contour, True)
            approx = cv2.approxPolyDP(contour, 0.04 * perimeter, True)
            # fit ellipse
            _, _, angle = cv2.fitEllipse(contour)
            P1x = cX
            P1y = cY
            length = 35
            # calculate vector line at angle of the fitted ellipse
            P2x = int(P1x + length * math.cos(math.radians(angle)))
            P2y = int(P1y + length * math.sin(math.radians(angle)))
            # draw vector line
            cv2.line(frame, (cX, cY), (P2x, P2y), (255, 255, 255), 5)
            # output center of contour
            print(P1x, P2y, angle)
            # detect bounding box
            rect = cv2.minAreaRect(contour)
            box = cv2.boxPoints(rect)
            box = np.int0(box)
            # draw bounding box
            cv2.drawContours(frame, [box], 0, (0, 0, 255), 2)
            # detect hull
            hull = cv2.convexHull(contour)
            # draw line
            #img_hull = cv2.drawContours(frame,[hull],0,(0,0,255),2)
    cv2.imshow("Frame", thresh4)
    cv2.imwrite('thresh4.png', thresh4)
    cv2.imshow("bw2", frame)
    cv2.imwrite('box.png', frame)
    key = cv2.waitKey(1)
    #if key == 27:
    #    break
    break

#cap.release()
cv2.destroyAllWindows()
The input image: *(please use rightclick and save image as)*
![image description](/upfiles/15684700185160563.jpg)

Asked by wva, Sat, 14 Sep 2019 08:09:27 -0500
http://answers.opencv.org/question/218365/

Isn't the calcOpticalFlowFarneback example calculating hue wrong?
http://answers.opencv.org/question/208587/isnt-the-calcopticalflowfarneback-example-calculating-hue-wrong/

I believe the canonical optical flow example is the one provided at [https://docs.opencv.org/master/d7/d8b/tutorial_py_lucas_kanade.html](https://docs.opencv.org/master/d7/d8b/tutorial_py_lucas_kanade.html). It takes a flow angle, converted from Cartesian to polar, in the range (0, 2Pi) and converts it to hue by **ang\*180/np.pi/2**, producing hue in the range (0, 180). This doesn't make any sense to me. I think hue, as a polar entity itself, should fully encompass the direction of the flow, hence it should have the range (0, 360), which is simply achieved by **ang\*180/np.pi**.
I am very perplexed by this since the seemingly correct equation is actually the simpler option. Someone actually added an extra division by two, thereby breaking the equation (by my reasoning). Consequently, I am concerned I am misunderstanding this somehow. But if you run the optical flow algorithm on artificially generated data (painted rectangles offset in the four cardinal directions), you get much more sensible results using my recommended equation. Hue encompasses and utilizes the entire hue range and depicts smooth direction gradients. The existing example only utilizes half the available hue (from red to cyan), and worse than discarding half the hue availability, it also yields a discontinuity between directions 359 and 0, depicted as a sudden jump from cyan to red. This really doesn't make any sense.
How has this example stood for so long in this form? Further compounding my confusion, this apparent error has propagated to other projects, as shown at [https://www.programcreek.com/python/example/89313/cv2.calcOpticalFlowFarneback](https://www.programcreek.com/python/example/89313/cv2.calcOpticalFlowFarneback). Consequently, as I stated above, I genuinely feel I am making some sort of mistake on all of this. I can't be the first person to have ever noticed this, so I must be interpreting it incorrectly, right? I'm very confused.
What does everyone else think about this?
Thanks.
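One possible resolution worth checking against the docs: for 8-bit images OpenCV stores hue as degrees/2 (range 0..179), so mapping (0, 2Pi) onto (0, 180) already spans the full color circle in OpenCV's hue units. A minimal numpy sketch of the tutorial's mapping (variable names here are illustrative):

```python
import numpy as np

# Flow directions in radians, (0, 2*pi), as cartToPolar would return.
ang = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

# The tutorial's mapping compresses (0, 2*pi) into (0, 180).
hue = ang * 180 / np.pi / 2

# For 8-bit HSV images OpenCV stores hue as degrees/2 (range 0..179),
# so 180 "OpenCV hue units" already cover the full 360-degree circle.
print(hue)  # approximately [0, 45, 90, 135]
```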
Here is an example of the old method of flow-direction/hue mapping and my proposed method. The optical flow consists of a radially expanding ring, so as to show flow in all possible directions and their associated color mapping. I have included a typical HSV color wheel for comparison. I won't bother pasting all the code in here. I'll add it as an answer tomorrow (I can't answer it today because my account is too young).
![image description](/upfiles/15498268926726264.png)

Asked by kebwi, Sat, 09 Feb 2019 23:14:30 -0600
http://answers.opencv.org/question/208587/

Calculate slope, length and angle of a specific part / side / line on a contour?
http://answers.opencv.org/question/206392/calculate-slope-length-and-angle-of-a-specific-part-side-line-on-a-contour/

![Original Picture](/upfiles/15465522005113944.png)
I have two detected contours in an image and need the diameter between the two vertical edges of the top contour and the diameter between the vertical edges of the lower contour. I achieved this with the code below.
import cv2
import numpy as np
import math, os
import imutils
img = cv2.imread("1.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)
edges = cv2.Canny(gray, 200, 100)
edges = cv2.dilate(edges, None, iterations=1)
edges = cv2.erode(edges, None, iterations=1)
cnts = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
# sorting the contours to find the largest and smallest one
c1 = max(cnts, key=cv2.contourArea)
c2 = min(cnts, key=cv2.contourArea)
# determine the most extreme points along the contours
extLeft1 = tuple(c1[c1[:, :, 0].argmin()][0])
extRight1 = tuple(c1[c1[:, :, 0].argmax()][0])
extLeft2 = tuple(c2[c2[:, :, 0].argmin()][0])
extRight2 = tuple(c2[c2[:, :, 0].argmax()][0])
# show contour
cimg = cv2.drawContours(img, cnts, -1, (0,200,0), 2)
# set y of left point to y of right point
lst1 = list(extLeft1)
lst1[1] = extRight1[1]
extLeft1 = tuple(lst1)
lst2 = list(extLeft2)
lst2[1] = extRight2[1]
extLeft2= tuple(lst2)
# compute the distance between the points (x1, y1) and (x2, y2)
dist1 = math.sqrt( ((extLeft1[0]-extRight1[0])**2)+((extLeft1[1]-extRight1[1])**2) )
dist2 = math.sqrt( ((extLeft2[0]-extRight2[0])**2)+((extLeft2[1]-extRight2[1])**2) )
# draw lines
cv2.line(cimg, extLeft1, extRight1, (255,0,0), 1)
cv2.line(cimg, extLeft2, extRight2, (255,0,0), 1)
# draw the distance text
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 0.5
fontColor = (255,0,0)
lineType = 1
cv2.putText(cimg,str(dist1),(155,100),font, fontScale, fontColor, lineType)
cv2.putText(cimg,str(dist2),(155,280),font, fontScale, fontColor, lineType)
# show image
cv2.imshow("Image", img)
cv2.waitKey(0)
On the next image you see the output (green / blue).
**Now I would also need the angle of the slope lines (red) on the bottom side of the upper contour.**
![Output 1](/upfiles/15465522216155051.png)
Any ideas how I can get this? Is it possible using contours?
Or is it necessary to use HoughLinesP and sort out the relevant lines somehow?
And a follow-up question: would it also be possible to get a function which describes the parabolic slope of those sides?
![Demo](/upfiles/15465522297030594.png)
Thanks for any help =)

Asked by sonicdoo, Thu, 03 Jan 2019 16:01:16 -0600
http://answers.opencv.org/question/206392/

How to determine the angle of rotation?
http://answers.opencv.org/question/205685/how-to-determine-the-angle-of-rotation/

There is a square with equal sides in an image (it is inside another square).
![image description](/upfiles/15453320057265702.jpg)
Does OpenCV have functions which can help to efficiently calculate the angle?
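cv2.minAreaRect is one candidate: it returns a rotated rectangle whose third element is an angle. The underlying math can also be sketched without OpenCV (illustrative helper name, corners assumed given in order):

```python
import math

def square_rotation_deg(corners):
    """Rotation of a square from its ordered corner list.

    Uses the direction of one edge; the result is folded into [0, 90)
    because a square maps onto itself every 90 degrees.
    """
    (x0, y0), (x1, y1) = corners[0], corners[1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0)) % 90.0

# Unit square rotated by 30 degrees about the origin:
t = math.radians(30.0)
pts = [(math.cos(t) * x - math.sin(t) * y,
        math.sin(t) * x + math.cos(t) * y)
       for x, y in [(0, 0), (1, 0), (1, 1), (0, 1)]]
print(square_rotation_deg(pts))  # ~30.0
```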
Asked by ya_ocv_user, Thu, 20 Dec 2018 12:55:19 -0600
http://answers.opencv.org/question/205685/

How to find rotation angle from homography matrix?
http://answers.opencv.org/question/203890/how-to-find-rotation-angle-from-homography-matrix/

I have 2 images and I am finding similar key points with SURF.
I want to find the rotation angle between the two images from the homography matrix. Can someone please tell me how to do this?
if len(good) > MIN_MATCH_COUNT:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
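A minimal sketch of one common approach, valid only when the homography is close to a pure rotation/similarity (the helper name is mine, not an OpenCV API):

```python
import numpy as np

def rotation_from_homography(M):
    """Estimate in-plane rotation (degrees) from a 3x3 homography.

    Only meaningful when the homography is close to a similarity
    transform (rotation/scale/translation); a general perspective
    warp has no single rotation angle.
    """
    return np.degrees(np.arctan2(M[1, 0], M[0, 0]))

# Sanity check with a synthetic 30-degree rotation plus translation:
theta = np.radians(30.0)
M = np.array([[np.cos(theta), -np.sin(theta), 10.0],
              [np.sin(theta),  np.cos(theta), -5.0],
              [0.0,            0.0,            1.0]])
print(rotation_from_homography(M))  # ~30.0
```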
Thank you.

Asked by ronak.dedhia, Thu, 22 Nov 2018 23:30:21 -0600
http://answers.opencv.org/question/203890/

Plot angle vs intensity histogram from image center
http://answers.opencv.org/question/202700/plot-angle-vs-intensity-histogram-from-image-center/

Basically I'm selecting a circular area in the image (a grayscale DFT image) where the image center coordinates are the center of the circle. I want to plot the angle vs intensity variation for the area covered by this circle. My idea is to use a virtual line through the image center with the length of the circle diameter, move it by 1 degree at a time, and sum up the total pixel intensity values covered by that line. A similar approach is mentioned in the links below:
[finding-intensity-along-a-moving-line-in-an-image-and-compiling-to-give-an-angle-vs-intensity-plot](https://ch.mathworks.com/matlabcentral/answers/77334-finding-intensity-along-a-moving-line-in-an-image-and-compiling-to-give-an-angle-vs-intensity-plot)
[how-to-plot-intensity-images-i-e-any-angle-between-0-and-360-degree](https://ch.mathworks.com/matlabcentral/answers/334857-how-to-plot-intensity-images-i-e-any-angle-between-0-and-360-degree)
I would like to know whether it is possible to achieve such a thing using OpenCV + Python (it seems not that easy)? Or is there any other approach for plotting an angle vs intensity histogram through the image center?
I was initially thinking of using the logPolar transform of the image and accessing angle and intensity values from it. But I was not sure whether that is a feasible approach, since I couldn't even interpret the logPolar output in an informative manner (it just returns the log-polar transformed image, not angle and log-magnitude vectors from the image center).
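A rough pure-numpy sketch of the binning idea, with no OpenCV dependency (`angular_intensity_profile` is an illustrative name, not a library function): every pixel is assigned to an angle bin around the center, and intensities are summed per bin.

```python
import numpy as np

def angular_intensity_profile(img, bins=360):
    """Sum pixel intensities into angle bins around the image center."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    # Negate y so angles follow the usual math convention (y pointing up).
    ang = np.degrees(np.arctan2(-(y - cy), x - cx)) % 360.0
    idx = (ang / 360.0 * bins).astype(int) % bins
    profile = np.zeros(bins)
    np.add.at(profile, idx.ravel(), img.ravel().astype(float))
    return profile

img = np.ones((5, 5), dtype=np.uint8)
prof = angular_intensity_profile(img, bins=8)
print(prof.sum())  # 25.0 -- every pixel counted exactly once
```

Masking out pixels beyond the circle radius would be one extra boolean-index step on `idx` and `img`.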
Any suggestions would be appreciated...

Asked by nick_leo, Wed, 07 Nov 2018 15:38:09 -0600
http://answers.opencv.org/question/202700/

OpenCV doesn't work with 200° FOV camera?
http://answers.opencv.org/question/189563/opencv-doesnt-work-with-200deg-fov-camera/

Hello guys.
I've been trying to understand OpenCV for months and I think it was all for nothing.
I have a very wide angle fisheye camera (200 degrees or so) and would like to calibrate it.
I've tried with 27 good pictures but I get really awkward results for calibration, especially for the pictures where the pattern is near the edges. When I calibrate (using fisheye of course), the points on the edges are projected to infinity, and the points in the center are still not 100% accurate...
I also had to disable the CALIB_CHECK_COND because of some images (where checkerboard is beyond 180°).
I don't know what to do. After all the time I spent trying to learn OpenCV, do I have to throw all of it away and code my own functions ?
Is the pinhole model still accurate for my camera ?
(I've been searching for 2 hours on Google and found nothing.)

Asked by kulkx321, Tue, 17 Apr 2018 15:12:44 -0500
http://answers.opencv.org/question/189563/

Why can't you change the Angle?
http://answers.opencv.org/question/185625/why-cant-you-change-the-angle/

image_points contains the detected corners and their x, y coordinates.
UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%d", 1]];
cv::Mat imageInput;
UIImageToMat(image, imageInput);
bool ok = cv::findChessboardCorners(imageInput, cv::Size(board_width, board_height), image_points,
                                    cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);
if (ok == false) {
} else {
    cv::Mat view_gray;
    cv::cvtColor(imageInput, view_gray, cv::COLOR_BGR2GRAY);
    UIImage *images = MatToUIImage(view_gray);
    cv::TermCriteria criteria = cv::TermCriteria(
        cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS,
        30,
        0.1);
    cv::cornerSubPix(view_gray, image_points, cvSize(5, 5), cvSize(-1, -1), criteria);
    image_points_seq.push_back(image_points);
    cv::drawChessboardCorners(imageInput, cv::Size(board_width, board_height), image_points, true);
    UIImage *image = MatToUIImage(imageInput);
}
Asked by baihualinxin, Wed, 28 Feb 2018 01:58:00 -0600
http://answers.opencv.org/question/185625/

Is there an easy way to calculate the clockwise angle of a line drawing?
http://answers.opencv.org/question/183583/is-there-an-easy-way-to-calculate-the-clockwise-angle-of-a-line-drawing/

![image description](/upfiles/15173553319879814.png)
Hello everyone, can you please help me with this issue?
I want a clockwise angle which would be positive and greater than 90 degrees. Is there a single function that can give me a positive counterclockwise angle for a line between two points?
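As far as I know there is no single OpenCV function for this, but a small helper around math.atan2 gets close; a sketch, assuming points in image coordinates (y grows downward on screen):

```python
import math

def clockwise_angle(p1, p2):
    """Angle of the line p1->p2, measured clockwise from the
    positive x-axis, in [0, 360).

    In image coordinates y grows downward, so atan2(dy, dx) already
    increases clockwise on screen; the modulo folds it positive.
    """
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

print(clockwise_angle((0, 0), (0, 1)))   # ~90 (straight down on screen)
print(clockwise_angle((0, 0), (-1, 0)))  # ~180
print(clockwise_angle((0, 0), (1, -1)))  # ~315
```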
I would like to keep the code down to a minimum, such as one or two functions used at most, to find my angle.

Asked by masterenol, Tue, 30 Jan 2018 17:35:58 -0600
http://answers.opencv.org/question/183583/

Compute Angle of each contour point
http://answers.opencv.org/question/175291/compute-angle-of-each-contour-point/

Hello,
After a findContours call, how can I compute the angle at each point of the contour?
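One common approach is to estimate the tangent direction at each point from its neighbours along the closed contour; a minimal sketch (plain Python, `contour_point_angles` is an illustrative helper name):

```python
import math

def contour_point_angles(contour, step=1):
    """Tangent angle (degrees) at each contour point, estimated from
    the neighbours `step` positions away along the closed contour."""
    n = len(contour)
    angles = []
    for i in range(n):
        x0, y0 = contour[(i - step) % n]  # previous neighbour
        x1, y1 = contour[(i + step) % n]  # next neighbour
        angles.append(math.degrees(math.atan2(y1 - y0, x1 - x0)))
    return angles

# A tiny axis-aligned square: corner tangents come out at +/-45 and
# +/-135 degrees (each corner "averages" its two adjacent edges).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(contour_point_angles(square))
```

On a dense contour from findContours, a larger `step` smooths out pixel-level noise.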
Thank you,
Christophe

Asked by cjacquel, Thu, 28 Sep 2017 08:04:27 -0500
http://answers.opencv.org/question/175291/

orientation angle of ellipse
http://answers.opencv.org/question/174078/orientation-angle-of-ellipse/

Hello everyone,
I am having some trouble understanding how the angle parameter of the ellipse function actually works.
According to the documentation it is anti-clockwise and refers to the main axis.
Therefore, for instance, if I try to draw an ellipse of size (100, 50) and angle 45 deg, I expect it to be in the first quadrant, while instead it is in the second.
For instance this:
> ellipse(im, Point(im.cols/2, im.rows/2), Size(100, 50), 45, 0, 360, Scalar(200,0,0));
leads to the image below.
Of course if I switch the axis the orientation gets correct but this seems to be in contrast with the image shown in the documentation. ([opencv drawing doc](http://docs.opencv.org/2.4/modules/core/doc/drawing_functions.html))
What am I misunderstanding?
![image description](/upfiles/1505206320718566.png)
Asked by jappoz92, Tue, 12 Sep 2017 03:57:28 -0500
http://answers.opencv.org/question/174078/

Shape alignement and differences computation
http://answers.opencv.org/question/165683/shape-alignement-and-differences-computation/

Hi! I intend to scan certain beam-like shapes and try to align them to a "ground truth" shape. Once this is done I want to compute some differences, for example, compute the verticality of the B line compared to the original and measure the area difference in part A.
![image description](/upfiles/14998606482764383.png)
To sum up and put things in order:
1.- I should align the scanned shape to the original
2.- Compute the "angle deviation" between the scanned B line and the original. I know it is not really a straight line, but I think it can be approximated by one.
3.- The head part A has, due to wear, lost its shape and therefore part of its area; I should project it over the original shape and compute the area difference. This is the point I really do not know how to approach.
Do you have any ideas about how to approach this problem? I am not asking you to provide the code, just guide me in what things I can try to solve this
Thanks a lot for your help!!!!

Asked by Kailegh, Wed, 12 Jul 2017 07:08:56 -0500
http://answers.opencv.org/question/165683/

Rotate points by an angle
http://answers.opencv.org/question/165511/rotate-points-by-an-angle/

Hello,
I am trying to rotate a set of points in a vector<Point> by a user-defined angle and found a solution at [SO](https://stackoverflow.com/questions/7953316/rotate-a-point-around-a-point-with-opencv).
In the following code the dimension of the output image (rotated by 45 degrees) is correct, but the positions of the points seem to be shifted. Can someone give me a tip on what the problem is?
cv::Point rotate2d(const cv::Point& inPoint, const double& angRad)
{
    cv::Point outPoint;
    // CW rotation
    outPoint.x = std::cos(angRad)*inPoint.x - std::sin(angRad)*inPoint.y;
    outPoint.y = std::sin(angRad)*inPoint.x + std::cos(angRad)*inPoint.y;
    return outPoint;
}

cv::Point rotatePoint(const cv::Point& inPoint, const cv::Point& center, const double& angRad)
{
    return rotate2d(inPoint - center, angRad) + center;
}

int main( int, char** argv )
{
    // Create a dark image with a gray line in the middle
    Mat img = Mat(83, 500, CV_8U);
    img = Scalar(0);

    vector<Point> pointsModel;
    for (int i = 0; i < 500; i++)
    {
        pointsModel.push_back(Point(i, 41));
    }
    for (int i = 0; i < pointsModel.size(); i++)
    {
        circle(img, pointsModel[i], 1, Scalar(120,120,120), 1, LINE_8, 0);
    }
    imshow("Points", img);

    // Rotate points
    vector<Point> rotatedPoints;
    Point tmpPoint;
    cv::Point pt( img.cols/2.0, img.rows/2.0 );
    for (int i = 0; i < pointsModel.size(); i++)
    {
        tmpPoint = rotatePoint(pointsModel[i], pt, 0.7854);
        rotatedPoints.push_back(tmpPoint);
    }
    Rect bb = boundingRect(rotatedPoints);
    cout << bb;

    Mat rotatedImg = Mat(bb.height, bb.width, img.type());
    rotatedImg = Scalar(0);
    for (int i = 0; i < rotatedPoints.size(); i++)
    {
        circle(rotatedImg, rotatedPoints[i], 1, Scalar(120,120,120), 1, LINE_8, 0);
    }
    imshow("Points Rotated", rotatedImg);
    waitKey();
    return 0;
}
Asked by Franz Kaiser, Tue, 11 Jul 2017 08:54:18 -0500
http://answers.opencv.org/question/165511/

opencv_error_code-215_channels
http://answers.opencv.org/question/98737/opencv_error_code-215_channels/

Hi,
Does anyone know why I'm getting the error message below :
Thanks,
OpenCV Error: Assertion failed (channels() == CV_MAT_CN(dtype)) in copyTo, file /home/.../opencv/opencv-3.1.0/modules/core/src/copy.cpp, line 257
terminate called after throwing an instance of 'cv::Exception'
what(): /home/.../opencv-3.1.0/modules/core/src/copy.cpp:257: error: (-215) channels() == CV_MAT_CN(dtype) in function copyTo
Asked by harfbuzz, Thu, 21 Jul 2016 03:31:47 -0500
http://answers.opencv.org/question/98737/

Angle and Scale Invariant template matching code for python
http://answers.opencv.org/question/88937/angle-and-scale-invariant-template-matching-code-for-python/

Hello,
I found angle- and scale-invariant template matching code, but not for the Python language... Has anybody got it?
I will be very grateful!

Asked by adamsss, Mon, 29 Feb 2016 13:02:42 -0600
http://answers.opencv.org/question/88937/

extracting Magnitude and angle from flowfeature
http://answers.opencv.org/question/87733/extracting-magnitude-and-angle-from-flowfeature/

1) I want to get all the pixels with some motion and store their position and angle in some data structure (which data structure should I use, and how?).
Could anyone help me with how I could do it?
2) Is the code below correct for extracting the magnitude and direction of the optical flow vectors? If so, how can I then do the operation mentioned in 1)?
3) I find that the OpenCV manual could be supplemented with examples of the usage of different functions; I am not sure whether anyone else thinks the same, but I think it would help new users. At the moment, to write code I need to do a lot of searching on the net :(. However, the community gives good responses.
-----------------------------------------------------------------
calcOpticalFlowFarneback(prevgray, gray, flow, 0.5, 3, 15, 3, 5, 1.2, 0);
Mat xy[2];
split(flow, xy);
//calculate angle and magnitude
Mat magnitude, angle;
cartToPolar(xy[0], xy[1], magnitude, angle, true);
----------------------------------------------------------------
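A sketch of 1), shown in Python for brevity, assuming the flow field is available as an HxWx2 array of (dx, dy) vectors; a plain list of (x, y, magnitude, angle) tuples is often all the "data structure" that is needed (`moving_pixels` is an illustrative name):

```python
import numpy as np

def moving_pixels(flow, min_mag=1.0):
    """Collect (x, y, magnitude, angle_deg) for every pixel whose
    optical-flow magnitude exceeds min_mag."""
    dx, dy = flow[..., 0], flow[..., 1]
    mag = np.hypot(dx, dy)
    ang = np.degrees(np.arctan2(dy, dx)) % 360.0
    ys, xs = np.nonzero(mag > min_mag)
    return [(int(x), int(y), float(mag[y, x]), float(ang[y, x]))
            for y, x in zip(ys, xs)]

flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[1, 2] = (3.0, 4.0)  # one moving pixel
print(moving_pixels(flow))  # one entry: (2, 1, 5.0, ~53.13)
```

In C++ the equivalent container would be something like a `std::vector` of small structs holding the same four fields.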
Asked by santhoshkelathodi, Mon, 15 Feb 2016 10:58:58 -0600
http://answers.opencv.org/question/87733/

How to know the co-ordinates/angle of the object detected (Rectangular)
http://answers.opencv.org/question/80205/how-to-know-the-co-ordinatesangle-of-the-object-detected-rectangular/

Hello,
I want to know the coordinates, or the angle with respect to the y axis, of the detected object. In my case we are detecting a rectangular object placed inside a swimming pool or water. Once I detect the object, I should make my robot align and move in that direction. Please advise.

Asked by pradeep_kb, Mon, 21 Dec 2015 13:13:06 -0600
http://answers.opencv.org/question/80205/

fitEllipse - angle of resulting rotated rect
http://answers.opencv.org/question/39147/fitellipse-angle-of-resulting-rotated-rect/

It seems that the fitEllipse implementation misses computing the resulting RotatedRect angle in some cases.
...
box.center.x = (float)rp[0] + c.x;
box.center.y = (float)rp[1] + c.y;
box.size.width = (float)(rp[2]*2);
box.size.height = (float)(rp[3]*2);
if( box.size.width > box.size.height )
{
    float tmp;
    CV_SWAP( box.size.width, box.size.height, tmp );
    box.angle = (float)(90 + rp[4]*180/CV_PI);
}
if( box.angle < -180 )
    box.angle += 360;
if( box.angle > 360 )
    box.angle -= 360;
return box;
If box.size.width <= box.size.height, the angle of the rotated rect would be indefinite. Please correct me if I am wrong.

Asked by Alan Kazbekov, Mon, 11 Aug 2014 08:43:51 -0500
http://answers.opencv.org/question/39147/

How to find angle between two images
http://answers.opencv.org/question/77079/how-to-find-angle-between-two-images/

How do I find the angle between two images, where one is a reference and the other is a sample with rotation?
This [C:\fakepath\Crop.jpg](/upfiles/14482836041890362.jpg) is sample image and This [C:\fakepath\template.jpg](/upfiles/14482836557974902.jpg) is reference image.
Thank You in Advance!

Asked by ganeshchavan, Mon, 23 Nov 2015 07:02:47 -0600
http://answers.opencv.org/question/77079/

ellipse approximation of blob using contours moments : confusing orientation angle
http://answers.opencv.org/question/75181/ellipse-approximation-of-blob-using-contours-moments-confusing-orientation-angle/

Dear all,
I want to draw the ellipse approximating an isolated blob (the largest contour found with findContours). Using the formulas of this paper: http://goo.gl/yvcUO5 for the major and minor axes, I obtain consistent axis lengths. However, using the formula from the same paper (which I find almost everywhere) to compute the orientation angle, I obtain odd results. This is the formula: theta = 0.5*atan(2*mu11 / (mu20-mu02));
As long as the blob (which represents a human silhouette) is not close to horizontal, the formula returns a consistent value of the orientation angle, but as soon as the blob becomes almost horizontal, the sign of the orientation angle flips suddenly. I know the reason for this behavior. Referring to the formula above: when the blob is not horizontal, mu20 is smaller than mu02. This remains true while the blob tilts counter-clockwise until it reaches an orientation of about 45 degrees. At that point the pixel distribution of the blob becomes horizontal rather than vertical, mu20 becomes larger than mu02, and the sign of the angle inverts. I don't know if this formula is correct.
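A common fix for exactly this sign flip is to replace atan with the quadrant-aware atan2, i.e. theta = 0.5*atan2(2*mu11, mu20 - mu02), which stays consistent when mu20 - mu02 changes sign. A sketch on synthetic points (numpy, illustrative names):

```python
import numpy as np

def orientation_deg(xs, ys):
    """Orientation of a point cloud from central second moments.

    atan2 keeps the quadrant information, so the result stays
    consistent when mu20 - mu02 changes sign (the case where the
    plain atan formula flips). Range: (-90, 90] degrees.
    """
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu11 = np.sum(x * y)
    mu20 = np.sum(x * x)
    mu02 = np.sum(y * y)
    return np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02))

# Elongated cloud rotated by 60 degrees -- well past the 45-degree
# point where mu20 - mu02 changes sign.
t = np.radians(60.0)
u = np.linspace(-10, 10, 201)
xs, ys = u * np.cos(t), u * np.sin(t)
print(orientation_deg(xs, ys))  # ~60.0
```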
Thanks a lot for your help.

Asked by lounice, Mon, 02 Nov 2015 11:07:20 -0600
http://answers.opencv.org/question/75181/

How to separate the point2f to other variable?
http://answers.opencv.org/question/71901/how-to-separate-the-point2f-to-other-variable/

Hello,
I obtained the centroid coordinates as Point2f values in an array: Point2f a = mc[0]; Point2f b = mc[1];
The next step for me is to get the angle between these two points. Looking at examples throughout the net, the Point2f must be separated into x1, y1 and x2, y2 so I can calculate the atan for the angle.
The problem is that I could not manage to get the values inside mc[0] and mc[1] into x1, y1 and x2, y2.
What I tried (and failed with) is this statement:
float x1,x2,y1,y2;
[x1,y1] = mc[0];
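In C++ the components are simply mc[0].x and mc[0].y (Point2f has public x and y members), so no separation statement is needed. A sketch of the angle computation itself, shown in Python with illustrative centroid values:

```python
import math

# Two centroids, e.g. (m10/m00, m01/m00) from image moments.
a = (120.5, 80.25)   # plays the role of mc[0]
b = (200.0, 150.75)  # plays the role of mc[1]

# atan2 takes the coordinate differences directly -- no need to copy
# the components into four separate variables first.
x1, y1 = a
x2, y2 = b
angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
print(angle)  # ~41.6 degrees for these sample values
```

The C++ equivalent is one line: `atan2(b.y - a.y, b.x - a.x)`.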
Would anyone help with this? Or is there any faster way to find the atan angle for the two centroid coordinates I have obtained?

Asked by zms, Wed, 30 Sep 2015 03:03:54 -0500
http://answers.opencv.org/question/71901/

Question about polar or radial angle calculation in the image coordinate system
http://answers.opencv.org/question/6344/question-about-polar-or-radial-angle-calculation-in-the-image-coordinate-system/

Hi,
I have general questions, not explicitly about OpenCV, but related to image processing: The image coordinate system normally has the x-axis pointing from left to right, and the y-axis pointing DOWNWARDS. However, in math, normally, the y-axis points upwards.
1. Therefore, is a correction needed to account for the orientation of the y-axis, say, while doing tangentInverse(y, x) to get the radial angle of the pixel location?
2. Will doing y' = height - 1- y, and calling tangentInverse(y', x), solve the problem? (height = image height)
3. The confusion began after I tried to relate this to the formula for the principal value on the "Polar coordinate system" page of Wikipedia (specifically, the one that deals with x and y being in different quadrants). I would appreciate it if any expert could clarify. Thank you.

Asked by HW, Tue, 22 Jan 2013 01:13:17 -0600
http://answers.opencv.org/question/6344/

Can I use OpenCV to compute the turning angle of video in real time?
http://answers.opencv.org/question/70487/can-i-use-opencv-to-compute-the-turning-angle-of-video-in-real-time/

Hi, I am wondering: is there any way that I can compute the turning angle of the camera if I record a video while the camera is turning?
Thanks a lot!

Asked by gmountain, Wed, 09 Sep 2015 14:48:49 -0500
http://answers.opencv.org/question/70487/

aligment plate or RotatedRect and draw
http://answers.opencv.org/question/69758/aligment-plate-or-rotatedrect-and-draw/

Hi,
I find the car plate with OpenCV, and the plate ends up in the vector<RotatedRect>, but it is not aligned (not at 0 degrees).
How can I rotate the RotatedRect entries together with the original plate?
![image description](/upfiles/14410256402214986.jpg)
int cmin = 100;
int cmax = 1000;
vector<vector<Point> >::iterator itc = contours.begin();
vector<RotatedRect> rects;
while (itc != contours.end()) {
    RotatedRect mr = minAreaRect(Mat(*itc));
    if (itc->size() < cmin || itc->size() > cmax)
        itc = contours.erase(itc);
    else
        ++itc;
    rects.push_back(mr);
}
Now, how can I draw only "rects", cropping the picture at an angle of 0 degrees?
Many thanks.
Asked by msucv, Mon, 31 Aug 2015 08:06:33 -0500
http://answers.opencv.org/question/69758/

unable to understand this finger counting code
http://answers.opencv.org/question/69499/unable-to-understand-this-finger-counting-code/

Hi,
Can anybody explain this code to me?
This code is for counting the number of fingers.
This is the input image:
![image description](/upfiles/14406534106258931.png)
// feature extraction.cpp : Defines the entry point for the console application.
//
//--------------------------------------In the name of GOD
//-------------------------------BOW+SVM by Mohammad Reza Mostajabi
//#include "stdafx.h"
#include <opencv\cv.h>
#include <opencv\highgui.h>
//#include <opencv\ml.h>
#include <stdio.h>
#include <iostream>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2\ml\ml.hpp>
#include <vector>

using namespace cv;
using namespace std;
using std::cout;
using std::cerr;
using std::endl;
using std::vector;

RNG rnga(12345);

inline void mix_channels(cv::Mat const &src, cv::Mat &dst, std::initializer_list<int> from_to)
{
    cv::mixChannels(&src, 1, &dst, 1, std::begin(from_to), from_to.size() / 2);
}

double angle(std::vector<cv::Point>& contour, int pt, int r)
{
    int size = contour.size();
    cv::Point p0 = (pt > 0) ? contour[pt % size] : contour[size - 1 + pt];
    cv::Point p1 = contour[(pt + r) % size];
    cv::Point p2 = (pt > r) ? contour[pt - r] : contour[size - 1 - r];
    double ux = p0.x - p1.x;
    double uy = p0.y - p1.y;
    double vx = p0.x - p2.x;
    double vy = p0.y - p2.y;
    return (ux*vx + uy*vy) / sqrt((ux*ux + uy*uy)*(vx*vx + vy*vy));
}

int rotation(std::vector<cv::Point>& contour, int pt, int r)
{
    int size = contour.size();
    cv::Point p0 = (pt > 0) ? contour[pt % size] : contour[size - 1 + pt];
    cv::Point p1 = contour[(pt + r) % size];
    cv::Point p2 = (pt > r) ? contour[pt - r] : contour[size - 1 - r];
    double ux = p0.x - p1.x;
    double uy = p0.y - p1.y;
    double vx = p0.x - p2.x;
    double vy = p0.y - p2.y;
    return (ux*vy - vx*uy);
}

bool isEqual(double a, double b)
{
    return fabs(a - b) <= 1e-7;
}

int main()
{
    Mat input = imread("C:\\Users\\Intern-3\\Desktop\\IPF\\2.png");
    Size size = input.size();
    int erosion_size = 1;

    Mat HSV, threshold;
    cvtColor(input, HSV, COLOR_BGR2HSV);
    inRange(HSV, cv::Scalar(0, 0, 100), cv::Scalar(0, 0, 255), threshold);

    Mat erodeElement = getStructuringElement(MORPH_RECT, cv::Size(5, 5));
    Mat dilateElement = getStructuringElement(MORPH_RECT, cv::Size(8, 8));
    erode(threshold, threshold, erodeElement);

    vector<vector<Point>> contours;
    vector<Vec4i> hierarchy;
    Mat mask = threshold;
    findContours(mask.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, Point(0, 0));

    /// Draw contours
    Mat drawing = Mat::zeros(mask.size(), CV_8UC3);
    for (int i = 0; i < contours.size(); i++)
    {
        Scalar color = Scalar(rnga.uniform(0, 255), rnga.uniform(0, 255), rnga.uniform(0, 255));
        drawContours(drawing, contours, i, color, 2, 8, hierarchy, 0, Point());
    }
    cout << "Contours = " << contours.size() << endl;
    for (int i = 0; i < contours.size(); i++)
    {
        cout << "area : " << contourArea(contours[i]) << endl;
    }
    imshow("contour", drawing);

    if (!contours.empty())
    {
        for (int i = 0; i < contours.size(); i++)
        {
            if (cv::contourArea(contours[i]) > 500)
            {
                Point center;
                std::vector<cv::Point> fingers;
                std::vector<cv::Point> contour;
                cv::Moments m = cv::moments(contours[i]);
                center.x = m.m10 / m.m00;
                center.y = m.m01 / m.m00;
                for (int j = 0; j < contours[i].size(); j += 16)
                {
                    double cos0 = angle(contours[i], j, 40);
                    if ((cos0 > 0.5) && (j + 16 < contours[i].size()))
                    {
                        double cos1 = angle(contours[i], j - 16, 40);
                        double cos2 = angle(contours[i], j + 16, 40);
                        double maxCos = std::max(std::max(cos0, cos1), cos2);
                        bool equal = isEqual(maxCos, cos0);
                        signed int z = rotation(contours[i], j, 40);
                        if (equal == 1 && z < 0)
                        {
                            fingers.push_back(contours[i][j]);
                        }
                    }
                }
                contour = contours[i];
                cout << "Finger Count : " << fingers.size() << endl;
                //hands.push_back(tmp);
            }
        }
    }
    imshow("input", input);
    waitKey(0);
}
In the following code I am not able to understand why this is being done to calculate the angle. Which three points (p0, p1, p2) are being considered to find the angle and its rotation?
The code block I am not able to understand is:
if (cv::contourArea(contours[i]) > 500)
{
    Point center;
    std::vector<cv::Point> fingers;
    std::vector<cv::Point> contour;
    cv::Moments m = cv::moments(contours[i]);
    center.x = m.m10 / m.m00;
    center.y = m.m01 / m.m00;
    for (int j = 0; j < contours[i].size(); j += 16)
    {
        double cos0 = angle(contours[i], j, 40);
        if ((cos0 > 0.5) && (j + 16 < contours[i].size()))
        {
            double cos1 = angle(contours[i], j - 16, 40);
            double cos2 = angle(contours[i], j + 16, 40);
            double maxCos = std::max(std::max(cos0, cos1), cos2);
            bool equal = isEqual(maxCos, cos0);
            signed int z = rotation(contours[i], j, 40);
            if (equal == 1 && z < 0)
            {
                fingers.push_back(contours[i][j]);
            }
        }
    }
    contour = contours[i];
    cout << "Finger Count : " << fingers.size() << endl;
    //hands.push_back(tmp);
}
Deepak Kumar, Thu, 27 Aug 2015 00:34:09 -0500, http://answers.opencv.org/question/69499/
Find angle and rotation of point
http://answers.opencv.org/question/69459/find-angle-and-rotation-of-point/
hi,
I want to find the angle. I found the following code; can anyone explain its meaning to me?
Why are p0, p1, and p2 being found here?
What is the meaning of (ux*vx + uy*vy) / sqrt((ux*ux + uy*uy)*(vx*vx + vy*vy)) and (ux*vy - vx*uy) here?
double angle(std::vector<cv::Point>& contour, int pt, int r)
{
    int size = contour.size();
    cv::Point p0 = (pt > 0) ? contour[pt % size] : contour[size - 1 + pt];
    cv::Point p1 = contour[(pt + r) % size];
    cv::Point p2 = (pt > r) ? contour[pt - r] : contour[size - 1 - r];
    double ux = p0.x - p1.x;
    double uy = p0.y - p1.y;
    double vx = p0.x - p2.x;
    double vy = p0.y - p2.y;
    return (ux*vx + uy*vy) / sqrt((ux*ux + uy*uy)*(vx*vx + vy*vy));
}
int rotation(std::vector<cv::Point>& contour, int pt, int r)
{
    int size = contour.size();
    cv::Point p0 = (pt > 0) ? contour[pt % size] : contour[size - 1 + pt];
    cv::Point p1 = contour[(pt + r) % size];
    cv::Point p2 = (pt > r) ? contour[pt - r] : contour[size - 1 - r];
    double ux = p0.x - p1.x;
    double uy = p0.y - p1.y;
    double vx = p0.x - p2.x;
    double vy = p0.y - p2.y;
    return (ux*vy - vx*uy);
}
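To illustrate what the two expressions compute, here is a minimal, OpenCV-free sketch (plain structs stand in for cv::Point; the helper names are illustrative). The first return value is cos(theta) from the dot-product identity u·v = |u||v|cos(theta), where u and v run from the neighbouring contour samples p1 and p2 toward p0; the second is the z-component of the cross product u×v, whose sign tells on which side of u the vector v lies, i.e. whether the contour turns left or right at p0 (convex vs. concave; note that in image coordinates y grows downward, which flips the usual sign convention):

```cpp
#include <cmath>

struct Pt { double x, y; };

// Cosine of the angle at p0 between u = p0 - p1 and v = p0 - p2
// (same convention as the angle() function above): u.v = |u||v|cos(theta).
double angleCos(Pt p0, Pt p1, Pt p2) {
    double ux = p0.x - p1.x, uy = p0.y - p1.y;
    double vx = p0.x - p2.x, vy = p0.y - p2.y;
    return (ux*vx + uy*vy) / std::sqrt((ux*ux + uy*uy) * (vx*vx + vy*vy));
}

// z-component of the cross product u x v: zero when the three points are
// collinear, and its sign says in which direction the contour turns at p0.
double turnSign(Pt p0, Pt p1, Pt p2) {
    double ux = p0.x - p1.x, uy = p0.y - p1.y;
    double vx = p0.x - p2.x, vy = p0.y - p2.y;
    return ux*vy - vx*uy;
}
```

So cos0 > 0.5 in the main loop keeps only sharp bends (angle narrower than 60°), and z < 0 keeps only bends that turn the right way to be a fingertip rather than the valley between fingers.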
thanks!!
Deepak Kumar, Wed, 26 Aug 2015 08:24:32 -0500, http://answers.opencv.org/question/69459/
find tilt and pan with opencv
http://answers.opencv.org/question/69249/find-tilt-and-pan-with-opencv/
hi everyone,
I wrote some code and got the image moments, and now I want to get the pan and tilt of a contour from these moments. How do I do it?
Please tell me how to calculate the pan and tilt angles of a contour.
C++ only, please.
I appreciate it.
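One common way to get the in-plane orientation of a contour from its moments is theta = 0.5 * atan2(2*mu11, mu20 - mu02). Note this gives rotation in the image plane only; true 3D pan/tilt requires known object geometry and a pose solver such as solvePnP. A minimal OpenCV-free sketch (a plain struct stands in for cv::Point; with cv::Moments m the same formula applies directly to m.mu11, m.mu20, m.mu02):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// In-plane orientation (radians, measured from the x axis) of a point set,
// computed from its second-order central moments.
double orientationFromMoments(const std::vector<Pt>& pts) {
    // Centroid
    double mx = 0, my = 0;
    for (const Pt& p : pts) { mx += p.x; my += p.y; }
    mx /= pts.size();
    my /= pts.size();
    // Second-order central moments
    double mu20 = 0, mu02 = 0, mu11 = 0;
    for (const Pt& p : pts) {
        double dx = p.x - mx, dy = p.y - my;
        mu20 += dx * dx;
        mu02 += dy * dy;
        mu11 += dx * dy;
    }
    // Axis of the dominant elongation
    return 0.5 * std::atan2(2.0 * mu11, mu20 - mu02);
}
```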
JimmY_NotroN, Sun, 23 Aug 2015 17:02:27 -0500, http://answers.opencv.org/question/69249/
unable to determine angle because of points from line
http://answers.opencv.org/question/67809/unable-to-determine-angle-because-of-points-from-line/
I have an image from which I have created some lines, and I have saved the start and end points of each line. The lines are the long sides of rectangles bounding white blobs in the image, and the rectangles are placed inside a circle. The image is shown below: ![image description](http://i.stack.imgur.com/HveKK.png)
The issue is that when the rectangle is formed in the lower part of the circle, the starting point can be taken as the lowermost point, i.e. near the circle's edge. But when the rectangle is formed in the upper part of the circle, as shown in the last dial of the image, it is difficult to decide which point to choose as the starting point, i.e. the one near the center of the dial.
Is there any workaround so I can swap the points of a line in the upper region of the circle? Kindly guide me, as I am out of ideas with this now.
Here is the code that selects the longest side of the rectangle and prints its points:
int maxIndex = 0;
for (int a = 1; a < length.length; a++) {
    double newnumber = length[a];
    if (newnumber > length[maxIndex]) {
        maxIndex = a;
    }
}
System.out.println("Start= " + pts[maxIndex].toString() + " End= " + pts[(maxIndex + 1) % 4].toString() + ", Length=" + length[maxIndex]);
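One way to make the direction consistent everywhere on the circle is to ignore the corner order reported by the rectangle and instead order each line's endpoints by distance to the dial center, so the start point is always the one nearer the center. A minimal C++ sketch (the rest of this thread's code is C++; the plain Pt struct and the function names are illustrative, not from the question's code):

```cpp
#include <utility>

struct Pt { double x, y; };

// Squared distance between two points (no sqrt needed when only comparing).
double dist2(Pt a, Pt b) {
    double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// Returns the segment (a, b) ordered so that .first is the endpoint closer
// to the dial center; the direction first -> second then points outward
// consistently, wherever on the circle the rectangle sits.
std::pair<Pt, Pt> orientFromCenter(Pt a, Pt b, Pt center) {
    if (dist2(a, center) <= dist2(b, center))
        return std::make_pair(a, b);
    return std::make_pair(b, a);
}
```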
Regards,
Saghir A. Khatri, Tue, 04 Aug 2015 07:34:34 -0500, http://answers.opencv.org/question/67809/
Calculate pixel angle from the center having hFov and vFov
http://answers.opencv.org/question/65636/calculate-pixel-angle-from-the-center-having-hfov-and-vfov/
Once I have calibrated my camera and know the horizontal and vertical fields of view (through *calibrationMatrixValues*), is it possible to know the X and Y angles (from the center) of a particular pixel?
Suppose I have a camera with fields of view of 100° (h) and 80° (v) and a resolution of 500x400. The angle/pixel ratio is then (0.2°, 0.2°). So the central pixel would be (0°, 0°), its left neighbour (-0.2°, 0°), the topmost central pixel (0°, +40°), and so on.
Is this relation actually constant across the whole image, or is there a formula to perform this calculation? Is the obtained information reliable?
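The relation is not constant: under the pinhole model the horizontal angle of a pixel is atan((u - cx) / fx), so the degrees-per-pixel ratio shrinks toward the image edges and the linear 0.2°/pixel estimate holds only near the center. A minimal sketch, assuming fx, fy, cx, cy are taken from the calibrated camera matrix (the function name is illustrative):

```cpp
#include <cmath>

// Viewing angles (degrees) of pixel (u, v) under the pinhole model.
// fx, fy, cx, cy come from the calibrated camera matrix. The angle per
// pixel is NOT constant across the image, hence atan, not a linear scale.
void pixelAngles(double u, double v, double fx, double fy,
                 double cx, double cy, double* angX, double* angY) {
    const double RAD2DEG = 180.0 / 3.14159265358979323846;
    *angX = std::atan((u - cx) / fx) * RAD2DEG;
    *angY = std::atan((cy - v) / fy) * RAD2DEG; // pixels above the center get positive angles
}
```

This also ignores lens distortion; for a real camera, undistort the pixel first (e.g. with undistortPoints) before applying the atan.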
This is going to be the first step for triangulating objects in a multi-camera environment.
Javelin, Mon, 06 Jul 2015 08:43:06 -0500, http://answers.opencv.org/question/65636/
Which algorithm can detect the face while robust against face rotation
http://answers.opencv.org/question/64478/which-algorithm-can-detect-the-face-while-robustness-against-to-face-rotation/
haarcascade_frontalface_alt.xml, haarcascade_frontalface_default.xml, etc. are not robust to rotations of more than 10 degrees.
Also, which algorithm can rotate the face back to a normal angle for the face recognition process?
muc, Fri, 19 Jun 2015 05:39:48 -0500, http://answers.opencv.org/question/64478/