OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright OpenCV foundation, 2012-2018. Last build: Sun, 17 May 2020 15:00:31 -0500

Use openCV for object+distance detection
http://answers.opencv.org/question/230272/use-opencv-for-objectdistance-detection/
Please help with guidelines/examples for the following project:
Controlling a motor with an Arduino (or ESP32), which is the easy part: for example, making a toy car go forward or backward and controlling its speed.
Can I use a webcam connected to a laptop (later on it will be done by Raspberry pi) to identify the car and to assess the distance to the web cam?
For example, let's assume cars are driving towards the web cam, one at a time.
If it is car #1, stop it 100 cm before the web cam.
If it is car #2, stop it 20 cm before the web cam.
I am new to OpenCV, so I am not sure whether I can provide size information for the images it analyzes, or whether it can assess the distance if I place an object of pre-known size in the scene (e.g. a 10 cm red pole).
I am fine with either method.
Another question, about the web cam: currently I am using a simple one. I would like to consider a webcam with infrared-transmitting LEDs that will be able to work in the dark. Recommendations for such a camera will be appreciated.
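On the pre-known-size idea: a common approach (an editorial sketch, not from the original post; all numbers below are hypothetical) is the triangle-similarity relation distance = real_width × focal_length_px / width_px, calibrating the focal length once from a single photo taken at a known distance:

```python
# Triangle-similarity range finding from an object of known size.
# All numbers here are made up for illustration.

def estimate_distance_cm(real_width_cm, focal_length_px, width_px):
    """Pinhole-model distance to an object of known real width."""
    return real_width_cm * focal_length_px / width_px

# One-shot calibration: a 10 cm pole photographed from 80 cm away
# appears 100 px wide, so focal length = width_px * distance / real_width.
focal_px = 100 * 80 / 10.0          # 800 px

# Later, the same pole measuring 40 px wide is about 200 cm away.
d = estimate_distance_cm(10, focal_px, 40)
```

Car #1 and car #2 could then be told apart by color or a marker, and each stopped once its estimated distance reaches its own threshold.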
Thanks
YigalB, Sun, 17 May 2020 15:00:31 -0500, http://answers.opencv.org/question/230272/

Determine the best features of an object given several images
http://answers.opencv.org/question/224880/determine-the-best-features-of-an-object-given-several-images/
I am trying to automate a form scanner and align new images of the same printed form. I'm using the ORB descriptors of the correctly aligned form and matching them (with Hamming distance) against the new images, which works decently. But how could I, given several images of the correct form, extract the most consistent and best features for alignment to use with new images?
Thank you!
diegojrr, Wed, 15 Jan 2020 19:25:36 -0600, http://answers.opencv.org/question/224880/

Finding distance between cars in real time
http://answers.opencv.org/question/217672/finding-distance-between-car-on-real-time/
I am trying to build a real-time project with OpenCV Python that finds, from my dash cam, how far another car is from me. I have already written Python OpenCV code that can detect objects (e.g. 98% car, 88% person); I need your help with how to find the distance. Some other sites showed me a calibration method, but as I am very new to this field I don't know where and how to use it.
Thank you
Ayeshayounis, Thu, 29 Aug 2019 16:15:16 -0500, http://answers.opencv.org/question/217672/

Distance between Camera and Marker (calculate with Tvec)
http://answers.opencv.org/question/218178/distance-between-camera-and-marker-calculate-with-tvec/
So I have a marker of known size and set the world coordinate origin to the center of the marker. With solvePnP I calculated the corresponding rotation vector as well as the translation vector. Camera calibration was done in advance. When I project points given in world coordinates, they show up correctly on the display in pixel coordinates. Now I want to calculate the distance between the camera and the marker. If I understood everything correctly, the shift between the world coordinate system and the camera coordinate system is given by the translation vector. Accordingly, the distance between the marker and the camera should be the norm of the vector given by -Rvec(inverse)*Tvec. But if I do that, the distance is way too high (about 2.5x). Am I missing something here? How can I get the right distance between the camera and the marker?
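One observation worth adding here (an editorial note, not part of the original question): a rotation never changes a vector's length, so the norm of -Rvec⁻¹·Tvec equals the norm of Tvec exactly, and the camera-marker distance is simply norm(Tvec). A 2.5x discrepancy therefore has to come from somewhere else, e.g. the units of the marker size used for the object points. A numeric sketch with a made-up pose:

```python
import numpy as np

# With the world origin at the marker, solvePnP's tvec is the marker
# origin expressed in camera coordinates, so the camera-marker distance
# is norm(tvec).  The camera position in world coordinates is
# -R^-1 * tvec, and it has the SAME norm, because R is orthogonal.
# Pose values below are made up for illustration.

theta = 0.3                                     # rotation about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
tvec = np.array([0.1, -0.2, 1.5])

dist_direct = np.linalg.norm(tvec)
cam_pos_world = -R.T @ tvec                     # inverse of a rotation == transpose
dist_via_world = np.linalg.norm(cam_pos_world)  # identical to dist_direct
```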
Thanks in advance
Markus11123, Tue, 10 Sep 2019 13:18:34 -0500, http://answers.opencv.org/question/218178/

Calculate slope, length and angle of a specific part / side / line on a contour?
http://answers.opencv.org/question/206392/calculate-slope-length-and-angle-of-a-specific-part-side-line-on-a-contour/
![Original Picture](/upfiles/15465522005113944.png)
I have two detected contours in an image and need the diameter between the two vertical edges of the top contour and the diameter between the vertical edges of the lower contour. I achieved this with the following code:
```python
import cv2
import numpy as np
import math
import imutils

img = cv2.imread("1.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # imread returns BGR, not RGB
gray = cv2.GaussianBlur(gray, (7, 7), 0)
edges = cv2.Canny(gray, 200, 100)
edges = cv2.dilate(edges, None, iterations=1)
edges = cv2.erode(edges, None, iterations=1)
cnts = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)

# sort the contours to find the largest and smallest one
c1 = max(cnts, key=cv2.contourArea)
c2 = min(cnts, key=cv2.contourArea)

# determine the most extreme points along the contours
extLeft1 = tuple(c1[c1[:, :, 0].argmin()][0])
extRight1 = tuple(c1[c1[:, :, 0].argmax()][0])
extLeft2 = tuple(c2[c2[:, :, 0].argmin()][0])
extRight2 = tuple(c2[c2[:, :, 0].argmax()][0])

# show contours
cimg = cv2.drawContours(img, cnts, -1, (0, 200, 0), 2)

# set y of left point to y of right point
lst1 = list(extLeft1)
lst1[1] = extRight1[1]
extLeft1 = tuple(lst1)
lst2 = list(extLeft2)
lst2[1] = extRight2[1]
extLeft2 = tuple(lst2)

# compute the distance between the points (x1, y1) and (x2, y2)
dist1 = math.sqrt((extLeft1[0] - extRight1[0]) ** 2 + (extLeft1[1] - extRight1[1]) ** 2)
dist2 = math.sqrt((extLeft2[0] - extRight2[0]) ** 2 + (extLeft2[1] - extRight2[1]) ** 2)

# draw lines
cv2.line(cimg, extLeft1, extRight1, (255, 0, 0), 1)
cv2.line(cimg, extLeft2, extRight2, (255, 0, 0), 1)

# draw the distance text
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 0.5
fontColor = (255, 0, 0)
thickness = 1  # this positional argument is the thickness, not lineType
cv2.putText(cimg, str(dist1), (155, 100), font, fontScale, fontColor, thickness)
cv2.putText(cimg, str(dist2), (155, 280), font, fontScale, fontColor, thickness)

# show image
cv2.imshow("Image", img)
cv2.waitKey(0)
```
In the next image you can see the output (green/blue).
**Now I would also need the angle of the slope lines (red) on the bottom side of the upper contour.**
![Output 1](/upfiles/15465522216155051.png)
Any ideas how I can get this? Is it possible using contours?
Or is it necessary to use HoughLinesP and sort the regarding lines somehow?
And a follow-up question: is it also possible to get a function which describes the parabolic slope of those sides?
![Demo](/upfiles/15465522297030594.png)
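An editorial sketch for the angle question: once two points on one of the red slope lines are known (e.g. from the contour segment between the extreme point and the vertical edge), `math.atan2` gives the angle directly. The points below are hypothetical:

```python
import math

# Angle of a slanted contour segment from two points on it.
# The coordinates are made-up stand-ins for real contour points.
p1 = (120, 340)   # (x, y) of one end of the red slope line
p2 = (180, 300)   # (x, y) of the other end

# Note: image y grows downward, so a negative angle means the line
# rises from left to right on screen.
angle_deg = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
```

For the parabola follow-up, fitting `np.polyfit(xs, ys, 2)` to the contour points of that side would give the coefficients of a quadratic describing it.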
Thanks for any help =)
sonicdoo, Thu, 03 Jan 2019 16:01:16 -0600, http://answers.opencv.org/question/206392/

Use ZED to detect objects and measure distance
http://answers.opencv.org/question/200279/use-zed-to-object-detect-und-distance-measure/
Hello there:
I want to use a ZED stereo camera to detect objects and at the same time measure the distance between the object and the camera. I know there is an API and an example for depth sensing, but I don't know how to combine them with object detection.
Thanks
CritAndrew, Fri, 28 Sep 2018 10:10:57 -0500, http://answers.opencv.org/question/200279/

Determining Hausdorff distance - zero values for non-overlapped shapes
http://answers.opencv.org/question/196900/determining-hausdorff-distance-zero-values-for-non-overlaped-shapes/
I use OpenCV 3.4.2. When I apply computeDistance(contour1, contour2) of createHausdorffDistanceExtractor, I obtain zero values at more than one position even for non-overlapping shape contours. To show this, consider the following benchmark: a square contour (the patch, or model) moves vertically over an image containing a square contour of the same size and orientation as the patch. It should yield zero Hausdorff distance only once; why is that not so? Note that this happens only in the vertical direction. In the horizontal direction it works perfectly.
![C:\fakepath\Capture du 2018-08-03 16-40-28.png](/upfiles/15333073063499469.png)
![C:\fakepath\Capture du 2018-08-03 16-58-56.png](/upfiles/15333083738633117.png)
White is original contour and gray is moved patch.
Output:
Hausdorff distance - horizontal direction: 45.000000
Hausdorff distance - vertical direction: 0.000000
Instead it should be the same for both, i.e. 45. Has someone made the same observation?
```cpp
#include <string>
#include <stdio.h>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/shape/shape_distance.hpp>

std::vector<cv::Point> get_points(cv::Mat& img)
{
    std::vector<cv::Point> points;
    CV_Assert(img.depth() == CV_8U);
    for (int i = 0; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            if (img.at<uchar>(i, j) == 255)
                points.push_back(cv::Point(j, i)); // cv::Point is (x, y), i.e. (col, row)
        }
    }
    return points;
}

int main(int argc, char** argv)
{
    const int square_dim = 150;
    int movePatch = 105;
    float distH_h; // Hausdorff distance, horizontal
    float distH_v; // Hausdorff distance, vertical

    // horizontally
    cv::Mat imageContourH(square_dim, square_dim * 3, CV_8UC1, cv::Scalar(0));
    cv::Mat imagePatchH = imageContourH.clone();
    cv::rectangle(imageContourH, cv::Point(square_dim, 0), cv::Point(2 * square_dim - 1, square_dim - 1), cv::Scalar(255));
    cv::rectangle(imagePatchH, cv::Point(movePatch, 0), cv::Point(movePatch + square_dim - 1, square_dim - 1), cv::Scalar(255));

    // vertically
    cv::Mat imageContourV(square_dim * 3, square_dim, CV_8UC1, cv::Scalar(0));
    cv::Mat imagePatchV = imageContourV.clone();
    cv::rectangle(imageContourV, cv::Point(0, square_dim), cv::Point(square_dim - 1, 2 * square_dim - 1), cv::Scalar(255));
    cv::rectangle(imagePatchV, cv::Point(0, movePatch), cv::Point(square_dim - 1, movePatch + square_dim - 1), cv::Scalar(255));

    // Hausdorff distance from the OpenCV library
    cv::Ptr<cv::HausdorffDistanceExtractor> mysc = cv::createHausdorffDistanceExtractor();
    std::vector<cv::Point> contourH_pt = get_points(imageContourH);
    std::vector<cv::Point> patchH_pt = get_points(imagePatchH);
    distH_h = mysc->computeDistance(contourH_pt, patchH_pt);
    std::vector<cv::Point> contourV_pt = get_points(imageContourV);
    std::vector<cv::Point> patchV_pt = get_points(imagePatchV);
    distH_v = mysc->computeDistance(contourV_pt, patchV_pt);

    printf("Hausdorff distance - horizontal direction: %f\n", distH_h);
    printf("Hausdorff distance - vertical direction: %f\n", distH_v);
    return 0;
}
```
tomass, Fri, 03 Aug 2018 10:02:30 -0500, http://answers.opencv.org/question/196900/

how to calculate distances between pixels on a contour
http://answers.opencv.org/question/195416/how-to-calculate-distances-between-pixels-on-a-contour/
Hello,
I have a contour in OpenCV C++ (in the standard contour format), and I want to measure the distance between any two pixels on the contour, i.e. the number of contour pixels between them. It is clear that there are two such distances between every two points on a closed contour. I need a clear algorithm or code to measure these two distances in pixels.
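An editorial sketch of one answer: if the contour is stored densely (cv2.CHAIN_APPROX_NONE), the two along-contour pixel distances between entries i and j are plain index arithmetic on the closed loop. The indices below are hypothetical:

```python
# Two arc distances between points at indices i and j on a closed,
# densely sampled contour of contour_len pixels.

def contour_distances(i, j, contour_len):
    """Pixel counts along both directions of a closed contour."""
    d1 = abs(i - j)            # one way around
    d2 = contour_len - d1      # the other way around
    return d1, d2

# e.g. points at indices 10 and 250 on a 300-pixel contour
d1, d2 = contour_distances(10, 250, 300)
```

With CHAIN_APPROX_SIMPLE the contour keeps only segment endpoints, so summing segment lengths (e.g. cv2.arcLength on the two index slices) would be used instead of raw index counts.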
Bests,
![image description](/upfiles/15312273799113132.png)
sadegh6383, Tue, 10 Jul 2018 07:13:03 -0500, http://answers.opencv.org/question/195416/

How do you use OpenCV to find horizontal angle and vertical angle from the center of an image to the center of a rectangular contour?
http://answers.opencv.org/question/195108/how-do-you-use-opencv-to-find-horizontal-angle-and-vertical-angle-from-the-center-of-an-image-to-the-center-of-a-rectangular-contour/
I need a turret to rotate to a certain point using a camera, so I need angles.
OpenCVNoob69, Thu, 05 Jul 2018 12:04:08 -0500, http://answers.opencv.org/question/195108/

How to use the output of cv2.fitLine()
http://answers.opencv.org/question/188415/how-to-use-the-output-of-cv2fitline/
I am basically trying to fit two lines to two sets of points (each has 100 points) and find the normal distance between the lines. I am using cv2.fitLine() to fit the lines in **Python**.
From the [documentation](https://docs.opencv.org/3.4.1/d3/dc0/group__imgproc__shape.html#gaf849da1fdafa67ee84b1e9a23b93f91f), fitLine returns a vector containing (vx, vy, x0, y0), where (vx, vy) is a normalized vector collinear to the line and (x0, y0) is a point on the line. I am confused about how to get the equation of the line from these values so that I can find the normal distance between the two lines.
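A sketch of how the (vx, vy, x0, y0) output can be used (the values below are made-up stand-ins, not real fitLine output): the distance of a point p from the line is the absolute 2D cross product of (p - p0) with the unit direction, and for two near-parallel fitted lines the normal distance between them is the distance from one line's (x0, y0) to the other line.

```python
# Distance from a point to a line given in fitLine's (vx, vy, x0, y0) form.
# (vx, vy) is already a unit vector, so no normalization is needed.

def point_line_distance(p, vx, vy, x0, y0):
    # |cross((p - p0), d)| with d = (vx, vy) a unit direction vector
    return abs((p[0] - x0) * vy - (p[1] - y0) * vx)

# hypothetical example: a horizontal line through y = 2;
# the point (5, 7) lies 5 units from it
d = point_line_distance((5.0, 7.0), 1.0, 0.0, 0.0, 2.0)
```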
abhijit, Mon, 02 Apr 2018 23:14:07 -0500, http://answers.opencv.org/question/188415/

Distances to surroundings in image
http://answers.opencv.org/question/186595/distances-to-surroundings-in-image/
Using a rotating robot and Kinect depth data, I am able to create a black-and-white image of the surroundings of my robot (black is free space, white are obstacles).
The robot is looking for a tag and, if it is not found, should try to move to another location and repeat the search. I am a bit confused as to where the robot should move next; I thought maybe best in a direction with no obstacles, or only far-away ones, and not too close to an already proven unsuccessful scan position.
I know I could walk through every pixel in an expanding circle and eliminate non-promising directions; however, I am in a Python environment, and stepping through all the pixels in a loop will be slow and use lots of CPU cycles.
Are there any functions in OpenCV to rotate a beam around a fixed location (the position of my robot) and get distances (e.g. for each degree) to the next obstacle (in my case a white pixel) in reasonable time?
juerg, Tue, 13 Mar 2018 10:33:59 -0500, http://answers.opencv.org/question/186595/

Get length in pixels between edges of two contours
http://answers.opencv.org/question/182182/get-length-in-pixels-between-edges-of-two-contours/
Hi.
I'm using OpenCV 3.3.1 on Windows 7.
I want to get length (in pixels) between two contours.
Here are the thresholded and original images **(the figure may be rotated by different angles)**:
![image description](/upfiles/15156688805934961.jpg)
This is what I need to get:
![image description](/upfiles/15156800415519984.jpg)
![image description](/upfiles/15156804815980317.jpg)
![image description](/upfiles/15156690274220084.jpg)
I can find centers of this figures:
![image description](/upfiles/1515669144242928.jpg)
but I don't know how to find the distance from the center to the edge...
Please give me advice.
**UPDATE**
Thanks @StevenPuttemans for the idea. I tried to implement it, but in the meantime I found one more important note: the figure's position changes, and it may be rotated by a random angle (sorry, I should have noted this earlier):
![image description](/upfiles/15156798185900916.jpg)
![image description](/upfiles/15156800415519984.jpg)
![image description](/upfiles/15156804815980317.jpg)
michlvl, Thu, 11 Jan 2018 05:17:59 -0600, http://answers.opencv.org/question/182182/

face recognition by Euclidean space distance seeking for help
http://answers.opencv.org/question/179732/face-recognition-by-euclidean-space-distance-seeking-for-help/
In the n-dimensional Euclidean space,
if
the first point is **P1 [x1,x2,...,xn]** and
the second point is **P2 [y1,y2,...,yn]**,
then the Euclidean distance from P1 to P2 is
**d(P1,P2) = sqrt ( (x1-y1)^2 + (x2-y2)^2 + ... + (xn-yn)^2 )**
So, in the face recognition field, for example, in many open-source projects,
every point has an x coordinate and a y coordinate.
I think it is:
The first face is **P1 [(x1,y1),(x2,y2),...,(xn,yn)]**
The second face is **P2 [(a1,b1),(a2,b2),...,(an,bn)]**
so the Euclidean distance from P1 to P2 is ________________________________ ?
The result seems too complex, even unsolvable.
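An editorial note on the blank above: in practice the n landmark (x, y) pairs are simply flattened into one 2n-dimensional vector, so the ordinary per-coordinate Euclidean formula applies and nothing unsolvable appears. A sketch with made-up landmarks:

```python
import math

# Flatten each face's landmark pairs into one vector, then use the
# plain Euclidean distance.  Landmark values are made up.
face1 = [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]
face2 = [(1.0, 2.0), (3.0, 8.0), (8.0, 6.0)]

v1 = [c for pt in face1 for c in pt]   # [1, 2, 3, 4, 5, 6]
v2 = [c for pt in face2 for c in pt]
d = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))
```

Modern face recognizers compare learned embedding vectors rather than raw landmark coordinates, but the distance formula is the same.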
I don't know if I have a problem with my understanding?
striving, Mon, 04 Dec 2017 22:01:04 -0600, http://answers.opencv.org/question/179732/

Fast Euclidean Distance Map
http://answers.opencv.org/question/175613/fast-euclidean-distance-map/
Hello,
I am searching for a fast method to compute the Euclidean distance map (EDM) with OpenCV.
Thx
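The usual fast answer is cv2.distanceTransform(img, cv2.DIST_L2, maskSize), which computes, for each non-zero pixel, the distance to the nearest zero pixel. As a reference for what it computes, here is a deliberately naive brute-force version (an editorial sketch, far too slow for real images):

```python
import numpy as np

# Brute-force Euclidean distance map: for every non-zero pixel, the
# distance to the nearest zero pixel.  Reference implementation only;
# cv2.distanceTransform does this efficiently.

def brute_force_edm(img):
    zeros = np.argwhere(img == 0)
    out = np.zeros(img.shape, dtype=float)
    for (i, j), v in np.ndenumerate(img):
        if v != 0:
            out[i, j] = np.sqrt(((zeros - (i, j)) ** 2).sum(axis=1)).min()
    return out

img = np.ones((3, 3), dtype=np.uint8)
img[0, 0] = 0                      # single zero pixel in the corner
edm = brute_force_edm(img)
```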
cjacquel, Tue, 03 Oct 2017 06:54:14 -0500, http://answers.opencv.org/question/175613/

Calculate x distance to line from edge at middle of y?
http://answers.opencv.org/question/162151/calculate-x-distance-to-line-from-edge-at-middle-of-y/
Hi.
So right now I'm using HoughLines to find the distance to a line and the angle of the line. My problem, though, is that HoughLines gives the distance to the line (rho) from the origin, measured normal to the line, which depends on the angle of the line, as can be seen here (http://answers.opencv.org/question/2966/how-do-the-rho-and-theta-values-work-in-houghlines/).
I want to find the distance to the line in pixels at the middle of the frame and the angle of the line as this rough sketch shows: http://i.imgur.com/mNePsR7.png.
Any tips on how to do this? I would guess that using HoughLinesP, with its different output from HoughLines, and doing some calculations would work; has anyone done this before?
There is always only one line in the image and it is always longer than the camera frame.
Thanks!
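An editorial sketch of one way to do it without HoughLinesP: HoughLines parameterizes a line as x·cos(θ) + y·sin(θ) = ρ, so the x position of the line at the vertical middle of the frame falls out of solving that equation for x at y = height/2; the line's tilt from vertical is θ itself. Values below are hypothetical:

```python
import math

# HoughLines line model: x*cos(theta) + y*sin(theta) = rho.
# Solve for x at a chosen row y to get the horizontal position there.

def x_at_y(rho, theta, y):
    return (rho - y * math.sin(theta)) / math.cos(theta)

h, w = 480, 640                 # hypothetical frame size
rho, theta = 200.0, 0.0         # theta = 0 means a perfectly vertical line
x_mid = x_at_y(rho, theta, h / 2)
# the in-image angle of the line, measured from vertical, is theta
```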
HjorturG, Sat, 24 Jun 2017 14:33:45 -0500, http://answers.opencv.org/question/162151/

Can Haar or Cascade classifiers be accurate enough in detecting object size?
http://answers.opencv.org/question/119635/can-haar-or-cascade-classifiers-be-accurate-enough-in-detecting-object-size/
I'm trying to detect a ping pong ball using my own trained Haar classifier (not a very good one) and then calculate the ball's distance from the camera. I calibrated the camera and used those parameters along with the known ball dimensions in the real world and the ball size in the picture, and the formula works fine when I detect the ball just right. The problem is that the Haar classifier sometimes detects the ball slightly smaller, and sometimes slightly bigger, than it is in the picture, so I get wrong distance values. Like here:
![image description](/upfiles/14821000177320978.jpg)
My question is: can Haar or Cascade classifiers be used for this purpose, or can they only detect that an object is present, without its exact size?
Would a classifier trained on a larger set of images be more accurate here? (I am currently using a Haar classifier trained on 730 positive and 1870 negative images over 12 stages.)
ajs, Sun, 18 Dec 2016 16:37:31 -0600, http://answers.opencv.org/question/119635/

Is there any way to use a custom distance with FLANN?
http://answers.opencv.org/question/127560/is-there-any-way-to-use-a-custom-distance-with-flann/
Hey guys, so I see the GenericIndex class is templated. FLANN originally, however, is a bit more flexible than that: it allows defining the distance function. Is that possible with OpenCV as well? (In other words, in the code below, I would like Knn_Distance to be my own class definition.)
```cpp
this->p_index = std::make_shared< cv::flann::GenericIndex<Knn_Distance> >(
    train_vectors,
    cvflann::KDTreeIndexParams(this->params.num_trees),
    Knn_Distance()
);
```
Thanks!
juanmanpr, Sun, 12 Feb 2017 15:20:19 -0600, http://answers.opencv.org/question/127560/

Finding nearest non-zero pixel
http://answers.opencv.org/question/125174/finding-nearest-non-zero-pixel/
I've got a binary image `noObjectMask` (`CV_8UC1`) and a given point `objectCenter` (`cv::Point`). If the `objectCenter` is a zero-value pixel, I need to find the nearest non-zero pixel starting from the given point.
The number of non-zero points in the whole image can be large (even up to 50%), so calculating distances for each point returned from `cv::findNonZero` seems to be non-optimal. As the highest probability is that the pixel will be in the close neighborhood, I currently use:
```python
# my prototype script in Python, but the final version will be implemented in C++
if noObjectMask[objectCenter[1], objectCenter[0]] == 0:
    # if the objectCenter is a zero-value pixel, take its neighborhood ROIs
    # sequentially, increasing their size (r), until the ROI contains at
    # least one non-zero pixel
    for r in range(noObjectMask.shape[1] // 2):  # integer division for Python 3
        rectL = objectCenter[1] - r - 1
        rectR = objectCenter[1] + r
        rectT = objectCenter[0] - r - 1
        rectB = objectCenter[0] + r
        # Pythonic way of taking a ROI: noObjectMask(cv::Rect(...))
        rect = noObjectMask[rectL:rectR, rectT:rectB]
        if cv2.countNonZero(rect) > 0:
            break
    nonZeroNeighbours = cv2.findNonZero(rect)
    # calculate the distances between objectCenter and each of
    # nonZeroNeighbours, and choose the closest one
```
This works okay, as in my images the non-zero pixels are typically in the closest neighborhood (`r`<=10px), but the processing time increases dramatically with the distance of the closest pixel. Each repetition of `countNonZero` repeats counting of the previous pixels. This could be improved by incrementing the radius `r` by more than one, but this still looks a bit clumsy to me.
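One incremental improvement (an editorial sketch, not from the question): scan only the one-pixel ring that is new at each radius instead of re-counting the whole ROI. For a vectorized one-shot answer, cv2.distanceTransformWithLabels with cv2.DIST_LABEL_PIXEL on the inverted mask gives, for every pixel, the label of its nearest zero pixel of the inverted image, i.e. the nearest non-zero pixel of the original. Pure-Python ring sketch (the ring is square, so like the ROI approach it is only approximately Euclidean-nearest at ring boundaries):

```python
def nearest_nonzero(mask, cx, cy):
    """Spiral outward from (cx, cy), scanning only the new ring each step.
    mask is a list of rows; mask[y][x] is truthy for non-zero pixels."""
    h, w = len(mask), len(mask[0])
    if mask[cy][cx]:
        return (cx, cy)
    for r in range(1, max(h, w)):
        ring = []
        for dx in range(-r, r + 1):                    # top and bottom edges
            ring += [(cx + dx, cy - r), (cx + dx, cy + r)]
        for dy in range(-r + 1, r):                    # left and right edges
            ring += [(cx - r, cy + dy), (cx + r, cy + dy)]
        candidates = [(x, y) for x, y in ring
                      if 0 <= x < w and 0 <= y < h and mask[y][x]]
        if candidates:
            # pick the truly closest pixel within this ring
            return min(candidates, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    return None

mask = [[0, 0, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 0]]
found = nearest_nonzero(mask, 1, 1)
```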
How can I improve the procedure? Any ideas? Thanks!
mstankie, Tue, 07 Feb 2017 07:56:46 -0600, http://answers.opencv.org/question/125174/

How to get first row of np.vstack? midpoint of contour - Python
http://answers.opencv.org/question/123575/how-to-get-first-row-of-npvstack-midpoint-of-contour-python/
Hello,
I'm new to OpenCV and Python, and I'm trying to measure the distance between the midpoints of two contours. This is my code so far:
```python
import cv2
import numpy as np
from scipy.spatial import distance as dist

image = cv2.imread('TwoMarkers.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 127, 255, 0)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    # if the contour is too large or too small, ignore it
    if cv2.contourArea(c) > 2000:
        continue
    elif cv2.contourArea(c) < 100:
        continue
    M = cv2.moments(c)
    cX = int(M['m10'] / M['m00'])
    cY = int(M['m01'] / M['m00'])
    contourMidpoint = np.vstack([(cX, cY)])

D = dist.euclidean(contourMidpoint[0], contourMidpoint[1])
print(D)
```
How can I get the distance D between the two found values that are stacked? The approach I tried didn't work, and I have no clue how to separate the values that are stacked. Any help is much appreciated.
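An editorial sketch of a fix: np.vstack([(cX, cY)]) builds a fresh one-row array on every loop pass, so row [1] never exists. Collecting one centroid per kept contour in a list and measuring once after the loop avoids that. The centroids below are made-up stand-ins for the cv2.moments results:

```python
import math

# Collect one (cX, cY) per accepted contour, then measure once afterwards.
midpoints = []
for cX, cY in [(10, 20), (40, 60)]:   # stand-ins for the per-contour centroids
    midpoints.append((cX, cY))

if len(midpoints) == 2:
    (x1, y1), (x2, y2) = midpoints
    D = math.hypot(x2 - x1, y2 - y1)  # Euclidean distance between midpoints
```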
agroms, Mon, 23 Jan 2017 06:19:03 -0600, http://answers.opencv.org/question/123575/

Divide picture into vertical blocks and get pixel by x, y coordinate.
http://answers.opencv.org/question/116858/divide-picture-into-vertical-blocks-and-get-pixel-by-x-y-coordinate/
Hi. I'm new to OpenCV. I'm also learning Android at the same time, so I'm struggling to make an Android application that uses OpenCV.
I want to divide the picture into vertical blocks (groups of MatOfKeyPoints) and get a pixel (by x, y coordinate) to compare with another column's pixel.
![image description](/upfiles/14807985843339901.png)
I have a gray-scaled and cropped image.
I made the image black and white with adaptiveThreshold().
I got a MatOfKeyPoint with FeatureDetector.FAST, shown in the third image (the result of drawKeypoints with the default flag).
![image description](/upfiles/14807986017055522.png)
1. I need to get a representative column by the user's selection, so I want to divide the point groups vertically.
2. Then I want to get the values (the *red* ones) computed by comparison with the representative column's points' x, y coordinates. (The representative column has the values, the *blue* ones, input by the user.)
The code below is the Detection button's OnClickListener. Thank you.
```java
private class findButtonOnClickListener implements View.OnClickListener {
    MatOfKeyPoint matOfKeyPoints;
    FeatureDetector endDetector;

    @Override
    public void onClick(View view) {
        matOfKeyPoints = new MatOfKeyPoint();
        endDetector = FeatureDetector.create(FeatureDetector.FAST);
        endDetector.detect(mat, matOfKeyPoints);
        Scalar color = new Scalar(0, 0, 255); // BGR
        Features2d.drawKeypoints(mat, matOfKeyPoints, mat, color, 0);
        Utils.matToBitmap(mat, bitmap);
        bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
        procImage.setImageBitmap(bitmap);
        procImage.invalidate();
        bitmap = bitmapOrigin.copy(bitmapOrigin.getConfig(), true);
        mat = matOrigin.clone();
    }
}
```
NOOPIE, Sat, 03 Dec 2016 15:18:44 -0600, http://answers.opencv.org/question/116858/

Calculate length between 2 points in c++
http://answers.opencv.org/question/103665/calculate-length-between-2-points-in-c/
I need to calculate the length of lines (the number of pixels along the path) from a point to a terminal point. A terminal point is a point where a line either ends or where more than one line starts. For example, in the image below I need to calculate the lengths A->B, A->C and A->D.
I have attempted it, but it goes into an endless loop and I do not understand why. I am new to working with images.
To do this I want to implement logic in which I select the 8 neighbouring pixels of a point, and if one is set it should do a depth-first search until a terminal point and store that point in a hashmap.
Please give me any suggestions; I am stuck on this. I can also share the original image if that is required. The image below is only to explain the problem.
![image description](/upfiles/14756541229443297.png)
```cpp
#include "stdafx.h"
#include <stdio.h>
#include <iostream>
#include <fstream>
#include <unordered_set>
#include <unordered_map>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include <opencv2/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <opencv2/legacy/legacy.hpp>

using namespace cv;
using namespace std;

#define ERR(msg) printf("%s : %d", (msg), __LINE__)

struct point_hash {
    inline std::size_t operator()(const cv::Point & v) const {
        return v.x * 61 + v.y;
    }
};

unordered_multimap<Point, int, point_hash> lengths;

bool is_terminal(Mat &image, Point grid, unordered_set<Point, point_hash>& visited) {
    cout << "This point is 1:Terminal:" << endl;
    int count = 0;
    for (int i = grid.x - 1; i < grid.x + 2; i++) {
        for (int j = grid.y - 1; j < grid.y + 2; j++) {
            // uchar, not char: char may be signed, making 255 compare as negative
            if ((int)image.at<uchar>(j, i) > 0 && visited.find(Point(i, j)) == visited.end()) {
                count++;
            }
        }
    }
    if (count > 1) {
        return true;
    } else
        return false;
}

int check_length(Mat &image, Point kp, Point start, int length, unordered_set<Point, point_hash> visited) {
    cout << "Get Neighbouring points" << kp.x << " " << kp.y << endl;
    visited.insert({ kp });
    if (start != kp && is_terminal(image, kp, visited)) {
        lengths.insert({ start, length });
        return 0;
    }
    else {
        if ((int)image.at<uchar>(kp.y - 1, kp.x - 1) > 0 && visited.find(Point(kp.x - 1, kp.y - 1)) == visited.end()) {
            check_length(image, Point(kp.x - 1, kp.y - 1), start, length + 1, visited);
        }
        if ((int)image.at<uchar>(kp.y, kp.x - 1) > 0 && visited.find(Point(kp.x - 1, kp.y)) == visited.end()) {
            check_length(image, Point(kp.x - 1, kp.y), start, length + 1, visited);
        }
        if ((int)image.at<uchar>(kp.y + 1, kp.x - 1) > 0 && visited.find(Point(kp.x - 1, kp.y + 1)) == visited.end()) {
            check_length(image, Point(kp.x - 1, kp.y + 1), start, length + 1, visited);
        }
        if ((int)image.at<uchar>(kp.y - 1, kp.x) > 0 && visited.find(Point(kp.x, kp.y - 1)) == visited.end()) {
            check_length(image, Point(kp.x, kp.y - 1), start, length + 1, visited);
        }
        if ((int)image.at<uchar>(kp.y + 1, kp.x) > 0 && visited.find(Point(kp.x, kp.y + 1)) == visited.end()) {
            check_length(image, Point(kp.x, kp.y + 1), start, length + 1, visited);
        }
        if ((int)image.at<uchar>(kp.y - 1, kp.x + 1) > 0 && visited.find(Point(kp.x + 1, kp.y - 1)) == visited.end()) {
            check_length(image, Point(kp.x + 1, kp.y - 1), start, length + 1, visited);
        }
        if ((int)image.at<uchar>(kp.y, kp.x + 1) > 0 && visited.find(Point(kp.x + 1, kp.y)) == visited.end()) {
            check_length(image, Point(kp.x + 1, kp.y), start, length + 1, visited);
        }
        if ((int)image.at<uchar>(kp.y + 1, kp.x + 1) > 0 && visited.find(Point(kp.x + 1, kp.y + 1)) == visited.end()) {
            check_length(image, Point(kp.x + 1, kp.y + 1), start, length + 1, visited);
        }
    }
    return 0;
}

int main() {
    Mat img1 = imread("image1.jpg", 0);
    vector<Point> keypoints;
    fstream myfile("row1.txt");  // These files give me my interest points like A,
    fstream myfile1("col1.txt"); // not necessarily B, C, D.
    int a, b;
    while (myfile >> a && myfile1 >> b)
    {
        keypoints.push_back(Point(b, a));
    }
    unordered_set<Point, point_hash> visited;
    for (int i = 0; i < keypoints.size(); i++) {
        int length = check_length(img1, keypoints[i], keypoints[i], 0, visited);
        visited.clear();
    }
    imshow("image 1", img1);
    waitKey();
    return 0;
}
```
vivekkh, Wed, 05 Oct 2016 03:18:19 -0500, http://answers.opencv.org/question/103665/

How can I find the max distance in multiple points
http://answers.opencv.org/question/99729/how-can-i-find-the-max-distance-in-multiple-points/
How can I find the maximum distance among multiple points?
The result should include the two points and the distance.
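There is no single OpenCV call for this; an editorial sketch of the straightforward O(n²) answer follows (for large point sets one would first take cv2.convexHull, since the farthest pair always lies on the hull). The example points are arbitrary:

```python
import itertools
import math

# Brute-force diameter of a point set: check every pair.
def farthest_pair(points):
    best = max(itertools.combinations(points, 2),
               key=lambda pq: math.dist(pq[0], pq[1]))
    return best[0], best[1], math.dist(best[0], best[1])

p, q, d = farthest_pair([(0, 0), (1, 1), (3, 4), (1, 0)])
```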
Is there a function to do this in OpenCV, or do I have to implement one on my own?
canius, Mon, 08 Aug 2016 00:39:59 -0500, http://answers.opencv.org/question/99729/

What type of distance measurement is used in CreateLBPHffacerecognizer
http://answers.opencv.org/question/87057/what-type-of-distance-measurement-is-used-in-createlbphffacerecognizer/
Hello there,
I am trying to find out what type of distance measurement is used in the predict() function after LBPH training.
Raafat Salih, Mon, 08 Feb 2016 13:04:43 -0600, http://answers.opencv.org/question/87057/

How to interpret the distances in matches of descriptors
http://answers.opencv.org/question/86709/how-to-interpret-the-distances-in-matches-of-descriptors/
When doing descriptor matching, the `DMatch`es have a distance. I would like to know if there is a logic behind it, and what it is. I am doing multiple matches on different groups of descriptors, and maybe the best match is not entirely correct, so I would like to filter on distances; this will be easier if I know the logic behind them, without having to dig through the code, if possible :p
Thanks
thdrksdfthmn, Fri, 05 Feb 2016 03:32:17 -0600, http://answers.opencv.org/question/86709/

calculate the distance (pixel) between the two edges ( lines) ?
http://answers.opencv.org/question/74400/calculate-the-distance-pixel-between-the-two-edges-lines/
Good day!
I have a question. I get the picture from the camera using OpenCV, and using Canny on an ROI I get the following result (picture). Is it possible to calculate the distance (in pixels) between the two edges (lines)?
I will be very grateful for the help
![image description](http://pix.sevelina.ru/images/2015/10/27/341.jpg)
MValeriy, Tue, 27 Oct 2015 02:24:19 -0500, http://answers.opencv.org/question/74400/

Best color difference or distance approximation?
http://answers.opencv.org/question/65946/best-color-difference-or-distance-approximation/
Currently, a standard way of comparing colors is the "Delta E" metric in CIELab [[Color-difference](https://en.wikipedia.org/wiki/Color_difference)], which is based on Euclidean distance in the CIELab color space.
However, for applications that use the distance metric intensively, the "Delta E" metric can be a bit slow (e.g. the RGB-to-Lab conversion is necessary, floating-point operations can be costly, etc.).
Is there a "good enough approximation" of color difference or distance?
Ex.
* Weighted Manhattan distance (L1 distance) (in RGB) (as suggested [here](http://stackoverflow.com/questions/9018016/how-to-compare-two-colors))
* Hue Manhattan distance (L1 distance) (in HSV) (as suggested [here](http://stackoverflow.com/questions/9018016/how-to-compare-two-colors))
* Any other suggestions?
mkc, Thu, 09 Jul 2015 20:45:57 -0500, http://answers.opencv.org/question/65946/

Reading sensor value from arduino and use it in opencv
http://answers.opencv.org/question/64102/reading-sensor-value-from-arduino-and-use-it-in-opencv/
Hi all. I am doing a project where I need to find the distance between a camera and objects of various shapes. I tried applying the method I found [here](http://www.pyimagesearch.com/2015/01/19/find-distance-camera-objectmarker-using-python-opencv/), but it requires my object to have a fixed orientation and width. So I am thinking of using an ultrasonic sensor to do the distance finding and communicate it serially to OpenCV via an Arduino, but the problem is that, as I am new to OpenCV, I couldn't find any leads on how to read data serially in OpenCV. I would be extremely thankful if someone could help me; I need it urgently!
BossNinja, Sun, 14 Jun 2015 01:28:02 -0500, http://answers.opencv.org/question/64102/

Calculating distance to an unknown object with single camera
http://answers.opencv.org/question/62788/calculating-distance-to-an-unknown-object-with-single-camera/
Hi all,
The title says it all, I have a camera mounted on a moving boat and I want to know the distance of the detected targets. Here is the scenario:
- I have the GPS information.
- I do know my velocity and direction.
- I have a single camera and an algorithm to detect the targets.
- I am moving, changing my location all the time; however, the camera is fixed to the boat.
- Targets are moving, they are mostly boats sailing around.
- My camera is calibrated already.
- Edit: Targets have different sizes. They could be kayaks, boats, huge container ships, sailing boats, etc.
Given these, is there a way to find how far the detected target is from me?
Any help is appreciated.
Thanks
frageDE, Thu, 28 May 2015 06:46:47 -0500, http://answers.opencv.org/question/62788/

calculate distance between two objects in a image using single camera
http://answers.opencv.org/question/55622/calculate-distance-between-two-objects-in-a-image-using-single-camera/
I am calculating the distance between two balls in an image.
First I detected the balls using the Hough circle transform, got their center-point coordinates, and applied the distance formula, but I am not getting anywhere near the solution.
Say the two balls are 13 cm apart; then I get 5.6 cm...
Jasdeep, Thu, 19 Feb 2015 01:48:38 -0600, http://answers.opencv.org/question/55622/

3D coordinates of two points in room and angle between them
http://answers.opencv.org/question/58641/3d-coordinates-of-two-points-in-room-and-angle-between-them/
I have a room with 4 cameras. Each camera is in an upper corner of this room. (Non-parallel stereo!)
I will have different points which can be detected with these cameras. Let's assume there are only two.
Now my question is as follows:
How can I calibrate all **4 cameras** so I can determine the real-world coordinates of each point?
After I find out where the points are, (X1/Y1/Z1) and (X2/Y2/Z2), I will be able to determine the angle between them, but that is not the problem part.
My assumption is like this:
- I thought to calibrate each pair of cams (i.e. 1 <-> 2 <-> 3 <-> 4 <-> 1)
- with stereoCalibrate I would get rvec, tvec and then the fundamental matrix
- I thought to simply use the chessboard which would be seen with every camera at the same time
- after getting rvec, tvec and F, I could reproject a 2D point from every image (cam-1, cam-2 and so on) and verify whether every single reprojection is the same in 3D
It would be very helpful for me to know whether my idea for solving the problem is correct, and which aspects could be done more easily or more correctly.
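An editorial sketch of the reprojection/verification step: once two projection matrices are known, a 3D point can be recovered by linear (DLT) triangulation from its two pixel observations, which is what cv2.triangulatePoints implements. Tiny self-check with synthetic cameras (all values below are made up):

```python
import numpy as np

# Linear (DLT) triangulation from two projection matrices and a pixel
# match in each view.  Exact for noise-free synthetic data.

def triangulate(P1, P2, uv1, uv2):
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space solution, homogeneous
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # second camera, shifted
X_true = np.array([0.5, 0.3, 2.0])                          # known 3D point

X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```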
P.S.: I have done homography for just 2D with Z=0, but this problem is more complex for me.
x4k3p, Sun, 29 Mar 2015 15:46:59 -0500, http://answers.opencv.org/question/58641/