OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation (http://www.opencv.org), 2012-2018.

Perspective transformation - Deriving formula for single camera
http://answers.opencv.org/question/221482/perspective-transformation-deriving-formula-for-single-camera/

I would like to solve an equation system and from it derive a formula that takes two inputs and gives one output:
**Input:**
- (u,v) - Pixel coordinates
- (t) - Translation of the camera with respect to the plane, in one dimension (z)
**Output:**
- (x,y) - World coordinates
The rotation of the camera is fixed, so the only quantity that varies is the height of the camera with respect to the plane.
I've already solved the equation system for the case where the camera has both fixed rotation and fixed height, as described here: https://dsp.stackexchange.com/a/46591/46122
Now I want to express a formula that takes one additional parameter (the height, in [mm]), but I'm not sure how the equation system described in that answer would look with this extra parameter.
My goal is to have a camera mounted on a linear rail (which moves in the z-direction, perpendicular to the plane) that can detect objects on the plane. To help with this, I have a laser sensor that constantly measures the height from the plane to the camera, which can be given as an input to the transform.
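Not part of the original question, but the desired mapping can be sketched by back-projecting the pixel through the pinhole model and intersecting the ray with the plane, with the laser-measured height t setting the scale. All names and the example intrinsics below are assumptions:

```python
import numpy as np

def pixel_to_world(u, v, t, K, R):
    """Back-project pixel (u, v) onto the plane z = 0.

    t -- camera height above the plane (e.g. in mm, from the laser sensor)
    K -- 3x3 intrinsic matrix, R -- fixed world-to-camera rotation
    Returns world (x, y) on the plane, in the same units as t.
    """
    # Ray direction in world coordinates; the camera center is at (0, 0, t)
    d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Intersect the ray with the plane z = 0: t + s * d_z = 0
    s = -t / d[2]
    return s * d[0], s * d[1]

# Assumed intrinsics; R flips the z-axis so the camera looks straight down
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.diag([1.0, -1.0, -1.0])
print(pixel_to_world(400, 240, 500.0, K, R))  # ~ (50.0, 0.0)
```

With only the height varying, K and R stay fixed and t is simply the laser reading for the current frame.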
Any help is appreciated!

Asked by r.andersson, Mon, 11 Nov 2019 03:32:02 -0600 (http://answers.opencv.org/question/221482/)

How to calculate the actual length of the black portion in the attached image after getting its actual contour
http://answers.opencv.org/question/213692/how-to-calculate-the-actual-length-of-the-black-portion-in-image-attached-after-getting-actual-contour-of-that/

I have the middle portion of a laser line, captured by an angled camera. How can I calculate the actual length of the black portion in the attached image after extracting its contour?
import cv2
import numpy as np
import imutils

def midpoint(ptA, ptB):
    # unused helper kept from the imutils measurement tutorial
    return ((ptA[0] + ptB[0]) * 0.5, (ptA[1] + ptB[1]) * 0.5)

# isolate the red laser line by color range, then clean it up
img = cv2.imread('F:\\Pycode\\ADAP_ANALYZER\\kk.jpg')
lowerb = np.array([0, 0, 120])
upperb = np.array([200, 100, 255])
red_line = cv2.inRange(img, lowerb, upperb)
red_line = cv2.GaussianBlur(red_line, (5, 5), 0)
ret, red_line = cv2.threshold(red_line, 45, 255, cv2.THRESH_BINARY)
red_line = cv2.dilate(red_line, None, iterations=1)
kernel = np.ones((10, 10), np.uint8)
red_line = cv2.erode(red_line, kernel, iterations=1)
cv2.imwrite("F:\\Pycode\\ADAP_ANALYZER\\yy.jpg", red_line)

# keep only connected components larger than min_size pixels
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(red_line, connectivity=8)
sizes = stats[1:, -1]
nb_components = nb_components - 1
min_size = 1800
img2 = np.zeros(output.shape, dtype=np.uint8)  # uint8 so imshow/imwrite behave
for i in range(0, nb_components):
    if sizes[i] >= min_size:
        img2[output == i + 1] = 255
cv2.imwrite("F:\\Pycode\\ADAP_ANALYZER\\xx.jpg", img2)
cv2.imshow('red', img2)
cv2.waitKey(0)

# re-read the filtered mask and extract the largest contour
image = cv2.imread("F:\\Pycode\\ADAP_ANALYZER\\xx.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
ret, thresh = cv2.threshold(gray, 70, 255, cv2.THRESH_BINARY)
thresh = cv2.erode(thresh, None, iterations=1)
thresh = cv2.dilate(thresh, None, iterations=1)
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
c = max(cnts, key=cv2.contourArea)

# dump the contour points to a file (opened once, not once per point)
with open('c:\\your_file.txt', 'a') as f:
    for var in c:
        f.write(str(var) + "\n")
        print(var)

# half the closed-contour perimeter approximates the line's length in pixels
for contour in cnts:
    perimeter = cv2.arcLength(contour, True)
    print(perimeter / 2)

# determine the most extreme points along the contour
extLeft = tuple(c[c[:, :, 0].argmin()][0])
extRight = tuple(c[c[:, :, 0].argmax()][0])
extTop = tuple(c[c[:, :, 1].argmin()][0])
extBot = tuple(c[c[:, :, 1].argmax()][0])
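# (Sketch, not part of the original post.) To get a real length from the
# extreme points, you need a pixel-to-millimeter scale; mm_per_pixel below is
# an assumed calibration factor, e.g. measured from an object of known size.
import math

def blob_length_mm(p_a, p_b, mm_per_pixel):
    return math.hypot(p_b[0] - p_a[0], p_b[1] - p_a[1]) * mm_per_pixel

# e.g. blob_length_mm(extLeft, extRight, 0.25)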
# draw the outline of the object and the extreme points: left-most is red,
# right-most is green, top-most is blue, and bottom-most is teal
cv2.drawContours(image, [c], -1, (0, 255, 255), 2)
cv2.circle(image, extLeft, 6, (0, 0, 255), -1)
cv2.circle(image, extRight, 6, (0, 255, 0), -1)
cv2.circle(image, extTop, 6, (255, 0, 0), -1)
cv2.circle(image, extBot, 6, (255, 255, 0), -1)
cv2.imshow("Image", image)
cv2.waitKey(0)

Asked by Raghunath, Thu, 30 May 2019 00:10:21 -0500 (http://answers.opencv.org/question/213692/)

OpenCV function to transform one image taken from a camera to the image from another camera's viewpoint
http://answers.opencv.org/question/176400/opencv-function-to-transform-one-image-taken-from-a-camera-to-the-image-from-another-cameras-viewpoint/

Is it possible to transform a 2D image from one camera (camera1) into the image from another camera's (camera2, a virtual camera) viewpoint, given that I know both cameras' poses? I looked into techniques such as homography transformation, but they don't seem to help.
Here is the information I have and don't have.
- Known: camera1 pose, camera2 pose (= transformation matrix between the two cameras), camera parameters for both cameras
- Unknown: object pose
If the object's 3D pose in the original image were known, the conversion would be easy. However, in my setting I cannot assume the 3D pose (depth) information is available.
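One escape from the unknown-depth problem (my sketch, not from the thread) is to assume the scene is approximately planar, as in road imagery: then the view change reduces to the plane-induced homography H = K2 (R - t nᵀ / d) K1⁻¹, where n and d are the plane's normal and distance in camera1 coordinates. All names here are assumptions:

```python
import numpy as np

def view_change_homography(K1, K2, R, t, n, d):
    """Plane-induced homography mapping camera1 pixels to camera2 pixels.

    R, t -- pose of camera2 relative to camera1 (known in this setting)
    n, d -- plane normal and distance in camera1 coordinates (assumed)
    """
    H = K2 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K1)
    return H / H[2, 2]

# The warped view would then be obtained with e.g.
# img2 = cv2.warpPerspective(img1, view_change_homography(K1, K2, R, t, n, d), (w, h))
```

For a truly non-planar scene with unknown depth this is only an approximation; an exact re-projection needs depth per pixel.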
I believe there is a way, because it is already used in car navigation (www.mdpi.com/1424-8220/12/4/4431/pdf), but I'm curious about the general way to realize this transformation and how to do this type of image processing in OpenCV.

Asked by kangaroo, Mon, 16 Oct 2017 00:15:02 -0500 (http://answers.opencv.org/question/176400/)

perspective transformation with given camera pose
http://answers.opencv.org/question/72020/perspective-transformation-with-given-camera-pose/

Hi everyone!
I'm trying to create a program that I will use to perform some tests.
In this program a 2D image is displayed in 3D space in the cv::viz window, so the user can change the camera (viewer) position and orientation.
![image description](/upfiles/1443709792833003.jpg)
After that, the program stores the camera pose and takes a snapshot of the current view (without the coordinate axes):
![image description](/upfiles/14437098062513117.jpg)
And here is the goal:
I have the **snapshot** (a perspective view of an undetermined plane, or part of it), the **camera pose** (especially its orientation) and the **camera parameters**. Using these given values I would like to **perform a perspective transformation to compute an orthographic view of the given image** (or its visible part).
I can get the camera object and compute its projection matrix:
camera.computeProjectionMatrix(projectionMatrix);
and then decompose projection matrix:
decomposeProjectionMatrix(subProjMatrix,cameraMatrix, rotMatrix, transVect, rotMatX, rotMatY, rotMatZ);
And what should I do next?
Note that I can't use chessboard corners, because the image is undetermined (it may be any image), and I can't use the corner points of the image either, because the user can zoom and translate the camera, so there is a possibility that no image corner point will be visible...
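One possible next step (my sketch, not a confirmed answer): with K, R and t recovered from decomposeProjectionMatrix, and the displayed image treated as the plane z = 0, the plane-to-image map is the homography H = K·[r1 r2 t] (r1, r2 being the first two columns of R); the orthographic view is the inverse warp. The question's code is C++, but the math sketches the same way in Python:

```python
import numpy as np

def plane_to_image_homography(K, R, t):
    """Homography mapping plane coordinates (x, y) on z = 0 to image pixels.

    R, t -- rotation and translation of the plane in camera coordinates,
    e.g. as recovered from decomposeProjectionMatrix. For a point
    [x, y, 0, 1] the projection K [R|t] X collapses to K [r1 r2 t] [x, y, 1].
    """
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    return H / H[2, 2]

# Orthographic (fronto-parallel) view: warp the snapshot with the inverse map,
# e.g. ortho = cv2.warpPerspective(snapshot, np.linalg.inv(H), (w, h))
```

No chessboard or image corners are needed; only the pose and intrinsics enter H.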
Thanks for any help in advance!

Asked by paws, Thu, 01 Oct 2015 09:41:43 -0500 (http://answers.opencv.org/question/72020/)

camera rotation and translation based on two images
http://answers.opencv.org/question/68023/camera-rotation-and-translation-based-on-two-images/

Hello,
I'm just starting my little project in OpenCV and I need your help :)
I would like to calculate the rotation and translation of the camera based on two views of the same planar, square object.
I have already found functions such as getPerspectiveTransform, decomposeEssentialMat, and decomposeHomographyMat. Plenty of tools, but I'm not sure which of them to use in my case.
I have a square object of known real-world dimensions [meters]. After simple image processing I can extract the pixel coordinates of the vertices and the center of the square.
Now I would like to calculate the relative rotation and translation of the camera between the<br>
"Reference view" and "View #n"<br>
(please see below).
Any suggestions will be appreciated :)
1. Reference view:<br>
![image description](/upfiles/1438854857209.png)
<br>(the center of the object is on the optical axis of the camera, and the camera-object distance is known)
2. View #1:<br>
![image description](/upfiles/14388548769288926.png)
3. View #2:<br>
![image description](/upfiles/14388548834324958.png)
4. View #3:<br>
![image description](/upfiles/1438854889587757.png)
Asked by Alice, Thu, 06 Aug 2015 05:40:19 -0500 (http://answers.opencv.org/question/68023/)

Inverse Perspective Mapping -> When to undistort?
http://answers.opencv.org/question/15526/inverse-perspective-mapping-when-to-undistort/

BACKGROUND:
I have a camera mounted on a car, facing forward, and I want to find the road marks. Hence I'm trying to transform the image into a bird's-eye view, as seen from a virtual camera placed 15 m in front of the real camera and 20 m above the ground. I implemented a prototype that uses OpenCV's warpPerspective function. The perspective transformation matrix is obtained by defining a region of interest on the road and calculating where the four corners of the ROI project in both the front-facing and the bird's-eye cameras. I then pass these two sets of four points to the getPerspectiveTransform function to compute the matrix. This successfully transforms the image into a top view.
QUESTION:
When should I undistort the front-facing camera image? Should I first undistort and then transform, or first transform and then undistort?
If you suggest the first option, what camera matrix should I use to project the points into the bird's-eye camera? Currently I use the same raw camera matrix for both projections.
Please ask for more details if my description is confusing!

Asked by Ashok Elluswamy, Thu, 20 Jun 2013 19:33:57 -0500 (http://answers.opencv.org/question/15526/)