2017-05-20 00:33:04 -0600
| commented answer | How to verify the accuracy of solvePnP return values? How do I get R and t for projectPoints? Do I need to measure them manually? Is there any tool to simulate a 3D model and generate that data? |
2017-05-19 01:34:36 -0600
| asked a question | How to verify the accuracy of solvePnP return values? Hello, I have used solvePnP to find the pose of an object and I am getting results for rvec and tvec. Now I want to know how accurate they are. How do I compute the accuracy of the values returned by solvePnP?
One method I found is re-projection error. But is there any way to generate test cases? Data needed for each test case: (image points, object points) and (expected rvec, expected tvec). Result:
(computed rvec, computed tvec), the values returned by solvePnP for each generated test case. Then compare (expected rvec, expected tvec) against (computed rvec, computed tvec) to measure the accuracy of the different flags available for solvePnP. Are there any ways/software/tools that help me generate accurate test cases? (Varieties of test cases might include noisy environments, varying object-to-camera distance, planar object points, non-coplanar object points, etc.) |
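One way to generate such test cases without measuring anything physically is to pick a ground-truth rvec/tvec yourself, project the object points synthetically, and feed the resulting image points back into solvePnP. Below is a minimal numpy-only sketch of the forward projection (Rodrigues' formula plus a pinhole model, no lens distortion; cv2.projectPoints does the same with distortion support). The camera matrix and pose values in any usage are illustrative, not from the question.

```python
import numpy as np

def rodrigues(rvec):
    # Convert a rotation vector to a 3x3 rotation matrix (Rodrigues' formula).
    rvec = np.asarray(rvec, dtype=float).ravel()
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(object_points, rvec, tvec, camera_matrix):
    # Pinhole projection without distortion: u ~ K (R X + t).
    R = rodrigues(rvec)
    pts_cam = np.asarray(object_points, dtype=float) @ R.T \
        + np.asarray(tvec, dtype=float).ravel()
    pts = pts_cam @ np.asarray(camera_matrix, dtype=float).T
    return pts[:, :2] / pts[:, 2:3]

# Test-case recipe: choose (rvec_gt, tvec_gt), compute
#   image_points = project(object_points, rvec_gt, tvec_gt, K)
# optionally add pixel noise, then run cv2.solvePnP on
# (object_points, image_points) and compare the recovered
# rvec/tvec against the ground truth for each flag.
```

Adding Gaussian noise to the synthetic image points before calling solvePnP lets you sweep the "noisy environment" and distance dimensions you mention.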
2017-05-18 06:29:54 -0600
| received badge | ● Self-Learner
(source)
|
2017-05-18 03:29:55 -0600
| commented question | OpenCV Error: Assertion failed in undistort.cpp at line 293 In the solvePnP code link I mentioned, line 95 calls undistortPoints(ipoints, undistortedPoints, cameraMatrix, distCoeffs); I think the error comes from this call |
2017-05-18 02:12:48 -0600
| asked a question | OpenCV Error: Assertion failed in undistort.cpp at line 293 OpenCV Error: Assertion failed (CV_IS_MAT(_src) && CV_IS_MAT(_dst) && (_src->rows == 1 || _src->cols == 1) && (_dst->rows == 1 || _dst->cols == 1) && _src->cols + _src->rows - 1 == _dst->rows + _dst->cols - 1 && (CV_MAT_TYPE(_src->type) == CV_32FC2 || CV_MAT_TYPE(_src->type) == CV_64FC2) && (CV_MAT_TYPE(_dst->type) == CV_32FC2 || CV_MAT_TYPE(_dst->type) == CV_64FC2)) in cvUndistortPoints, file /home/javvaji/opencv-3.2.0/modules/imgproc/src/undistort.cpp, line 293
The call:
retval, rvec, tvec = cv2.solvePnP(cam.object_points, cam.image_points, cam.camera_matrix, cam.dist_coefficients, None, None, False, cv2.SOLVEPNP_P3P)
Parameter values: (all are numpy arrays) Image points:
[[ 433. 50.]
[ 512. 109.]
[ 425. 109.]
[ 362. 106.]]
Object points:
[[ 0. 0. 0. ]
[ 6.5 0. 0. ]
[ 0. 0. 6.5]
[ 0. 6.5 0. ]]
Cam mat:
[[ 811.13275146 0. 322.47589111]
[ 0. 811.27490234 225.78684998]
[ 0. 0. 1. ]]
Dist Coeffs:
[[-0.07649349 -0.02313312 -0.00911118 -0.0049251 0.74063563]]
I am using the solvePnP function with the SOLVEPNP_P3P flag, and it raises this assertion error. The same solvePnP code works fine with the SOLVEPNP_ITERATIVE flag. With the P3P flag it internally calls the undistortPoints function, which is where the error comes from. solvePnP code ref: https://github.com/opencv/opencv/blob... How do I resolve this? |
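That assertion in cvUndistortPoints requires the point array to be a 1xN or Nx1 two-channel matrix, and with the Python bindings the usual cause is the array shape: SOLVEPNP_ITERATIVE tolerates an (N, 2) array, but the P3P path does not. A sketch of the reshape fix, using the point values from the question (the cv2 call is left commented since it needs your camera data):

```python
import numpy as np

# P3P routes through undistortPoints, which asserts a 1xN / Nx1 two-channel
# matrix; in Python that means float32 arrays shaped (N, 1, 2) and (N, 1, 3).
image_points = np.array([[433.0, 50.0],
                         [512.0, 109.0],
                         [425.0, 109.0],
                         [362.0, 106.0]], dtype=np.float32)
object_points = np.array([[0.0, 0.0, 0.0],
                          [6.5, 0.0, 0.0],
                          [0.0, 0.0, 6.5],
                          [0.0, 6.5, 0.0]], dtype=np.float32)

image_points = image_points.reshape(-1, 1, 2)    # (4, 1, 2)
object_points = object_points.reshape(-1, 1, 3)  # (4, 1, 3)

# retval, rvec, tvec = cv2.solvePnP(object_points, image_points,
#                                   camera_matrix, dist_coeffs,
#                                   flags=cv2.SOLVEPNP_P3P)
```

Note also that SOLVEPNP_P3P expects exactly four correspondences, which the data above satisfies.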
2017-05-18 00:48:28 -0600
| asked a question | What are requirements of different algorithms available for solvePnP in opencv? Hello, I am trying to get the pose of a known-geometry object using OpenCV 3.2.0's solvePnP, and I am trying to track the object in real time. While searching I found that solvePnP has several underlying algorithms, selected via a flag parameter, but I could not find the requirements of each algorithm or guidance on when to use which. Flags available for solvePnP: SOLVEPNP_ITERATIVE, SOLVEPNP_P3P, SOLVEPNP_EPNP, SOLVEPNP_DLS (in OpenCV 3), SOLVEPNP_UPNP (in OpenCV 3). What is the minimum number of points required by each algorithm? Can the points be coplanar, or must they be non-coplanar? What are the time complexity and accuracy of each algorithm? |
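As a rough summary, the per-flag requirements as commonly documented for OpenCV 3.x can be encoded as data; treat the numbers and notes below as assumptions to verify against the calib3d documentation of your installed version, not as authoritative:

```python
# Hedged summary of solvePnP flag requirements (OpenCV 3.x era; verify
# against your installed version's calib3d documentation).
PNP_FLAG_REQUIREMENTS = {
    "SOLVEPNP_ITERATIVE": {"min_points": 4, "exact_count": None},  # default; Levenberg-Marquardt refinement
    "SOLVEPNP_P3P":       {"min_points": 4, "exact_count": 4},     # 3 points + 1 to disambiguate solutions
    "SOLVEPNP_EPNP":      {"min_points": 4, "exact_count": None},  # O(n); scales well to many points
    "SOLVEPNP_DLS":       {"min_points": 4, "exact_count": None},
    "SOLVEPNP_UPNP":      {"min_points": 4, "exact_count": None},  # also estimates focal length
}

def usable_flags(n_points):
    """Return the flag names that accept a correspondence set of this size."""
    return [name for name, req in PNP_FLAG_REQUIREMENTS.items()
            if n_points >= req["min_points"]
            and (req["exact_count"] is None or n_points == req["exact_count"])]
```

For example, with 10 correspondences P3P drops out (it wants exactly 4) while the others remain candidates; accuracy under noise is best measured empirically with synthetic test cases per flag.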
2017-05-17 07:15:07 -0600
| asked a question | VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV I am getting a "pixel format not supported" error for an external webcam. I am able to open the external webcam, but with the errors below.
Error:
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Unable to stop the stream: Device or resource busy
GStreamer Plugin: Embedded video playback halted; module v4l2src4 reported: Cannot identify device '/dev/video0'.
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in cvCaptureFromCAM_GStreamer, file /home/djkp/opencv-3.2.0/modules/videoio/src/cap_gstreamer.cpp, line 832
VIDEOIO(cvCreateCapture_GStreamer(CV_CAP_GSTREAMER_V4L2, reinterpret_cast<char *>(index))): raised OpenCV exception: /home/djkp/opencv-3.2.0/modules/videoio/src/cap_gstreamer.cpp:832: error: (-2) GStreamer: unable to start pipeline in function cvCaptureFromCAM_GStreamer
Corrupt JPEG data: 3 extraneous bytes before marker 0xd0
Output of the command v4l2-ctl -d /dev/video1 --list-formats:
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'YUYV'
Name : YUYV 4:2:2
Index : 1
Type : Video Capture
Pixel Format: 'MJPG' (compressed)
Name : Motion-JPEG
General configuration for OpenCV 3.2.0 ===================================== Version control: unknown
Extra modules:
Location (extra): /home/djkp/opencv_contrib-3.2.0/modules
Version control (extra): unknown
Platform:
Timestamp: 2017-05-12T13:10:21Z
Host: Linux 4.4.0-75-generic x86_64
CMake: 3.5.1
CMake generator: Unix Makefiles
CMake build tool: /usr/bin/make
Configuration: RELEASE
C/C++:
Built as dynamic libs?: YES
C++ Compiler: /usr/bin/c++ (ver 5.4.0)
C++ flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
C++ flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
C Compiler: /usr/bin/cc
C flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wno-narrowing -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
C flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wno-narrowing -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
Linker flags (Release):
Linker flags (Debug):
ccache: NO
Precompiled headers: YES
Extra dependencies: /usr/lib/x86_64-linux-gnu/libwebp.so /usr/lib/x86_64-linux-gnu/libjasper.so /usr/lib/x86_64-linux-gnu/libImath.so /usr/lib/x86_64-linux-gnu/libIlmImf.so /usr/lib/x86_64-linux-gnu/libIex.so /usr/lib/x86_64-linux-gnu/libHalf.so /usr/lib/x86_64-linux-gnu/libIlmThread.so gtk-3 gdk-3 pangocairo-1.0 pango-1.0 atk-1.0 cairo-gobject cairo gdk_pixbuf-2.0 gio-2.0 gstvideo-0.10 gstapp-0.10 gstbase-0.10 gstriff-0.10 gstpbutils-0.10 gstreamer-0.10 gobject-2.0 gmodule-2.0 gthread-2.0 glib-2.0 xml2 dc1394 /usr/lib/x86_64-linux-gnu ... (more) |
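Since v4l2-ctl reports the device supports YUYV and MJPG, one common workaround is to request one of those formats explicitly via the capture's FOURCC property before reading frames. The sketch below includes a pure-Python equivalent of cv2.VideoWriter_fourcc for illustration; the device index 1 is an assumption for the external webcam, and the cv2 lines are commented because they need the actual hardware.

```python
# Pure-Python equivalent of cv2.VideoWriter_fourcc: pack four characters
# into a little-endian 32-bit integer code.
def fourcc(code):
    assert len(code) == 4
    return sum(ord(c) << (8 * i) for i, c in enumerate(code))

# Sketch of forcing the capture pixel format (hypothetical device index 1;
# v4l2-ctl listed YUYV and MJPG as the formats this camera supports):
# import cv2
# cap = cv2.VideoCapture(1)
# cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
# ok, frame = cap.read()
# if not ok:
#     # Fall back to the uncompressed YUYV format the driver also lists.
#     cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YUYV"))
#     ok, frame = cap.read()
# cap.release()
```

If setting the FOURCC does not help, rebuilding OpenCV with a GStreamer 1.x backend (the build dump above shows gstreamer-0.10) is another avenue worth checking.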
2017-05-11 00:24:05 -0600
| received badge | ● Enthusiast
|
2017-05-10 07:11:20 -0600
| asked a question | How to visualize 3d Points data stream of object coordinates in Real time python I am getting 3D points of an object in the real world with the triangulatePoints function in OpenCV. To debug and verify the accuracy of the points I need a debugging procedure. I get a new point every 20 milliseconds, and I need to plot the points and visualize them in real time. How do I achieve this? I am developing in Python. |
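A simple pattern for this is a bounded rolling buffer that the triangulation loop pushes into, with a plotting loop that redraws from it; matplotlib's interactive mode is one option for the drawing side. The sketch below shows the buffer (testable on its own) with the matplotlib loop in comments, since it needs a display; names like `stream` and `triangulated_point` are placeholders.

```python
import collections
import numpy as np

class PointStream:
    """Keep the most recent 3-D points for plotting (one point every ~20 ms)."""

    def __init__(self, maxlen=500):
        # A bounded deque discards the oldest point automatically.
        self._buf = collections.deque(maxlen=maxlen)

    def push(self, xyz):
        self._buf.append(tuple(xyz))

    def arrays(self):
        # Return x, y, z columns ready for a scatter/line update.
        a = np.array(self._buf, dtype=float).reshape(-1, 3)
        return a[:, 0], a[:, 1], a[:, 2]

# Plotting loop sketch (matplotlib interactive mode; placeholder names):
# import matplotlib.pyplot as plt
# from mpl_toolkits.mplot3d import Axes3D  # noqa: F401
# plt.ion()
# fig = plt.figure()
# ax = fig.add_subplot(111, projection="3d")
# stream = PointStream()
# while grabbing_frames:
#     stream.push(triangulated_point)   # from cv2.triangulatePoints
#     ax.cla()
#     ax.scatter(*stream.arrays(), s=4)
#     plt.pause(0.001)                  # yields to the GUI event loop
```

At a 20 ms point rate, redrawing every frame may lag; redrawing every N points or using a faster library such as pyqtgraph are common alternatives.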
2017-05-09 10:54:58 -0600
| commented question | OpenCV Error: Unsupported format or combination of formats (Input parameters must be matrices) in cvTriangulatePoints Even if I pass np.float32([[x, y]]) as the parameter to triangulatePoints, it still gives the same error @berak |
2017-05-09 10:38:31 -0600
| commented question | OpenCV Error: Unsupported format or combination of formats (Input parameters must be matrices) in cvTriangulatePoints |
2017-05-09 10:33:25 -0600
| commented question | OpenCV Error: Unsupported format or combination of formats (Input parameters must be matrices) in cvTriangulatePoints Currently it is xy = [x, y], passed as np.float32(xy). I also changed it to xy = np.float32([x, y]) and passed np.float32(xy); the error still exists. |
2017-05-09 10:06:38 -0600
| commented question | OpenCV Error: Unsupported format or combination of formats (Input parameters must be matrices) in cvTriangulatePoints @berak while passing to triangulatePoints I am passing np.float32(xy1) and np.float32(xy2), and the 3x4 matrix is filled using solvePnP. |
2017-05-09 09:06:04 -0600
| asked a question | OpenCV Error: Unsupported format or combination of formats (Input parameters must be matrices) in cvTriangulatePoints I am new to OpenCV. While using cv2.triangulatePoints I am getting the error mentioned below. Error: OpenCV Error: Unsupported format or combination of formats (Input parameters must be matrices) in cvTriangulatePoints, file /io/opencv/modules/calib3d/src/triangulate.cpp, line 64. cv2.__version__ is '3.2.0'. The call: triangulationOutput = cv2.triangulatePoints(projection_matrix_cam1, projection_matrix_cam2, np.float32(xy1), np.float32(xy2))
Declaration of params: projection_matrix_cam1 = np.array([], dtype=np.float32).reshape(0,3,4) # similarly projection_matrix_cam2
xy1 = [x,y] # similarly xy2; x, y are integers
Type of each of the four params: <type 'numpy.ndarray'>
How do I resolve this? |
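Two shape problems stand out in the snippet above: `np.array([], dtype=np.float32).reshape(0,3,4)` is an empty array, not a 3x4 projection matrix, and cv2.triangulatePoints wants 3x4 float projection matrices plus 2xN point arrays rather than flat `[x, y]` lists. A sketch of building correctly shaped inputs (the intrinsics reuse the camera matrix posted earlier; the identity pose for camera 1 is an illustrative assumption, and the cv2 call is commented since it needs both cameras):

```python
import numpy as np

K = np.array([[811.13, 0.0, 322.48],
              [0.0, 811.27, 225.79],
              [0.0, 0.0, 1.0]])                 # intrinsics (example values)

# P = K [R | t] for each camera; R, t come from solvePnP
# (convert rvec to R with cv2.Rodrigues).
Rt1 = np.hstack([np.eye(3), np.zeros((3, 1))])  # camera 1 at the origin (assumption)
P1 = (K @ Rt1).astype(np.float32)               # shape (3, 4)

# Point arrays must be 2xN (one column per correspondence), float.
xy1 = np.array([[433.0],
                [50.0]], dtype=np.float32)      # shape (2, 1): one point

# points4d = cv2.triangulatePoints(P1, P2, xy1, xy2)   # (4, N) homogeneous
# points3d = (points4d[:3] / points4d[3]).T            # dehomogenize to (N, 3)
```

Note that triangulatePoints returns homogeneous coordinates, so the division by the fourth row is required before using the 3D points.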
2016-12-19 10:32:06 -0600
| received badge | ● Editor
|
2016-12-19 10:24:11 -0600
| asked a question | What algorithms suit best for image detection of known image databases when viewing them from a camera in real time? Hi everyone! I'm a total newbie at OpenCV and computer vision. I am trying to implement an SDK for augmented reality. Consider the example of an augmented-reality book.
I know which book is used for this particular AR application, so I can build images from the pages of the book (each page as one image, and some parts of a page as individual images). Using OpenCV algorithms with this image data set, I need to detect those images when viewed from the camera in real time, and then track a detected image for as long as it stays in the camera's field of view, from any angle. Could anyone please suggest algorithms/procedures that suit this problem? Should I consider the size of the image data set (number of images) when choosing an algorithm? |
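For a known image database like book pages, a standard approach is local-feature matching: extract descriptors (e.g. ORB) from each database image offline, match them against each camera frame, filter with Lowe's ratio test, and confirm with a homography. The sketch below implements the ratio-test step in plain Python and exercises it with stand-in match objects; the surrounding cv2 pipeline is shown in comments, with names like `page_image` and `frame` being placeholders.

```python
import collections

def ratio_test(knn_matches, ratio=0.75):
    # Lowe's ratio test: keep a best match only when its distance is clearly
    # smaller than the runner-up's. Works on cv2 DMatch pairs or anything
    # exposing a .distance attribute.
    good = []
    for pair in knn_matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good

# Stand-in match objects so the helper can be exercised without cv2:
_Match = collections.namedtuple("_Match", "distance")
kept = ratio_test([(_Match(10.0), _Match(40.0)),   # unambiguous -> kept
                   (_Match(30.0), _Match(32.0))])  # ambiguous   -> dropped

# cv2 pipeline sketch (descriptors per database image computed offline,
# matching done per camera frame; placeholder variable names):
# orb = cv2.ORB_create(nfeatures=1000)
# kp_db, des_db = orb.detectAndCompute(page_image, None)
# kp_fr, des_fr = orb.detectAndCompute(frame, None)
# matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
# good = ratio_test(matcher.knnMatch(des_fr, des_db, k=2))
# # enough good matches -> cv2.findHomography with RANSAC to localise
# # the page in the frame, then track frame-to-frame.
```

Database size matters mainly for the matching stage: with many pages, a FLANN-based index (e.g. LSH for binary descriptors) or a bag-of-words prefilter scales better than brute-force matching against every page.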