2019-06-28 08:36:24 -0600 | received badge | ● Famous Question (source) |
2017-06-25 13:49:15 -0600 | received badge | ● Famous Question (source) |
2017-02-23 13:07:52 -0600 | received badge | ● Notable Question (source) |
2016-04-17 13:50:39 -0600 | received badge | ● Taxonomist |
2016-03-11 21:17:50 -0600 | received badge | ● Notable Question (source) |
2015-12-11 16:11:55 -0600 | received badge | ● Famous Question (source) |
2015-10-22 06:52:55 -0600 | received badge | ● Popular Question (source) |
2015-10-09 22:05:21 -0600 | received badge | ● Popular Question (source) |
2015-03-19 02:22:57 -0600 | received badge | ● Famous Question (source) |
2014-12-09 13:50:21 -0600 | marked best answer | Wrong matrix from getAffineTransform I need to use an affine transformation in order to find the matrix that describes the affine transform between two images (from which I took 3 matching points), and afterwards multiply it by some coordinates (representing points in the first frame) in order to find the corresponding points in the second. I'm trying to experiment with the OpenCV function getAffineTransform using 3 points that stay still. This is my code: I expect the matrix to be 1 0 0 / 0 1 0, but instead I receive (from my PrintMatrix function) huge numbers that don't make sense, such as 2346027296 32673544 32900240 // 2346027296 32673544 32900240. Why? |
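Since the question's code is not shown, the usual cause of such huge values is reading the result with the wrong element type: getAffineTransform returns a 2x3 matrix of type CV_64F, so accessing its elements as int or float reinterprets the raw bytes. A minimal sketch (the point coordinates are made up):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

int main()
{
    // Three identical point pairs: the expected transform is the identity.
    cv::Point2f src[3] = { cv::Point2f(10, 10), cv::Point2f(200, 50), cv::Point2f(50, 250) };
    cv::Point2f dst[3] = { cv::Point2f(10, 10), cv::Point2f(200, 50), cv::Point2f(50, 250) };

    cv::Mat M = cv::getAffineTransform(src, dst);   // 2x3 matrix of type CV_64F

    // Reading the elements with the wrong type (e.g. at<int> or at<float>)
    // reinterprets the raw bytes and prints garbage; use at<double> here.
    for (int r = 0; r < M.rows; ++r) {
        for (int c = 0; c < M.cols; ++c)
            std::cout << M.at<double>(r, c) << " ";
        std::cout << std::endl;
    }
    return 0;
}
```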
2014-12-09 13:47:57 -0600 | marked best answer | Feature matching of segmented images |
2014-12-09 13:07:28 -0600 | marked best answer | FLANN algorithm cannot find any descriptor I'm acquiring some images from a video stream (of a simulated camera in a simulated environment, V-REP): my goal is to find matching points in order to compute the affine transformation matrix between two frames. I'm trying to use the Feature Matching with FLANN algorithm proposed by OpenCV (http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-matching-with-flann) with 2 Mat-type images acquired through cv_bridge (http://www.ros.org/wiki/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages), since I read them from a ROS topic. This is my code: at the point marked ERROR I receive this error: OpenCV Error: Unsupported format or combination of formats (type=0 ) in buildIndex_, file /tmp/buildd/ros-fuerte-opencv2-2.4.2-1precise-20130312-1306/modules/flann/src/miniflann.cpp, line 299 terminate called after throwing an instance of 'cv::Exception' what(): /tmp/buildd/ros-fuerte-opencv2-2.4.2-1precise-20130312-1306/modules/flann/src/miniflann.cpp:299: error: (-210) type=0 in function buildIndex_ Aborted (core dumped) This happens because in the second frame the algorithm doesn't manage to find any descriptor! I'm wondering: why? I'm running my code with the camera staying still, so the images are exactly the same. |
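For reference, the "type=0" in the error denotes CV_8U, which is also what an empty Mat reports, and FLANN's KD-tree index only accepts non-empty CV_32F descriptors. A defensive sketch of the matching step under that assumption:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <iostream>
#include <vector>

// Guarded FLANN matching: buildIndex_ throws "type=0" when a descriptor
// Mat is empty or not CV_32F, so both conditions are checked first.
void matchFrames(const cv::Mat& descriptors1, const cv::Mat& descriptors2,
                 std::vector<cv::DMatch>& matches)
{
    if (descriptors1.empty() || descriptors2.empty()) {
        std::cerr << "No descriptors in one of the frames, skipping match." << std::endl;
        return;
    }
    cv::Mat d1 = descriptors1, d2 = descriptors2;
    if (d1.type() != CV_32F) d1.convertTo(d1, CV_32F);  // FLANN needs CV_32F
    if (d2.type() != CV_32F) d2.convertTo(d2, CV_32F);

    cv::FlannBasedMatcher matcher;
    matcher.match(d1, d2, matches);
}
```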
2014-10-26 03:29:38 -0600 | received badge | ● Notable Question (source) |
2014-09-17 13:34:26 -0600 | marked best answer | Unsupported format or combination of formats in buildIndex using FLANN algorithm I'm acquiring some images from a video stream (of a simulated camera): my goal is to find matching points in order to compute the affine transformation matrix between two frames afterwards. I'm trying to use the Feature Matching with FLANN algorithm proposed by OpenCV (http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-matching-with-flann) with 2 Mat-type images acquired through cv_bridge (http://www.ros.org/wiki/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages), since I read them from a ROS topic. This is my code: at the point marked ERROR I receive this error: OpenCV Error: Unsupported format or combination of formats (type=0 ) in buildIndex_, file /tmp/buildd/ros-fuerte-opencv2-2.4.2-1precise-20130312-1306/modules/flann/src/miniflann.cpp, line 299 terminate called after throwing an instance of 'cv::Exception' what(): /tmp/buildd/ros-fuerte-opencv2-2.4.2-1precise-20130312-1306/modules/flann/src/miniflann.cpp:299: error: (-210) type=0 in function buildIndex_ Aborted (core dumped) But if I try the same code loading 2 JPG images it works!! What's wrong? |
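Since JPG files work while the cv_bridge images do not, one plausible difference is the encoding delivered on the ROS topic; forcing a known encoding during conversion is a common remedy. A sketch assuming the cv_bridge API of that era:

```cpp
#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/core/core.hpp>

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    cv_bridge::CvImagePtr cv_ptr;
    try {
        // Force a known encoding so the resulting Mat type matches what
        // the feature detector expects (8-bit, single channel here).
        cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::MONO8);
    } catch (cv_bridge::Exception& e) {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
    cv::Mat frame = cv_ptr->image;   // now safe to feed to the detector
}
```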
2014-06-10 06:22:01 -0600 | received badge | ● Notable Question (source) |
2014-05-04 17:25:48 -0600 | received badge | ● Popular Question (source) |
2014-04-07 11:58:42 -0600 | asked a question | Wrong result from Hartley and Zisserman Triangulation I adapted the code from this interesting post in this way (only for one point): where u and u1 are the points in the first and second images respectively, and P and P1 are the projection matrices obtained by multiplying C*[R|t], where C is the intrinsic parameter matrix of the camera and R, t the roto-translation representing the camera pose. The problem is that the result I get, the coordinates of the point in the world, is not correct: there is no overflow or anything similar, it is just wrong. For example, I get a depth of 35 m instead of 2 m, and if I move the camera closer to the 3D feature point, the depth increases. What am I doing wrong? P.S. The camera is already calibrated and the features are retrieved from a rectified image. |
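As a cross-check of a hand-rolled Hartley-Zisserman triangulation, OpenCV's own linear triangulation can be run on the same inputs; two classic pitfalls are skipping the division by the homogeneous coordinate and mixing pixel with normalized coordinates. A sketch reusing the question's names u, u1, P, P1, with placeholder calibration values:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <iostream>

int main()
{
    // Projection matrices P = C * [R|t] for the two camera poses.
    // C, R, t values here are placeholders, not the real calibration.
    cv::Mat C = (cv::Mat_<double>(3, 3) << 414.3, 0, 382.5,
                                           0, 414.3, 230.9,
                                           0, 0, 1);
    cv::Mat Rt0 = (cv::Mat_<double>(3, 4) << 1, 0, 0, 0,
                                             0, 1, 0, 0,
                                             0, 0, 1, 0);
    cv::Mat Rt1 = (cv::Mat_<double>(3, 4) << 1, 0, 0, -0.1,   // 10 cm baseline
                                             0, 1, 0, 0,
                                             0, 0, 1, 0);
    cv::Mat P = C * Rt0, P1 = C * Rt1;

    // Pixel coordinates of the same feature in the two frames (made up).
    cv::Mat u  = (cv::Mat_<double>(2, 1) << 400.0, 240.0);
    cv::Mat u1 = (cv::Mat_<double>(2, 1) << 380.0, 240.0);

    cv::Mat X;                              // 4xN homogeneous points
    cv::triangulatePoints(P, P1, u, u1, X);
    X.convertTo(X, CV_64F);                 // normalize the output type

    // Dividing by the homogeneous coordinate is essential; skipping it
    // leaves the point at an arbitrary scale (hence wrong depths).
    X /= X.at<double>(3, 0);
    std::cout << "triangulated point: " << X.rowRange(0, 3).t() << std::endl;
    return 0;
}
```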
2014-03-31 09:25:12 -0600 | asked a question | triangulatePoints with moving monocamera Well, I'm trying to use the function triangulatePoints in order to retrieve the depth of a point. First, I'm adopting this procedure not with a stereo rig but with a mono camera: the camera moves in the environment, and I select two frames and a point present in both of them (of course I know the position of the camera at any time). Assuming that this procedure is correct, I am wondering to which camera frame the 3D point output by the mentioned function is referred. |
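As a note on conventions: the triangulated point comes out in whatever frame the projection matrices are expressed in; if P = C*[R|t] with R, t mapping world to camera, that is the world frame. A small sketch of moving the result into a specific camera frame under that assumption:

```cpp
#include <opencv2/core/core.hpp>

// Transform a triangulated world-frame point into one camera's frame,
// assuming the convention x_cam = R * x_world + t used when building
// that camera's projection matrix P = C * [R|t].
cv::Mat worldToCamera(const cv::Mat& Xworld,  // 3x1, CV_64F
                      const cv::Mat& R,       // 3x3, CV_64F
                      const cv::Mat& t)       // 3x1, CV_64F
{
    return R * Xworld + t;   // its z component is the depth in that camera
}
```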
2014-03-26 12:02:38 -0600 | commented question | Random values printing a Mat I'm stupid... thanks! |
2014-03-26 11:40:39 -0600 | asked a question | Random values printing a Mat Hi, I would like to print out a Mat after having written the desired values into it. I'm doing it this way: The strange thing is that, when I print out Rot, random values come out, but the output of Rot(0,0) is correct! Any idea? Thanks |
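Since the code isn't shown, a frequent cause is a mismatch between the type the Mat was created with and the type used to read it back; element access with the wrong at<> type prints garbage while a correctly typed access looks fine. A minimal sketch:

```cpp
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // Created explicitly as CV_64F via the comma initializer.
    cv::Mat Rot = (cv::Mat_<double>(3, 3) << 1, 0, 0,
                                             0, 1, 0,
                                             0, 0, 1);

    // Correct: the access type matches the element type.
    std::cout << Rot.at<double>(0, 0) << std::endl;

    // Wrong: at<float> on a CV_64F matrix reinterprets the bytes and
    // prints garbage, which looks like "random values".
    // std::cout << Rot.at<float>(0, 0) << std::endl;

    std::cout << Rot << std::endl;   // operator<< respects the real type
    return 0;
}
```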
2014-03-11 13:02:47 -0600 | marked best answer | Fill holes of a binary image Hi, my goal is to fill with white the CLOSED black zones, not the ones that end on the image boundaries without being closed by white pixels (I hope I'm clear). I tried with opening (or closing) operations, but they cause the closure of the open/connected black rectangular zones, which is highly unwanted. Can someone help me? |
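One common technique for this (offered here as a sketch, not necessarily the accepted answer) is to flood-fill the black background from the image border: black zones reachable from the border stay background, while the remaining black pixels are the closed holes. Assuming the corner pixel belongs to the background:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // binary.png is a placeholder for the actual binary input image.
    cv::Mat binary = cv::imread("binary.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Flood-fill from a border pixel (assumed to be background black):
    // every black region connected to the border gets painted white.
    cv::Mat filled = binary.clone();
    cv::floodFill(filled, cv::Point(0, 0), cv::Scalar(255));

    // Pixels still black in 'filled' are the closed holes;
    // invert them and OR them back onto the original.
    cv::Mat holes;
    cv::bitwise_not(filled, holes);
    cv::Mat result = binary | holes;

    cv::imwrite("filled.png", result);
    return 0;
}
```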
2014-02-16 06:47:24 -0600 | received badge | ● Popular Question (source) |
2014-01-17 09:41:26 -0600 | commented answer | What does the projection matrix provided by the calibration represent? If K stands for the intrinsic matrix... no, in my case that's not true. I have just added the results of the calibration to the post. |
2014-01-16 10:44:50 -0600 | asked a question | What does the projection matrix provided by the calibration represent? I'm using this tool from ROS/OpenCV in order to perform the camera calibration. The procedure ends up providing: camera matrix, distortion parameters, rectification matrix and projection matrix. As far as I know, the projection matrix contains the intrinsic parameter matrix of the camera multiplied by the extrinsic parameter matrix, and the extrinsic parameter matrix itself provides the roto-translation of the camera frame with respect to the world frame. If these assumptions are correct... how is the projection matrix computed by OpenCV? I'm not defining any world frame! camera matrix 414.287922 0.000000 382.549277 / 0.000000 414.306025 230.875006 / 0.000000 0.000000 1.000000 distortion -0.278237 0.063338 -0.001382 0.000732 0.000000 rectification 1.000000 0.000000 0.000000 / 0.000000 1.000000 0.000000 / 0.000000 0.000000 1.000000 projection 297.051453 0.000000 387.628900 0.000000 / 0.000000 369.280731 227.051305 0.000000 / 0.000000 0.000000 1.000000 0.000000 |
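For what it's worth, the ROS sensor_msgs/CameraInfo convention documents P for a monocular camera as the camera matrix of the rectified image with a zero fourth column, i.e. P = [K' | 0]: no world frame is involved, and P projects points expressed in the camera's own frame. A sketch using the values from the question:

```cpp
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // Monocular CameraInfo convention: P = [K' | 0], where K' is the
    // camera matrix of the rectified image (values from the question).
    cv::Mat P = (cv::Mat_<double>(3, 4) <<
        297.051453, 0.0,        387.628900, 0.0,
        0.0,        369.280731, 227.051305, 0.0,
        0.0,        0.0,        1.0,        0.0);

    // A point expressed in the camera's own frame (homogeneous coords).
    cv::Mat X = (cv::Mat_<double>(4, 1) << 0.1, 0.05, 2.0, 1.0);

    cv::Mat x = P * X;                 // pixel coordinates, up to scale
    x /= x.at<double>(2, 0);
    std::cout << "pixel: (" << x.at<double>(0, 0) << ", "
              << x.at<double>(1, 0) << ")" << std::endl;
    return 0;
}
```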
2013-12-17 11:16:08 -0600 | asked a question | drawMatches for vector<vector<DMatch> > good_matches I need to detect the matches between 2 images. I use the following code: I would now like to visualize the matching between the 2 images as if I had used drawMatches (actually I cannot use this function directly: I have a vector<vector<DMatch> > good_matches instead of a vector<DMatch> good_matches). Any suggestion? |
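One straightforward option is to flatten the nested vector before calling drawMatches; note also that features2d provides a drawMatches overload taking vector<vector<DMatch> > directly. A sketch of the flattening route (the image and keypoint arguments are assumed to come from the detection step in the question):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Flatten knnMatch-style output into the single vector drawMatches expects.
void drawNestedMatches(const cv::Mat& img1, const std::vector<cv::KeyPoint>& kp1,
                       const cv::Mat& img2, const std::vector<cv::KeyPoint>& kp2,
                       const std::vector<std::vector<cv::DMatch> >& good_matches,
                       cv::Mat& out)
{
    std::vector<cv::DMatch> flat;
    for (size_t i = 0; i < good_matches.size(); ++i)
        flat.insert(flat.end(), good_matches[i].begin(), good_matches[i].end());

    cv::drawMatches(img1, kp1, img2, kp2, flat, out);
}
```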
2013-12-03 03:25:42 -0600 | commented question | BackgroundSubtractorMOG2 returns a white image I'm not sure that I understood... could you please give an example? By the way, this function IS in a loop (a ROS while loop), but the result is the one described in the question. |
2013-12-02 11:42:28 -0600 | asked a question | BackgroundSubtractorMOG2 returns a white image I have a camera rigidly attached to a tool. A portion of this tool is included in the field of view of the camera itself. The tool is always on the move. My goal is to retrieve from each image acquired by the camera the segmentation of the tool. I want to apply BackgroundSubtractorMOG2 in order to highlight only the portion of the image that doesn't change over time (the tool). The code of the function I'm using is the following (C++ code). My issue is that this function returns a completely white image. What am I doing wrong? |
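A frequent cause of an all-white mask is re-creating the subtractor on every callback, so the background model never accumulates frames; the subtractor has to persist across the loop. A sketch assuming the OpenCV 2.4 API of that era (VideoCapture stands in for the ROS stream):

```cpp
#include <opencv2/video/background_segm.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::VideoCapture cap(0);              // placeholder for the ROS image stream
    cv::BackgroundSubtractorMOG2 mog2;    // created ONCE, outside the loop

    cv::Mat frame, fgMask;
    while (cap.read(frame)) {
        // Learning rate in (0,1]: small values make the model adapt
        // slowly instead of flagging the entire frame as foreground.
        mog2(frame, fgMask, 0.005);
        cv::imshow("foreground", fgMask);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}
```

Note that MOG2 marks changing pixels as foreground, so the tool (the part that stays constant relative to the camera) ends up black in fgMask; inverting the mask with cv::bitwise_not would isolate it.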
2013-09-09 03:20:11 -0600 | commented question | Dynamic colouration of zones visualized by a flying camera I have available: the image (img_1) representing a portion of a bigger image (img_2). I have to overlap img_1 on img_2 (in the right place and with the right proportions)... forget about the colours, that is not important in the end. |
2013-09-06 05:33:21 -0600 | asked a question | Attach 2 images using the homography matrix I have an image_1 representing a portion of another image_2. I have to attach the first (modified) to the second in the right position (overlapping image_1 onto the portion of image_2 representing the same thing). How can I do it? I was thinking of using the FLANN algorithm to get the homography matrix between image_1 and image_2, then modifying image_1, then combining the two images using the matrix just found... the problem is that I really don't know how to perform this last step. |
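For that last step, one workable route is to warp image_1 into image_2's coordinate frame with the homography and copy it through a warped mask. A sketch, assuming H maps image_1 coordinates to image_2 coordinates as returned by findHomography:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Paste img_1 onto img_2, given the homography H mapping img_1 -> img_2.
void pasteWithHomography(const cv::Mat& img_1, cv::Mat& img_2, const cv::Mat& H)
{
    // Warp img_1 into img_2's coordinate frame, on a canvas of img_2's size.
    cv::Mat warped;
    cv::warpPerspective(img_1, warped, H, img_2.size());

    // Build a mask of the warped region and copy only those pixels.
    cv::Mat mask(img_1.size(), CV_8UC1, cv::Scalar(255));
    cv::Mat warpedMask;
    cv::warpPerspective(mask, warpedMask, H, img_2.size());
    warped.copyTo(img_2, warpedMask);
}
```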
2013-09-06 03:41:39 -0600 | asked a question | Dynamic colouration of zones visualized by a flying camera I have an image (img_tot) printed on a plane and a down-facing camera flying over it, acquiring at every time step "k" a frame of a portion of that plane. I have to store all the images (one per time step) representing img_tot with the portion represented by frame(k) coloured in a certain way, as if I had to highlight all the zones of the plane visualized by the camera. How can I do it? |
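Building on the homography-based pasting sketched above, one can warp a solid mask of each frame(k) into img_tot's coordinates, accumulate the union, and tint the covered area; the per-frame homography is assumed to come from a matching step like the one in the previous question. A sketch:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// coverage: single-channel mask the size of img_tot, accumulated over time.
void markVisualized(const cv::Mat& frame, const cv::Mat& H_frame_to_tot,
                    cv::Mat& coverage)
{
    cv::Mat mask(frame.size(), CV_8UC1, cv::Scalar(255));
    cv::Mat warped;
    cv::warpPerspective(mask, warped, H_frame_to_tot, coverage.size());
    coverage |= warped;   // union of everything seen so far
}

// Tint the covered zones of img_tot (green here) for time step k.
cv::Mat colourCoverage(const cv::Mat& img_tot, const cv::Mat& coverage)
{
    cv::Mat out = img_tot.clone();
    out.setTo(cv::Scalar(0, 255, 0), coverage);  // solid tint on covered pixels
    return out;
}
```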
2013-08-21 07:24:29 -0600 | asked a question | Nearest coloured point Is there any function that finds the pixel of a desired colour nearest to a given point? |
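There appears to be no single OpenCV function for this, but it can be composed from inRange plus a scan of the matching pixels. A sketch:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Find the pixel whose colour lies in [lo, hi] closest to 'from'.
// Returns (-1,-1) when no pixel matches.
cv::Point nearestColoured(const cv::Mat& img, const cv::Scalar& lo,
                          const cv::Scalar& hi, cv::Point from)
{
    cv::Mat mask;
    cv::inRange(img, lo, hi, mask);   // 255 where the colour matches

    cv::Point best(-1, -1);
    double bestDist = 1e30;
    for (int y = 0; y < mask.rows; ++y)
        for (int x = 0; x < mask.cols; ++x)
            if (mask.at<uchar>(y, x)) {
                double dx = x - from.x, dy = y - from.y;
                double d = dx * dx + dy * dy;   // squared distance suffices
                if (d < bestDist) { bestDist = d; best = cv::Point(x, y); }
            }
    return best;
}
```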
2013-08-21 04:14:12 -0600 | commented question | Error in transformation using FindHomography OK... I tried it out: this function removes about 50 wrong matches out of about 280. By the way, the result doesn't change... the shifts are still present... :( |
2013-08-21 04:10:50 -0600 | commented answer | Ratio check function with findHomography Updated... sorry |
2013-08-20 13:07:42 -0600 | asked a question | Ratio check function with findHomography I need to acquire a perspective matrix from two images taken by the same camera after it moves in 3D space. I'm taking inspiration from this OpenCV example about the use of findHomography. I would like to substitute the filter for the good_matches suggested there with the "ratio check function": In principle I have to give to the function a |
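For reference, the ratio check (Lowe's ratio test) is typically built on knnMatch with k = 2, and its output feeds findHomography directly. A sketch, with the 0.7 threshold as a tunable assumption:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Lowe's ratio test: keep a match only when its best distance is clearly
// smaller than the second-best (0.7 is a common, tunable threshold).
std::vector<cv::DMatch> ratioCheck(const cv::Mat& desc1, const cv::Mat& desc2)
{
    cv::FlannBasedMatcher matcher;
    std::vector<std::vector<cv::DMatch> > knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    std::vector<cv::DMatch> good;
    for (size_t i = 0; i < knn.size(); ++i)
        if (knn[i].size() == 2 && knn[i][0].distance < 0.7f * knn[i][1].distance)
            good.push_back(knn[i][0]);
    return good;
}

// The surviving matches then go to findHomography as two aligned point
// lists (filled from the keypoints via good[i].queryIdx / trainIdx):
//   cv::Mat H = cv::findHomography(pts1, pts2, CV_RANSAC);
```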