2016-10-21 12:29:21 -0600 | commented question | Get depth data from Kinect v2 - ROS
P.S.: I am working on a university computer because my PC cannot handle Gazebo, so I will try to test it on Monday. |
2016-10-21 09:08:15 -0600 | asked a question | Get depth data from Kinect v2 - ROS Hello, I am working with ROS Kinetic and I am using a Kinect v2. I am generally new to depth sensors and, as an undergraduate student, I don't have someone to ask these kinds of questions, so please bear with me. I have managed to get both the color and depth image data and to detect an object by its color. From this detection I find its position in the 2D image (let's say (x,y)) and I am trying to use these coordinates to look up its depth in the depth image. I am mainly using the qhd resolutions. To be more specific: Here is the color detection in the rgb image: Here is the thresholded image: and here is the depth image I get: I have measured the distance to my object at 48 cm, but what I get from the depth image is a depth of 0. The depth image is mostly black, but you can see some points that indicate some kind of depth. A snapshot of my code for the Kinect's depth callback function is presented below: My main question is: what is the most probable cause of getting zeros instead of non-zero depth data, even though I have placed my object at a distance long enough for the sensor to detect it (48-49 cm)? I would also like to ask how I can convert the raw data to meters. Could this malfunction be caused by bad calibration of the sensor? I would greatly appreciate your help, especially from someone who has worked with depth sensors and ROS. Thank you for your answers and for your time in advance, Chris |
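For illustration, here is a minimal sketch of a depth callback along these lines (it is not the code snapshot referred to above, and the pixel coordinates are placeholders). It assumes the depth topic uses kinect2_bridge's 16UC1 encoding, where each pixel holds the depth in millimeters, a value of 0 means no measurement for that pixel, and dividing by 1000 gives meters:

    #include <cstdint>
    #include <ros/ros.h>
    #include <sensor_msgs/Image.h>
    #include <sensor_msgs/image_encodings.h>
    #include <cv_bridge/cv_bridge.h>

    void depthCallback(const sensor_msgs::ImageConstPtr& msg)
    {
        // Wrap the ROS image as an OpenCV Mat without copying the data.
        cv_bridge::CvImageConstPtr cv_ptr =
            cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::TYPE_16UC1);

        int x = 320, y = 240;  // pixel from the color detection (placeholder values)
        uint16_t raw = cv_ptr->image.at<uint16_t>(y, x);  // note: at<>(row, col), i.e. at<>(y, x)

        if (raw == 0)
            ROS_WARN("No valid depth reading at (%d, %d)", x, y);  // 0 = no measurement
        else
            ROS_INFO("Depth at (%d, %d): %.3f m", x, y, raw / 1000.0);  // mm -> m
    }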
2016-10-20 03:29:09 -0600 | received badge | ● Teacher (source) |
2016-10-19 09:43:27 -0600 | received badge | ● Self-Learner (source) |
2016-10-19 09:08:18 -0600 | answered a question | Kinect v2 Calibration for ROS The problem arose because I didn't save the files in a separate directory as instructed by the author. As a result, the intrinsics of the rgb camera and the ir sensor could not be found, so the calibration program for the rotation, translation, and depth calibration was given empty matrices. |
2016-10-17 11:44:35 -0600 | commented question | Kinect v2 Calibration for ROS Thank you for your reply... I will try to run all the calibration steps again... |
2016-10-17 08:44:57 -0600 | asked a question | Kinect v2 Calibration for ROS Hello, I am using ROS Kinetic and I am trying to calibrate a Kinect v2 according to the instructions presented here. I have managed to calibrate the intrinsics of the rgb camera and the ir sensor. I have also completed steps 7 and 8, even though I get a reprojection error close to 120.0 (which is probably wrong). But when I try to carry out step 9 I get the following error: I don't know why I am getting this error. If someone has experience with the Kinect v2's calibration, I would appreciate their help. Thank you in advance for your answers, Chris |
2016-10-10 13:39:58 -0600 | commented answer | Assertion failed while using findEssentialMat() Thanks a lot for your help and for your patience also :) |
2016-10-10 04:40:17 -0600 | marked best answer | Getting a fully blurred undistorted image from cv::undistort Hello, I am new to OpenCV and I am trying to undistort my camera's frames. I previously used the calibration program from OpenCV to find the elements of the CameraMatrix and distCoeffs, and I wrote the following program: The results can be seen in the following screenshot: As you can see, the undistorted frame is not even close to the "distorted" frame that I get from my camera, and it also has some weird symbols on it, which disappear if I change the int _type in the Mat() functions that I use. I am quite confused about what is really going on; I thought maybe the int _type I use for the Mat function was the reason, but I don't know how I could fix that. I am using Code::Blocks on Ubuntu 14.04. Thank you for your answers and for your time in advance, Chris |
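A hedged sketch of the usual undistortion loop, with placeholder calibration numbers rather than the ones from the question; the main point is that the camera matrix and distortion coefficients are built explicitly as double-precision Mats, so the types passed to cv::undistort are unambiguous:

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture cap(0);
        if (!cap.isOpened()) return 1;

        // Intrinsics from the calibration step; values below are placeholders.
        cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
            615.0,   0.0, 320.0,
              0.0, 615.0, 240.0,
              0.0,   0.0,   1.0);
        cv::Mat distCoeffs = (cv::Mat_<double>(1, 5) << 0.1, -0.25, 0.0, 0.0, 0.0);

        cv::Mat frame, undistorted;
        while (cap.read(frame))
        {
            cv::undistort(frame, undistorted, cameraMatrix, distCoeffs);
            cv::imshow("distorted", frame);
            cv::imshow("undistorted", undistorted);
            if (cv::waitKey(30) == 27) break;   // Esc to quit
        }
        return 0;
    }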
2016-10-09 19:04:51 -0600 | commented question | Assertion failed while using findEssentialMat() I have posted a screenshot of the call stack window, but I see no line when I try to jump to the line of the file generating the error. I am re-editing again to show you what I mean. |
2016-10-09 18:21:11 -0600 | commented question | Assertion failed while using findEssentialMat() I don't know how to use gdb or debugging tools in general, although the tutorial is quite helpful. I am in Code::Blocks, so I guess I will use its debugger, for which I saw there is the GDB/CDB debugger. I will try this... |
2016-10-09 17:11:49 -0600 | commented question | Assertion failed while using findEssentialMat() I have marked with arrows the possible locations of the failure, but I cannot see why the error happens. |
2016-10-09 16:36:10 -0600 | commented question | Assertion failed while using findEssentialMat() I can find the declaration of the function in calib3d.hpp, but I cannot find the .cpp file in which findEssentialMat() is implemented, because it is not used under this name in the file. EDITED: Oops, never mind, I found it... please wait for me to re-edit. |
2016-10-09 13:40:00 -0600 | asked a question | Translation Vector up to a scale factor (odometry). Hello, I apologize in advance for a question not directly related to OpenCV, but I have a question regarding a project paper I am reading about visual odometry. I am generally new to image processing and, because I am not doing this for a project or something similar but out of my own interest, I don't have someone to ask these kinds of questions. I will number my questions so the answers can be better organised: On page 3, at 2. Algorithm -> 2.1 Problem formulation -> Output, the paper mentions:
It mentions the scale factor for the translation several times in the paper, but I cannot understand what this actually means. Let me create an example and present how I understand the translation vector, and similarly the rotation vector and the general idea of visual odometry. Please correct me where I am wrong. Let's say that we use one camera on a mobile robot for visual odometry and we capture the following image in our first frame, in which we have used a feature detection function (like FAST, for example) and have detected the 4 corners (A, B, C, D - marked in black) of the portrait, for simplification, with their respective positions A(ax,ay), B(bx,by), C(cx,cy) and D(dx,dy): Now let's suppose that the robot moved forward linearly, without turning, and we got the next frame as follows, in which we matched these corners (using, for example, the calcOpticalFlowPyrLK() function), this time in red (A',B',C',D') with their respective positions (A'(ax',ay'), B'(bx',by'), C'(cx',cy'), D'(dx',dy')), while visualizing the previous points of the first frame in black: Questions:
Please bear with me for any follow-up questions that may arise. Thank you for your answers and for your time in advance, Chris |
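As a rough illustration of the scale ambiguity (not taken from the paper): the essential-matrix decomposition only constrains the direction of the translation, so recoverPose() returns a unit-length t, and the metric length has to come from outside, for example wheel odometry or an object of known size. A minimal sketch, where the matched point vectors, the intrinsics, and the external scale source are assumed inputs:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Returns the metric translation between two frames; the scale cannot be
    // recovered from the images alone, so it is passed in from elsewhere.
    cv::Mat estimateMotion(const std::vector<cv::Point2f>& pts1,
                           const std::vector<cv::Point2f>& pts2,
                           const cv::Mat& K,            // 3x3 camera matrix
                           double externalScale)        // e.g. from wheel odometry
    {
        cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0);
        cv::Mat R, t;
        cv::recoverPose(E, pts1, pts2, K, R, t);        // t has norm 1 by construction
        return externalScale * t;                       // metric translation
    }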
2016-10-09 10:49:46 -0600 | asked a question | Assertion failed while using findEssentialMat() I am using the findEssentialMat() function so I can later recover the relative pose (translation and rotation) between two consecutive frames of a webcam with recoverPose(). However, I get an assertion failed error: To be more specific, I have written the following in a while loop which reads consecutive frames: I have declared the CameraMatrix outside this while loop. The issue arises from findEssentialMat(), but I am confused about what is wrong with it. I searched for similar assertion failures but I couldn't find anything helpful. Surely the error comes from the sizes of the Mats used. What could be wrong with it? Thank you for your answers and for your time in advance, Chris 1st EDIT: The CameraMatrix has been declared as follows after the calibration of the webcam: 2nd EDIT: The full screenshot of the error is the following: If I follow the trace I see that it comes from: |
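For context, a sketch of a call pattern that avoids the usual size/type assertions in findEssentialMat(): both point vectors must have the same length (so tracked points are filtered by the LK status first), there must be at least 5 correspondences, and the camera matrix should be a 3x3 floating-point Mat. Variable names are illustrative, not taken from the code in the question:

    #include <opencv2/opencv.hpp>
    #include <vector>

    void relativePose(const std::vector<cv::Point2f>& prevPts,
                      const std::vector<cv::Point2f>& currPts,
                      const std::vector<uchar>& status,      // from calcOpticalFlowPyrLK
                      const cv::Mat& cameraMatrix)           // 3x3, CV_64F
    {
        // Keep only the point pairs that were actually tracked, so both vectors
        // passed to findEssentialMat() have the same length.
        std::vector<cv::Point2f> p1, p2;
        for (size_t i = 0; i < status.size(); ++i)
            if (status[i]) { p1.push_back(prevPts[i]); p2.push_back(currPts[i]); }

        if (p1.size() < 5) return;   // the 5-point algorithm needs at least 5 matches

        cv::Mat mask;
        cv::Mat E = cv::findEssentialMat(p1, p2, cameraMatrix, cv::RANSAC, 0.999, 1.0, mask);
        cv::Mat R, t;
        cv::recoverPose(E, p1, p2, cameraMatrix, R, t, mask);
    }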
2016-10-09 08:18:21 -0600 | commented answer | Issue with calcOpticalFlowPyrLK() Ok, thank you for your help. |
2016-10-09 07:55:14 -0600 | commented answer | Issue with calcOpticalFlowPyrLK() -"Optical Flow finds the location in the new frame, of the old points". Since I find the keypoints with FAST in the previous frame, convert them to points and then call Optical Flow, wouldn't the function try to search for these points in the next frame? By "local process", I guess you mean that it searches locally for each point and not in the entire image; in other words, it is used for small-range displacements and it might lose some points or produce false matches? -"Keypoint matching can find matches regardless of where the keypoint is in the second image, whereas optical flow is a local process." So you suggest getting rid of Optical Flow and using a global matcher? Like brute force, for example? -""Easy" Ahahaha. No." Lol, what I meant is not the implementation, but the idea of the estimation :) |
2016-10-08 22:19:40 -0600 | commented answer | Issue with calcOpticalFlowPyrLK() Firstly, thank you for your quick reply. So, in other words, Optical Flow gives me the error of the keypoint/point matching, since it searches for the match of the points given from the first frame, am I correct? If it indeed does that, shouldn't it be easy, from their difference, and of course knowing the fps of my webcam, to get an estimate of the speed I am moving at in a stable environment? I apologize in advance for the many questions, but I am trying to ensure that my thoughts on this are correct and that I have a clear understanding of what is going on and of how to proceed with monocular odometry. |
2016-10-08 21:03:48 -0600 | received badge | ● Editor (source) |
2016-10-08 21:01:26 -0600 | asked a question | Issue with calcOpticalFlowPyrLK() Hello, I am trying to implement visual odometry with my webcam, and the steps I have followed so far are the following:
My issue is that at step 4, when I print the number of keypoints found in the two consecutive frames, I get different numbers between the frames (which is logical), but when I do the same thing after calling calcOpticalFlowPyrLK() I get the exact same number. To be more specific, I have written: and I print the following in each loop: If I comment out the line
I get different numbers, but if I leave it as is I get 3 numbers which are all the same in each loop. All the above code is in a while loop. I am wondering why this happens. Shouldn't calcOpticalFlowPyrLK() try to match the keypoints of the two frames and give the status of their differences, or am I misunderstanding something? I am new to image processing, so please bear with me for any potential questions that may arise. Thank you for answering and for your time in advance, Chris |
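For reference, a sketch of the FAST-plus-LK step described above (names are illustrative, not the original code). calcOpticalFlowPyrLK() returns exactly one output point per input point, which is why the counts always match; the status vector marks which of those points were actually tracked, and filtering by it is what makes the numbers differ again:

    #include <iostream>
    #include <vector>
    #include <opencv2/opencv.hpp>

    void trackBetweenFrames(const cv::Mat& prevGray, const cv::Mat& currGray)
    {
        // Detect keypoints only in the previous frame; LK will look for them in the next one.
        std::vector<cv::KeyPoint> keypoints;
        cv::FAST(prevGray, keypoints, 20, true);

        std::vector<cv::Point2f> prevPts, currPts;
        cv::KeyPoint::convert(keypoints, prevPts);

        std::vector<uchar> status;   // 1 = tracked, 0 = lost
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

        // Keep only the successfully tracked pairs; this is where the counts diverge.
        std::vector<cv::Point2f> p1, p2;
        for (size_t i = 0; i < status.size(); ++i)
            if (status[i]) { p1.push_back(prevPts[i]); p2.push_back(currPts[i]); }

        std::cout << prevPts.size() << " detected, " << p1.size() << " tracked" << std::endl;
    }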
2016-10-08 20:40:48 -0600 | received badge | ● Critic (source) |