
patrchri's profile - activity

2016-10-21 12:29:21 -0600 commented question Get depth data from Kinect v2 - ROS
  • Done

  • So it is possible that I get zeros because of incorrect casting?

P.S.: I am working on a university computer because my PC cannot handle Gazebo, so I will try to test it on Monday.

2016-10-21 09:08:15 -0600 asked a question Get depth data from Kinect v2 - ROS

Hello,

I am working with ROS Kinetic and I am using a Kinect v2. I am generally new to depth sensors and, as an undergraduate student, I don't have someone to ask these kinds of questions, so please bear with me. I have managed to get both the color and depth image data and to detect an object by its color. From this detection, I find its position in the 2D image (let's say (x,y)), and I am trying to use these coordinates to look up its depth in the depth image. I am mainly using the qhd resolutions.

To be more specific:

Here is the color detection in the RGB image: image description

Here is the thresholded image:

image description

and here is the depth image I get:

image description

I have measured the distance to my object as 48 cm, but what I get from the depth image is a distance of 0. The depth image is mostly black, but you can see some points that indicate some kind of depth.

A snapshot of my code for the Kinect's depth callback function is presented below:

void kinectdepthCallback(const sensor_msgs::ImageConstPtr& msg2){

    cv_bridge::CvImagePtr cv_ptr2;
    try{
        cv_ptr2 = cv_bridge::toCvCopy(msg2, sensor_msgs::image_encodings::TYPE_16UC1); 
    }
    catch (cv_bridge::Exception& e){
        ROS_ERROR("Could not convert from '%s' to '16UC1'.", e.what());
        return;
    }
    cv::imshow("Depth Image",cv_ptr2->image);
    cv::waitKey(20);
    ROS_INFO("The object is in depth: %d",cv_ptr2->image.at<int>(Point(x,y)));
}
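
For reference, here is how I currently think a 16UC1 depth image should be indexed (just a sketch of my understanding, not part of my node; I am assuming the raw values are depth in millimetres, and depthAtPixel is only a name I made up for illustration):

#include <cstdint>
#include <opencv2/core.hpp>

// Hypothetical helper: returns the depth at pixel (x, y) in metres,
// or -1.0 if the sensor reported no measurement (raw value 0).
// Assumes the image is 16UC1 with raw values in millimetres.
double depthAtPixel(const cv::Mat& depth16, int x, int y)
{
    uint16_t raw = depth16.at<uint16_t>(cv::Point(x, y)); // 16UC1 pixels are 16-bit unsigned, not int
    if (raw == 0) return -1.0;                            // 0 usually means "no depth measured"
    return raw / 1000.0;                                  // millimetres -> metres (assumed scale)
}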

My main question is: what is the most probable cause of getting zeros instead of non-zero depth data, even though I have placed my object at a distance the sensor should be able to detect (48-49 cm)?

I would also like to ask how I can convert the raw data to meters.

Could this malfunction be caused by a bad calibration of the sensor?

I would greatly appreciate your help, especially from someone who has worked with depth sensors and ROS.

Thank you for your answers and for your time in advance,

Chris

2016-10-20 03:29:09 -0600 received badge  Teacher (source)
2016-10-19 09:43:27 -0600 received badge  Self-Learner (source)
2016-10-19 09:08:18 -0600 answered a question Kinect v2 Calibration for ROS

The problem arose because I didn't save the files in a separate directory as instructed by the author. As a result, the intrinsics of the RGB camera and the IR sensor could not be found, so the calibration program for the rotation, translation and depth calibration was given empty matrices.

2016-10-17 11:44:35 -0600 commented question Kinect v2 Calibration for ROS

Thank you for your reply... I will try to run all the calibration steps again...

2016-10-17 08:44:57 -0600 asked a question Kinect v2 Calibration for ROS

Hello,

I am using ROS Kinetic and I am trying to calibrate a Kinect v2 according to the instructions presented here

I have managed to calibrate the intrinsics of the RGB camera and the IR sensor. I have also completed steps 7 and 8, although I get a reprojection error close to 120.0 (which is probably wrong).

But when I try to perform step 9, I get the following error:

[ INFO] [main] Start settings:
       Mode: calibrate
     Source: depth
      Board: chess
 Dimensions: 5 x 7
 Field size: 0.03
Dist. model: 5 coefficients
Topic color: /kinect2/hd/image_mono
   Topic ir: /kinect2/sd/image_ir
Topic depth: /kinect2/sd/image_depth
       Path: ./

[ INFO] [main] restoring files...
[ INFO] [DepthCalibration::readFiles] restoring file: 0000_sync
[ERROR] [DepthCalibration::loadCalibration] couldn't read calibration './calib_ir.yaml'!
[ INFO] [main] starting calibration...
[ INFO] [DepthCalibration::calibrate] frame: ./0000_sync_depth.png
OpenCV Error: Assertion failed (_map1.size().area() > 0) in remap, file /tmp/binarydeb/ros-kinetic-opencv3-3.1.0/modules/imgproc/src/imgwarp.cpp, line 4833
terminate called after throwing an instance of 'cv::Exception'
  what():  /tmp/binarydeb/ros-kinetic-opencv3-3.1.0/modules/imgproc/src/imgwarp.cpp:4833: error: (-215) _map1.size().area() > 0 in function remap

I don't know why I am getting this error.

If someone has experience with Kinect v2's calibration I would appreciate his/her help.

Thank you for answering in advance,

Chris

2016-10-10 13:39:58 -0600 commented answer Assertion failed while using findEssentialMat()

Thanks a lot for your help and for your patience also :)

2016-10-10 04:40:17 -0600 marked best answer Getting a fully blurred undistorted image from cv::undistort

Hello,

I am new to OpenCV and I am trying to undistort my camera's frames. I previously used the calibration program from OpenCV to find the elements of the CameraMatrix and distCoeffs, and I wrote the following program:

#include <string>
#include <iostream>
#include <opencv/cv.hpp>

using namespace std;
using namespace cv;

const int FRAME_WIDTH = 640;
const int FRAME_HEIGHT = 480;

int main()
{
 Mat frame1, frame2,frame1undist;
 double data[3][3];
 double data2[5];
 data[0][0] = 9.5327626068874099e+02;
 data[0][1] = 0.0;
 data[0][2] = 320.;
 data[1][0] = 0.0;
 data[1][1] = 9.5327626068874099e+02;
 data[1][2] = 240.0;
 data[2][0] = 0.0;
 data[2][1] = 0.0;
 data[2][2] = 1.0;
 data2[0] = -1.1919013558906022e-01;
 data2[1] = -2.9472820562856015e+00;
 data2[2] = 0.0;
 data2[3] = 0.0;
 data2[4] = -1.9208489842061063e+01;
 double avg_reprojection_error = 3.5640854681839190e-01;
 Mat CameraMatrix(3,3,CV_64F,&data);
 Mat NewCameraMatrix;
 Mat distCoeffs(1,5,CV_64F,&data2);
 Mat Result1, Result2;
 double data3[3][3];
 for (int i=0;i<3;i++){
    for (int j=0;j<3;j++){
        data3[i][j] = 0;
        if(i==j) data[i][j] = 1;
    }
 }
 //Mat R(3,3,CV_32FC1,&data3);
 //initUndistortRectifyMap(CameraMatrix,distCoeffs,R,NewCameraMatrix,CvSize(FRAME_WIDTH,FRAME_HEIGHT),CV_32FC1,Result1,Result2);
 VideoCapture capture(0);
 if(!capture.isOpened())    return -1;
 capture.set(CV_CAP_PROP_FRAME_WIDTH,FRAME_WIDTH);
 capture.set(CV_CAP_PROP_FRAME_HEIGHT,FRAME_HEIGHT);

 while(true){

 capture.read(frame1);
 //capture.read(frame2);
 undistort(frame1,frame1undist,CameraMatrix,distCoeffs);
 imshow("Frame 1",frame1);
 imshow("Frame 1 undistored", frame1undist);
 //imshow("Frame 2",frame2);

 waitKey(21);
 }


 return 0;
}

The results can be seen in the following screenshot:

image description

As you can see, the undistorted frame is not even close to the "distorted" frame I get from my camera, and it also has some weird symbols on it, which disappear if I change the int _type in the Mat() constructors I use. I am quite confused about what is really going on; I thought the _type I use for the Mat constructor might be the reason, but I don't know how I could fix that.
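
For comparison, this is the pattern I have seen in examples for setting up these matrices, reusing my calibration values (just a sketch, and I am not sure it is related to the problem; frame1 is assumed to be a frame captured as in the code above):

#include <opencv2/opencv.hpp>

// Sketch: building the camera matrix and distortion coefficients with cv::Mat_
// initializers so each Mat owns its own data (values from my calibration),
// and an identity R built with Mat::eye instead of filling an array by hand.
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
    9.5327626068874099e+02, 0.0,                    320.0,
    0.0,                    9.5327626068874099e+02, 240.0,
    0.0,                    0.0,                    1.0);

cv::Mat distCoeffs = (cv::Mat_<double>(1, 5) <<
    -1.1919013558906022e-01, -2.9472820562856015e+00, 0.0, 0.0,
    -1.9208489842061063e+01);

cv::Mat R = cv::Mat::eye(3, 3, CV_64F);

cv::Mat frame1undist;
cv::undistort(frame1, frame1undist, cameraMatrix, distCoeffs);  // frame1: captured frame, as above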

I am using Code::Blocks on Ubuntu 14.04.

Thank you for your answers and for your time in advance,

Chris

2016-10-09 19:04:51 -0600 commented question Assertion failed while using findEssentialMat()

I have posted a screenshot with the call stack window, but I see no line when I try to jump to the line of the file generating the error. I am re-editing to show you what I mean.

2016-10-09 18:21:11 -0600 commented question Assertion failed while using findEssentialMat()

I don't know how to use gdb or debugging tools in general, although the tutorial is quite helpful. I am in Code::Blocks, so I guess I will use its debugger, for which I saw there is a GDB/CDB option. I will try this...

2016-10-09 17:11:49 -0600 commented question Assertion failed while using findEssentialMat()

I have marked with arrows the possible locations of the failure, but I cannot see why the error happens.

2016-10-09 16:36:10 -0600 commented question Assertion failed while using findEssentialMat()

I can find the declaration of the function in calib3d.hpp, but I cannot find the .cpp file in which findEssentialMat() is implemented, because it does not appear under this name in that file.

EDITED: Oops, never mind, I found it... wait for me to re-edit please.

2016-10-09 13:40:00 -0600 asked a question Translation Vector up to a scale factor (odometry).

Hello,

I apologize in advance for a question not directly related to OpenCV, but I have a question about a paper I am reading on visual odometry. I am generally new to image processing, and because I am doing this out of interest rather than for a project, I don't have someone to ask these kinds of questions. I will number my questions so the answers can be better organised:

On page 3, in 2. Algorithm -> 2.1 Problem formulation -> Output, the paper mentions:

"The vector, t can only be computed upto a scale factor in our monocular scheme."

The paper mentions this scale factor for the translation several times, but I cannot understand what it actually means.

Let me create an example and show you how I understand the translation vector, the rotation vector, and the general idea of visual odometry. Please correct me where I am wrong.

Let's say we use a single camera on a mobile robot for visual odometry and we capture the following image as our first frame. We run a feature detection function on it (FAST, for example) and, for simplification, detect the 4 corners of the portrait (A, B, C, D - marked in black) with their respective positions A(ax,ay), B(bx,by), C(cx,cy) and D(dx,dy):

image description

Now let's suppose the robot moved forward linearly, without turning, and we got the next frame below, in which we matched these corners (using, for example, the calcOpticalFlowPyrLK() function). They are shown in red this time (A', B', C', D') with their respective positions A'(ax',ay'), B'(bx',by'), C'(cx',cy'), D'(dx',dy'), while the previous points from the first frame are shown in black:

image description
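
To make the procedure concrete, this is roughly how I picture the pipeline in OpenCV (a sketch of my own understanding, not code from the paper; relativePose is just a name I made up). As far as I can tell, the translation recoverPose() returns is only a unit direction, which I suspect is what "up to a scale factor" refers to:

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of the monocular pipeline between two consecutive grayscale frames.
void relativePose(const cv::Mat& prevGray, const cv::Mat& nextGray, const cv::Mat& K)
{
    // 1. Detect features in the first frame.
    std::vector<cv::KeyPoint> keypoints;
    cv::FAST(prevGray, keypoints, 20, true);
    std::vector<cv::Point2f> pts1;
    cv::KeyPoint::convert(keypoints, pts1);

    // 2. Track them into the second frame.
    std::vector<cv::Point2f> pts2;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, nextGray, pts1, pts2, status, err);

    // Keep only the points that were tracked successfully.
    std::vector<cv::Point2f> good1, good2;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) { good1.push_back(pts1[i]); good2.push_back(pts2[i]); }

    // 3. Essential matrix from the tracked correspondences.
    cv::Mat mask;
    cv::Mat E = cv::findEssentialMat(good1, good2, K, cv::RANSAC, 0.999, 1.0, mask);

    // 4. Decompose it into rotation R and translation t.
    cv::Mat R, t;
    cv::recoverPose(E, good1, good2, K, R, t, mask);

    // As far as I understand, t comes back normalized to unit length: the direction
    // of motion is recovered, but the distance actually travelled (the scale) cannot
    // be observed with a single camera.
}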

Questions:

  1. If, after the procedure I presented above (assuming I am right in what I am saying), I find the essential matrix and recover the translation and rotation with the recoverPose() function, shouldn't this be enough to know how the camera moved?
  2. Isn't the t vector a vector containing the linear displacement of the camera, and similarly doesn't the rotation vector contain its rotation?
  3. What does the phrase I quoted above actually mean? What is a scale factor, and why should I use a scale factor for the translation?

Please bear with me for any follow-up questions that may arise.

Thank you for your answers and for your time in advance,

Chris

2016-10-09 10:49:46 -0600 asked a question Assertion failed while using findEssentialMat()

I am using the findEssentialMat() function so that I can later recover the relative pose (translation and rotation) between two consecutive webcam frames with recoverPose(). However, I get an assertion failed error:

OpenCV Error: Assertion failed(!fixedType() && !fixedSize()) in create, file /home/patrchri/opencv/modules/core/src/matrix.cpp, line 2297

To be more specific, I have written the following inside a while loop which reads consecutive frames:

 vector<KeyPoint> kpointsframe1;
 vector<KeyPoint> kpointsframe2;
 vector<Point2f> pointsframe1;
 vector<Point2f> pointsframe2;
 vector<Point2f> velocityvect;
 vector<uchar> status;
 vector<float> err;
 Size winSize = Size(21,21);
 TermCriteria termcrit = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS,30,0.01);
 Mat Essential,R,T;
 capture.read(frame1);
 capture.read(frame2);
 undistort(frame1,frame1undist,CameraMatrix,distCoeffs);
 undistort(frame2,frame2undist,CameraMatrix,distCoeffs);
 cvtColor(frame1undist,frame1gray,CV_BGR2GRAY);
 cvtColor(frame2undist,frame2gray,CV_BGR2GRAY);
 //goodFeaturesToTrack(frame1gray,corners,500,qualityLevel,minDistance,Mat(),blockSize,useHarrisDetector);
 FAST(frame1gray,kpointsframe1,THRESHOLD_DIFFERENCE,false,FastFeatureDetector::TYPE_9_16);
 for(int i=0;i<kpointsframe1.size();i++)    pointsframe1.push_back(kpointsframe1.at(i).pt);
 calcOpticalFlowPyrLK(frame1gray,frame2gray,pointsframe1,pointsframe2,status,err,winSize,3,termcrit,0,0.001);
 int ind = 0;
 for( int i=0; i<status.size(); i++){
    Point2f pt = pointsframe2.at(i- ind);
        if ((status.at(i) == 0)||(pt.x<0)||(pt.y<0)){
              if((pt.x<0)||(pt.y<0)){
                status.at(i) = 0;
              }
              pointsframe1.erase(pointsframe1.begin()+(i-ind));
              pointsframe2.erase(pointsframe2.begin()+(i-ind));
              ind++;
        }

 }
 Point2d pp(320,240);
 double focal = 9.5327626068874099e+02;
 Essential = findEssentialMat(pointsframe1,pointsframe2,CameraMatrix,RANSAC,0.999,1,Mat());
 recoverPose(Essential,pointsframe1,pointsframe2,CameraMatrix,R,T,Mat());

I have declared the CameraMatrix outside this while loop.

The issue arises from findEssentialMat(), but I am confused about what is wrong with it. I searched for similar assertion failures, but I couldn't find anything helpful. The error presumably comes from the sizes of the Mats used. What could be wrong with it?

Thank you for your answers and for your time in advance,

Chris

1st EDIT:

The CameraMatrix has been declared as follows after the calibration of the webcam:

 double data[3][3];
 data[0][0] = 9.5327626068874099e+02;
 data[0][1] = 0.0;
 data[0][2] = 320.;
 data[1][0] = 0.0;
 data[1][1] = 9.5327626068874099e+02;
 data[1][2] = 240.0;
 data[2][0] = 0.0;
 data[2][1] = 0.0;
 data[2][2] = 1.0;
 double avg_reprojection_error = 3.5640854681839190e-01;
 Mat CameraMatrix(3,3,CV_64F,&data);

2nd EDIT:

The full screenshot of the error is the following:

image description

If I follow the trace I see that it comes from:

void _OutputArray::create(int d, const int* sizes, int mtype, int i,
                          bool allowTransposed, int fixedDepthMask) const
{
    int k = kind();
    mtype = CV_MAT_TYPE(mtype);

    if( k == MAT )
    {
        CV_Assert( i < 0 );
        Mat& m = *(Mat*)obj;
        if( allowTransposed )
        {
            if( !m.isContinuous() )
            {
                --->CV_Assert(!fixedType() && !fixedSize());<---- (That is line 2297)
                m.release();
            }

            if( d == 2 && m.dims == 2 && m.data &&
                m.type() == mtype && m.rows == sizes[1] && m.cols == sizes[0] )
                return;
        }

        if(fixedType())
        {
            if(CV_MAT_CN(mtype) == m.channels() && ((1 << CV_MAT_TYPE(flags)) & fixedDepthMask) != 0 ...
(more)
2016-10-09 08:18:21 -0600 commented answer Issue with calcOpticalFlowPyrLK()

Ok, thank you for your help.

2016-10-09 07:55:14 -0600 commented answer Issue with calcOpticalFlowPyrLK()

-"Optical Flow finds the location in the new frame, of the old points".

Since I find the keypoints with FAST in the previous frame, convert them to points and then call optical flow, wouldn't the function try to search for these points in the next frame? By "local process", I guess you mean that it searches locally around each point rather than over the entire image; in other words, it is meant for small displacements and it might lose some points or produce false matches?

-"Keypoint matching can find matches regardless of where the keypoint is in the second image, whereas optical flow is a local process."

So you suggest getting rid of optical flow and using a global matcher, like brute force for example?

-""Easy" Ahahaha. No."

Lol, what I meant was not the implementation, but the idea of the estimation :)

2016-10-08 22:19:40 -0600 commented answer Issue with calcOpticalFlowPyrLK()

Firstly, thank you for your quick reply. So, in other words, optical flow gives me the error of the keypoint/point matching, since it searches for the matches of the points given from the first frame, am I correct? If it does, shouldn't it be easy, from the difference between the points and knowing the fps of my webcam, to get an estimate of the speed I am moving at in a static environment? I apologize in advance for the many questions, but I am trying to make sure that my thoughts on this are correct and that I have a clear understanding of what is going on and how to proceed with monocular odometry.
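
Roughly what I have in mind is something like the sketch below (a toy example of my own; pixelSpeed is just a name I made up, and I realize this would only give an apparent speed in pixels per second, not metres, since the scale is unknown):

#include <cmath>
#include <vector>
#include <opencv2/core.hpp>

// Toy sketch: mean pixel displacement between two frames, multiplied by the
// camera frame rate, gives an apparent speed in pixels per second.
double pixelSpeed(const std::vector<cv::Point2f>& pts1,
                  const std::vector<cv::Point2f>& pts2,
                  const std::vector<uchar>& status,
                  double fps)
{
    double sum = 0.0;
    int count = 0;
    for (size_t i = 0; i < status.size(); ++i) {
        if (!status[i]) continue;                 // skip points that were not tracked
        cv::Point2f d = pts2[i] - pts1[i];        // displacement of this point in pixels
        sum += std::sqrt(double(d.x) * d.x + double(d.y) * d.y);
        ++count;
    }
    return count > 0 ? (sum / count) * fps : 0.0; // mean pixels/frame * frames/second
}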

2016-10-08 21:03:48 -0600 received badge  Editor (source)
2016-10-08 21:01:26 -0600 asked a question Issue with calcOpticalFlowPyrLK()

Hello,

I am trying to implement visual odometry with my webcam, and the steps I have followed so far are the following:

  1. Capture video from the webcam (two consecutive frames each loop).
  2. Undistort the frames with the calibration parameters I got.
  3. Convert frames to gray scale.
  4. Apply FAST algorithm for feature detection.
  5. And now I am at the calcOpticalFlowPyrLK() step.

My issue is that at step 4, when I print the number of keypoints found in the two consecutive frames, I get different numbers for the two frames (which is logical), but when I do the same thing after calling calcOpticalFlowPyrLK() I get exactly the same number.

To be more specific, I have written:

 vector<KeyPoint> kpointsframe1;
 vector<KeyPoint> kpointsframe2;
 vector<Point2f> pointsframe1;
 vector<Point2f> pointsframe2;
 vector<uchar> status;
 vector<float> err;
 Size winSize = Size(21,21);
 TermCriteria termcrit = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS,30,0.01);
 Mat Essential;
 capture.read(frame1);
 capture.read(frame2);
 undistort(frame1,frame1undist,CameraMatrix,distCoeffs);
 undistort(frame2,frame2undist,CameraMatrix,distCoeffs);
 cvtColor(frame1undist,frame1gray,CV_BGR2GRAY);
 cvtColor(frame2undist,frame2gray,CV_BGR2GRAY);
 FAST(frame1gray,kpointsframe1,THRESHOLD_DIFFERENCE,false,FastFeatureDetector::TYPE_9_16);
 FAST(frame2gray,kpointsframe2,THRESHOLD_DIFFERENCE,false,FastFeatureDetector::TYPE_9_16);
 for(int i=0;i<kpointsframe1.size();i++)    pointsframe1.push_back(kpointsframe1.at(i).pt);
 for(int i=0;i<kpointsframe2.size();i++)    pointsframe2.push_back(kpointsframe2.at(i).pt);
 calcOpticalFlowPyrLK(frame1gray,frame2gray,pointsframe1,pointsframe2,status,err,winSize,3,termcrit,0,0.0001);

and I print the following in each loop:

 cout<<status.size()<<endl;
 cout<<pointsframe1.size()<<endl;
 cout<<pointsframe2.size()<<endl;
 cout<<"-------------------"<<endl;

If I comment out the line

calcOpticalFlowPyrLK(frame1gray,frame2gray,pointsframe1,pointsframe2,status,err,winSize,3,termcrit,0,0.0001);

I get different numbers, but if I leave it in, I get three numbers which are all the same in each loop. All of the above code is inside a while loop. I am wondering why this happens. Shouldn't calcOpticalFlowPyrLK() try to find the matching between the keypoints of the two frames and give the status of their differences, or am I misunderstanding something?
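
From what I have read in the documentation, my current guess (which may be wrong) is that the second point vector is an output of the function rather than an input, so whatever I put there from FAST on frame 2 gets overwritten. Something like the following seems to be how the call is intended (trackedpoints is just a name for illustration; frame1gray and frame2gray are as in the code above):

// Sketch of how calcOpticalFlowPyrLK seems to be meant to be used (my guess):
// the second vector is filled by the function with the tracked positions of the
// first vector's points, so it ends up the same size as pointsframe1, and so
// does status - which would explain the three identical numbers.
std::vector<cv::Point2f> pointsframe1;   // FAST corners from frame 1, converted to points as above
std::vector<cv::Point2f> trackedpoints;  // left empty: filled by the tracker
std::vector<uchar> status;               // per point: 1 if it was found in frame 2, 0 otherwise
std::vector<float> err;

cv::calcOpticalFlowPyrLK(frame1gray, frame2gray,
                         pointsframe1, trackedpoints,
                         status, err,
                         cv::Size(21, 21), 3,
                         cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 30, 0.01));

// Here status.size() == pointsframe1.size() == trackedpoints.size(), so running
// FAST on frame 2 would not be needed for the tracking step at all.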

I am new to image processing, so please bear with me for any follow-up questions that may arise.

Thank you for answering and for your time in advance,

Chris

2016-10-08 20:40:48 -0600 received badge  Critic (source)