Ask Your Question

Philippe's profile - activity

2017-05-31 04:51:35 -0500 asked a question dft function

I am using the dft function (tried with both real and complex input, where the imaginary part was set to zero). The output of the dft is in a compressed (CCS-packed) format.

1) how do I convert this into the natural format?

2) is there a way to calculate the dft without this intermediate step?

3) If I use a real signal with a pure tone (e.g. a sine wave), I get artifacts in the dft (instead of only that specific frequency, I get perturbations at other frequencies as well). Is this a known issue?

4) when I do the inverse dft (to get the original signal), the results typically have 2 orders of magnitude more noise than LabVIEW (a language similar to Matlab, with its own FFT algorithm). Does OpenCV take shortcuts that include rounding of values?

2015-05-11 03:26:43 -0500 asked a question reprojectImageTo3D not working for CV_16U

Hi All

For Kinect2, I have used the built-in intrinsics, initUndistortRectifyMap, remap, and then I want to reprojectImageTo3D. I see it works for the depth image only if I convert it first to CV_8UC1 or CV_32FC1, i.e. it does not work for the original format of CV_16U. Is there a new version of reprojectImageTo3D that can handle CV_16U? Also, it appears that remap and reprojectImageTo3D run on the CPU; how do I move this to the GPU? When I correct the IR or depth image, it reduces my frame rate from 15 down to <12 frames per second (I am doing other processing, but the relative drop in performance for just using a LUT and matrix manipulation appears terrible).

2014-05-06 03:04:25 -0500 received badge  Necromancer (source)
2014-05-04 10:16:00 -0500 answered a question How to reorder matrix columns according to sortIdx() result?

1 year later, but I have come across a solution that works on 2D arrays. So instead of 2 vectors, you can make it an Nx2 array and sort the 2nd column based on the 1st column - no need for an index list. Furthermore, if there are duplicates in the 1st column, you can then sort the duplicates based on the 2nd column - e.g.:

[7,15],[7,10],[1,2],[3,4],[7,2] => [1,2],[3,4],[7,2],[7,10],[7,15]

This can even be extended to Nx3, as done in the example below... most of the code is actually the cout to show the results - the core is a single call to qsort plus the comparison routine (which is the engine). I made a variation for int by changing the arr definition and the qsort element type - works like a charm. Since I am new to this forum, I am restricted in submitting code... hope it comes out.

#include <sstream>
#include <string>
#include <iostream>
#include <stdlib.h>

using namespace std;


static int compfloat(const void* p1, const void* p2) {
  const float* arr1 = (const float*)p1;
  const float* arr2 = (const float*)p2;
  //return -1/0/1 explicitly: returning the float difference cast to int
  //would truncate fractional differences (e.g. 0.5) to 0, i.e. "equal"
  if (arr1[0] != arr2[0]) return (arr1[0] < arr2[0]) ? -1 : 1;
  if (arr1[1] != arr2[1]) return (arr1[1] < arr2[1]) ? -1 : 1;   //only compares 2nd column if the 1st is the same - can remove
  return 0;
}

#define arraysize 5   

int main()
{ float arr[arraysize][3] = {{5,10,1},{2,2,1}, {1,5,2}, {5,4,3}, {5,20,4}};  //example data
  for (int i=0; i<arraysize; i++)
  {   for (int j=0;j<3;j++)                            //3=number of columns... change as required
      cout << arr[i][j] << " ";
      cout << endl;
  }
      cout << endl;

  qsort(arr, arraysize, 3*sizeof(float), compfloat);   //3=number of columns... change as required

  for (int i=0; i<arraysize; i++)
  {   for (int j=0;j<3;j++)                            //3=number of columns... change as required
        cout << arr[i][j] << " ";
      cout << endl;
  }
  return 0;
}
2014-05-04 07:41:52 -0500 commented question Inverse bilinear interpolation (pupil tracker)

I have simply used a % of the 2 coordinates (x and y), and had OK results. How are you keeping the eye socket in the same relative position to the screen? I actually tried a heads-up display and tried to fit in a camera (so that the eye socket is stationary with respect to both the camera and the screen), but this would prevent widespread use. Using the webcam and a face tracker, it should be possible to find the center of the eye socket at any time (head movement), from which the gaze direction can be calculated. I suspect that the much lower resolution of the webcam will require both eyes to be tracked to increase the resolution. Please share your code once you have it working: a friend of mine has muscular dystrophy, and such a tool would be so appreciated.

2014-05-04 02:55:06 -0500 commented answer Finding the center of eye pupil

same difficulties myself - and it gets harder when there is a light source reflecting on the pupil. When you have this solved, please share an example piece of code...

2014-05-04 02:45:35 -0500 answered a question Intensity value based tracking

There is a fantastic tracking example (extracted into your samples directory when you installed opencv) called objecttracking:

1) it converts the image from RGB to HSV - cvtColor(imageRGB,imageHSV,COLOR_BGR2HSV);

2) it extracts all sections that match the required HSV range - in your case intensity only - inRange(imageHSV,Scalar(H_MIN,S_MIN,V_MIN),Scalar(H_MAX,S_MAX,V_MAX),imagethreshold);

3) it uses morphological functions (erode and dilate) to clean the image of 'noise';

4) it then uses findContours to create rectangles around all parts of the image that match your threshold - findContours(imagetemp,contours,hierarchy,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_SIMPLE);

Hope I understood your question....

2014-05-04 02:32:21 -0500 answered a question Measuring the physical distance between two points on an image

If you have a reference size somewhere in the image, you can use this. Possibly the height of the person if the image is taken at the correct angle. For all cases you will require a form of reference, either from a previous image in the same location, or a set of features on the same image as your silhouette.

2014-05-04 01:09:39 -0500 answered a question missing mingw in 2.4.8 windows

Is it possible to have the binaries for mingw included in the opencv distributable, as done for previous versions? Uncompressed, the mingw build is only 45 MB (versus 350 MB for vc11). I realize that there are so many possibilities with mingw, but using the same settings as for opencv 2.4.5 would be great....

http://sourceforge.net/projects/mingw/files/Automated%20MinGW%20Installer/mingw-get-inst/

2014-05-04 01:06:01 -0500 received badge  Supporter (source)
2013-06-19 02:19:45 -0500 asked a question multiple cameras

Hi All.

I would like to connect 6 cameras to my PC. In OpenGL I have no problems (I used openFrameworks in Code::Blocks). In OpenCV I can connect to each camera sequentially, or even 2 simultaneously; however, I cannot get a live feed from all 6 at the same time. 1 or 2 of the cameras may still display an image (after long delays), while the others remain blank. I suspect a bug in OpenCV. Anyone out there with a solution? I suspect that internally their buffers are clashing, the frames are not updating, or something similar. Code below.

#include <opencv2/opencv.hpp>

int main()
{
    //initialize and allocate memory to load the video stream from camera
    cv::VideoCapture camera0(0);
    cv::VideoCapture camera1(4);
    cv::VideoCapture camera2(5);

    if( !camera0.isOpened() ) return 1;
    if( !camera1.isOpened() ) return 1;
    if( !camera2.isOpened() ) return 1;


    while(true) {
        //grab and retrieve each frames of the video sequentially
        cv::Mat3b frame0;
        camera0 >> frame0;
        cv::Mat3b frame1;
        camera1 >> frame1;
        cv::Mat3b frame2;
        camera2 >> frame2;

        cv::imshow("Video0", frame0);
        cv::imshow("Video1", frame1);
        cv::imshow("Video2", frame2);

        //wait for 100 milliseconds
        int c = cv::waitKey(100);

        //exit the loop if user press "Esc" key  (ASCII value of "Esc" is 27)
        if(27 == char(c)) break;
    }

    return 0;
}