
ann's profile - activity

2015-01-14 02:34:04 -0600 commented question Steps for 3D image construction

I don't want a complete solution. I actually read a lot of papers, but I couldn't get a clear idea; that's why I'm asking. I already posted a question about the same topic, describing what I tried, but I didn't get any response.

2015-01-13 22:23:20 -0600 asked a question Steps for 3D image construction

I want to convert 2D images to 3D images. I searched a lot and finally found the 3D morphable model approach, but I don't understand the steps it follows. After capturing the 2D images, what do I do next?

2015-01-03 04:24:19 -0600 commented question How to make 3D images in opencv

I got H2 as a (3,3) matrix. I don't know why, or how to make it a (4,4) matrix.

2015-01-02 03:14:51 -0600 commented question How to make 3D images in opencv

Please reply...

2014-12-30 02:31:14 -0600 received badge  Enthusiast
2014-12-28 22:09:05 -0600 commented question How to make 3D images in opencv

I got H2 as a (3,3) matrix. I don't know why, or how to make it a (4,4) matrix.
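(For context: cv::stereoRectifyUncalibrated always produces 3x3 planar rectification homographies, so a (3,3) H2 is expected rather than an error. A minimal sketch of padding such a 3x3 homography into a 4x4 homogeneous matrix, in case some later step expects one, might look like the following; the helper name and the padding convention are assumptions for illustration, not part of the OpenCV API.)

#include <opencv2/opencv.hpp>

// Hypothetical helper: embed a 3x3 homography H into a 4x4 matrix by copying it
// into the top-left block of an identity matrix. This is only one possible
// convention, not an OpenCV function.
cv::Mat embedHomography4x4(const cv::Mat& H3x3)
{
    CV_Assert(H3x3.rows == 3 && H3x3.cols == 3);
    cv::Mat H4x4 = cv::Mat::eye(4, 4, CV_64F);
    cv::Mat H64;
    H3x3.convertTo(H64, CV_64F);            // make sure the element type matches
    H64.copyTo(H4x4(cv::Rect(0, 0, 3, 3))); // top-left 3x3 block
    return H4x4;
}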

2014-12-27 03:23:03 -0600 received badge  Editor (source)
2014-12-27 03:22:12 -0600 asked a question How to make 3D images in opencv

I want to make 3D images from 2D images. So I tried the following code.

#include <opencv2/opencv.hpp>
// Note: depending on the OpenCV 2.x version, SIFT/SURF may also require
// <opencv2/nonfree/features2d.hpp>, and BruteForceMatcher <opencv2/legacy/legacy.hpp>.
#include <iostream>
#include <vector>
#include <cstdio>

using namespace cv;

int main(int argc, char *argv[]){

// Read the input images as grayscale
Mat imgLeft = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
Mat imgRight = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );
Mat Q;   // reprojection matrix (normally filled by stereoRectify)
Mat xyz; // 3D points from reprojectImageTo3D

// check that both images were loaded
if (!imgLeft.data || !imgRight.data)
        return -1;

// 1] Detect keypoints in both images (SIFT) ::::::::::::::::::::::::::::::::::::::

// vectors of keypoints
std::vector<cv::KeyPoint> keypointsLeft;
std::vector<cv::KeyPoint> keypointsRight;

// Construct the SIFT feature detector object
SiftFeatureDetector sift(
        0.01, // feature threshold
        10);  // threshold to reduce sensitivity to edges/lines

// Detect the SIFT features in each image
sift.detect(imgLeft,keypointsLeft);
sift.detect(imgRight,keypointsRight);

std::cout << "Number of SIFT points (1): " << keypointsLeft.size() << std::endl;
std::cout << "Number of SIFT points (2): " << keypointsRight.size() << std::endl;

// 2] compute descriptors of these keypoints (SURF,SIFT) ::::::::::::::::::::::::::

// Construction of the SURF descriptor extractor
cv::SurfDescriptorExtractor surfDesc;

// Extraction of the SURF descriptors
cv::Mat descriptorsLeft, descriptorsRight;
surfDesc.compute(imgLeft,keypointsLeft,descriptorsLeft);
surfDesc.compute(imgRight,keypointsRight,descriptorsRight);

// 3] matching keypoints from image right and image left according to their descriptors (BruteForce, Flann based approaches)

// Construction of the matcher
cv::BruteForceMatcher<cv::L2<float> > matcher;

// Match the two image descriptors
std::vector<cv::DMatch> matches;
matcher.match(descriptorsLeft,descriptorsRight, matches);

std::cout << "Number of matched points: " << matches.size() << std::endl;


// 4] find the fundamental mat ::::::::::::::::::::::::::::::::::::::::::::::::::::

// Convert 1 vector of keypoints into
// 2 vectors of Point2f for compute F matrix
// with cv::findFundamentalMat() function
std::vector<int> pointIndexesLeft;
std::vector<int> pointIndexesRight;
for (std::vector<cv::DMatch>::const_iterator it= matches.begin(); it!= matches.end(); ++it) {

     // Get the indexes of the selected matched keypoints
     pointIndexesLeft.push_back(it->queryIdx);
     pointIndexesRight.push_back(it->trainIdx);
}

// Convert keypoints into Point2f
std::vector<cv::Point2f> selPointsLeft, selPointsRight;
cv::KeyPoint::convert(keypointsLeft,selPointsLeft,pointIndexesLeft);
cv::KeyPoint::convert(keypointsRight,selPointsRight,pointIndexesRight);

// Compute F matrix from n>=8 matches
cv::Mat fundamental= cv::findFundamentalMat(
        cv::Mat(selPointsLeft),  // points in first image
        cv::Mat(selPointsRight), // points in second image
        CV_FM_RANSAC);           // RANSAC method


// 5] stereoRectifyUncalibrated()::::::::::::::::::::::::::::::::::::::::::::::::::

// H1, H2: output rectification homographies for the first and second images.
// Note: stereoRectifyUncalibrated reallocates these as 3x3 (not 4x4) matrices.
cv::Mat H1(4,4, imgRight.type());
cv::Mat H2(4,4, imgRight.type());
// Keep the point order consistent with how F was computed (left first, then right).
cv::stereoRectifyUncalibrated(selPointsLeft, selPointsRight, fundamental, imgRight.size(), H1, H2);

// create the image in which we will save our disparities
Mat imgDisparity16S = Mat( imgLeft.rows, imgLeft.cols, CV_16S );
Mat imgDisparity8U = Mat( imgLeft.rows, imgLeft.cols, CV_8UC1 );

// Call the constructor for StereoBM
int ndisparities = 16*5;      // < Range of disparity >
int SADWindowSize = 5;        // < Size of the block window > Must be odd. Is the 
                              // size of averaging window used to match pixel  
                              // blocks(larger values mean better robustness to
                              // noise, but yield blurry disparity maps)

StereoBM sbm( StereoBM::BASIC_PRESET,
    ndisparities,
    SADWindowSize );

// Calculate the disparity image
sbm( imgLeft, imgRight, imgDisparity16S, CV_16S );

// Check its extreme values
double minVal; double maxVal;

minMaxLoc( imgDisparity16S, &minVal, &maxVal );

printf("Min disp: %f Max value: %f \n", minVal, maxVal);

// Display it as a CV_8UC1 image
imgDisparity16S.convertTo( imgDisparity8U, CV_8UC1, 255/(maxVal - minVal));
 //reprojectImageTo3D ( const Mat &  disparity , Mat &  _3DImage , const Mat &  Q , bool  HandleMissingValues ...
(more)
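For anyone reading along, the truncated last step is usually finished roughly as follows. This is only a sketch under the assumption that a valid Q reprojection matrix is available; in the uncalibrated pipeline above Q is never filled in, so it would normally have to come from cv::stereoRectify with calibrated camera parameters. The snippet continues the variables declared in the code above.

// Sketch of the final reprojection step (assumes Q was obtained from
// cv::stereoRectify with calibration data; the uncalibrated code above never fills it).
cv::Mat disparityFloat;
// StereoBM with CV_16S output stores disparities as fixed-point values with 4 fractional
// bits, so divide by 16 to get true disparities.
imgDisparity16S.convertTo(disparityFloat, CV_32F, 1.0 / 16.0);

cv::reprojectImageTo3D(disparityFloat, xyz, Q, true); // xyz: CV_32FC3 image of 3D points

cv::imshow("disparity", imgDisparity8U);
cv::waitKey(0);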
2014-12-27 01:12:13 -0600 received badge  Scholar (source)
2014-12-08 03:54:04 -0600 commented question Training using PCA

Please respond...

2014-12-05 21:54:15 -0600 asked a question Training using PCA

I am doing face recognition using PCA. I want to train on images of 40 people. Is it mandatory to give the same number of images of each person at training time, or can I give a different number of images? Does the recognition result change between giving the same number of images and a different number of images per person?
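For reference, the PCA-based (Eigenfaces) recognizer in OpenCV 2.4's contrib module accepts any number of images per label, so the per-person counts do not have to be equal. A minimal sketch, with made-up file names, might look like this:

#include <opencv2/opencv.hpp>
#include <opencv2/contrib/contrib.hpp> // FaceRecognizer / createEigenFaceRecognizer (OpenCV 2.4)
#include <iostream>
#include <vector>

int main()
{
    std::vector<cv::Mat> images;
    std::vector<int> labels;

    // Hypothetical training data: person 0 contributes 30 images, person 1 contributes 40.
    // All images must share the same size and type for Eigenfaces.
    for (int i = 0; i < 30; ++i) {
        images.push_back(cv::imread(cv::format("person0_%02d.png", i), CV_LOAD_IMAGE_GRAYSCALE));
        labels.push_back(0);
    }
    for (int i = 0; i < 40; ++i) {
        images.push_back(cv::imread(cv::format("person1_%02d.png", i), CV_LOAD_IMAGE_GRAYSCALE));
        labels.push_back(1);
    }

    // Train an Eigenfaces (PCA) model; unequal image counts per label are accepted.
    cv::Ptr<cv::FaceRecognizer> model = cv::createEigenFaceRecognizer();
    model->train(images, labels);

    // Predict the label of a (hypothetical) test image.
    cv::Mat test = cv::imread("test_face.png", CV_LOAD_IMAGE_GRAYSCALE);
    std::cout << "Predicted label: " << model->predict(test) << std::endl;
    return 0;
}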

2014-12-04 23:58:36 -0600 asked a question Face recognition using PCA

I am doing face recognition using PCA. I want to train on 40 people. Is it mandatory to give the same number of images of each person at training time, or is it possible to give a different number of images per person, for example 30 images of the first person and 40 images of the second? Is there any difference in the recognition result between the two cases?

2014-11-03 04:55:57 -0600 asked a question How to convert 2D image to 3D

I want to convert my 2D image to a 3D image. Is there any method to do this in OpenCV?

2014-10-08 05:28:32 -0600 received badge  Supporter (source)
2014-10-07 23:31:39 -0600 commented answer How to do face alignment without considering eye position

Is there any way to do face alignment for a profile face?

2014-10-07 04:44:06 -0600 commented answer How to do face alignment without considering eye position

Actually, matchTemplate() is used for image matching. How can we use it for face alignment?
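For context, matchTemplate() by itself only reports where a small template scores best inside a larger image; a minimal sketch of that basic call (with placeholder file names) is below, and any alignment scheme would have to be built on top of the located positions.

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Placeholder file names, for illustration only.
    cv::Mat image = cv::imread("face_image.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat templ = cv::imread("face_template.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Slide the template over the image and score each position.
    cv::Mat result;
    cv::matchTemplate(image, templ, result, CV_TM_CCOEFF_NORMED);

    // The best match is the maximum of the normalized correlation score.
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    std::cout << "Best match at (" << maxLoc.x << ", " << maxLoc.y
              << ") with score " << maxVal << std::endl;
    return 0;
}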

2014-10-07 04:21:55 -0600 commented answer How to do face alignment without considering eye position

Yes, I know it is easy by locating the eyes. But suppose I get an image showing the side of the face; then I can't locate the eyes. That's why I am asking whether it is possible to do face alignment without locating any particular points.

2014-10-07 00:15:19 -0600 asked a question How to do face alignment without considering eye position

I am doing a face recognition project. Now I want to align faces without relying on eye positions. If I get a side view of a face, I can't locate the eye positions, yet I still need to align the face. So is it possible to do face alignment without relying on any particular points? I want to align faces whether they are frontal or side views. Could anyone give me a good suggestion?