
souraklis's profile - activity

2021-02-17 02:52:39 -0600 received badge  Famous Question (source)
2019-11-16 09:31:29 -0600 received badge  Famous Question (source)
2019-09-03 05:22:49 -0600 received badge  Famous Question (source)
2017-11-29 11:31:57 -0600 received badge  Notable Question (source)
2017-10-25 14:44:27 -0600 received badge  Notable Question (source)
2017-05-31 04:53:01 -0600 received badge  Notable Question (source)
2016-12-20 10:59:10 -0600 received badge  Popular Question (source)
2016-07-18 13:36:06 -0600 received badge  Popular Question (source)
2016-06-05 08:43:16 -0600 received badge  Popular Question (source)
2014-10-28 03:20:24 -0600 received badge  Nice Question (source)
2014-10-06 03:00:01 -0600 asked a question Extract foreground using grabcut

I am using the GrabCut algorithm to separate the foreground and background of an image. I want to use the computed GrabCut mask to keep only the background of the image. How can I do so? How can I extract only those pixels from my image? I use the following C++ code:

cv::grabCut(image,    // input image
    result,   // segmentation result
    rectangle,// rectangle containing foreground 
    bgModel,fgModel, // models
    1,        // number of iterations
    cv::GC_INIT_WITH_RECT); // use rectangle

cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
cv::imshow("test.image", result);
cv::waitKey();
cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(foreground,result); // bg pixels not copied
cv::rectangle(image, rectangle, cv::Scalar(255,255,255),1);
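
Since the goal is to keep only the background, a minimal sketch of one possible approach (using the same variable names as above, and assuming the labels are cloned right after the grabCut call, before cv::compare overwrites result) could be:

// GC_BGD marks definite background, GC_PR_BGD probable background.
cv::Mat labels = result.clone();                      // raw grabCut labels
cv::Mat bgMask = (labels == cv::GC_BGD) | (labels == cv::GC_PR_BGD);
cv::Mat background(image.size(), CV_8UC3, cv::Scalar(255,255,255));
image.copyTo(background, bgMask);                     // foreground pixels stay white
cv::imshow("background", background);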
2014-10-03 08:20:35 -0600 asked a question Grab_cut implementation using a specific mask

I want to perform grabCut using my own created mask. I found Python code which can be found here: http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html . What I want is to perform grabCut given my own mask. How is it possible to give a bounding box of the image as the mask input?
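For illustration, a minimal C++ sketch of initializing a mask from a bounding rectangle and then running grabCut with GC_INIT_WITH_MASK might look like this (the variable names image and rectangle are assumptions, not taken from the tutorial):

// Mark everything as definite background, then the box as probable foreground.
cv::Mat mask(image.size(), CV_8UC1, cv::Scalar(cv::GC_BGD));
mask(rectangle).setTo(cv::Scalar(cv::GC_PR_FGD));
cv::Mat bgModel, fgModel;
// An empty rect is passed because the mask drives the initialization here.
cv::grabCut(image, mask, cv::Rect(), bgModel, fgModel, 5, cv::GC_INIT_WITH_MASK);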

2014-10-01 09:50:42 -0600 commented answer opencv_highgui249.lib(opencv_highgui249.dll) : fatal error LNK1112

In the fifth step ("In the Type or select the new platform drop-down list, select a 64-bit platform"), the drop-down menu is empty.

2014-10-01 09:47:18 -0600 commented answer opencv_highgui249.lib(opencv_highgui249.dll) : fatal error LNK1112

When I try to create a new configuration, the drop-down menu only contains Win32 and no x64 entry, so I have to create it. But the "Copy settings from" field also only contains the Win32 option.

2014-10-01 08:27:18 -0600 asked a question opencv_highgui249.lib(opencv_highgui249.dll) : fatal error LNK1112

I am trying to compile a simple OpenCV project on Windows 7 64-bit. I am facing the following error: opencv_highgui249.lib(opencv_highgui249.dll) : fatal error LNK1112: module machine type 'x64' conflicts with target machine type 'X86'

In the project properties, under Linker I set the Additional Library Directories to E:...\opencv\build\x64\vc10\lib; and added all necessary libs in Input -> Additional Dependencies. However, I still get the above error.

2014-03-18 04:04:10 -0600 asked a question Feature extraction with ORB

I am using OpenCV's ORB class for feature extraction. As I see it, for every input image it gives several interest points, each with a descriptor. However, I have a database of images and I want to calculate a feature vector for each of these images for classification purposes. I am wondering how it is possible to reduce all these interest points to a fixed-size representation. Am I supposed to create a visual codebook over the images?
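One common way to get a fixed-size vector per image is indeed a bag of visual words. A rough sketch, assuming the OpenCV 2.4-style API (cv::ORB and cv::BOWKMeansTrainer), could be:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Cluster ORB descriptors from all training images into a fixed-size vocabulary.
cv::Mat buildVocabulary(const std::vector<cv::Mat>& images, int vocabularySize)
{
    cv::ORB orb;                                     // default ORB parameters
    cv::BOWKMeansTrainer bowTrainer(vocabularySize); // k-means over all descriptors

    for (size_t i = 0; i < images.size(); ++i)
    {
        std::vector<cv::KeyPoint> keypoints;
        cv::Mat descriptors;
        orb(images[i], cv::Mat(), keypoints, descriptors);
        if (descriptors.empty())
            continue;
        cv::Mat descriptorsF;
        descriptors.convertTo(descriptorsF, CV_32F); // k-means needs float data
        bowTrainer.add(descriptorsF);
    }
    // Each image can later be encoded as a vocabularySize-bin histogram,
    // i.e. a fixed-length feature vector suitable for a classifier.
    return bowTrainer.cluster();
}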

2014-02-24 04:08:58 -0600 commented question Store mat data in txt

Yes, thanks, casting to int works!

2014-02-24 03:55:40 -0600 commented question Store mat data in txt

(pasted sample of the txt file contents, mostly unprintable/garbled characters: h < E ^ W r ; C 9 1 6 7 4 % ...)

2014-02-24 03:54:22 -0600 commented question Store mat data in txt

OK, now it works without exceptions, but weird characters are stored in the txt file.

2014-02-24 03:49:21 -0600 commented question Store mat data in txt

I am trying to figure out the type of my Mat. I just used image = imread(filename, 0); to read the file, where filename is a .jpg file.

2014-02-24 03:38:48 -0600 asked a question Store mat data in txt

I am trying to write a Mat to a txt file. I am using the following function to do so:

void writeMatToFile(cv::Mat& m, const char* filename){

    ofstream fout(filename);
    for(int i=0; i<m.rows; i++){
        for(int j=0; j<m.cols; j++){
            fout<<m.at<float>(i,j)<<"\t";
        }
        fout<<endl;
    }
    fout.close();
}

And the main code:

 string file = "output.txt";
 writeMatToFile(image,file.c_str());

However, I am receiving unhandled exceptions. Any idea how I can store Mat data in a txt file?
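For reference, a possible corrected version, in line with the int-casting fix mentioned in the comments above (an image loaded with imread(filename, 0) is CV_8UC1, so its elements should be read as uchar rather than float), might be:

#include <fstream>

void writeMatToFile(const cv::Mat& m, const char* filename)
{
    std::ofstream fout(filename);
    if (!fout)
        return;
    for (int i = 0; i < m.rows; i++)
    {
        // Read each 8-bit element as uchar and cast to int for readable text output.
        for (int j = 0; j < m.cols; j++)
            fout << static_cast<int>(m.at<uchar>(i, j)) << "\t";
        fout << std::endl;
    }
}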

2014-02-21 09:11:49 -0600 asked a question GrabCut operation mode

I am trying to use OpenCV's grabCut implementation for foreground extraction in an image. When I use the GC_INIT_WITH_MASK mode, I have no problem running the algorithm on a database of images. However, when I use the GC_INIT_WITH_RECT mode, the program crashes on specific images. What is a possible explanation for this?
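One possible explanation (an assumption, not confirmed by the question) is that with GC_INIT_WITH_RECT the rectangle must lie inside the image and must leave some pixels outside of it to serve as background samples. A defensive sketch, reusing the variable names from the earlier question, might be:

// Clip the rectangle to the image and skip degenerate cases before grabCut.
cv::Rect imgBounds(0, 0, image.cols, image.rows);
cv::Rect safeRect = rectangle & imgBounds;            // intersection with the image
if (safeRect.area() > 0 && safeRect != imgBounds)
    cv::grabCut(image, result, safeRect, bgModel, fgModel, 1, cv::GC_INIT_WITH_RECT);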

2014-01-13 04:50:44 -0600 commented question FaceRecognizer returns always the same label

Saving the model is working fine!!

2014-01-13 04:49:55 -0600 commented question FaceRecognizer returns always the same label

In the MATLAB implementation I divide the pixels by 255 so as to normalize them to 0-1. Basically it seems that my problem lies with the HSV value channel; when I use plain RGB it works. My question now: is it necessary here to reshape the image matrix into a 1xn vector for the training process?

2014-01-13 03:58:02 -0600 received badge  Editor (source)
2014-01-13 03:15:30 -0600 asked a question FaceRecognizer returns always the same label

I am having problems using OpenCV's FaceRecognizer. I am training my model using images of 5 different persons. In the predict step I always receive the same predicted label, 1. The images I am using for training are cropped and aligned (grayscale) images of the persons.

My training code (the images are the V channel from the HSV color space):

images = dbreading.trainImages; // vector<Mat> with the training images
labels = dbreading.trainLabels; // vector<int> with the corresponding labels

Ptr<FaceRecognizer> model =  createEigenFaceRecognizer();

model->train(images, labels);
model->save("eigenfaces.yml");

My prediction code:

// detections: the test image, cropped and aligned (V channel from the HSV color space)
Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
model->load("eigenfaces.yml");
cout << "The size of the detected image is width: " << detections.cols << "height: " << detections.rows << endl;

// And get a Prediction from the cv::FaceRecognizer:
int predicted_label;
predicted_label= model->predict(detections);

I am guessing that I am making a classic mistake here. What are the usual reasons for a recognizer to get stuck on the same label? I push_back the images into the vector with their original size. Do I have to reshape them into vectors before the training process? Do I have to randomize the order of the training images? Does the order matter for training?

EDIT: I have discovered that the .yml file contains only zeros, so the training process is going completely wrong. Is it necessary to normalize the pixel values? Is it possible that the problem arises from a lack of normalization?
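As a sketch only (the 100x100 size is an arbitrary illustration value, not from the original code), one way to make the training data single-channel, equally sized, and normalized before calling train() could be:

std::vector<cv::Mat> prepared;
for (size_t i = 0; i < images.size(); ++i)
{
    cv::Mat img = images[i];
    if (img.channels() == 3)
        cv::cvtColor(img, img, CV_BGR2GRAY);        // ensure single channel
    cv::resize(img, img, cv::Size(100, 100));       // all samples the same size
    cv::Mat normalized;
    img.convertTo(normalized, CV_64F, 1.0 / 255.0); // scale to [0,1], as in the MATLAB code
    prepared.push_back(normalized);
}
model->train(prepared, labels);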

2014-01-08 16:58:54 -0600 commented answer Converting a rgb image to hsv and to grayscale

How can I keep the V channel? Is it the third channel of the Mat?

2014-01-08 09:58:05 -0600 asked a question Converting a rgb image to hsv and to grayscale

I am working on a face recognition task. I am reading an RGB image and trying to convert it to HSV, to see whether HSV handles illumination issues better.

       image = imread( path, 1 );
       cropped_rgb = image(faceRect).clone();
       cvtColor(cropped_rgb, cropped_hsv, CV_BGR2HSV);
       cvtColor(cropped_hsv, cropped_hsv, CV_BGR2GRAY);

I am trying to understand whether there is really a difference between converting the RGB image directly to grayscale and converting it to HSV first and then to grayscale. In fact, when I display cropped_hsv I get a weird result that is not the same as the RGB-to-gray image. Is this the HSV-based grayscale I want?
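If the intent is to use the V (brightness) channel as the grayscale image, a small sketch would be the following; note that calling CV_BGR2GRAY on an HSV matrix weights the H, S and V planes as if they were B, G and R, which is probably not what is wanted:

std::vector<cv::Mat> hsv_planes;
cv::split(cropped_hsv, hsv_planes);        // hsv_planes[0]=H, [1]=S, [2]=V
cv::Mat value_channel = hsv_planes[2];     // 8-bit single-channel brightness image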

2013-12-19 05:32:06 -0600 received badge  Supporter (source)
2013-12-19 05:32:03 -0600 received badge  Scholar (source)
2013-12-17 09:22:46 -0600 received badge  Student (source)
2013-12-16 07:12:16 -0600 commented question Return confidence factor from detectMultiScale

I'll take a closer look, thanks anyway!

2013-12-16 05:25:35 -0600 commented question Return confidence factor from detectMultiScale

Do I just have to calculate the output y (Viola-Jones) using the weights from the XML file? Or do I have to rebuild OpenCV in order to keep track of the calculated weights?

2013-12-16 03:51:09 -0600 asked a question Return confidence factor from detectMultiScale

I am using the haarcascade_frontalface detector for a face detection system. I am trying to find a way to return a confidence factor for the detection process. Among the parameters of the function I cannot locate anything about a confidence factor. What do I have to do?
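One option, assuming a recent 2.4.x build, is the detectMultiScale overload that also returns rejection levels and level weights; the weights can serve as a rough confidence score. A sketch, where cascade and grayImage are assumed names:

std::vector<cv::Rect> faces;
std::vector<int> rejectLevels;
std::vector<double> levelWeights;
// The final 'true' asks for the reject levels / level weights to be filled in.
cascade.detectMultiScale(grayImage, faces, rejectLevels, levelWeights,
                         1.1, 3, 0, cv::Size(), cv::Size(), true);
// levelWeights[i] can be read as a (loose) confidence for faces[i].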