Ask Your Question

albertofernandez's profile - activity

2020-10-13 17:52:48 -0600 received badge  Nice Question (source)
2020-10-13 17:52:44 -0600 marked best answer additive Gaussian noise with different SNR

I am reading a paper. It says:

"For experiments conducted on noisy images, each texture image was corrupted by additive Gaussian noise with zero mean and standard deviation that was determined according to the corresponding Signal-to-Noise Ratios (SNR) value."

And then, they show the classification rate (%) on the UIUC database with additive Gaussian noise at different Signal-to-Noise Ratios (SNR): SNR=100, SNR=30, SNR=15, SNR=10, SNR=5.

So I want to do the same....

Is GaussianBlur the function I need? How do I determine the SNR?
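For what it's worth, cv2.GaussianBlur smooths an image rather than adding noise, so it is not the tool for this. Below is a minimal NumPy sketch of additive Gaussian noise, assuming SNR is the ratio of signal variance to noise variance (the paper may use another convention, e.g. decibels, so check its definition); the function name `add_gaussian_noise` is my own:

```python
import numpy as np

def add_gaussian_noise(image, snr):
    """Return a copy of `image` corrupted by zero-mean additive Gaussian
    noise. The noise standard deviation is derived from the requested SNR,
    here taken as signal variance / noise variance -- verify against the
    paper's definition."""
    img = image.astype(np.float64)
    sigma_noise = img.std() / np.sqrt(snr)
    noisy = img + np.random.normal(0.0, sigma_noise, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# A synthetic gradient "texture" corrupted at the SNR levels from the paper
texture = np.tile(np.arange(64, dtype=np.uint8) * 4, (64, 1))
noisy_versions = {snr: add_gaussian_noise(texture, snr)
                  for snr in (100, 30, 15, 10, 5)}
```

Lower SNR values give a larger noise sigma, so SNR=5 is visibly much noisier than SNR=100.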

2020-10-13 17:52:24 -0600 received badge  Notable Question (source)
2020-09-29 02:14:32 -0600 received badge  Nice Answer (source)
2020-04-08 23:15:49 -0600 received badge  Notable Question (source)
2018-12-09 13:30:10 -0600 commented question find contours in an image

In connection with the image: an 8-bit single-channel image. Non-zero pixels are treated as 1's, zero pixels remain 0's.

2018-12-09 13:21:59 -0600 commented question Extracting foreground with otsu thresholding

I do not understand you very well, but I think that you should try this: res = cv2.bitwise_and(img, img, mask=imgf)

2018-12-09 13:21:09 -0600 commented question Extracting foreground with otsu thresholding

I do not understand you very well, but I think that you should try this: cv2.bitwise_and(img, img, mask=imgf)
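For completeness, the suggestion can be expanded into a runnable NumPy sketch: a from-scratch Otsu threshold (what cv2.threshold computes with the cv2.THRESH_OTSU flag) plus a NumPy stand-in for cv2.bitwise_and(img, img, mask=imgf). The function names here are mine, not OpenCV API:

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold maximising between-class variance (Otsu's method).
    A NumPy illustration of what cv2.threshold(..., cv2.THRESH_OTSU) does."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of class 0
    mu = np.cumsum(p * np.arange(256))      # cumulative mean of class 0
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0    # classes with zero probability
    return int(np.argmax(sigma_b))

def apply_mask(img, mask):
    """NumPy stand-in for cv2.bitwise_and(img, img, mask=mask)."""
    if img.ndim == 3:
        return np.where(mask[..., None] > 0, img, 0)
    return np.where(mask > 0, img, 0)

# Bimodal toy image: dark left half, bright right half
img = np.zeros((10, 10), dtype=np.uint8)
img[:, :5] = 50
img[:, 5:] = 200
t = otsu_threshold(img)
foreground = apply_mask(img, (img > t).astype(np.uint8) * 255)
```

On this toy image the threshold lands between the two modes, so only the bright half survives the masking.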

2018-12-06 04:17:02 -0600 commented answer How to get boundry and center information of a mask

x_centroid = round(M['m10'] / M['m00'])
y_centroid = round(M['m01'] / M['m00'])

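The centroid formula from the comment can be made runnable; this computes the same m00/m10/m01 quantities that cv2.moments reports, directly with NumPy (`centroid` is my own helper name):

```python
import numpy as np

def centroid(mask):
    """Centroid of a binary mask from its image moments:
    x = m10 / m00, y = m01 / m00."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)               # number of non-zero pixels
    if m00 == 0:
        return None             # empty mask has no centroid
    x_centroid = round(xs.sum() / m00)   # m10 / m00
    y_centroid = round(ys.sum() / m00)   # m01 / m00
    return x_centroid, y_centroid

mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:61, 20:41] = 255   # 21x21 square centred at x=30, y=50
```

For a binary 0/255 mask the ratios are identical to cv2.moments output, since the constant weight cancels.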

2018-11-29 05:37:36 -0600 received badge  Nice Answer (source)
2018-11-27 10:07:47 -0600 answered a question How to get boundry and center information of a mask

This Python code performs what you want. # Import required packages: import cv2 # Load the image and convert it to gr

2018-11-05 01:34:25 -0600 commented answer Can anyone know the code of python to put two frames in a single window output specifically to use it in opencv

Many thanks @sturkmen :). It's nice to hear things like this.

2018-11-04 15:03:18 -0600 received badge  Nice Answer (source)
2018-11-04 11:45:27 -0600 answered a question Can anyone know the code of python to put two frames in a single window output specifically to use it in opencv

Images are numpy arrays. Therefore you can use numpy capabilities to create an image containing two images. The images to
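The answer's approach can be sketched in a few lines; the frames here are dummy arrays standing in for cv2.VideoCapture output:

```python
import numpy as np

# Two frames of the same size (in practice: ret, frame = capture.read())
frame1 = np.zeros((240, 320, 3), dtype=np.uint8)       # black frame
frame2 = np.full((240, 320, 3), 255, dtype=np.uint8)   # white frame

combined = np.hstack((frame1, frame2))   # side by side in one array
stacked = np.vstack((frame1, frame2))    # one above the other
# cv2.imshow('both frames', combined) then shows them in a single window
```

Since the result is itself an ordinary image array, a single cv2.imshow call displays both frames at once.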

2018-09-06 10:01:15 -0600 received badge  Good Answer (source)
2018-04-27 04:46:10 -0600 edited question human detection using python-opencv

Person detection using Python-OpenCV. I want to detect the movement of a person using: bodydetection = cv2.Ca

2018-02-02 19:47:13 -0600 received badge  Good Answer (source)
2017-11-23 15:00:49 -0600 received badge  Nice Question (source)
2017-11-09 16:20:16 -0600 received badge  Notable Question (source)
2017-10-31 10:18:52 -0600 received badge  Famous Question (source)
2017-08-10 14:41:25 -0600 received badge  Popular Question (source)
2017-08-01 11:28:25 -0600 received badge  Nice Answer (source)
2017-07-25 12:04:29 -0600 received badge  Popular Question (source)
2017-04-20 08:37:21 -0600 received badge  Good Answer (source)
2016-07-28 06:31:39 -0600 received badge  Popular Question (source)
2016-05-25 08:00:05 -0600 received badge  Notable Question (source)
2016-02-27 07:22:22 -0600 received badge  Good Answer (source)
2016-02-24 17:30:25 -0600 received badge  Nice Answer (source)
2016-02-15 03:45:26 -0600 marked best answer HEP-histogram of equivalent patterns

There are a lot of papers comparing Local Binary Pattern (LBP) versus Local Ternary Pattern (LTP), or modifications to the original LBP operator like Center-symmetric local binary patterns (CS-LBP), Local quinary patterns (LQP), Completed Local Binary Pattern (CLBP) and so on.

All these methods belong to the same family, and a framework for texture analysis can be built. This framework is called HEP (histogram of equivalent patterns) and it is described in this paper: Texture description through histograms of equivalent patterns

This is the project web page, with Matlab implementation of hep (hep.m):

http://dismac.dii.unipg.it/hep/index.html

It would be fantastic if OpenCV had a HEP implementation.

I have written some lines of pseudo-code assuming there is an OpenCV implementation:

//create a HEP descriptor. In this case, a uniform local binary pattern descriptor for gray-scale images
int neighbors = 8;
int radius = 1;
//char is for values between 0...255 (gray images)
HEP<LBP<u2>, char> * hep_lbp_u2_descriptor = new HEP<LBP<u2>, char>(neighbors, radius, ..);

//load a gray-scale image to test the descriptors
Mat image = imread(...., );

Mat gray_image;
cvtColor( image, gray_image, CV_BGR2GRAY );

 //1. First, you could test a "raw pattern" in order to
 //see how this descriptor works for a given pixel

 Mat image_test = Mat::zeros(Size(3, 3), CV_8UC1);
 image_test.at<uchar>(0,0) = 1;
 image_test.at<uchar>(0,1) = 2;
 image_test.at<uchar>(0,2) = 3;
 image_test.at<uchar>(1,0) = 4;
 image_test.at<uchar>(1,1) = 4;
 image_test.at<uchar>(1,2) = 6;
 image_test.at<uchar>(2,0) = 7;
 image_test.at<uchar>(2,1) = 8;
 image_test.at<uchar>(2,2) = 9;

 //compute the "raw pattern" for the central pixel (1,1)
 cout<<"the value for the central pixel is"<<hep_lbp_u2_descriptor->compute_raw_pattern(image_test,1,1)<<endl;

 //2. You could use this descriptor to build a lbp-image (a hep-image):
 Mat lbp_image;
 hep_lbp_u2_descriptor->compute_map(gray_image,lbp_image);

 //3. You could have the possibility to build the lbp-histogram (a hep-histogram):
 Mat lbp_histogram;
 int width_divisions = 5;
 int height_divisions = 6;
 hep_lbp_u2_descriptor->compute_histogram_grid(gray_image, lbp_histogram, width_divisions ,height_divisions );
 //(in this case the lbp_histogram would have 59*5*6 features 
 //(uniform lbp with a neighborhood of 8 pixels))
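As a concrete companion to the pseudo-code, here is what the hypothetical compute_raw_pattern() call could return for the 3x3 test image, sketched in Python/NumPy with a common LBP convention (one bit per neighbour, clockwise from the top-left; other bit orderings give different codes):

```python
import numpy as np

def lbp_raw_pattern(img, y, x):
    """Raw 8-neighbour LBP code of pixel (y, x): each neighbour >= centre
    contributes one bit. A sketch of the pseudo-code's compute_raw_pattern()."""
    c = int(img[y, x])
    # neighbours clockwise starting at the top-left, bits 0..7
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if int(img[y + dy, x + dx]) >= c:
            code |= 1 << bit
    return code

# the same 3x3 test image as in the pseudo-code above
image_test = np.array([[1, 2, 3],
                       [4, 4, 6],
                       [7, 8, 9]], dtype=np.uint8)
pattern = lbp_raw_pattern(image_test, 1, 1)
```

Only the five neighbours greater than or equal to the central value 4 set their bits, which fixes the resulting code.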
2016-02-15 03:43:53 -0600 received badge  Good Question (source)
2016-02-13 10:41:13 -0600 received badge  Good Answer (source)
2016-02-09 10:08:04 -0600 received badge  Good Answer (source)
2016-02-09 06:26:37 -0600 received badge  Nice Answer (source)
2016-02-09 05:12:45 -0600 answered a question Fire detection using opencv

Based on a comment from this question, and comments from the other related question (Fire/Flame Detection using OpenCV):

@StevenPuttemans: "Another way could be to use the fact that fire actually involves motion between frames in a video input"

@Guanta "You could train a cascade-classifier with LBPs since LBPs are also often used for texture detection/recognition this would be worth trying"

@red-viper
"is there any other way that uses machine learning?"

@pklab "The example shows that in the visible spectrum, the flame visibility depend on context. In the IR, the context is less relevant. After this you could use ML on IR image too."

I would like to complete the proposed hardware answer with a software one that I think could work together with it.

In this paper: "Early Fire Detection Using HEP and Space-time Analysis" [1], motion (as suggested by @StevenPuttemans) and texture analysis (as suggested by @Guanta) are involved.

You could also check "Video Fire Detection - Review" [2], "Real-time Fire Detection for Video Surveillance Applications using a Combination of Experts based on Color, Shape and Motion" [3], and "Automatic fire pixel detection using image processing: A comparative analysis of Rule-based and Machine Learning-based methods" [4].

[1] http://arxiv.org/pdf/1310.1855.pdf

[2] http://signal.ee.bilkent.edu.tr/Publi...

[3] https://www.researchgate.net/profile/...

[4] https://hal.archives-ouvertes.fr/hal-...

2016-02-05 12:30:10 -0600 commented answer How to train images for smile detection?

I agree with @StevenPuttemans

2016-02-05 07:10:33 -0600 received badge  Nice Answer (source)
2016-02-05 06:53:00 -0600 answered a question How to train images for smile detection?

In order to get more images, you can use GENKI-4k Face, Expression, and Pose Dataset [1]

The GENKI-4K dataset contains 4,000 face images spanning a wide range of subjects, facial appearance, illumination, geographical locations, imaging conditions, and camera models. All images are labeled for both Smile content (1=smile, 0=non-smile) and Head Pose (yaw, pitch, and roll parameters, in radians).

I have used this dataset to train a smile classifier using an LBP + SVM approach (some years ago) and the results were quite satisfactory. Steps:

  • (1) Face detection
  • (2) Get Local Binary Pattern from the face
  • (3) SVM classification

Step (2) can be decomposed in:

  • crop face region
  • normalize face region based on eye coordinates
  • histogram equalization or something similar
  • Apply LBP to NxM non-overlapping regions
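The feature-extraction part of step (2) can be sketched with plain NumPy: a basic radius-1 LBP map plus concatenated histograms over a non-overlapping grid. This is a hedged illustration, not the exact pipeline I used back then; a real implementation might instead use skimage.feature.local_binary_pattern, and the uniform-pattern mapping to 59 bins is omitted for brevity (function names are mine):

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour, radius-1 LBP code for every interior pixel."""
    c = gray[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    # neighbours clockwise starting at the top-left, one bit each
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(gray, grid=(4, 4)):
    """Concatenated per-cell LBP histograms over a non-overlapping grid --
    the feature vector that would feed the SVM in step (3)."""
    code = lbp_image(gray)
    h, w = code.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = code[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(cell.size, 1))   # normalised histogram
    return np.concatenate(feats)

# dummy normalised face crop standing in for a GENKI-4K image
gray = np.random.default_rng(0).integers(0, 256, size=(66, 66), dtype=np.uint8)
features = lbp_histogram(gray)   # 4*4 cells x 256 bins
```

Each cell histogram is normalised to sum to 1, so the full vector sums to the number of cells.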

The public GENKI-4K dataset is available for download here [2].

[1] http://mplab.ucsd.edu/wordpress/?page...

[2] http://mplab.ucsd.edu/wordpress/wp-co...

Two sample images:


2016-01-31 15:45:23 -0600 received badge  Good Answer (source)
2016-01-21 09:58:26 -0600 received badge  Good Answer (source)
2016-01-11 03:11:19 -0600 received badge  Nice Answer (source)
2016-01-06 12:14:53 -0600 commented question Wardrobe Analysis in OpenCV: Suggestions please!
  • Face detection
  • HSV conversion
  • Small rectangle region below the detected face is calculated
  • Calculate dominant color type in the region --> getPixelColorType function (color types: ("Black", "White","Grey","Red","Orange","Yellow","Green","Aqua","Blue","Purple","Pink"))
  • In order to calculate different types of clothes, you can compute LBP features over this small rectangle (or more regions) because LBP features can discriminate textures quite well.

References:

shirtDetection
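The dominant-colour step could be sketched like this; the hue boundaries and achromatic rules below are illustrative guesses, not the actual getPixelColorType table from the shirtDetection sample (note OpenCV's hue scale runs 0..179):

```python
import numpy as np

# Illustrative hue upper bounds (OpenCV hue scale, 0..179); red wraps around.
HUE_NAMES = [(15, "Red"), (25, "Orange"), (35, "Yellow"), (85, "Green"),
             (100, "Aqua"), (130, "Blue"), (150, "Purple"), (165, "Pink")]

def dominant_colour(hsv_region):
    """Rough colour label for an HSV region (e.g. the rectangle below a
    detected face after cv2.cvtColor(..., cv2.COLOR_BGR2HSV))."""
    h, s, v = (hsv_region[..., i].astype(int) for i in range(3))
    if np.median(s) < 40:                  # low saturation: achromatic
        m = np.median(v)
        if m < 60:
            return "Black"
        return "White" if m > 200 else "Grey"
    hue = int(np.median(h))
    for upper, name in HUE_NAMES:
        if hue < upper:
            return name
    return "Red"                           # 165..179 wraps back to red

# saturated blue patch standing in for the region below the face
shirt = np.zeros((8, 8, 3), dtype=np.uint8)
shirt[..., 0], shirt[..., 1], shirt[..., 2] = 120, 255, 200
```

Using the median rather than the mean makes the label robust to a few outlier pixels in the region.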

2016-01-06 08:01:44 -0600 edited question Informative algorithms related to opencv

Based on the idea proposed in Informative websites related to OpenCV by Sturkmen, I think that it would also be useful to create a list about implementations based on papers and publications:

  1. implemented with OpenCV and/or
  2. that can be used/integrated easily with OpenCV

(maybe one toy example with each one would be fantastic)

As OpenCV is under the open-source BSD license, it would also be interesting for these algorithms to be BSD-licensed or similar. So, I am going to start my list (maybe some algorithms should not be in this list, so let's build it together!).


TRACKING

  • Object tracking

  • Real-time Compressive Tracking

(http://www4.comp.polyu.edu.hk/~cslzha...) implementation integrated with OpenCV

Zhang, K., Zhang, L., & Yang, M. H. (2012). Real-time compressive tracking. In Computer Vision–ECCV 2012 (pp. 864-877). Springer Berlin Heidelberg.

  • Accurate scale estimation for robust visual tracking

Implemented in DLIB library http://dlib.net/

Danelljan, M., Häger, G., Khan, F., & Felsberg, M. (2014). Accurate scale estimation for robust visual tracking. In British Machine Vision Conference, Nottingham, September 1-5, 2014. BMVA Press. (winning algorithm from last year's Visual Object Tracking Challenge.)

FACE PROCESSING

  • Face pre-processing

  • Tan&Triggs processing

An efficient image pre-processing normalization algorithm to deal with difficult lighting conditions: Tan, X., & Triggs, B. (2010). Enhanced local texture feature sets for face recognition under difficult lighting conditions. Image Processing, IEEE Transactions on, 19(6), 1635-1650.

implementation: https://github.com/bytefish/opencv/bl... (BSD license)

  • Real-Time Face Pose Estimation

One Millisecond Face Alignment with an Ensemble of Regression Trees: Kazemi, V., & Sullivan, J. (2014, June). One millisecond face alignment with an ensemble of regression trees. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on (pp. 1867-1874). IEEE.

Implemented in DLIB library http://dlib.net/

demo snippet: https://gist.github.com/berak/b23262a...

  • Face landmarks detector (face alignment)

Cao X, Wei Y, Wen F, et al. Face alignment by explicit shape regression[J]. International Journal of Computer Vision, 2014, 107(2): 177-190.

implementation: https://github.com/delphifirst/FaceX/

demo snippet: https://gist.github.com/berak/79aeb39...

  • Eye localization: Average of Synthetic Exact Filters

Bolme, D. S., Draper, B., & Beveridge, J. R. (2009, June). Average of synthetic exact filters. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on (pp. 2105-2112). IEEE.

implementation: https://github.com/laoyang/ASEF

  • Eye localization: Accurate eye centre localisation by means of gradient

Timm, F., & Barth, E. (2011, March). Accurate Eye Centre Localisation by Means of Gradients. In VISAPP (pp. 125-130).

implementation: https://github.com/trishume/eyeLike

  • Eye pupil localization (tracking)

Markuš, N., Frljak, M., Pandžić, I. S., Ahlberg, J., & Forchheimer, R. (2014). Eye pupil localization with an ensemble of randomized trees. Pattern recognition, 47(2), 578-587.

implementation: https://github.com/chrisjryan/eye-tra...

youtube video: https://www.youtube.com/watch?v=7J30y...

(This method uses the OpenCV library to identify eyes on faces and prepare these sub-images with previously-landmarked pupil information.)

FACE DETECTION

  • PICO Face detection

N. Markus, M. Frljak, I. S. Pandzic, J. Ahlberg and R. Forchheimer, "Object ...

(more)