2020-10-13 17:52:48 -0600 | received badge | ● Nice Question (source) |
2020-10-13 17:52:44 -0600 | marked best answer | additive Gaussian noise with different SNR I am reading a paper. It says: "For experiments conducted on noisy images, each texture image was corrupted by additive Gaussian noise with zero mean and standard deviation that was determined according to the corresponding Signal-to-Noise Ratio (SNR) value." They then show the classification rate (%) on the UIUC database with additive Gaussian noise at different Signal-to-Noise Ratios (SNR = 100, 30, 15, 10, 5). I want to do the same. Is GaussianBlur the function I need? How do I determine the SNR? |
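A minimal sketch of how such noise could be generated, assuming SNR means the plain power ratio signal_power / noise_power (the paper may instead express SNR in dB, in which case sigma = sqrt(signal_power / 10**(SNR/10))). Note that GaussianBlur is not the right function here: it smooths the image rather than adding noise.

```python
import numpy as np

def add_gaussian_noise(image, snr):
    # Zero-mean additive Gaussian noise; sigma chosen so that
    # mean(signal**2) / sigma**2 == snr (power-ratio convention, an assumption)
    img = image.astype(np.float64)
    signal_power = np.mean(img ** 2)
    sigma = np.sqrt(signal_power / snr)
    noisy = img + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

clean = np.full((64, 64), 128, dtype=np.uint8)   # flat synthetic test image
noisy = add_gaussian_noise(clean, snr=5)
```

The same per-SNR loop would reproduce the paper's table columns (SNR = 100, 30, 15, 10, 5), with lower SNR giving a larger sigma and therefore heavier corruption.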
2020-10-13 17:52:24 -0600 | received badge | ● Notable Question (source) |
2020-09-29 02:14:32 -0600 | received badge | ● Nice Answer (source) |
2020-04-08 23:15:49 -0600 | received badge | ● Notable Question (source) |
2018-12-09 13:30:10 -0600 | commented question | find contours in an image Regarding the input image: findContours expects an 8-bit single-channel image; non-zero pixels are treated as 1's and zero pixels remain 0's. |
2018-12-09 13:21:59 -0600 | commented question | Extracting foreground with otsu thresholding I do not understand you very well, but I think that you should try this: res = cv2.bitwise_and(img, img, mask=imgf) |
2018-12-09 13:21:09 -0600 | commented question | Extracting foreground with otsu thresholding I do not understand you very well, but I think that you should try this: cv2.bitwise_and(img, img, mask=imgf) |
2018-12-06 04:17:02 -0600 | commented answer | How to get boundry and center information of a mask x_centroid = round(M['m10'] / M['m00']) y_centroid = round(M['m01'] / M['m00']) |
2018-11-29 05:37:36 -0600 | received badge | ● Nice Answer (source) |
2018-11-27 10:07:47 -0600 | answered a question | How to get boundry and center information of a mask This Python code does what you want. # Import required packages: import cv2 # Load the image and convert it to gr |
2018-11-05 01:34:25 -0600 | commented answer | Can anyone know the code of python to put two frames in a single window output specifically to use it in opencv Many thanks @sturkmen :). It's nice to hear things like this |
2018-11-04 15:03:18 -0600 | received badge | ● Nice Answer (source) |
2018-11-04 11:45:27 -0600 | answered a question | Can anyone know the code of python to put two frames in a single window output specifically to use it in opencv Images are numpy arrays. Therefore you can use numpy capabilities to create an image containing two images. The images to |
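A minimal sketch of that numpy-stacking idea (the frame sizes here are made up):

```python
import numpy as np

# Two frames of the same size; OpenCV images are plain numpy arrays,
# so numpy stacking is all that is needed to build a combined window
frame1 = np.zeros((120, 160, 3), dtype=np.uint8)        # black frame
frame2 = np.full((120, 160, 3), 255, dtype=np.uint8)    # white frame

side_by_side = np.hstack((frame1, frame2))   # heights must match
stacked = np.vstack((frame1, frame2))        # widths must match
```

The combined array can then be shown in a single window with cv2.imshow.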
2018-09-06 10:01:15 -0600 | received badge | ● Good Answer (source) |
2018-04-27 04:46:10 -0600 | edited question | human detection using python-opencv Person detection using python-OpenCV. I want to detect the movement of a person using: bodydetection = cv2.Ca |
2018-04-27 04:46:06 -0600 | edited question | human detection using python-opencv Person detection using python-OpenCV. I want to detect the movement of a person using: bodydetection = cv2.Ca |
2018-02-02 19:47:13 -0600 | received badge | ● Good Answer (source) |
2017-11-23 15:00:49 -0600 | received badge | ● Nice Question (source) |
2017-11-09 16:20:16 -0600 | received badge | ● Notable Question (source) |
2017-10-31 10:18:52 -0600 | received badge | ● Famous Question (source) |
2017-08-10 14:41:25 -0600 | received badge | ● Popular Question (source) |
2017-08-01 11:28:25 -0600 | received badge | ● Nice Answer (source) |
2017-07-25 12:04:29 -0600 | received badge | ● Popular Question (source) |
2017-04-20 08:37:21 -0600 | received badge | ● Good Answer (source) |
2016-07-28 06:31:39 -0600 | received badge | ● Popular Question (source) |
2016-05-25 08:00:05 -0600 | received badge | ● Notable Question (source) |
2016-02-27 07:22:22 -0600 | received badge | ● Good Answer (source) |
2016-02-24 17:30:25 -0600 | received badge | ● Nice Answer (source) |
2016-02-15 03:45:26 -0600 | marked best answer | HEP-histogram of equivalent patterns There are a lot of papers comparing Local Binary Patterns (LBP) with Local Ternary Patterns (LTP), or with modifications of the original LBP operator such as Center-Symmetric Local Binary Patterns (CS-LBP), Local Quinary Patterns (LQP), Completed Local Binary Patterns (CLBP), and so on. All these methods belong to the same family, and a framework for texture analysis can be built from them. This framework is called HEP (Histogram of Equivalent Patterns) and is described in the paper "Texture description through histograms of equivalent patterns". This is the project web page, with a Matlab implementation of HEP (hep.m): http://dismac.dii.unipg.it/hep/index.html It would be fantastic if OpenCV had a HEP implementation. I have written some lines of pseudo-code assuming there is an OpenCV implementation: |
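Since the pseudo-code itself is cut off above, here is an independent, hedged Python sketch of the HEP idea: one generic driver plus a pluggable 3x3 kernel, with classic LBP as the example kernel (function names are my own, not from the hep.m implementation):

```python
import numpy as np

def hep(gray, kernel, n_codes):
    # Generic HEP: map every 3x3 neighbourhood to an integer code with a
    # user-supplied kernel, then describe the image by the code histogram.
    # LBP, LTP, CS-LBP, etc. are obtained by plugging in different kernels.
    g = gray.astype(np.int32)
    h, w = g.shape
    codes = [kernel(g[y - 1:y + 2, x - 1:x + 2])
             for y in range(1, h - 1) for x in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=n_codes, range=(0, n_codes))
    return hist / hist.sum()

def lbp_kernel(win):
    # Classic LBP: threshold the 8 neighbours against the centre pixel
    centre = win[1, 1]
    neighbours = [win[0, 0], win[0, 1], win[0, 2], win[1, 2],
                  win[2, 2], win[2, 1], win[2, 0], win[1, 0]]
    return sum(int(n >= centre) << i for i, n in enumerate(neighbours))

# On a flat image every pixel gets code 255 (all neighbours >= centre)
hist = hep(np.full((10, 10), 7, dtype=np.uint8), lbp_kernel, 256)
```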
2016-02-15 03:43:53 -0600 | received badge | ● Good Question (source) |
2016-02-13 10:41:13 -0600 | received badge | ● Good Answer (source) |
2016-02-09 10:08:04 -0600 | received badge | ● Good Answer (source) |
2016-02-09 06:26:37 -0600 | received badge | ● Nice Answer (source) |
2016-02-09 05:12:45 -0600 | answered a question | Fire detection using opencv Based on comments from this question and from the other related question (Fire/Flame Detection using OpenCV): @StevenPuttemans: "Another way could be to use the fact that fire actually involves motion between frames in a video input." @Guanta: "You could train a cascade-classifier with LBPs; since LBPs are also often used for texture detection/recognition this would be worth trying." @red-viper @pklab: "The example shows that in the visible spectrum, the flame visibility depends on context. In the IR, the context is less relevant. After this you could use ML on the IR image too." I would like to complete the proposed hardware answer with a software one that I think could work alongside it. In the paper "Early Fire Detection Using HEP and Space-time Analysis" [1], both motion (as suggested by @StevenPuttemans) and texture analysis (as suggested by @Guanta) are involved. You could also check "Video Fire Detection - Review" [2], "Real-time Fire Detection for Video Surveillance Applications using a Combination of Experts based on Color, Shape and Motion" [3], and "Automatic fire pixel detection using image processing: A comparative analysis of Rule-based and Machine Learning-based methods" [4]. [1] http://arxiv.org/pdf/1310.1855.pdf [2] http://signal.ee.bilkent.edu.tr/Publi... |
2016-02-05 12:30:10 -0600 | commented answer | How to train images for smile detection? I agree with @StevenPuttemans |
2016-02-05 07:10:33 -0600 | received badge | ● Nice Answer (source) |
2016-02-05 06:53:00 -0600 | answered a question | How to train images for smile detection? To get more images, you can use the GENKI-4K Face, Expression, and Pose Dataset [1]. The GENKI-4K dataset contains 4,000 face images spanning a wide range of subjects, facial appearance, illumination, geographical locations, imaging conditions, and camera models. All images are labeled for both smile content (1=smile, 0=non-smile) and head pose (yaw, pitch, and roll parameters, in radians). I have used this dataset to train a smile classifier using an LBP + SVM approach (some years ago) and the results were quite satisfactory: detect and crop the face, extract LBP features from the crop, and train an SVM on the resulting feature vectors. The public GENKI-4K dataset is available for download here [2]. [1] http://mplab.ucsd.edu/wordpress/?page... [2] http://mplab.ucsd.edu/wordpress/wp-co... |
2016-01-31 15:45:23 -0600 | received badge | ● Good Answer (source) |
2016-01-21 09:58:26 -0600 | received badge | ● Good Answer (source) |
2016-01-11 03:11:19 -0600 | received badge | ● Nice Answer (source) |
2016-01-06 12:14:53 -0600 | commented question | Wardrobe Analysis in OpenCV: Suggestions please! References: |
2016-01-06 08:01:44 -0600 | edited question | Informative algorithms related to opencv Based on the idea proposed in Informative websites related to OpenCV by Sturkmen, I think it would also be useful to create a list of implementations based on papers and publications:
(maybe one toy example with each one would be fantastic) As OpenCV is under the open-source BSD license, it would also be interesting for these algorithms to be under BSD or a similar license. So, here is my list (maybe some algorithms should not be in this list, so let's build it together!). TRACKING
FACE PROCESSING
(This method uses the OpenCV library to identify eyes on faces and prepare these sub-images with previously-landmarked pupil information.) FACE DETECTION
(more) |