# Detecting color range from "average"

Currently I am using a Haar cascade to detect a face in a picture, which is working: I get a rect of where the face is.

Now I want to get the "average" skin color in that rect, and use that as a base for the color range to search for other skin in the photo. How should I go about that?

I have found the inRange function, which searches for colors within a range, but I am not quite sure how to feed it the average color of my skin. It seems that the inRange function needs HSV values; however, I don't know exactly what that format is. It doesn't seem to be the same as HSB in Photoshop (which I tried for testing purposes).

My question boils down to this: how can I get the "average" color in a rect, and find other colors in that range (e.g. lighter and darker than that color, but the same hue)?

Thanks.



I think the other answer is way too complicated for this problem. Basically you will need to do the following steps:

1. Convert your region of interest (the detection) to the HSV color space using the cvtColor function with the CV_BGR2HSV parameter.
2. Now define the minimum and maximum values for the H, S and V channels around that average.
3. Pass these values to inRange to get a good segmentation of the skin-colored regions.

This code snippet should do about what you need. It contains much more functionality than required, but it shouldn't be hard to filter out the parts you need, which I have no time for right now.

// workshop_face_detect.cpp : Performing LBP CUDA face detection on live video stream
// Make it possible to segment out skin color

#include <opencv/cv.h>
#include <opencv/cvaux.h>

#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/gpu/gpu.hpp"

#include <iostream>
#include <stdio.h>

using namespace std;
using namespace cv;

// Basic function to calculate the gradient magnitude and angle matrices for an input image
// (the function signature was missing from the pasted snippet and is reconstructed here;
// the input is assumed to be a single-channel image)
vector<Mat> calculate_gradients( Mat input ){
Mat img_smooth;
GaussianBlur( input, img_smooth, Size(11,11), 5);

Mat grad_x = Mat(img_smooth.rows, img_smooth.cols, CV_64F);
Mat grad_y = Mat(img_smooth.rows, img_smooth.cols, CV_64F);

Sobel( img_smooth, grad_x, CV_64F, 1, 0, 3, 1, 0, BORDER_DEFAULT );
Sobel( img_smooth, grad_y, CV_64F, 0, 1, 3, 1, 0, BORDER_DEFAULT );

Mat magnitude = Mat(img_smooth.rows, img_smooth.cols, CV_64F);

Mat orientations = Mat(img_smooth.rows, img_smooth.cols, CV_64F);

for(int i = 0; i < img_smooth.rows; i++){
for(int j = 0; j < img_smooth.cols; j++){
// loop body reconstructed: combine the Sobel derivatives into a magnitude
// and an angle normalized to [0-1], as expected by the drawing code below
double gx = grad_x.at<double>(i,j);
double gy = grad_y.at<double>(i,j);
magnitude.at<double>(i,j) = sqrt(gx*gx + gy*gy);
orientations.at<double>(i,j) = (atan2(gy, gx) + CV_PI) / (2 * CV_PI);
}
}

vector<Mat> output;
output.push_back(magnitude);
output.push_back(orientations);

return output;
}

// Based on radial coordinates (angle and magnitude), calculate the corresponding cartesian coordinates (x,y)
// Specific to the OpenCV coordinate system
vector<Point> radial_to_carthesian(Point start, double angle, double magnitude){
const double PI = 3.141592;

// Since the sin and cos functions already return values in [-1,1], we do not need
// to handle the quadrant signs ourselves; the angle just has to be converted from degrees to radians
double angle_rad = angle * PI / 180;
double x_temp = cos(angle_rad) * magnitude;
double y_temp = sin(angle_rad) * magnitude;
double x_2 = start.x + x_temp;
double y_2 = start.y + y_temp;

// Create points
vector<Point> result;
result.push_back(start);
result.push_back(Point(x_2, y_2));

return result;
}

// Fragment of the drawing code: overlay a gradient line every `step` pixels on a copy of the input
Mat result = Mat(input.rows, input.cols, input.type());
input.copyTo(result);
for(int i = 3; i < input.rows; i = i + step){
for(int j = 3; j < input.cols; j = j + step){
// the points (i,j) now loop through the image with points to draw
// check in which quadrant the angle lies and then compute the correct x and y length
// Since data is now provided as [0-1] ranges, we need to multiply with 360 to get the actual angle
double angle = gradients[1].at<double>(i,j) * 360;
vector<Point> line_positions = radial_to_carthesian(Point(j,i), angle, magnitude);
line(result ...

(the rest of this snippet was cut off on the original page)

I must agree that my method was not one of the easiest, but why is it a bad choice anyway?

( 2014-02-05 07:11:42 -0500 )

Hmm, I am of the opinion (but that is just mine) that your solution is over the top for this problem. However, this is my opinion and shouldn't be taken as gospel... I think making it this difficult is not needed for inexperienced users :)

( 2014-02-05 07:24:42 -0500 )

You should try the backprojection method: http://docs.opencv.org/doc/tutorials/imgproc/histograms/back_projection/back_projection.html It will give you a probability map of your image, showing where there is skin color and where there is not.

