
Jagatheesan Jack's profile - activity

2014-07-15 05:11:45 -0500 asked a question Will callbacks be called once the OpenCV Package Manager finishes installing from the Google Play Store? [Android]

Given a scenario as such:

Referring to the workflow outlined in the Manager Workflow document and its callbacks, LoaderCallbackInterface and InstallCallbackInterface:

  1. OpenCV Package Manager is not installed.
  2. User launches application.
  3. User is asked to install OpenCV Package Manager and is directed to Google Play Store.
  4. User installs OpenCV Package Manager.

Will the OpenCV callbacks be called to indicate that the package has finished installing, so that the activity the user was in before being directed to the Google Play Store can be resumed?

2014-06-24 06:47:16 -0500 commented question How to improve grayscale image quality for eye detection?

Thx for the suggestion. I just browsed through the flandmark implementation; it looks straightforward. Btw, I am developing an Android application, and integrating this will involve working with JNI etc. My only concern is whether it would be too heavy for a smartphone.

Still looking for any optimization that can be done to the image before eye detection.

2014-06-23 04:50:36 -0500 asked a question How to improve grayscale image quality for eye detection?

Hei guys,

Currently I am converting preview frames into grayscale using the code below:

        Imgproc.cvtColor(matYuv, matRgb, Imgproc.COLOR_YUV420sp2RGB, 4);
        Imgproc.cvtColor(matRgb, matGray, Imgproc.COLOR_RGB2GRAY, 0);

I am using these preview frames for eye detection with the haarcascade_eye.xml cascade classifier. Before passing the region of interest to eye detection, I do some preprocessing as below:

        Imgproc.equalizeHist(matGray, matGray);

However, I find that the image produced is distorted ("white washed") when the lighting is low, and as a consequence the eye detection is poor. Is there any preprocessing that can be done to enhance these images?
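In case it helps frame the question: equalizeHist stretches the histogram globally, which is likely what causes the "white washed" look on dark frames. A gentler alternative I am considering is gamma correction. Below is a minimal pure-Java sketch on raw 8-bit grayscale bytes (the class name GammaLut is made up for illustration; in OpenCV this would normally be done by building a lookup table and applying it with Core.LUT):

```java
// Gamma-correction lookup table for 8-bit grayscale pixels.
// gamma < 1 brightens dark regions without stretching the whole histogram.
public class GammaLut {
    public static byte[] apply(byte[] gray, double gamma) {
        // Precompute the 256-entry lookup table once per gamma value.
        int[] lut = new int[256];
        for (int i = 0; i < 256; i++) {
            lut[i] = (int) Math.round(255.0 * Math.pow(i / 255.0, gamma));
        }
        // Map every pixel through the table (mask to treat bytes as unsigned).
        byte[] out = new byte[gray.length];
        for (int i = 0; i < gray.length; i++) {
            out[i] = (byte) lut[gray[i] & 0xFF];
        }
        return out;
    }
}
```

With gamma somewhere around 0.4-0.6, dark regions are lifted while highlights stay put, which might help the cascade without the wash-out that global equalization produces.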


2014-04-29 09:32:23 -0500 received badge  Editor (source)
2014-04-29 09:31:57 -0500 asked a question How to use OpenCV face detection in portrait using byte[] data from onPreviewFrame()?

Hi guys,

I am trying to use OpenCV face detection with the byte[] data obtained from the onPreviewFrame() method of Camera.PreviewCallback.

I manage to convert the data into a grayscale image using the code below.

        Mat matNew = new Mat(pHeight, pWidth, CvType.CV_8U);
        matNew.put(0, 0, data);
        Mat matrgb = new Mat();
        Imgproc.cvtColor(matNew, matrgb, Imgproc.COLOR_YUV420sp2RGB, 4);
        Mat matgray = new Mat();
        Imgproc.cvtColor(matrgb, matgray, Imgproc.COLOR_RGB2GRAY, 0);

and I have set android:screenOrientation to "portrait" in the AndroidManifest file.

I am using the OpenCV JavaDetector:

        mJavaDetector.detectMultiScale(matgray, faceDetected, 1.1, 3, 0,
                new org.opencv.core.Size(0, 0),
                new org.opencv.core.Size(matgray.width(), matgray.height()));

and drawing a rectangle over the detected faces using this:

        for (Rect rect : faceDetected.toArray()) {
            Core.rectangle(matgray, new Point(rect.x, rect.y),
                    new Point(rect.x + rect.width, rect.y + rect.height),
                    new Scalar(0, 255, 0));
        }

However, in the resulting grayscale mat, face detection only happens when I hold my Android phone in landscape position. It does not work in portrait position.

Is there any way to overcome this issue? I have used the Android FaceDetectionListener, and that doesn't seem to have a problem detecting faces in portrait mode. But FaceDetectionListener's functions are limited compared to OpenCV.

Any help would be greatly appreciated. Thx.
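For reference, the usual explanation is that the Haar cascades only detect upright faces, so in portrait the frame has to be rotated 90 degrees before running the detector (in OpenCV this is typically done with Core.transpose plus Core.flip). A pure-Java sketch of the same rotation on a row-major grayscale buffer, with FrameRotator as a made-up helper name:

```java
// Rotate a row-major 8-bit grayscale frame 90 degrees clockwise.
// The output has dimensions height x width (swapped relative to the input).
public class FrameRotator {
    public static byte[] rotate90CW(byte[] src, int width, int height) {
        byte[] dst = new byte[src.length];
        for (int r = 0; r < width; r++) {        // rows of the rotated frame
            for (int c = 0; c < height; c++) {   // columns of the rotated frame
                // Pixel (c, r) of dst comes from column r, counting rows bottom-up.
                dst[r * height + c] = src[(height - 1 - c) * width + r];
            }
        }
        return dst;
    }
}
```

The detected coordinates then have to be mapped back to the unrotated frame before drawing, since the detector's rectangles are expressed in the rotated image.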

2013-11-20 09:36:07 -0500 commented question Why is the face detected in OpenCV not exactly surrounding the face?

Any idea??

2013-11-19 10:53:52 -0500 asked a question Why is the face detected in OpenCV not exactly surrounding the face?

I am trying to do a face detection application.

It receives data from onPreviewFrame() and processes it in OpenCV.

The problem is that when I convert it back to Bitmap to be displayed in Android, the rectangle for the face detected is not exactly surrounding the face.

It is moved slightly downwards, so the top of the rectangle is below the nose and the bottom is at the bottom of the neck.

My code is below.

    Mat matYuv = new Mat(pHeight, pWidth, CvType.CV_8UC1);
    matYuv.put(0, 0, data);

    Mat matRgb = new Mat();
    Imgproc.cvtColor(matYuv, matRgb, Imgproc.COLOR_YUV420sp2RGB, 4);

    Mat matGray = new Mat();
    Imgproc.cvtColor(matRgb, matGray, Imgproc.COLOR_RGB2GRAY, 0);

    int height = matGray.rows();
    int faceSize = Math.round(height * 1.0F);

    MatOfRect faces = new MatOfRect();

    // transpose and flip the matrix to enable detection in portrait mode
    Mat temp = matGray.clone();
    Core.transpose(matGray, temp);
    Core.flip(temp, temp, -1);

    if (mJavaDetector != null)
        mJavaDetector.detectMultiScale(temp, faces, 1.1, 2, 2, // TODO: objdetect.CV_HAAR_SCALE_IMAGE
                new Size(faceSize, faceSize), new Size());

    facesArray = faces.toArray();

    // drawing a rectangle around each detected face
    for (int i = 0; i < facesArray.length; i++) {
        Core.rectangle(matGray, facesArray[i].tl(), facesArray[i].br(), FACE_RECT_COLOR, 3);
    }

    // converting from OpenCV Mat to a Bitmap to be displayed in the surface view
    try {
        resultBitmap = Bitmap.createBitmap(640, 480, Bitmap.Config.RGB_565);
        Utils.matToBitmap(matGray, resultBitmap);
    } catch (Exception e) {
        Log.e(TAG, "Length of resultBitmap: " + resultBitmap.getByteCount());
    }
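A note on the snippet above: detection runs on temp (the transposed and flipped copy) while the rectangles are drawn on matGray, so the detected coordinates are in the rotated frame, not the original one. A pure-Java sketch of mapping a rectangle back from a transpose-plus-flip(-1) image to the original frame (RectMapper and mapBack are hypothetical names, not OpenCV APIs):

```java
// Map a rectangle detected on a transpose + flip(-1) copy of a frame
// back into the coordinate system of the original frame.
public class RectMapper {
    // (x, y, w, h) is the rectangle in the rotated image;
    // origW / origH are the original (pre-transpose) frame dimensions.
    public static int[] mapBack(int x, int y, int w, int h, int origW, int origH) {
        int nx = origW - y - h; // flip(-1) mirrored the column range
        int ny = origH - x - w; // flip(-1) mirrored the row range
        return new int[]{nx, ny, h, w}; // width and height swap under transpose
    }
}
```

Under transpose the two axes swap, and flip(-1) mirrors both of them, hence the two subtractions; drawing rectangles on matGray without this remapping would place them at shifted positions.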
2013-11-15 08:00:38 -0500 asked a question Face detection in OpenCV Java using byte[] data from Android's onPreviewFrame() does not work

I am trying to use the OpenCV Java wrapper to detect faces in an app.

1) The app uses the front-facing camera in portrait mode.

2) The preview is displayed inside a SurfaceView.

3) I take Android's onPreviewFrame() byte[] data and convert it into OpenCV format to detect the faces with OpenCV.

4) I send the coordinates detected by mJavaDetector to a View class to be drawn on an instance of the drawingView surface.

When I start the application there are no errors, but no rectangle is drawn. The Log.v(TAG, "Length of facesArray" + facesArray.length); inside surfaceChanged() is also never printed. Is this because onPreviewFrame() is never called, or is it a problem with my OpenCV implementation, or something else?

I have attached my code below.

    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {

        previewing = false;

        if (mCamera != null) {
            try {
                mCamera.setPreviewCallback(new PreviewCallback() {
                    public void onPreviewFrame(byte[] data, Camera mCamera) {
                        Log.d(TAG, "ON Preview frame");
                        int mAbsoluteFaceSize = 0;

                        int width = surfaceView.getWidth();
                        int height = surfaceView.getHeight();
                        Mat img = new Mat(height, width, CvType.CV_8UC1);
                        Mat gray = new Mat(height, width, CvType.CV_8UC1);
                        img.put(0, 0, data);

                        Imgproc.cvtColor(img, gray, Imgproc.COLOR_YUV420sp2GRAY);

                        MatOfRect faces = new MatOfRect();

                        if (mJavaDetector != null)
                            mJavaDetector.detectMultiScale(gray, faces, 1.1, 2, 2, // TODO: objdetect.CV_HAAR_SCALE_IMAGE
                                    new Size(mAbsoluteFaceSize, mAbsoluteFaceSize), new Size());

                        Rect[] facesArray = faces.toArray();

                        Log.v(TAG, "Length of facesArray" + facesArray.length);

                        if (facesArray != null) {
                            for (int i = 0; i < facesArray.length; i++) {
                                double l = facesArray[i].tl().x;
                                double t = facesArray[i].tl().y;
                                double r = facesArray[i].br().x;
                                double b = facesArray[i].br().y;
                                drawingView.setCoordinates(l, t, r, b);
                            }
                        }
                    }
                });

                /** Log.v(TAG, "Max Face: " + mCamera.getParameters().getMaxNumDetectedFaces());
                previewing = true; **/
            } catch (IOException e) {
                // TODO Auto-generated catch block
            }
        }
    }
    private class DrawingView extends View {

        boolean haveFace;
        Paint drawingPaint;
        float left;
        float top;
        float right;
        float bottom;

        public DrawingView(Context context) {
            super(context);
            haveFace = false;
            drawingPaint = new Paint();
        }

        public void setHaveFace(boolean h) {
            haveFace = h;
        }

        public void setCoordinates(double l, double t, double r, double b) {
            left = (float) l;
            top = (float) t;
            right = (float) r;
            bottom = (float) b;
        }

        protected void onDraw(Canvas canvas) {
            canvas.drawRect(left, top, right, bottom, drawingPaint);
        }
    }

My drawingView declaration in onCreate():

    drawingView = new DrawingView(this);
    LayoutParams layoutParamsDrawing
            = new LayoutParams(LayoutParams.FILL_PARENT,
                    LayoutParams.FILL_PARENT);
    this.addContentView(drawingView, layoutParamsDrawing);
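One detail worth checking in the onPreviewFrame() code above: the Mats are sized from surfaceView.getWidth()/getHeight(), but the byte[] data has the camera's preview size, and an NV21 preview frame carries 1.5 bytes per pixel. A small sketch of the expected buffer length (PreviewBuffer is a made-up helper):

```java
// Expected byte[] length of an NV21 (YUV420sp) preview frame:
// a full-resolution Y plane plus a half-resolution interleaved VU plane.
public class PreviewBuffer {
    public static int nv21Length(int width, int height) {
        return width * height * 3 / 2;
    }
}
```

If data.length does not match nv21Length(previewWidth, previewHeight), the img.put() call may fill the Mat with the wrong geometry and the detector would see scrambled pixels, which could explain why nothing is ever detected.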