
vasu12360's profile - activity

2017-02-03 03:49:10 -0600 commented answer OpenCV configure with TBB for ARM, Ubuntu 12.10

How do I install TBB after compilation on an ODROID-XU4?

2017-01-25 05:52:13 -0600 asked a question Problem with opencv+opencl odroid

I am working on an ODROID-XU4, which has a Mali-T628 GPU.

When I run my simple Canny edge detection benchmark on my laptop, the OpenCL version comes out 2-2.5 times faster than the CPU version.

However, when I run the same code on the ODROID, the OpenCL version is much slower than the CPU one.

This is not only the case with Canny but also with many other algorithms, such as Haar cascade detection.

Can anyone tell me what the problem might be and how to solve it?

code:

#include <iostream>
#include <string>
#include <iterator>
//Program for testing CPU and OpenCL performance when the Canny algorithm is applied to the image.
/*

1 GPU devices are detected.
name                 : Mali-T628
available            : 1
imageSupport         : 1
OpenCL_C_Version     : OpenCL C 1.1 

UMat X Canny Time: 5.28345
Mat X Canny Time: 4.39179

*/

#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>
//#include <CL/cl.h>
using namespace std;

int main() {
    if (!cv::ocl::haveOpenCL()) {
        cout << "OpenCL is not available..." << endl;
        return 0;
    }
    cv::ocl::Context context;
    if (!context.create(cv::ocl::Device::TYPE_GPU)) {
        cout << "Failed creating the context..." << endl;
        return 0;
    }

    // In OpenCV 3.0.0 beta, only a single device is detected.
    cout << context.ndevices() << " GPU devices are detected." << endl;
    for (int i = 0; i < context.ndevices(); i++) {
        cv::ocl::Device device = context.device(i);
        cout << "name                 : " << device.name() << endl;
        cout << "available            : " << device.available() << endl;
        cout << "imageSupport         : " << device.imageSupport() << endl;
        cout << "OpenCL_C_Version     : " << device.OpenCL_C_Version() << endl;
        cout << endl;
    }

    cv::Mat mat_src = cv::imread("/home/vasu/opencv_opencl/DSC_0231.JPG", 0);
    if (mat_src.empty()) {  // check before building the UMat, otherwise Canny runs on an empty image
        cout << "no data available" << endl;
        return 0;
    }
    cv::imshow("Input", mat_src);
    cv::waitKey(0);

    cv::Mat mat_dst;
    cv::UMat umat_src = mat_src.getUMat(cv::ACCESS_READ, cv::USAGE_ALLOCATE_DEVICE_MEMORY);
    cv::UMat umat_dst;

    double tic, toc;
    tic = (double)cv::getTickCount();
    for (int i = 0; i < 100; i++) {
        cv::Canny(umat_src, umat_dst, 0, 50);
    }
    toc = (double)cv::getTickCount();
    cout << "UMat X Canny Time: " << (toc - tic) / cv::getTickFrequency() << endl;

    tic = (double)cv::getTickCount();
    for (int i = 0; i < 100; i++) {
        cv::Canny(mat_src, mat_dst, 0, 50);
    }
    toc = (double)cv::getTickCount();
    cout << "Mat X Canny Time: " << (toc - tic) / cv::getTickFrequency() << endl;

    return 0;
}
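One possible factor worth noting, offered as a hedged observation rather than a confirmed diagnosis: the timed loop charges the very first UMat call with the one-off cost of OpenCL kernel compilation and host-to-device upload, which can dominate a short benchmark. The usual pattern is a warm-up pass before starting the clock. A minimal Python sketch of that pattern (the helper name `bench` and its parameters are invented for illustration; the `cv2` usage in the trailing comment is assumed, not executed here):

```python
import time

def bench(fn, iters=100, warmup=3):
    """Time `iters` calls of fn, after `warmup` untimed calls.

    The warm-up runs absorb one-off costs (e.g. OpenCL kernel
    compilation, first data transfer) that would otherwise be
    charged to the first timed iteration.
    """
    for _ in range(warmup):
        fn()
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return time.perf_counter() - t0

# With an OpenCV build this would be used roughly as:
#   gpu_time = bench(lambda: cv2.Canny(umat_src, umat_dst, 0, 50))
#   cpu_time = bench(lambda: cv2.Canny(mat_src, mat_dst, 0, 50))
```

Comparing the warmed-up numbers gives a fairer picture of steady-state throughput on the Mali GPU.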
2017-01-11 07:18:12 -0600 asked a question need suggestion regarding opencv+cuda+opencl

I have recently been hired by a startup, and they have given me the task of making an OpenCV program as computationally optimized as possible using CUDA and OpenCL, and of recording the program's performance in different scenarios.

I have worked with OpenCV for 3-4 months, but I have not worked with CUDA- or OpenCL-enabled OpenCV builds, nor do I have prior knowledge of CUDA or OpenCL.

Though there are a few examples of CUDA and OCL OpenCV functions, I don't want to just swap simple functions for their CUDA counterparts; I want to gain a deeper understanding of where to use CUDA or OpenCL and where to use the CPU for better optimization.

Can anyone suggest how to get started in this field and how to pursue it further? Also, if we use standalone CUDA or OpenCL libraries instead of the built-in OpenCL module, is there any case where the former gives better results? P.S.: I am using OpenCV 3.2 with C++.

2016-08-12 06:26:39 -0600 commented question mouse problem in python 2 examples Ubuntu 12.04

Hey rupert, could you post that workaround of yours? I am stuck on the same problem.

2016-08-12 02:53:30 -0600 commented question watershed not working

From what I have seen, watershed.py only imports Sketcher from common.py. Within Sketcher, the problem is most probably in the on_mouse function: "cv2.EVENT_LBUTTONDOWN" works perfectly fine (I have checked it myself), so the problem seems to be in "self.prev_pt and flags & cv2.EVENT_FLAG_LBUTTON". See if that helps.
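For reference, the workaround usually suggested when the `flags` argument is unreliable is to track the button state yourself from the down/up events, instead of testing `cv2.EVENT_FLAG_LBUTTON` during the move event. A self-contained sketch of that idea (the class name is invented; the event codes are written as plain integers matching cv2's values so the logic can be shown without a GUI):

```python
# cv2's mouse event codes, copied so the sketch stands alone:
EVENT_MOUSEMOVE = 0
EVENT_LBUTTONDOWN = 1
EVENT_LBUTTONUP = 4

class DragTracker(object):
    """Track left-button drags without relying on the `flags` argument."""
    def __init__(self):
        self.prev_pt = None
        self.dragging = False
        self.strokes = []  # recorded (from_pt, to_pt) segments

    def on_mouse(self, event, x, y, flags, param):
        pt = (x, y)
        if event == EVENT_LBUTTONDOWN:
            self.dragging = True
            self.prev_pt = pt
        elif event == EVENT_LBUTTONUP:
            self.dragging = False
            self.prev_pt = None
        elif event == EVENT_MOUSEMOVE and self.dragging and self.prev_pt is not None:
            # In the real Sketcher this is where cv2.line() would paint.
            self.strokes.append((self.prev_pt, pt))
            self.prev_pt = pt
```

The same state-tracking can be dropped into Sketcher's on_mouse in place of the `flags & cv2.EVENT_FLAG_LBUTTON` test.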

2016-08-12 02:08:22 -0600 commented answer watershed not working

That's exactly what I am doing.

2016-08-12 02:06:11 -0600 commented question watershed not working

Yep, I am quite sure about that.

2016-08-12 01:04:12 -0600 commented question watershed not working

I have added it in the question.

2016-08-12 01:03:33 -0600 received badge  Editor (source)
2016-08-12 00:58:35 -0600 received badge  Enthusiast
2016-08-11 13:43:05 -0600 asked a question watershed not working

I am trying to use the watershed script from the Python examples. I am able to place the markers, but it is not painting, i.e. when I try to drag on the image no region gets selected. I have not made any changes to the code provided in the example. I am using Python 2.7.12 and OpenCV 2.4.13.

watershed.py:

#!/usr/bin/env python

'''
Watershed segmentation
======================

This program demonstrates the watershed segmentation algorithm
in OpenCV: watershed().

Usage
-----
watershed.py [image filename]

Keys
----
  1-7   - switch marker color
  SPACE - update segmentation
  r     - reset
  a     - toggle autoupdate
  ESC   - exit

'''

import numpy as np
import cv2
from common import Sketcher

class App:
    def __init__(self, fn):
        self.img = cv2.imread(fn)
        h, w = self.img.shape[:2]
        self.markers = np.zeros((h, w), np.int32)
        self.markers_vis = self.img.copy()
        self.cur_marker = 1
        self.colors = np.int32( list(np.ndindex(2, 2, 2)) ) * 255

        self.auto_update = True
        self.sketch = Sketcher('img', [self.markers_vis, self.markers], self.get_colors)

    def get_colors(self):
        return map(int, self.colors[self.cur_marker]), self.cur_marker

    def watershed(self):
        m = self.markers.copy()
        cv2.watershed(self.img, m)
        overlay = self.colors[np.maximum(m, 0)]
        vis = cv2.addWeighted(self.img, 0.5, overlay, 0.5, 0.0, dtype=cv2.CV_8UC3)
        cv2.imshow('watershed', vis)

    def run(self):
        while True:
            ch = 0xFF & cv2.waitKey(50)
            if ch == 27:
                break
            if ch >= ord('1') and ch <= ord('7'):
                self.cur_marker = ch - ord('0')
                print 'marker: ', self.cur_marker
            if ch == ord(' ') or (self.sketch.dirty and self.auto_update):
                self.watershed()
                self.sketch.dirty = False
            if ch in [ord('a'), ord('A')]:
                self.auto_update = not self.auto_update
                print 'auto_update is', ['off', 'on'][self.auto_update]
            if ch in [ord('r'), ord('R')]:
                self.markers[:] = 0
                self.markers_vis[:] = self.img
                self.sketch.show()
        cv2.destroyAllWindows()


if __name__ == '__main__':
    import sys
    try: fn = sys.argv[1]
    except: fn = '../cpp/fruits.jpg'
    print __doc__
    App(fn).run()

common.py:

#!/usr/bin/env python

'''
This module contains some common routines used by other samples.
'''

import numpy as np
import cv2
import os
from contextlib import contextmanager
import itertools as it

image_extensions = ['.bmp', '.jpg', '.jpeg', '.png', '.tif', '.tiff', '.pbm', '.pgm', '.ppm']

class Bunch(object):
    def __init__(self, **kw):
        self.__dict__.update(kw)
    def __str__(self):
        return str(self.__dict__)

def splitfn(fn):
    path, fn = os.path.split(fn)
    name, ext = os.path.splitext(fn)
    return path, name, ext

def anorm2(a):
    return (a*a).sum(-1)
def anorm(a):
    return np.sqrt( anorm2(a) )

def homotrans(H, x, y):
    xs = H[0, 0]*x + H[0, 1]*y + H[0, 2]
    ys = H[1, 0]*x + H[1, 1]*y + H[1, 2]
    s  = H[2, 0]*x + H[2, 1]*y + H[2, 2]
    return xs/s, ys/s

def to_rect(a):
    a = np.ravel(a)
    if len(a) == 2:
        a = (0, 0, a[0], a[1])
    return np.array(a, np.float64).reshape(2, 2)

def rect2rect_mtx(src, dst):
    src, dst = to_rect(src), to_rect(dst ...
(more)
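As an aside on `homotrans` in common.py above: it applies a 3x3 homography H to a point (x, y) in homogeneous coordinates and divides by the resulting scale s. A small NumPy check of that behavior (the matrix H below is an invented example, a pure translation by (5, -2)):

```python
import numpy as np

def homotrans(H, x, y):
    # Same function as in common.py above: map (x, y) through the
    # homography H and normalize by the projective scale.
    xs = H[0, 0]*x + H[0, 1]*y + H[0, 2]
    ys = H[1, 0]*x + H[1, 1]*y + H[1, 2]
    s  = H[2, 0]*x + H[2, 1]*y + H[2, 2]
    return xs/s, ys/s

# Translation by (5, -2) expressed as a homography:
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
```

With this H, `homotrans(H, 1.0, 1.0)` gives `(6.0, -1.0)`, as expected for a translation.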
2016-08-09 08:18:40 -0600 asked a question install opencv 2.4.13 in ubuntu 16.04

I have recently upgraded to Ubuntu 16.04, and now I am facing a lot of problems running OpenCV. Can someone give me proper and clear instructions on how to install OpenCV 2.4.13 on 16.04?

2016-08-01 12:46:34 -0600 commented question How to save trained data for face recognition

I would need to train on 200-500 images.

2016-08-01 12:05:23 -0600 commented question How to save trained data for face recognition

OpenCV 2.4.13, Python 2.7

2016-08-01 11:46:15 -0600 commented question How to save trained data for face recognition

But how do I store the trained model? I would really appreciate it if you could guide me through this or suggest some links to help me implement LBPH, as I am not very proficient in OpenCV.

2016-08-01 07:04:00 -0600 asked a question How to save trained data for face recognition

I am working on a face recognition program using cv2.createLBPHFaceRecognizer() in Python, following: http://hanzratech.in/2015/02/03/face-...

I am able to add new faces to the database. However, I want to save the trained data to a file so that I don't have to retrain on the same database. How can I do so?
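For the record, OpenCV's FaceRecognizer interface (including the LBPH recognizer) exposes save() and load() methods that persist the trained model to a YAML/XML file, so retraining can be skipped on the next run. A hedged sketch under that assumption; the wrapper functions and the file name `lbph_model.yml` are invented for illustration:

```python
import os

def save_trained(recognizer, path):
    """Persist a trained recognizer to disk via its save() method.

    Assumes the OpenCV FaceRecognizer interface, where save() writes
    the model (histograms, labels, parameters) to a YAML/XML file.
    """
    recognizer.save(path)
    return os.path.exists(path)

def load_trained(recognizer, path):
    """Restore a previously saved model instead of retraining."""
    recognizer.load(path)
    return recognizer

# With an OpenCV 2.4 build this would be used roughly as:
#   recognizer = cv2.createLBPHFaceRecognizer()
#   recognizer.train(images, np.array(labels))
#   save_trained(recognizer, 'lbph_model.yml')
#   ... later, in a fresh process ...
#   load_trained(cv2.createLBPHFaceRecognizer(), 'lbph_model.yml')
```

After loading, predict() can be called directly without calling train() again.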