
pi-null-mezon's profile - activity

2018-02-07 11:40:08 -0600 marked best answer Sequence of calls in the cv::gemm(...) function

Recently I have dug into the OpenCV sources; the reason was the low performance of the opencv_dnn module. Searching around, I came to the cv::gemm(...) function. I see that the function can be divided into two parts. The first, optional part calls an optimized version of the gemm routine from the ocl module or, if clAMDBLAS is defined, from clAMDBLAS. The second part makes some optional transpositions and in the end calls gemm32/64f(...), which (tracking the chain of calls) will call the "manual", non-optimized gemmImpl(...) function! From the source code we can see that the two mentioned parts are independent, so if cv::gemm is called, both of them will be executed... and the performance drops dramatically. If I comment out the second part, I get a 15x speed up, BUT also different results on the same data. It means that the second part does something very important, but I cannot find what exactly. So, is this an issue, and what exactly does cv::gemm(...) do?

2017-12-14 05:30:22 -0600 received badge  Nice Answer (source)
2017-12-13 07:49:27 -0600 edited answer Detecting pattern in an image

You can use the copyMakeBorder(...) function to enlarge your image with the borderType=BORDER_WRAP option. After that you can pr

2017-12-13 07:16:34 -0600 answered a question Detecting pattern in an image

You can use the copyMakeBorder(...) function to enlarge your image with the borderType=BORDER_WRAP option. After that you can pr

2017-05-03 09:07:13 -0600 commented question what is the solution to this error

The solution is to check whether the image is empty before trying to show it.

2017-05-03 09:05:25 -0600 commented question C++ OPENCV std::out_of_range error

Too many vector<vector<vector...>>, so somewhere in the code you just get an out_of_range. Simplify the code; for instance, you could search only "strong feature" points and then find the min area rect instead of searching contours.

2017-05-03 08:56:24 -0600 commented question Can I use OpenCV to detect weeds in a paddock?

It seems that the Raspberry Pi fits your aims perfectly.

2017-05-03 08:56:24 -0600 received badge  Commentator
2017-05-03 08:49:04 -0600 answered a question Adaptative Gaussian Filter.

As LBerger said, there is no default masking option for the smoothing operations in OpenCV. To speed things up you can divide your image into several regions (for instance, as many as your system has CPU cores) and process them in parallel, then stitch all results together. OpenCV contains a simple-to-use API for such optimizations; for instance, you can read this.
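
A minimal sketch of that idea, assuming an OpenCV build with the lambda overload of cv::parallel_for_; the Gaussian blur is just a stand-in for the real per-region filter, and a production version would overlap the stripes to avoid seams at their borders:

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat src(1080, 1920, CV_8UC1, cv::Scalar(128));
        cv::Mat dst(src.size(), src.type());
        // Split the work by rows; OpenCV distributes the sub-ranges over the available CPU cores
        cv::parallel_for_(cv::Range(0, src.rows), [&](const cv::Range &range) {
            cv::Mat srcStripe = src.rowRange(range.start, range.end);
            cv::Mat dstStripe = dst.rowRange(range.start, range.end);
            cv::GaussianBlur(srcStripe, dstStripe, cv::Size(5, 5), 0); // stand-in for the real filter
        });
        return 0;
    }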

2017-05-03 08:35:07 -0600 commented question Is there any OpenCV or IPP equivalent for this function?

It seems you can seriously speed up your code by replacing all constructions like im.at<float>(x,y) with the pointer arithmetic equivalent.
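
A minimal sketch of what that means, assuming a single-channel CV_32FC1 image; both loops scale every pixel, but the second one walks each row through a raw pointer instead of calling at<>() per element:

    #include <opencv2/opencv.hpp>

    void scaleWithAt(cv::Mat &im, float k)   // element access through at<>() on every pixel
    {
        for (int y = 0; y < im.rows; ++y)
            for (int x = 0; x < im.cols; ++x)
                im.at<float>(y, x) *= k;
    }

    void scaleWithPtr(cv::Mat &im, float k)  // one pointer per row, plain pointer arithmetic inside
    {
        for (int y = 0; y < im.rows; ++y) {
            float *p = im.ptr<float>(y);
            for (int x = 0; x < im.cols; ++x)
                p[x] *= k;
        }
    }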

2017-05-03 08:27:50 -0600 commented question Opencv 3.2 and Videocapture not work

Some antivirus programs can lock the webcam. If you use one of them, you need to allow webcam usage manually.

2017-05-03 08:23:47 -0600 answered a question I want to develop End to End Text Recognition in Natural Scene Images

A good idea is to check whether anybody has already solved your problem. Check the opencv-text samples. I suggest you start with this one.

2017-05-03 08:19:24 -0600 answered a question fps - how to divide count by time function to determine fps

To get the fps you should divide the so-called tick frequency (how many ticks the processor makes per second) by the task's tick count (how many ticks the processor spent on a particular task).

In C++ API it will be:

    int64 tickmark = cv::getTickCount(); // remember the current tick count
    while(true) {
        // DO SOME PROCESSING HERE
        double fps = cv::getTickFrequency()/(cv::getTickCount() - tickmark); // fps in Hz (1/s)
        tickmark = cv::getTickCount(); // reset the mark for the next iteration
    }
2017-04-14 09:58:00 -0600 commented question Replace a chain of image blurs with one blur

Maybe it is because of boundary effects at the image borders. It would be great if you just visualized the diff with cv::imshow(...). Can you paste it here?

2017-04-14 07:11:46 -0600 commented answer Encoding 32FC1 explained?

Sorry, but it is not clear what type of conversion you want to do. What do you have as input? What do you want to get as output?

2017-03-24 07:49:32 -0600 received badge  Nice Answer (source)
2017-03-24 06:35:35 -0600 commented question can't extract opencv 2.1

If you really need OpenCV 2.1, just try to redownload the archive and make another attempt to extract it.

2017-03-24 06:32:22 -0600 commented question namedWindow + imshow not showing on the screen

Does the application return when you press any key after the camera has been opened?

2017-03-24 06:22:29 -0600 commented question make error in openCV tuitorial example code

What tools do you want to use? Microsoft Visual Studio or something else? Btw, CMake is only needed to generate the makefiles for a custom OpenCV build.

2017-03-24 06:13:17 -0600 answered a question Encoding 32FC1 explained?

32FC1 means that each pixel value is stored as a single-channel, single-precision (32-bit) floating point number.
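
For instance, a minimal check of what such a Mat reports:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        cv::Mat m(4, 4, CV_32FC1, cv::Scalar(0.5f)); // 32-bit float, 1 channel
        std::cout << (m.depth() == CV_32F) << " " << m.channels() << std::endl; // prints "1 1"
        return 0;
    }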

2017-02-28 00:37:11 -0600 answered a question Creating a Histogram

Yes, it is possible. For instance:

void ImageQualityController::calculateHist(const cv::Mat &input, float *blue, float *green, float *red)
{
    /* Calculates histograms of inputImage and copies them into input vectors,          *
     * it is caller responsibility to allocate memory for them, each needs float[256]   */

    int bins = 256;
    int histSize[] = { bins };
    float marginalRanges[] = { 0, 256 };
    const float* ranges[] = { marginalRanges };

    int channels[] = { 0 };
    cv::Mat hist;
    cv::calcHist(&input, 1, channels, cv::Mat(), // mask not used
        hist, 1, histSize, ranges,
        true, // the histogram is uniform
        false);
    auto pointer = hist.ptr<float>(0);
    for (auto i = 0; i < 256; i++) {
        blue[i] = pointer[i];
    }

    channels[0] = 1;
    cv::calcHist(&input, 1, channels, cv::Mat(), // mask not used
        hist, 1, histSize, ranges,
        true, // the histogram is uniform
        false);
    pointer = hist.ptr<float>(0);
    for (auto i = 0; i < 256; i++) {
        green[i] = pointer[i];
    }

    channels[0] = 2;
    cv::calcHist(&input, 1, channels, cv::Mat(), // mask not used
        hist, 1, histSize, ranges,
        true, // the histogram is uniform
        false);
    pointer = hist.ptr<float>(0);
    for (auto i = 0; i < 256; i++) {
        red[i] = pointer[i];
    }
}
2017-02-27 02:08:58 -0600 commented question I need to extract invariant features of iris from a normal picture

As your code says, you detect faces with minSize(30,30). So the eye detector cannot accurately detect eyes at such small scales... and if we go further, how many pixels should represent the iris for an invariant description to be computed? It seems that you are trying to extract iris images from ordinary web or IP camera images... Do not waste your time, it is not possible. The right way to do iris recognition is a stationary and controlled image acquisition setup like the ones ophthalmologists use.

2017-02-27 01:41:25 -0600 answered a question Neural Network for Image Recognition in C++/OpenCv

If you are familiar with OpenCV and tiny-dnn, this project could be helpful.

2017-02-27 01:33:27 -0600 commented question DirectShow camera gives black image

Have you researched whether any other 3rd-party software can capture frames from the IDS uEye? For instance, the VLC video player?

2017-02-22 10:24:30 -0600 received badge  Nice Answer (source)
2017-02-22 07:58:24 -0600 answered a question Hello trying to create a cv::Mat() got Insufficient memory

When you try to allocate a buffer close to the limit of what a 32-bit process can address (at most 2^32 bytes of address space in total, and in practice far less is available as one contiguous block), the allocation fails: 9208 * 15152 * 4 (channels) bytes is simply too much for your build environment. So what should you do? If you are working on a 32-bit operating system, upgrade it to a 64-bit one. If you are already on a 64-bit system, you are probably using a 32-bit build environment (compiler); just switch to the 64-bit build tools.
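
A minimal sketch of how to see the problem coming, assuming a 4-channel 8-bit image of that size; cv::Mat throws a cv::Exception ("Insufficient memory") when the allocation fails, so it can at least be caught:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        const int rows = 15152, cols = 9208;
        const size_t required = size_t(rows) * cols * 4; // bytes of pixel data for CV_8UC4
        std::cout << "need " << required << " contiguous bytes" << std::endl;
        try {
            cv::Mat img(rows, cols, CV_8UC4);
            std::cout << "allocated " << img.total() * img.elemSize() << " bytes" << std::endl;
        } catch (const cv::Exception &e) {
            std::cout << "allocation failed: " << e.what() << std::endl;
        }
        return 0;
    }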

2017-02-22 07:46:39 -0600 commented answer Make Border and sum

The image Mat copies itself into part of the padded Mat. If 0.999... still appears, the result comes from floating point arithmetic; try to check whether it is still 0.999... after you increase optimal_rows and optimal_cols by 1.

2017-02-22 05:13:19 -0600 answered a question Make Border and sum

Because in your particular case copyMakeBorder performs extrapolation, you get a slightly different sum value. To prevent this you can make a custom copy without extrapolation:

cv::Mat computeDFT(cv::Mat image, int optimal_rows, int optimal_cols) {
    // zero-initialize so the padded area contains 0 instead of uninitialized memory
    cv::Mat padded = cv::Mat::zeros(optimal_rows, optimal_cols, image.type());

    std::cout << image.rows << " " << image.cols << " " << image.type() << " " << cv::sum(image) << std::endl;

    // copy the original image into the top-left corner of the padded Mat (no extrapolation)
    cv::Mat _tempmat = cv::Mat(padded, cv::Rect(0,0,image.cols,image.rows));
    image.copyTo(_tempmat);

    std::cout << padded.rows << " " << padded.cols << " " << padded.type() << " " << cv::sum(padded) << std::endl;


    cv::Mat complexImage(padded.rows, padded.cols, CV_64FC2);
    cv::dft(padded, complexImage, cv::DFT_COMPLEX_OUTPUT); // request two-channel (complex) output
    return complexImage;
}
2017-02-22 02:17:55 -0600 answered a question correct ghosting

Do you have a linear polarizer filter? As the glass of the mirror is a dielectric material, the light reflected by the glass surface should be partially polarized, while the light reflected by the mirror's substrate (which is metallic) should not be polarized. So you can try to cut off those ghosting reflections by means of a polarization filter mounted on top of your camera lens.

2017-02-20 05:53:39 -0600 marked best answer Opencv_dnn >> can't load network ResNet-101

Hello. I have found that ResNet-101 cannot be loaded by the opencv_dnn module. When I try to load it, this message appears:

[libprotobuf ERROR C:\Programming\3rdParties\opencv310\opencv_contrib\modules\dn
n\3rdparty\protobuf\sources\protobuf-3.1.0\src\google\protobuf\text_format.cc:29
8] Error parsing text-format caffe.NetParameter: 33:26: Message type "caffe.Laye
rParameter" has no field named "batch_norm_param".

OpenCV Error: Unspecified error (FAILED: ReadProtoFromTextFile(param_file, param
). Failed to parse NetParameter file: ResNet-101-deploy_augmentation.prototxt) in cv::dnn::ReadNetParamsFromTextFileOrDie, file C:\Programming\3rdParties\opencv310\opencv_contrib\mo
dules\dnn\src\caffe\caffe_io.cpp, line 1101 C:\Programming\3rdParties\opencv310\opencv_contrib\modules\dnn\src\caffe\caffe_io.cpp:1101: error: (-2) FAILED: ReadProtoFromTextFile(param_file, param). Failed to parse NetParameter file: ResNet-101-deploy_augmentation.prototxt in function cv::dnn::ReadNetParamsFromTextFileOrDie

So, it is clear that it happens because "caffe.LayerParameter" has no field named "batch_norm_param". But what am I supposed to do to solve this? Any ideas? Please help me.

2017-02-20 05:21:33 -0600 received badge  Necromancer (source)
2017-02-20 05:21:33 -0600 received badge  Self-Learner (source)
2017-02-20 05:20:16 -0600 answered a question Opencv_dnn >> can't load network ResNet-101

Good news everyone! ResNet layers have been added!

2017-02-20 05:16:35 -0600 answered a question Trying to load a .mp4 Video fails

Install cmake-gui and use it to generate the makefiles for the OpenCV build. There you can see whether ffmpeg support is enabled or not.
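
A quick way to check the same thing from code is to print cv::getBuildInformation(); its "Video I/O" section shows whether FFMPEG support was enabled at build time (a minimal sketch):

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        std::cout << cv::getBuildInformation() << std::endl; // look for the FFMPEG line in the Video I/O section
        return 0;
    }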

2017-02-20 05:02:02 -0600 answered a question how can i play video

Play back the video in a window at the fps taken from the file's metainfo; the filename is passed as the first command line argument (argv[1]):

#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char *argv[])
{
    if(argc > 1) { // the application expects the video filename in argv[1]

        VideoCapture _vc;
        if(_vc.open(argv[1]) == false)
            return -2; // Indicates that the file could not be opened for some reason

        double fps = _vc.get(CV_CAP_PROP_FPS); // get fps from the file's metainfo
        int delayms = static_cast<int>(1000.0/fps); // get delay between the frames in milliseconds

        Mat matframe;
        while(_vc.read(matframe)){
            imshow("viewport", matframe);
            if(waitKey(delayms) == 27) { // 27 is the ESC key code; break the loop if the user presses Escape
                break;
            }
        }
        return 0;
    } else {
        return -1; // means that the user did not provide a video filename
    }
}

Do you want to take the processing time into account? Then let's slightly change the code:

#include <opencv2/opencv.hpp>

using namespace cv;

double measureframetimems(VideoCapture &_vc, cv::Mat(*_functionpointer)(const Mat &_mat), unsigned int _n) {
    if(_vc.isOpened() == false) {
        return -1.0;
    }
    cv::Mat _framemat;
    double _totalprocctime = 0.0;
    double _timemark;
    unsigned int _frames = 0;
    for(unsigned int i = 0; i < _n; ++i) {
        if(_vc.read(_framemat)) {
            _functionpointer(_framemat);
            if(i > 0) { // let's drop first frame from the time count
                _totalprocctime += (getTickCount() - _timemark)/getTickFrequency();
                _frames++;
            }
            _timemark = getTickCount();
        } else {
            break; // for instance, the video file may run out of frames before _n
        }
    }
    return 1000.0*_totalprocctime/_frames; // in milliseconds
}

cv::Mat processframe(const Mat &_mat) {
    // Paste target processing code
    return _mat;
}

int main(int argc, char *argv[])
{
    if(argc > 1) { // the application expects the video filename in argv[1]

        VideoCapture _vc;
        if(_vc.open(argv[1]) == false)
            return -2; // Indicates that the file could not be opened for some reason

        double fps = _vc.get(CV_CAP_PROP_FPS); // get fps from the file's metainfo       
        int delayms = static_cast<int>(1000.0/fps); // get delay between the frames in milliseconds

        delayms -= measureframetimems(_vc, *processframe, 100); // adjust delay
        // Reopen video file if first frames are necessary
        if(_vc.open(argv[1]) == false)
            return -2; // Indicates that the file could not be opened for some reason

        Mat matframe;
        while(_vc.read(matframe)){
            matframe = processframe(matframe);
            imshow("viewport", matframe);
            if(waitKey(delayms) == 27) { // 27 is the ESC key code; break the loop if the user presses Escape
                break;
            }
        }
        return 0;
    } else {
        return -1; // means that the user did not provide a video filename
    }
}
2017-02-20 04:43:23 -0600 answered a question Why does findcontours crash the program in Qt?

There are several strange things/issues/bugs in your code:

  1. Why do you pass Mat.step() as the 4th argument when you create the QImage? To create a QImage you should provide the number of bytes per line. So the calls should be:

    QImage qoriginal((const uchar*)original.data,original.cols,original.rows,original.cols*3,QImage::Format_RGB888);
    QImage qgray((const uchar*)gray.data,gray.cols,gray.rows,gray.cols,QImage::Format_Grayscale8);
    
  2. A QImage can be created from the raw data this way only if the raw data is continuous, so you need to check:

     if(original.isContinuous() && gray.isContinuous()) {
         // create QImages
     }
    
  3. Update your OpenCV version to the latest one

  4. Use QPainter to draw the QImage on a QWidget instead of a QLabel with a QPixmap; research how to do that here (a minimal sketch follows below)
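
A minimal sketch of point 4, assuming a custom widget (the class name FrameWidget is hypothetical) that stores the converted frame and repaints it with QPainter:

    #include <QWidget>
    #include <QPainter>
    #include <QPaintEvent>
    #include <QImage>

    class FrameWidget : public QWidget
    {
    public:
        explicit FrameWidget(QWidget *parent = nullptr) : QWidget(parent) {}
        void setImage(const QImage &img) { m_image = img.copy(); update(); } // deep copy, then schedule a repaint
    protected:
        void paintEvent(QPaintEvent *) override
        {
            QPainter painter(this);
            if (!m_image.isNull())
                painter.drawImage(rect(), m_image); // scale the frame to the widget area
        }
    private:
        QImage m_image;
    };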

2017-02-20 02:59:43 -0600 answered a question the dtype of read image

Use the Mat::convertTo(...) function.
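
For instance, a minimal sketch converting an 8-bit grayscale image to 32-bit float in the [0, 1] range (the filename is just a placeholder):

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat img8u = cv::imread("image.png", cv::IMREAD_GRAYSCALE); // CV_8UC1, values 0..255
        cv::Mat img32f;
        img8u.convertTo(img32f, CV_32F, 1.0 / 255.0); // CV_32FC1, values 0..1
        return 0;
    }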

2017-02-20 02:55:44 -0600 commented question open cv for thermal image

Can you provide samples of such images and show the result that you want to get on them?

2017-01-30 08:37:52 -0600 answered a question I sometimes get the below access violation error when running any of my OpenCV programs such as the simple one below.

I do not know why the exception is thrown, but I suggest calling waitKey in the right way:

#include "opencv2\opencv.hpp"    
using namespace cv;    
int main(int argc, char** argv)
{
    Mat test = imread("McLaren P1 Bahrain-773-crop5184x2670.jpg", CV_LOAD_IMAGE_UNCHANGED);
    imshow("test", test);
    while(true)  {
      char option = waitKey(1); // wait 1 ms for a key press before waitKey returns
      if(option == 27) // 27 is the ESC key code from the ASCII table
          break; // break the loop if the user has pressed the Escape key
    }
    return 0;
}
2017-01-30 08:27:59 -0600 answered a question Distributed Face Recognition

You can use a client-server architecture for your system. The client side should run on the Raspberry Pi and do three things:

1) Detect and track the faces on the video;

2) When a new face is detected, send its image to the recognition server;

3) Wait for the reply from the server and do whatever you need to do on the Raspberry Pi.

Meanwhile, the server side should run on a PC with good enough performance and just wait for recognition tasks from the clients. Some time ago I developed all parts of a very similar solution with OpenCV and Qt.
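
A minimal sketch of the client-side handoff from step 2, assuming the face rectangle has already been detected; sendToServer is a hypothetical stub for whatever transport (socket, HTTP, message queue) is actually used:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Hypothetical transport stub: replace with a real socket/HTTP call
    void sendToServer(const std::vector<uchar> &payload) { (void)payload; }

    void submitFace(const cv::Mat &frame, const cv::Rect &faceRect)
    {
        cv::Mat face = frame(faceRect).clone();  // crop the detected face out of the frame
        std::vector<uchar> buffer;
        cv::imencode(".jpg", face, buffer);      // serialize the crop to JPEG bytes for transmission
        sendToServer(buffer);                    // the client then waits for the server's reply
    }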

2017-01-30 08:11:17 -0600 answered a question OpenCV 3.2 Visual Studio .dlls missing

Just copy opencv_world320.dll from the development machine to the directory containing the application's exe on the target machine.