logidelic's profile - activity

2020-07-22 09:49:43 -0600 edited question getPerspectiveTransform & perspectiveTransform

getPerspectiveTransform & perspectiveTransform I'm using getPerspectiveTransform & perspectiveTransform to map p

2020-07-21 18:59:06 -0600 asked a question getPerspectiveTransform & perspectiveTransform

getPerspectiveTransform & perspectiveTransform I'm using getPerspectiveTransform & perspectiveTransform to map p

2020-04-05 20:38:02 -0600 received badge  Popular Question (source)
2018-11-20 14:10:54 -0600 edited question cv::remap on a pre-rotated image

cv::remap on a pre-rotated image cv::remap on a pre-rotated image I am undistorting images from my camera, based on cal

2018-11-20 14:09:29 -0600 asked a question cv::remap on a pre-rotated image

cv::remap on a pre-rotated image cv::remap on a pre-rotated image I am undistorting images from my camera, based on cal

2018-10-30 13:31:53 -0600 received badge  Notable Question (source)
2018-08-09 05:16:45 -0600 received badge  Teacher (source)
2018-06-07 01:21:58 -0600 received badge  Popular Question (source)
2018-04-27 14:07:59 -0600 commented answer Training cuda::CascadeClassifier

Thank you for the answer. A first attempt at this did indeed work as advertised! I was expecting not, mostly because of

2018-04-27 04:27:48 -0600 marked best answer Training cuda::CascadeClassifier

I have been using haarcascades_cuda/haarcascade_frontalface* with cv::cuda::CascadeClassifier for a while now (OpenCV 3.4.1) and it works very well!
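
For context, here is roughly how I'm loading and running the cascade today. A minimal sketch; the exact cascade file name and the wrapper function are illustrative:

#include <opencv2/opencv.hpp>
#include <opencv2/cudaobjdetect.hpp>

// Minimal sketch of the setup described above (cascade file name illustrative).
std::vector<cv::Rect> detectFaces(const cv::Mat& grayFrame) {
    using namespace cv;
    Ptr<cuda::CascadeClassifier> cascade = cuda::CascadeClassifier::create(
        "haarcascades_cuda/haarcascade_frontalface_default.xml");

    cuda::GpuMat gpuGray(grayFrame);   // upload the 8-bit grayscale frame
    cuda::GpuMat objBuf;               // detections stay on the GPU...
    cascade->detectMultiScale(gpuGray, objBuf);

    std::vector<Rect> faces;
    cascade->convert(objBuf, faces);   // ...until converted back to host Rects
    return faces;
}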

However, I would really like to train using my own data. I have seen a lot of threads suggesting that this isn't really supported for the cuda class anymore. Is that so? Or is there a way to do it?

Also, is the training data for the existing haarcascade_frontalface* classifiers available?

Thank you!

2018-04-26 14:59:52 -0600 asked a question Training cuda::CascadeClassifier

Training cuda::CascadeClassifier I have been using haarcascades_cuda/haarcascade_frontalface* with cv::cuda::CascadeClas

2018-03-22 10:57:17 -0600 commented answer Titan V - Hanging on cuda::CascadeClassifier::detectMultiScale

Thanks for the response! I'm at this again, trying to get this working, but still no luck. I built using: cmake -DCUDA_

2018-02-26 14:30:19 -0600 edited question Titan V - Hanging on cuda::CascadeClassifier::detectMultiScale

Titan V - Hanging on cuda::CascadeClassifier::detectMultiScale I am trying to get my software working on a Titan V (on U

2018-02-26 14:30:02 -0600 edited question Titan V - Hanging on cuda::CascadeClassifier::detectMultiScale

Titan V & OpenCV - Hanging on cuda::CascadeClassifier::detectMultiScale I am trying to get my software working on a

2018-02-26 13:20:07 -0600 asked a question Titan V - Hanging on cuda::CascadeClassifier::detectMultiScale

Titan V & OpenCV - Hanging on cuda::CascadeClassifier::detectMultiScale I am trying to get my software working on a

2016-12-30 13:14:42 -0600 commented question Detecting (not decoding!) datamatrix (rectangle) regions in an image

Appreciate the suggestions here. After some experimentation, I've had good luck with MSER after blurring, resizing, and applying inRange to the image to give me black blobs where the codes are likely to be. It seems to work quite well, except that my results vary dramatically depending on lighting conditions. Any ideas on how I might normalize for lighting, given the steps I mention? I tried the approach mentioned in http://answers.opencv.org/question/75... , but it doesn't seem to solve the issue...
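
Roughly what I mean (a sketch; the scale factor, blur size, and inRange bounds are experimental guesses, not definitive values):

#include <opencv2/opencv.hpp>

// Sketch of the preprocessing described above: shrink, blur, keep dark pixels,
// then let MSER pick out stable blobs where codes are likely to be.
std::vector<std::vector<cv::Point>> findDarkBlobs(const cv::Mat& bgr) {
    using namespace cv;
    Mat smaller, blurred, mask;
    resize(bgr, smaller, Size(), 0.5, 0.5);
    GaussianBlur(smaller, blurred, Size(5, 5), 0);
    inRange(blurred, Scalar(0, 0, 0), Scalar(80, 80, 80), mask); // keep dark pixels

    std::vector<std::vector<Point>> regions;
    std::vector<Rect> bboxes;
    MSER::create()->detectRegions(mask, regions, bboxes);
    return regions;
}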

Thanks again.

2016-12-16 15:09:24 -0600 commented question Detecting (not decoding!) datamatrix (rectangle) regions in an image

Thanks for the suggestions! Regarding "finding squares", would you suggest MSER for that?

2016-12-15 10:43:20 -0600 asked a question Detecting (not decoding!) datamatrix (rectangle) regions in an image

I know that OpenCV doesn't provide any datamatrix decoding. That's ok. I'm currently using zxing to decode datamatrixes and it seems to work well.

However, sometimes my image may contain two or more datamatrixes. In this case zxing fails, because it needs an image with a single datamatrix in it.

Does anyone know how I can detect the datamatrix region? It doesn't have to be perfect, as I don't mind false positives, as long as I also get the rectangle for the actual datamatrix if there is one. I figure this should be fairly easy with OpenCV, since it is essentially a matter of finding a high-contrast rectangle, but I haven't been able to figure out how to do it...
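
For illustration, the kind of pipeline I have in mind. A rough sketch; the function name, thresholds, and size bounds are all guesses:

#include <opencv2/opencv.hpp>

// Rough sketch: binarize, then keep mid-sized high-contrast blobs as candidates.
std::vector<cv::RotatedRect> findCodeCandidates(const cv::Mat& bgr) {
    using namespace cv;
    Mat gray, bw;
    cvtColor(bgr, gray, COLOR_BGR2GRAY);
    adaptiveThreshold(gray, bw, 255, ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY_INV, 51, 10);

    std::vector<std::vector<Point>> contours;
    findContours(bw, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    std::vector<RotatedRect> candidates;
    for(size_t i = 0; i < contours.size(); ++i) {
        RotatedRect box = minAreaRect(contours[i]);
        double area = double(box.size.width) * box.size.height;
        if(area > 500 && area < 50000)    // plausible code sizes; false positives are OK
            candidates.push_back(box);
    }
    return candidates;
}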

Any help would be appreciated!

2016-07-15 09:15:58 -0600 commented question Improving eigenFaces by adding negative training images

Hasn't anyone out there already done a comprehensive accuracy comparison of the different methods?

2016-07-11 15:40:20 -0600 commented answer Calibrate at one resolution, remap at another resolution?

Nifty. Thanks for the cleaner code. :)

2016-07-11 09:08:02 -0600 commented question Improving eigenFaces by adding negative training images

"still classifies as a known person"

From what I remember, eigenfaces always classifies the input as a known person; it just gives you the most similar one. You need to use some sort of confidence value (like the one it returns, or something else) to decide how likely the match is.

2016-07-11 09:07:28 -0600 answered a question Improving eigenFaces by adding negative training images

"still classifies as a known person"

From what I remember, eigenfaces always classifies the input as a known person; it just gives you the most similar one. You need to use some sort of confidence value (like the one it returns, or something else) to decide how likely the match is.
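
A rough sketch of that check, assuming the OpenCV 3.x opencv_contrib face module; predictOrUnknown and the distance threshold are hypothetical:

#include <opencv2/opencv.hpp>
#include <opencv2/face.hpp>

// For eigenfaces the "confidence" is a distance: lower means more similar.
int predictOrUnknown(const cv::Ptr<cv::face::FaceRecognizer>& model,
                     const cv::Mat& testImage, double distThreshold) {
    int label = -1;
    double confidence = 0.0;
    model->predict(testImage, label, confidence);
    return (confidence > distThreshold) ? -1 : label;  // -1 = unknown person
}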

2016-07-07 02:27:59 -0600 received badge  Necromancer (source)
2016-07-07 02:27:59 -0600 received badge  Self-Learner (source)
2016-07-06 12:02:27 -0600 answered a question meanshift termination criteria & epsilon value meaning

Would still love to hear other people's answers to this, but for the moment I've added a simple histogram comparison (using compareHist) to get a sense of the similarity between the original and the tracked object. If the similarity is less than some threshold, I assume I've lost the track.
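
Roughly what I'm doing (a sketch; the hue histogram and the 0.5 correlation threshold are just what I've been experimenting with):

#include <opencv2/opencv.hpp>

// Hue histogram of a BGR patch, normalized so patches of different sizes compare fairly.
static cv::Mat hueHist(const cv::Mat& roiBgr) {
    using namespace cv;
    Mat hsv, hist;
    cvtColor(roiBgr, hsv, COLOR_BGR2HSV);
    int histSize = 30;
    int channels[] = {0};
    float hrange[] = {0, 180};
    const float* ranges[] = {hrange};
    calcHist(&hsv, 1, channels, Mat(), hist, 1, &histSize, ranges);
    normalize(hist, hist, 0, 1, NORM_MINMAX);
    return hist;
}

// Declare the track lost when the tracked patch stops resembling the original.
bool trackLost(const cv::Mat& origRoi, const cv::Mat& trackedRoi, double threshold = 0.5) {
    double sim = cv::compareHist(hueHist(origRoi), hueHist(trackedRoi), cv::HISTCMP_CORREL);
    return sim < threshold;  // CORREL: 1 = identical, lower = less similar
}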

2016-07-06 07:56:05 -0600 commented question improving fps of face recognition by using threads

Have you checked where the time is being spent? I assume it is mostly going to detectMultiScale(); in that case starting a new thread won't help you with anything.

Apparently this function is already parallelized (?) but I'll let someone else elaborate more about that (how does it decide how many threads to use?).

2016-07-05 11:49:02 -0600 asked a question Meanshift - Cleaning up back projection

In the OpenCV meanshift documentation, it states that:

You can simply pass the output of calcBackProject to this function. But better results can be obtained if you pre-filter the back projection and remove the noise. For example, you can do this by retrieving connected components with findContours, throwing away contours with small area (contourArea), and rendering the remaining contours with drawContours.

Since I love "better results", I've been trying to do exactly what is suggested, but so far the results are not usable. Here is my code (where backProjection is the original back projection created by calcBackProject()):

// Clean up back projection by selecting for well-sized contour areas
vector< vector<Point> > contours;
vector<Vec4i>           hierarchy;
findContours(backProjection, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
// Start from a blank image and re-render only the contours that pass the size filter
backProjection = Mat::zeros(backProjection.rows, backProjection.cols, backProjection.type());
for(unsigned i = 0; i < contours.size(); ++i) {
  double area = contourArea(contours[i]);
  if(area >= MIN_AREA && area <= MAX_AREA)
    drawContours(backProjection, contours, i, Scalar(255,255,255), CV_FILLED, 8);
}

Sometimes the result is usable: the ROI is white and most of the rest of the image is black. However, in other situations the ROI is completely black (no matter the values of MIN_AREA/MAX_AREA above).

What am I doing wrong? Any code out there for cleaning up a back-projection in a generic way?

Thanks!

2016-07-04 13:58:04 -0600 edited question meanshift termination criteria & epsilon value meaning

I'm wondering if someone could give an explanation of the epsilon value passed as part of the meanshift termination criteria. I've read the documentation of course, and searched around, but am still unsure of the meaning.

Motivation for this question: I am trying to determine the likelihood that what meanshift found is indeed the original object of interest (i.e. that the new rectangle is similar to the previous ROI). I was surprised that, sometimes, when my original object has left the frame, meanshift identifies some background noise as the new location (which, to my eyes, has nothing in common with the original ROI).

Currently I'm using a max count of 10 and an epsilon of 1 for the termination criteria. I then check that the count returned from meanShift() is < 10, and otherwise discard the result. Still, it often terminates in fewer than 10 iterations even when the object is gone...
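
For concreteness, my current setup looks roughly like this (a sketch; trackSuspect and the variable names are illustrative):

#include <opencv2/opencv.hpp>

// Sketch of the setup described above; backProjection and trackWindow are
// the usual calcBackProject output and the current search window.
bool trackSuspect(const cv::Mat& backProjection, cv::Rect& trackWindow) {
    cv::TermCriteria crit(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1.0);
    int iters = cv::meanShift(backProjection, trackWindow, crit);
    return iters >= 10;  // ran out of iterations without converging to within epsilon
}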

Thanks!

2016-07-04 08:32:30 -0600 received badge  Self-Learner (source)
2016-07-04 08:12:03 -0600 answered a question Calibrate at one resolution, remap at another resolution?

Got this working after posting:

  // Scale the focal lengths (fx, fy) and principal point (cx, cy) by the
  // ratio between the new resolution and the calibration resolution.
  Mat newCamMatrix = calibCamMatrix.clone();
  double scale = double(newWidth) / double(calibWidth);
  newCamMatrix.at<double>(0,0) = scale * calibCamMatrix.at<double>(0,0); // fx
  newCamMatrix.at<double>(1,1) = scale * calibCamMatrix.at<double>(1,1); // fy
  newCamMatrix.at<double>(0,2) = scale * calibCamMatrix.at<double>(0,2); // cx
  newCamMatrix.at<double>(1,2) = scale * calibCamMatrix.at<double>(1,2); // cy

calibCamMatrix is the matrix generated by the calibration process (where the frame had an x resolution of calibWidth). newCamMatrix is the modified matrix to be passed to initUndistortRectifyMap(), along with the new resolution (newWidth and the corresponding new height).
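
For completeness, here is a sketch of how the scaled matrix is then used; distCoeffs is from the original calibration and newSize is the new capture resolution (in practice you would compute the maps once, not per frame):

#include <opencv2/opencv.hpp>

// Undistort a frame captured at the new resolution using the rescaled matrix.
cv::Mat undistortAtNewRes(const cv::Mat& frame, const cv::Mat& newCamMatrix,
                          const cv::Mat& distCoeffs, cv::Size newSize) {
    using namespace cv;
    Mat map1, map2, out;
    initUndistortRectifyMap(newCamMatrix, distCoeffs, Mat(),
                            newCamMatrix, newSize, CV_16SC2, map1, map2);
    remap(frame, out, map1, map2, INTER_LINEAR);
    return out;
}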

2016-07-01 14:46:53 -0600 asked a question Calibrate at one resolution, remap at another resolution?

According to the docs, since the distortion coefficients do not depend on the resolution, I should be able to make use of the camera calibration data at a different resolution from the one used during calibration.

Unfortunately I haven't been able to make this work. Is there an example out there that makes use of initUndistortRectifyMap() and remap() when the original camera matrix and distortion coefficients were gathered at a different resolution from the image being passed to remap()?

Thanks!

2016-07-01 09:13:22 -0600 asked a question Camera calibration and aspect ratio

Hello!

I am calibrating a wide-angle IP cam. Here is an example of the image after calling remap() with the calibration data:

(image: the remapped checkerboard)

Lines are straight. Good. However, notice that the checkerboard squares are tall: they are rectangles instead of squares. Bad. Should the calibration process have corrected this? How do I correct for it?

When performing the calibration (for the example posted), I am not using the CALIB_FIX_ASPECT_RATIO flag. However, I have also tried it with the flag (using it correctly, as per the sample) and end up with the same problem. I understand that this flag indicates there is a known pixel-size ratio, but if I don't provide the flag, I assumed this ratio would be calculated, right?

Edit: I think this is definitely related to the camera not having square pixels (at least at the resolution I was trying here), but I'm still not clear on whether OpenCV's calibration should be able to correct for this automatically or not...?

2016-06-29 03:53:45 -0600 received badge  Necromancer (source)
2016-06-28 11:03:10 -0600 commented answer VideoCapture cant read url

I wasn't able to get RTSP URLs working with VideoCapture(). Also, I figured that the VLC implementation was more robust. It has been working well for me.

2016-06-28 10:38:21 -0600 edited answer VideoCapture cant read url

Not sure if this is possible using the OpenCV VideoCapture. I too needed to stream RTSP feeds, so I made use of libvlc for these. Here's some info:

http://study.marearts.com/2015/09/ope...

Here is the class that I created to allow for a similar interface to what you get with VideoCapture:

Header:

#ifndef VLCCAP_H
#define VLCCAP_H


#include <opencv2/opencv.hpp>
#include <vlc/vlc.h>
#include <mutex>
#include <atomic>


using namespace std;
using namespace cv;


class VlcCap {
    public:
        VlcCap();
        ~VlcCap();

        void open(const char* url);
        void release();
        bool isOpened();

        bool read(Mat& outFrame);


    private:
        unsigned format(char* chroma, unsigned* width, unsigned* height, unsigned* pitches, unsigned* lines);
        void*    lock(void** p_pixels);
        void     unlock(void* id, void* const* p_pixels);

        static unsigned format_helper(void** data, char* chroma, unsigned* width, unsigned* height, unsigned* pitches, unsigned* lines);
        static void*    lock_helper(void* data, void** p_pixels);
        static void     unlock_helper(void* data, void* id, void* const* p_pixels);


    private:
        mutex           m_mutex;
        string          m_url;
        Mat             m_frame;
        bool            m_isOpen;
        atomic<bool>    m_hasFrame;

        libvlc_instance_t*      m_vlcInstance;
        libvlc_media_player_t*  m_mp;
};


#endif //VLCCAP_H

Implementation:

#include "VlcCap.h"
#include <thread>
#include <cstring>


VlcCap::VlcCap()
  : m_isOpen(false)
  , m_hasFrame(false)
  , m_vlcInstance(NULL)
  , m_mp(NULL)
{
}


VlcCap::~VlcCap() {
  try {
    release();
  } catch(...) {
    // TODO log
  }
}


void VlcCap::open(const char* url) {
  release();
  m_url = url;

  const char* args[] = {"-I", "dummy", "--ignore-config"};
  m_vlcInstance = libvlc_new(3, args);

  libvlc_media_t* media = libvlc_media_new_location(m_vlcInstance, m_url.c_str());
  m_mp = libvlc_media_player_new_from_media(media);
  libvlc_media_release(media);

  // VLC will decode into our Mat via these callbacks (format/lock/unlock below)
  libvlc_video_set_callbacks(m_mp, lock_helper, unlock_helper, NULL, this);
  libvlc_video_set_format_callbacks(m_mp, format_helper, NULL);

  int resp = libvlc_media_player_play(m_mp);
  if(resp == 0) {
    m_isOpen = true;
  } else {
    release();
  }
}


void VlcCap::release() {
  if(m_vlcInstance) {
    libvlc_media_player_stop(m_mp);
    libvlc_media_player_release(m_mp);  // release the player before its libvlc instance
    libvlc_release(m_vlcInstance);
    m_vlcInstance = NULL;
    m_mp = NULL;
  }
  m_hasFrame = false;
  m_isOpen   = false;
}


bool VlcCap::isOpened() {
  if(!m_isOpen)
      return false;

  libvlc_state_t state = libvlc_media_player_get_state(m_mp);
  return (state != libvlc_Paused && state != libvlc_Stopped && state != libvlc_Ended && state != libvlc_Error);
}

bool VlcCap::read(Mat& outFrame) {
  // Poll until the callbacks have produced a frame, or the stream closes
  while(!m_hasFrame) {
    this_thread::sleep_for(chrono::milliseconds(100));
    if(!isOpened())
      return false; // connection closed
  }

  {
    lock_guard<mutex>  guard(m_mutex);
    outFrame = m_frame.clone();
    m_hasFrame = false;
  }
  return true;
}


unsigned VlcCap::format(char* chroma, unsigned* width, unsigned* height, unsigned* pitches, unsigned* lines) {
  // Called by VLC before playback: request 24-bit RGB ("RV24") and size our Mat to match
  // TODO: Allow overriding of native size?
  lock_guard<mutex>  guard(m_mutex);
  //cout << "VlcCap::format - " << chroma << " - " << *width<<"/"<<*height << endl;
  memcpy(chroma, "RV24", 4);
  pitches[0] = (*width) * 24/8; // bytes per row for 3-channel 8-bit
  lines[0]   = *height;
  m_frame.create(*height, *width, CV_8UC3);

  return 1; // one picture buffer
}


void* VlcCap::lock(void** p_pixels) {
  // VLC is about to decode a frame: hand it our Mat's pixel buffer
  //cout << "VlcCap::lock" << endl;
  m_mutex.lock();
  *p_pixels = m_frame.data;
  return NULL; // picture identifier (unused)
}

void VlcCap::unlock(void* id, void* const* p_pixels) {
  // Decoding finished: the Mat now holds a complete frame
  m_hasFrame = true;
  m_mutex.unlock();
}


unsigned VlcCap::format_helper(void** data, char* chroma, unsigned* width, unsigned* height, unsigned* pitches, unsigned* lines) {
  return ((VlcCap*)(*data))->format(chroma, width, height, pitches, lines);
}


void* VlcCap::lock_helper(void* data, void** p_pixels) {
  return ((VlcCap*)data)->lock(p_pixels);
}

void VlcCap::unlock_helper(void* data, void* id, void* const* p_pixels) {
  ((VlcCap*)data)->unlock(id, p_pixels);
}