
MRDaniel's profile - activity

2019-06-06 07:06:23 -0500 received badge  Popular Question (source)
2018-02-11 18:57:06 -0500 commented question OpenCVjs with Features2D

A little further forward here. 1. Enable the build flag: https://github.com/opencv/opencv/blob/master/platforms/js

2018-02-07 21:01:30 -0500 edited question OpenCVjs with Features2D

OpenCVjs with Features2D Hello, I am trying to build the OpenCV js bindings however, i think this may in fact be a cmak

2018-02-07 21:00:38 -0500 asked a question OpenCVjs with Features2D

OpenCVjs with Features2D Hello, I am trying to build the OpenCV js bindings however, i think this may in fact be a cmak

2018-01-07 18:11:57 -0500 received badge  Critic (source)
2018-01-07 17:41:52 -0500 commented answer WarpPerspective Advice with correct BBox Pixels

Thank you for your input, but this doesn't quite address the issue i am facing. Also src_vertices is the wrong way aroun

2018-01-07 16:22:38 -0500 edited question WarpPerspective Advice with correct BBox Pixels

WarpPerspective Advice with correct BBox Pixels Hello, I am trying to do some template matching with OpenCV. The templa

2018-01-07 16:02:54 -0500 edited question WarpPerspective Advice with correct BBox Pixels

WarpPerspective Hello, I am trying to do some template matching with OpenCV. The templates in the new image could be wa

2018-01-07 16:01:18 -0500 edited question WarpPerspective Advice with correct BBox Pixels

WarpPerspective Hello, I am trying to do some template matching with OpenCV. The templates in the new image could be wa

2018-01-07 15:59:32 -0500 edited question WarpPerspective Advice with correct BBox Pixels

WarpPerspective Hello, I am trying to do some template matching with OpenCV. The templates in the new image could be wa

2018-01-07 15:58:43 -0500 asked a question WarpPerspective Advice with correct BBox Pixels

WarpPerspective Hello, I am trying to do some template matching with OpenCV. The templates in the new image could be wa

2017-11-30 15:19:39 -0500 asked a question Create Templates from Feature Homography

Create Templates from Feature Homography Hello, Currently i am working on texture tracking. Presently, i have extracte

2017-11-02 01:14:18 -0500 received badge  Popular Question (source)
2017-06-15 01:43:24 -0500 received badge  Notable Question (source)
2017-05-17 18:46:15 -0500 received badge  Notable Question (source)
2017-04-05 07:51:45 -0500 received badge  Popular Question (source)
2017-04-05 00:53:48 -0500 commented question Why does Haar cascade classifier performance change when I crop an image?

The cropped image size should be an octave of the original size. The scaling affects which window sizes the detection occurs at. Perhaps when you cropped the image, the particular scale at which your object is detected was skipped. Make the scale increments smaller; this will affect runtime performance.

2017-04-05 00:52:05 -0500 asked a question Reverse Engineer Features

Hello,

Is it possible to reverse engineer an image feature?

Given a feature2D detection/description method (SIFT, SURF, FREAK, AKAZE etc) is it possible to create image features that are likely to be detected in an image? I want to create an alphabet of features. I don't think BOW is quite right here, but the usage of a vocabulary may be necessary.

Let's say we have 10 images, and we want to add a sticker with one of our features on it. We can print out these images onto giant pieces of cardboard and move them in front of the camera.

When shown to a camera, the feature detector/descriptor/matcher will very quickly be able to tell which image is currently in view, regardless of its scale/translation/rotation.

image description

I know QR codes are probably better for the scenario I am describing; however, QR codes are not viable here. I just want one giant image feature that can be easily matched.

Is there such a method to know all possible features for a detector/descriptor/matcher ahead of time? And in particular, the ones that will match well.

i.e. SURF uses a 9x9 patch, so possibly create a large image, say 900 x 900, and then, according to a 10x10 grid on this surface, colour squares to make a detectable feature.

image description

Please ask for clarification on any points here.

UPDATE:

Found a paper for Maximum Detector Response Markers for SIFT and SURF

http://www.lmt.ei.tum.de/forschung/pu...

2016-10-11 02:37:25 -0500 marked best answer World Co-ordinates and Object Co-ordinates

Hello,

I am working with cv::solvePnP() and cv::projectPoints(). We are working with a fully calibrated camera with a known camera matrix and distortion coefficients.

Given a detected marker, it is possible to get the rvec and tvec for a given 3D model.

This has been done for two types of model: a board model and a ball model. We then get three sets of rvecs/tvecs, one for the board and two more for the balls, as shown below.

image description

How do we relate these? We can project the model points into the image using the result of solvePnP.

How do the rvecs and tvecs relate in this case? Is it possible to get the location of each ball on the board in terms of its x,y,z location relative to the board model?

image description

The board is shaped as (0,0), (1,0), (1,1), (0,1).

The balls are circles centered at the origin, with radius 0.1, which is in scale with the real-world objects.

Process so far....

  • Detect board corners.
  • Detect ball locations, and fit a circle.
  • Calculate rvecs and tvecs for board and the two balls.
  • Use projectPoints to project model into image.

Can we get more information on where the balls are on the board? The ultimate aim is collision detection/prediction once locations and velocity are determined.

Kind regards,

Daniel

2016-09-06 13:19:28 -0500 received badge  Popular Question (source)
2016-08-18 10:21:31 -0500 commented question BOWKMeansTrainer Max Images?

@berak. I will investigate in the morning. :)

2016-08-18 07:30:05 -0500 asked a question BOWKMeansTrainer Max Images?

Hello.

I am training BOWKMeansTrainer with a dataset of 30,000 images using AKAZE features and descriptors.

When I add the 3500th sample via a call to BOWTrainer.add(descriptor), I receive an error. My machine has 16 GB of RAM and over 9 GB is available at the time of the error. The project is VS2015 with an x64 configuration. The number of clusters was set to 1000 initially, then 400 and finally 16, all yielding the same result. Do I need more?

All cv::Mat created in the loop are released correctly.

What is the limit on the number of images that can be trained? Is there a way to use multiple BOW trainers if I had sub-classes of images?

Is this a limitation with debug versus release dlls for memory allocation?

OpenCV Error: Insufficient memory (Failed to allocate 883200 bytes) in cv::OutOfMemoryError, file C:\OpencvFull\opencv\modules\core\src\alloc.cpp, line 52 OpenCV Error: Assertion failed (u != 0) in cv::Mat::create, file C:\OpencvFull\opencv\modules\core\src\matrix.cpp, line 432

883200 bytes = 0.0008832 GB?? 9 GB available... Am I taking crazy pills here?

Regards,

2016-08-17 07:12:32 -0500 asked a question Using ifstream in VS2015

Hello,

I am calling a function from OpenCV that uses ifstream in C++; ifstream doesn't seem to work even when tried in isolation.

Ptr<ERFilter> er_filter1 = createERFilterNM1(loadClassifierNM1("trained_classifierNM1.xml"), 8, 0.00015f, 0.13f, 0.2f, true, 0.1f);

But I am getting:

OpenCV Error: Bad argument (Default classifier file not found!) in cv::text::ERClassifierNM1::ERClassifierNM1, file ~\opencv_contrib-3.1.0\modules\text\src\erfilter.cpp, line 1039

The offending line is.....

 if (ifstream(filename.c_str()))
 {
     // ... stuff works ...
 }
 else
 {
     // error!!
 }

Everything worked in VS2013, but I had to switch to VS2015.

I cannot get ifstream to find the file at all, even though the file is located in the .exe folder itself, in the project directory, everywhere really.

It's as if ifstream stopped working during the switch to VS2015.

https://github.com/opencv/opencv_cont...

I have also tried the full path and adding the files to the VS2015 project's resources folder.

2016-08-08 16:30:55 -0500 commented question Matrix Multiplication Values Not Correct

@LorenaGdL I've updated the numbers and I am still confused. Any clues?

2016-08-08 16:30:15 -0500 received badge  Associate Editor (source)
2016-08-08 10:41:19 -0500 commented question Matrix Multiplication Values Not Correct
2016-08-08 10:14:08 -0500 asked a question Matrix Multiplication Values Not Correct

Matrix multiplication.....

So I am converting some OpenCV code to OpenCV.js and I cannot get the correct matrix multiplication result.

In C++, which appears to work:

cv::Mat translation = -rotation * S;

Print-out of each matrix (via std::cout << mat << std::endl):

-rotation = [-0.49347955, -0.015945198; 0.015945198, -0.49347955]

S = [295.79922; 245.82167]

Print out of the result.

Translation [-149.89055; -116.59139]

Then verification via an online matrix multiplication tool gives a different answer, as does OpenCV.js using GEMM.

image description

Is the cv::Mat * (multiplication) operator the same as GEMM?

This is the OpenCV.js code I expected to give the same result, but the online matrix calculator gives yet another result.

cv.gemm(rotation, S, -1, emptyMat, 0, translation, 0);

I am missing something obvious, I think.

UPDATE: edit added the correct values.

image description

The answer is still incorrect... can somebody please tell me what is methodologically wrong with what I am doing here?

image description

Am I taking crazy pills here?

2016-07-14 06:11:59 -0500 answered a question Find Peaks in Histogram

You want local maxima.

The histogram is a mat, so you can get the value of each index.

How you choose to do this is up to you, but a simple approach is a sliding window over the previous value, the current value and the next value: if prev < current > next, then you have a peak.

That's a pretty crude approach, so you may want to smooth or normalize your values first.

2016-07-13 17:38:48 -0500 commented question Mock Camera Intrinsics

@Tetragramm Nope, it's an arbitrary image from an unknown camera. It should work for all images, i.e. even ones downloaded from the internet. What if it was a drawing from Photoshop? The camera intrinsics may not be available. Interesting problem, right? :)

2016-07-13 17:04:12 -0500 edited question Mock Camera Intrinsics

Hello,

I am following an amazing blog post to create a perspective transform for an arbitrary image.

https://jepsonsblog.blogspot.co.nz/20...

This works well for square images, but has a problem with the aspect ratio when it comes to rectangular images.

I suspect it is the camera intrinsics matrix that uses the field of view:

cv::Mat A2 = (cv::Mat_<double>(3, 4) <<
    f, 0, w / 2, 0,
    0, f, h / 2, 0,
    0, 0, 1, 0);

There are comments on the blog suggesting values of f around 200-250 when using this method.

The OpenCV documentation is a little more precise, stating that these are in fact fx and fy. Focal length can be found if the field of view is known, and vice versa.

fx, fy are the focal lengths expressed in pixel units.

What is the solution here?

Example image of the problem: look at how it is oddly stretched in the x axis, which becomes more severe as the rotation increases. It works fine for square images.

image description

2016-07-13 08:42:07 -0500 commented answer How to detect weather the circle is green or black...

Yup. Black should be near (0,0,0) and green should be near (0,255,0). There are other tricks, but that's the essential formula. To get good contours, you should convert to grayscale and equalizeHist before you run findContours.

2016-07-13 08:39:31 -0500 commented answer Where is opencv_core310.dll ?

Have you tried building from source?

2016-07-13 08:39:21 -0500 commented answer Where is opencv_core310.dll ?

Hmmm. It seems like it should be there... http://docs.opencv.org/3.1.0/dc/d88/t...

2016-07-13 08:36:06 -0500 received badge  Commentator
2016-07-13 08:36:06 -0500 commented answer How to extract specific feature in pattern recognition algorithm in image processing for object detection?

Remove the features found in common airline logos, i.e. have a database of logos and reject any features that score highly when matched against those reference features.

2016-07-13 08:35:15 -0500 commented answer How to extract specific feature in pattern recognition algorithm in image processing for object detection?

Also, to answer your question: deep learning could probably do this task, but it is complete overkill in terms of preparation and output.

2016-07-13 08:34:22 -0500 answered a question How to extract specific feature in pattern recognition algorithm in image processing for object detection?

Hello.

I saw the other post.

You should edit your question and link it so that people can see the images associated with this question.

My personal opinion is that you are trying to be too bold. You want to detect the door in one step. Sure, this would be simple if the door was the only object in the scene, but as you've experienced, applied computer vision is never so clean and simple.

You have to do two things. Locate the parts of the image that you are interested in, and remove parts of the image that you are not interested in.

Instead of a one-step detector, you may want to detect the fuselage or the cockpit window to begin with. This would give you a rough estimate of where the door should be located, and where there may be artwork or details.

Then feature analysis or a cascade classifier may be useful at this point to narrow down and positively detect the door.

http://coding-robin.de/2013/07/22/tra...

2016-07-13 08:28:41 -0500 answered a question Problems using createsamples function for lbp cascade classifier

Yeah. You need to change to fopen_s, as VS2015 treats fopen as unsafe.

The parameters of these functions are similar:

stream = fopen(path, mode)

becomes

fopen_s(&stream, path, mode)

It's more of a C++ problem than an OpenCV one.

If you build OpenCV it should create a utility for generating samples.

Here's a good tutorial....

http://coding-robin.de/2013/07/22/tra...