
Barry's profile - activity

2020-03-23 12:54:03 -0600 received badge  Popular Question (source)
2019-07-01 14:39:25 -0600 received badge  Popular Question (source)
2019-05-15 01:07:27 -0600 received badge  Famous Question (source)
2018-10-20 08:47:14 -0600 received badge  Notable Question (source)
2017-10-16 00:44:15 -0600 received badge  Popular Question (source)
2016-10-19 00:35:49 -0600 received badge  Notable Question (source)
2016-06-21 03:57:05 -0600 received badge  Popular Question (source)
2015-05-13 20:58:32 -0600 asked a question Android C++ Speedup

Hello, I have some code that uses template matching, the simple blob detector, and some Mat operations - no camera or video.

I originally wrote the code in C++ to run on a PC. Everything works satisfactorily and it's amazingly fast on a PC.

I ported the code to Java for Android. It runs satisfactorily, but is much slower. I'm sure a lot of this has to do with the CPU differences between a PC and an Android device.

I'm wondering if it would be worth the effort to try implementing the code for Android using C++ and the NDK rather than Java. Could I expect to see a significant speed up with a C++ implementation versus Java?
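To make the question concrete, the kind of native call I'd be moving to would look something like this minimal JNI sketch (the package, class, and method names are made up for illustration):

#include <jni.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Hypothetical JNI entry point. On the Java side it would be declared as
//   public static native void nativeMatch(long imgAddr, long tmplAddr, long resultAddr);
// with the Mats passed in via Mat.getNativeObjAddr().
extern "C" JNIEXPORT void JNICALL
Java_com_example_app_NativeOps_nativeMatch(JNIEnv*, jclass,
    jlong imgAddr, jlong tmplAddr, jlong resultAddr)
{
    cv::Mat& img    = *(cv::Mat*) imgAddr;      // no copy, just the native Mat
    cv::Mat& tmpl   = *(cv::Mat*) tmplAddr;
    cv::Mat& result = *(cv::Mat*) resultAddr;
    cv::matchTemplate(img, tmpl, result, CV_TM_CCOEFF_NORMED);
}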

I am also wondering about the function of the OpenCV Manager that one must install on Android.

Does the Manager just handle things like the camera and keyboard interfaces on Android, or does it also execute the code for some of the basic operations like SimpleBlobDetector.detect and matchTemplate?

If the Manager executes that code, is it optimized the way a C++ build would be, compared to Java?

Thanks, Barry.

2015-05-12 18:42:06 -0600 commented question Simple Blob Detector Params Meaning

Thanks. Nice. @theodore, that is exactly the type of thing I am looking for. I did not see any mention of the "minRepeatability" parameter. Any idea what this does? I see it set to "2" in many examples.

2015-05-12 12:42:36 -0600 asked a question Simple Blob Detector Params Meaning

Hello, I'm using the Simple Blob Detector in Java on Android. I am transferring an algorithm that I developed in C++. Once I found out how to set parameters in the Java version, the results are comparable. But, I have not found a resource that describes the function of the parameters.

To set parameters in Java, I found (by accident): https://code.google.com/p/dronelander...

which uses: blobDetector.read("blobparams");

The Blob Detector parameters that are available:

float thresholdStep;
float minThreshold;
float maxThreshold;
size_t minRepeatability;
float minDistBetweenBlobs;
bool filterByColor;
uchar blobColor;
bool filterByArea;
float minArea, maxArea;
bool filterByCircularity;
float minCircularity, maxCircularity;
bool filterByInertia;
float minInertiaRatio, maxInertiaRatio;
bool filterByConvexity;
float minConvexity, maxConvexity;

Is there any documentation about what these parameters mean?

I'm most interested in the first four.
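For context, here's how I set the first four in my C++ version. The comments reflect my own reading of the 2.4 detector source, not official documentation, and the values are just examples:

#include <opencv2/features2d/features2d.hpp>
#include <vector>

// As I understand it, the detector thresholds the image from minThreshold up
// to maxThreshold in steps of thresholdStep, extracts blob candidates at each
// level, groups nearby centers, and keeps a blob only if its center appears
// in at least minRepeatability of the thresholded levels.
std::vector<cv::KeyPoint> detectBlobs(const cv::Mat& grayImage)
{
    cv::SimpleBlobDetector::Params params;
    params.minThreshold     = 40;    // first threshold level
    params.maxThreshold     = 220;   // last threshold level
    params.thresholdStep    = 10;    // spacing between levels
    params.minRepeatability = 2;     // blob must survive at least 2 levels

    cv::SimpleBlobDetector detector(params);
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(grayImage, keypoints);
    return keypoints;
}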

Thanks, Barry.

2015-04-24 17:47:14 -0600 commented question Locate peaks in grayscale image

Thanks for your reply. I find that using the SimpleBlobDetector pretty much does what I want if I set the MinArea and MaxArea parameters. Before I set these parameters I was missing a few of the lower value peaks. They were not being detected as blobs. Another way to look at what I am trying to do – picture a grayscale image as a two-dimensional mountain range with the gray intensity shown as mountain peaks popping up. I would like to lower a large pane of glass until it touches one of the peaks, mark this point, get rid of the mountain beneath it, then continue lowering the glass until the next peak is touched. Mark this point, get rid of the mountain below it, and repeat until all the peaks are detected.
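In code, the lowering-glass loop I have in mind would be roughly this (the stop value and erase radius are placeholders; the input is the single-channel match-result image):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Repeatedly take the global maximum, record it, then erase the "mountain"
// beneath it before looking for the next peak.
std::vector<cv::Point> findPeaks(const cv::Mat& response, double stopValue, int eraseRadius)
{
    cv::Mat work = response.clone();
    std::vector<cv::Point> peaks;
    for (;;)
    {
        double maxVal;
        cv::Point maxLoc;
        cv::minMaxLoc(work, 0, &maxVal, 0, &maxLoc);
        if (maxVal < stopValue)
            break;                                               // no peaks left
        peaks.push_back(maxLoc);
        cv::circle(work, maxLoc, eraseRadius, cv::Scalar(0), -1);  // flatten it
    }
    return peaks;
}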

2015-04-23 22:59:03 -0600 asked a question Locate peaks in grayscale image

Hello, I have a grayscale 2-D image that is produced as the result of a template match. It contains about a dozen maximum points (one at each location where the template matched). I would like to identify each maximum point and include the area around that point down to about 80% of the maximum point's value. Outside these regions I will set the values to zero.

What I will end up with is a bunch of irregularly shaped grayscale dots within a black field. Each dot's center point is a local maximum, and the actual value will be different for each dot.

The first approach would be to use a threshold to zero, but the value at the maximum points could be 255, 133, or even something as low as 51. An arbitrary threshold might get rid of low-value, but important, maxima.

Can anyone suggest a way of accomplishing this or point me in the direction I should be looking?

Thanks, Barry.

2015-04-03 19:35:13 -0600 commented question Template Matching using Color

I think I found a good way to do it. Use three bands (upper, middle, bottom) within the template and the matched area of the test image, compute the histograms for each band, and compare them between the template and test images. There is even a really nice tutorial, "Histogram Comparison", http://docs.opencv.org/doc/tutorials/... which gets me 80% there.
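A sketch of what I mean, pieced together from that tutorial (the hue-only histogram and the three-band split are my own choices, not from the tutorial):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Compare the template and the matched region band by band and average the
// histogram correlation scores; both inputs must be the same size.
double bandSimilarity(const cv::Mat& tmplBgr, const cv::Mat& matchBgr)
{
    CV_Assert(tmplBgr.size() == matchBgr.size());
    int channels[] = {0};              // hue only
    int histSize[] = {30};
    float hueRange[] = {0, 180};
    const float* ranges[] = {hueRange};

    double score = 0;
    for (int b = 0; b < 3; b++)        // upper, middle, bottom bands
    {
        cv::Rect band(0, b * tmplBgr.rows / 3, tmplBgr.cols, tmplBgr.rows / 3);
        cv::Mat hsvT, hsvM, histT, histM;
        cv::cvtColor(tmplBgr(band), hsvT, CV_BGR2HSV);
        cv::cvtColor(matchBgr(band), hsvM, CV_BGR2HSV);
        cv::calcHist(&hsvT, 1, channels, cv::Mat(), histT, 1, histSize, ranges);
        cv::calcHist(&hsvM, 1, channels, cv::Mat(), histM, 1, histSize, ranges);
        cv::normalize(histT, histT);
        cv::normalize(histM, histM);
        score += cv::compareHist(histT, histM, CV_COMP_CORREL);
    }
    return score / 3;                  // average over the three bands
}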

2015-04-02 14:46:42 -0600 asked a question Template Matching using Color

Hello, My application is to find and count boxes on a shelf. As a test I'm using Apple and Fruit Punch Minute Maid juice boxes. The boxes look almost identical except for the color – Apple is green and Fruit Punch is red. When converted to grayscale, the intensities of the red and green are almost the same. Understandably the template matching gets confused, although otherwise I am absolutely amazed by its performance.

I have tried template matching using color images (the docs for 2.4.11 say all three color channels are used) and grayscale images. The performance with color is a little better, but it still confuses one box for the other.

Is there a standard way to introduce the actual color values into the template matching algorithm? I'm thinking of something like sampling the color at half a dozen locations in the template and at the same locations in the identified match, then somehow computing whether they are close.
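Concretely, the kind of check I'm imagining (the sample points and the distance threshold are made-up values):

#include <opencv2/core/core.hpp>

// Compare the BGR color at a few fixed offsets in the template against the
// same offsets in the matched region of the scene image.
bool colorsAgree(const cv::Mat& templBgr, const cv::Mat& sceneBgr, cv::Point matchTopLeft)
{
    const cv::Point samples[] = { cv::Point(10, 10), cv::Point(30, 20), cv::Point(20, 40) };
    for (int i = 0; i < 3; i++)
    {
        cv::Vec3i t = templBgr.at<cv::Vec3b>(samples[i]);
        cv::Vec3i m = sceneBgr.at<cv::Vec3b>(matchTopLeft + samples[i]);
        if (cv::norm(t - m) > 60)      // Euclidean distance in BGR space
            return false;
    }
    return true;
}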

Can anyone offer suggestions?

Thanks, Barry.

2015-03-28 13:50:43 -0600 commented question traincascade - train for detecting logo

Thanks for the reply. I tried feature detection using car logos instead of cereal boxes. See this post: http://stackoverflow.com/questions/29...

I ended up with many feature matches. There are more matches for the desired objects, but I'm not sure how to cluster those while ignoring the undesired matches. Someone suggested cascade detection.

I've also tried template matching and had the best results with this, although I'm not sure how it would respond with very slight scaling, rotation, and skew differences.

2015-03-27 18:32:16 -0600 asked a question traincascade - train for detecting logo

Hello,

Quoting from the reply to my earlier traincascade question:

"Your problem is pretty simple, basically your training says that it can no longer improve your cascade with the current training samples and settings beyond stage 3. ... With only 5 positive samples [it] will never work decently."

Thanks for your reply.

My ultimate application is to detect a cereal box on a shelf among other different cereal boxes.

In the documentation, I don't remember where, I've seen discussion of training for logos. I think this is basically what I'm doing.

What I will probably do is generate 50 or so copies of the cereal box, rotated and scaled slightly, overlaid on random blurry backgrounds of mosaic-like colors. My background images will consist of probably around 100 of these mosaic-colored images.

These positive and negative images will all be the same size.
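If it helps to make this concrete, I'm assuming the stock opencv_createsamples tool can generate the positives for me, something like the following (the file names, counts, and angles are guesses on my part):

opencv_createsamples -img cereal_box.png -bg backgrounds.txt -vec samples.vec
  -num 50 -maxxangle 0.2 -maxyangle 0.2 -maxzangle 0.2 -w 24 -h 24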

I have almost no experience with this – I have no idea if this will work.

Can someone offer some pointers? Does this seem like a good approach? How would you approach the detection of a cereal box on the shelf?

Thanks for any insights, Barry.

2015-03-23 18:45:59 -0600 asked a question traincascade - Train dataset for temp stage can not ...

Hello, I'm going nuts trying to get opencv_traincascade to work. It always stops with the error:

Train dataset for temp stage can not be filled. Branch training terminated

I'm using OpenCV 2.4.11 on a Windows 7 64-bit machine. I've used the distribution exe from the x64 VC12 directory. I've also recompiled from source code using Visual Studio 2012. I get the same results either way.

The command line I'm using is below. Test1 is my compiled version of traincascade.

test1 -data cscd -vec out/samples.vec -bg bgt.txt -numPos 5 -numNeg 1200 -numStages 10
  -featureType HAAR -minHitRate 0.999 -maxFalseAlarmRate 0.5 -w 24 -h 24
  -precalcValBufSize 2048 -precalcIdxBufSize 2048

The output I receive is:

PARAMETERS:
cascadeDirName: cscd
vecFileName: out/samples.vec
bgFileName: bgt.txt
numPos: 5
numNeg: 1200
numStages: 10
precalcValBufSize[Mb] : 2048
precalcIdxBufSize[Mb] : 2048
stageType: BOOST
featureType: HAAR
sampleWidth: 24
sampleHeight: 24
boostType: GAB
minHitRate: 0.999
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100
mode: BASIC

Stages 0-2 are loaded

===== TRAINING 3-stage =====
<BEGIN
POS count : consumed   5 : 21
310: numNeg 1200, posCount 5, numPos 5, proNumNeg 1200, negConsumed 1024, negCount 0
Train dataset for temp stage can not be filled. Branch training terminated

Note that I added a printf statement that begins with "310:" corresponding to line 310 in the cascadeclassifier.cpp file. Line 310 falls within the following subroutine:

bool CvCascadeClassifier::updateTrainingSet( double minimumAcceptanceRatio, double& acceptanceRatio)

Can anyone see what I'm doing wrong?

Is there an example available – a zip file that contains positive images, negative images, the descriptor files, the command line to run, basically all the stuff just to verify that it works?

Thanks, Barry.

-- UPDATE --

I found a great example at:

http://abhishek4273.com/2014/03/16/tr...

TRAINCASCADE AND CAR DETECTION USING OPENCV by ABHISHEK KUMAR ANNAMRAJU

He works through a simple problem from start to finish, providing the sample images, the createsamples and traincascade command lines, and the info files.

He seems to be using Linux/Unix. I use Windows 7, where the Unix "find" command is not available. I entered dir /B > cars.info from a command window and then edited the output file to add the directory prefix and a "1 0 0 100 40" suffix to each line.
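For reference, each line of the edited cars.info ended up in the standard info-file format of path, object count, and bounding box (the path shown is illustrative):

cars/image01.jpg 1 0 0 100 40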

Everything worked and I got the output XML file.

2015-03-20 11:31:26 -0600 commented question identify, count items

Thanks. By chance, in my searching, I found cascade classifiers late last night. I was going to try to train for each individual box and have the classifier identify each box. I like your method of training for all boxes and then using feature detection on the region to identify the box. I am very new to this, so I don't have the experience to know which method will work better. I did not think of your approach.

2015-03-19 10:54:52 -0600 asked a question identify, count items

Hello, I've been exercising the tutorials and examples, mainly for feature detection and extraction.

I'm using C++, Visual Studio 2012, and OpenCV 2.4.11.

I've gotten to the point where I can train on a template, a box of cereal for instance, then hold that item in front of a webcam and have a cluster of feature lines drawn between the video image and the trained image – you've seen the tutorial and example.

But what I haven't seen, so far, is the next step. How do you use this cluster of feature matches between the test and train images to determine that "Oh yeah, it's a box of corn flakes" (or maybe I have two or three boxes)?

I have a feeling it might involve inliers and outliers, but I haven't found the right stuff yet.
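To make that concrete, here is the kind of inlier test I imagine, based on the homography examples I've seen (the inlier threshold of 15 is a guess on my part):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Fit a homography to the matched keypoints with RANSAC and count how many
// matches agree with it; a high inlier count suggests the trained object
// really is in the scene.
bool objectPresent(const std::vector<cv::KeyPoint>& trainKp,
                   const std::vector<cv::KeyPoint>& queryKp,
                   const std::vector<cv::DMatch>& matches)
{
    if (matches.size() < 4)
        return false;                         // a homography needs 4+ points
    std::vector<cv::Point2f> trainPts, queryPts;
    for (size_t i = 0; i < matches.size(); i++)
    {
        trainPts.push_back(trainKp[matches[i].trainIdx].pt);
        queryPts.push_back(queryKp[matches[i].queryIdx].pt);
    }
    std::vector<uchar> inlierMask;
    cv::Mat H = cv::findHomography(trainPts, queryPts, CV_RANSAC, 3, inlierMask);
    return !H.empty() && cv::countNonZero(inlierMask) > 15;
}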

My ultimate problem will be to detect, identify, and count boxes of cereal on a shelf. Imagine there are two boxes of Cheerios, one box cornflakes, and three boxes of Froot Loops. I want to identify and count the items.

Can someone point me in the right direction – maybe to an example or the OpenCV functions I should be looking at?

Thanks, Barry.

2015-03-17 18:54:43 -0600 commented question ORB example fails

Thanks for your response. Yes, I was thinking that depth for a Mat was the same as size for a vector. I am very new to C++, I'll have to dig through some docs. I upgraded from OpenCV 2.4.10 to 2.4.11. The example now works as expected. Thanks.
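For anyone else confused the same way, a quick check like this (on the train_desc Mat from my question) would have set me straight:

// depth() reports the element type constant, not a count: CV_8U is defined
// as 0, so a depth of 0 is expected for ORB's 8-bit descriptors.
cout << (train_desc.depth() == CV_8U) << endl;  // prints 1
cout << train_desc.rows << endl;                // the number of descriptors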

2015-03-15 19:54:24 -0600 asked a question ORB example fails

Hello, I am working through the examples in chapter 8 of the book Practical OpenCV by Samarth Brahmbhatt.

I'm running example 8.4, copied from their included source code:

vector<KeyPoint> train_kp;
Mat train_desc;

OrbFeatureDetector featureDetector;
featureDetector.detect(train_g, train_kp);

cout << "Key Point size" << train_kp.size() << endl;

OrbDescriptorExtractor featureExtractor;
featureExtractor.compute(train_g, train_kp, train_desc);

cout << "Descriptor depth " << train_desc.depth() << endl;

flann::Index flannIndex(train_desc, flann::LshIndexParams(12, 20, 2), cvflann::FLANN_DIST_HAMMING);

For train_kp.size(), I get 496. For train_desc.depth(), I get 0.

When executing, I get an exception at the line beginning flann::Index.

I'm using their supplied image. I'm running on Windows compiling with Visual Studio 2012, while I believe they wrote and tested their code running on Linux. I have successfully run previous examples in the book.

I'm guessing that the descriptor depth of 0 is at the heart of the problem and causes the exception at the flann::Index line.

Does anyone know why I'm getting this error? Is the problem, in fact, the depth of 0?

Thanks, Barry.

2015-03-08 16:42:10 -0600 received badge  Enthusiast
2015-03-06 16:36:21 -0600 received badge  Supporter (source)
2015-03-06 16:35:51 -0600 commented answer Image matching/warping/alignment

I hope 3.0 comes to Android soon. I found the following very good: http://www.codeproject.com/Articles/2... I hope I can convert this to Android from Windows C++ (it uses the old opencv headers, not opencv2).

2015-03-06 16:30:28 -0600 received badge  Scholar (source)
2015-03-05 04:39:23 -0600 received badge  Student (source)
2015-03-02 21:32:20 -0600 received badge  Editor (source)
2015-03-02 21:12:23 -0600 asked a question Image matching/warping/alignment

Hello, I'm not even sure what question to ask…

I have two images taken by a handheld camera, maybe a couple of days apart. Think of images of a painting on a wall. The images will be taken from the same position and distance, as close as humanly possible. I will use the first image as the template. I would like to adjust the second image (resize it, shift it a few pixels left, right, up, or down, rotate it a few degrees, or warp it a little) to match the first image as closely as possible. I will then take the difference of the two images to see if anything major has changed.

I am sure this type of thing has been done many times before. Can someone point me in the right direction toward what I should be looking for? Maybe there is an existing solution for this? What is this type of image processing properly called? In my searching I've come across alignment, matching, homography ... but nothing seems to be quite what I have in mind.
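For what it's worth, here is a sketch of the adjust-then-difference idea using feature matches and a homography; I don't know yet whether this is the right tool, and the choice of ORB is just a placeholder:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Match features between the two images, fit a homography with RANSAC,
// warp image 2 onto image 1, then take the difference.
cv::Mat alignAndDiff(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::Mat g1, g2;
    cv::cvtColor(img1, g1, CV_BGR2GRAY);
    cv::cvtColor(img2, g2, CV_BGR2GRAY);

    cv::ORB orb;
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat d1, d2;
    orb(g1, cv::Mat(), kp1, d1);                     // detect and describe
    orb(g2, cv::Mat(), kp2, d2);

    cv::BFMatcher matcher(cv::NORM_HAMMING, true);   // cross-checked matches
    std::vector<cv::DMatch> matches;
    matcher.match(d2, d1, matches);

    std::vector<cv::Point2f> p2, p1;
    for (size_t i = 0; i < matches.size(); i++)
    {
        p2.push_back(kp2[matches[i].queryIdx].pt);
        p1.push_back(kp1[matches[i].trainIdx].pt);
    }

    cv::Mat H = cv::findHomography(p2, p1, CV_RANSAC);
    cv::Mat warped, diff;
    cv::warpPerspective(img2, warped, H, img1.size());
    cv::absdiff(img1, warped, diff);
    return diff;
}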

Thanks, Barry.

2015-01-01 16:03:13 -0600 asked a question absdiff gives zero alpha

Hello,

I am using Core.absdiff(src1, src2, dst) in OpenCV for Android to compute an image difference.

Both source images originate from Android. I write both source and destination images to files, transfer to a PC, and open in Paint.Net.

The alpha channels for both source images are 255, but the alpha channel for the destination image is zero. My guess is that the absdiff is subtracting the alpha channel of both source images (255-255=0) making the alpha channel of the destination image zero – as it should.

I would like to preserve the alpha of 255. My solution is to zero the alpha channel of the src2 image before the absdiff. I'm thinking of using Core.bitwise_and(src2, in2, out1), where in2 is a Scalar.

For the bitwise_and operation I'm not quite sure what values to set for the in2 scalar. Scalar(1, 1, 1, 0) seems like a natural choice, but since the operation is a bitwise AND, that probably would not do what I want.
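Sketched in C++ terms (the Java calls are analogous), what I'm considering looks like this; whether 255 is the right mask value is exactly what I'm unsure about:

#include <opencv2/core/core.hpp>

// AND src2 with a mask that keeps B, G, R intact (all bits set) and zeros
// the alpha channel, so absdiff produces |255 - 0| = 255 for alpha.
// Scalar(1, 1, 1, 0) would instead keep only the lowest bit of each color.
void diffKeepAlpha(const cv::Mat& src1, cv::Mat& src2, cv::Mat& dst)
{
    cv::Mat mask(src2.size(), src2.type(), cv::Scalar(255, 255, 255, 0));
    cv::bitwise_and(src2, mask, src2);
    cv::absdiff(src1, src2, dst);
}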

Is there a better way to approach this image difference problem?

If this is a good approach, what values would I need for the four-element scalar to zero the fourth channel of my second input image?

Thanks, Barry.