
ZachTM's profile - activity

2020-05-03 01:32:22 -0600 received badge  Famous Question (source)
2019-04-12 00:38:14 -0600 received badge  Popular Question (source)
2019-03-11 23:00:49 -0600 received badge  Nice Question (source)
2016-11-14 09:05:17 -0600 received badge  Famous Question (source)
2016-06-12 04:20:24 -0600 received badge  Good Question (source)
2016-05-16 05:38:16 -0600 received badge  Famous Question (source)
2016-02-04 06:56:58 -0600 received badge  Notable Question (source)
2015-12-30 14:06:27 -0600 received badge  Taxonomist
2015-04-20 09:46:58 -0600 received badge  Notable Question (source)
2015-04-08 00:57:25 -0600 received badge  Famous Question (source)
2015-04-02 11:09:03 -0600 received badge  Good Question (source)
2015-04-02 11:08:57 -0600 received badge  Guru (source)
2015-04-02 11:08:57 -0600 received badge  Great Answer (source)
2015-01-17 09:25:59 -0600 received badge  Popular Question (source)
2014-12-09 13:51:32 -0600 marked best answer Why is my call to putText creating a wavy image of lines?

I have the following code trying to display the letter A on a window:

Mat letter = Mat(80,60,CV_8UC1);
putText(letter,"A",Point(0,0),CV_FONT_HERSHEY_PLAIN,8,Scalar(255,255,255));
namedWindow("Display",CV_WINDOW_NORMAL);
imshow("Display",letter);
imwrite("A.jpg",letter);
cvWaitKey(0);

That outputs this: image description

I do not understand what is going wrong; can anyone tell me? I would really appreciate it!

2014-12-09 13:46:45 -0600 marked best answer What is the best way to detect lines or corners from possibly wavy handwriting?

I didn't know how best to word my title, but I will try to explain it here. It is pretty simple: for example, if I have a hand-drawn square root symbol, for the most part it is made up of 3 lines.

If someone uses a ruler, I could easily extract 3 lines from the image, but the problem is that sometimes the lines will be quite wavy, like the above, though that is a more exaggerated case. Does anyone know of a way I can get lines or even corners from something like this, or should I just use my character classifier for these types of symbols too? I was able to use Hough lines to detect a dash without a classifier by just comparing the horizontal and vertical lines, but this is a bit more complex (as far as I know).

Thanks for any suggestions; I know it's maybe an unusual question.

Edit: In case this is of interest to anyone else: using Mostafa Stataki's answer, I was able to achieve the following result with an extremely wavy square root symbol and a very high epsilon of 50 in approxPolyDP:

image description

I couldn't have imagined a better result!

2014-12-09 13:46:28 -0600 marked best answer Are 4 or more dimensions allowed in a Mat?

Hi guys, I have been using OpenCV for a while now, but I now need to make matrices of 4 or 5 dimensions. I can create a matrix like:

int sz[] = {1,5,5,16,16};
Mat x(5, sz, CV_32F, Scalar::all(0));

but x.at<float>(1,1,1,1,1) is invalid. Are OpenCV Mat objects not able to handle anything greater than 3 dimensions? Or is there another way of accessing the elements? Thanks for any help!

2014-12-09 13:45:13 -0600 marked best answer Is this working code for Convolutional Neural Networks in the OpenCV source?

Hello, I have been trying to figure out how to use convolutional neural networks over the past few days, and I am having some trouble because of the limited documentation I have found on the internet. I was sure OpenCV didn't have any classes that handled convolutional neural networks, so I was trying to research them myself. Today I was looking through the OpenCV source code, and in modules/ml/src there is a file called cnn.cpp. To my surprise, the code looks like it is for convolutional neural networks (that, and the comments in it say it is!). From what I can see there is no documentation anywhere on this mysterious code, so I was wondering if anyone knew whether this is a working convolutional neural network implementation. If so, does it also handle backpropagation training? I have seen an OpenCV implementation of CNNs before, but it did not allow you to train the network. I am really interested in using a CNN, so I would be willing to go through all the code myself to make sense of it even without documentation; I just want to know whether this code works. Thank you for any answers!

2014-12-09 13:44:37 -0600 marked best answer How do the rho and theta values work in HoughLines?

I have found some source code that finds lines in an image like I want, and it uses the following HoughLines call:

HoughLines( edges, lines, 1, CV_PI/180, 50, 0, 0 );

What I want to do is add the top, left, right, and bottom borders into the lines vector after HoughLines. From what I read in the documentation:

lines – The output vector of lines. Each line is represented by a two-element vector (rho, theta) . rho is the distance from the coordinate origin (0,0) (top-left corner of the image) and theta is the line rotation angle in radians

lines uses the rho and theta values to represent lines, and the 1 and CV_PI/180 arguments are also called rho and theta. So I did some research on this algorithm and found this diagram: image description

This looks like a good explanation of what I'm trying to understand, but I still can't wrap my head around how to add the borders using the appropriate rho and theta values. Can someone explain this a little more so that I can possibly understand it? I would really appreciate it! Thanks.

2014-12-09 13:44:15 -0600 marked best answer How can I extract writing on paper when part of the paper is in more shadow than the other?

Hi guys, I don't know how to phrase this better, so I'll just give you an example: image description

I have gotten this far from an extremely low-quality picture, so I am amazed. The things I need to extract are the dark letters, which you can clearly see. I could use something like edge detection to pick these out, but in the top-left corner there is a lot of shadow. You can still clearly see the letter A, but something like a binary threshold won't work when the lighting is uneven. Does anyone know how I can reduce the effect of lighting on an image like this so I can evenly extract the text?

Thanks for ANY suggestions!

2014-12-09 13:44:13 -0600 marked best answer Why do I get an error from imread when running my program in Eclipse CDT, but not when running from the terminal?

Hi guys, I have had this problem for a while, but I finally want to figure out how to fix it. In Eclipse I have this code:

int main(int argc, char** argv){
    Mat image = imread( argv[1], 0);
    namedWindow("ImageBefore",CV_WINDOW_NORMAL);
    imshow("ImageBefore",image);
}

If I specify argv[1] in Eclipse to be "filename.jpg", I get the following error:

OpenCV Error: Bad flag (parameter or structure field) (Unrecognized or unsupported array type) in cvGetMat, file /home/refinedcode/Development/OpenCV-2.4.2/modules/core/src/array.cpp, line 2482
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/refinedcode/Development/OpenCV-2.4.2/modules/core/src/array.cpp:2482: error: (-206) Unrecognized or unsupported array type in function cvGetMat

But if I run the executable from the terminal with

./ProgramName filename.jpg

It runs fine. I can change the code to:

int main(int argc, char** argv){
    const string filename = "filename.jpg";
    Mat image = imread(filename, 0);
    namedWindow("ImageBefore",CV_WINDOW_NORMAL);
    imshow("ImageBefore",image);
}

Yet I get the exact same error. The file is in my Debug folder, the same folder as the compiled program, but even if I write in the full path it still does not work.

I would really appreciate it if anyone could give me insight into why this is happening.

Thanks, Zach

2014-12-09 13:44:10 -0600 marked best answer Using Mat.at<>() is giving me numbers much larger than 255.

Hi guys, I am trying to get the value of a single-channel (grayscale) matrix. When I output it I get a bunch of values like this:

 [26, 189, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 55, 0;
  70, 255, 18, 0, 0, 0, 0, 0, 0, 0, 0, 53, 212, 170, 0;
  79, 249, 6, 0, 0, 0, 0, 0, 0, 37, 176, 227, 101, 1, 0;
  99, 237, 0, 0, 0, 0, 0, 1, 106, 236, 121, 8, 0, 0, 0;
  125, 213, 0, 0, 0, 0, 28, 181, 206, 42, 0, 0, 0, 0, 0;
  150, 194, 0, 0, 10, 122, 235, 134, 7, 0, 0, 0, 0, 0, 0;
  150, 193, 0, 49, 217, 173, 38, 0, 0, 0, 0, 0, 0, 0, 0;
  169, 170, 92, 241, 111, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
  191, 252, 204, 55, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
  202, 255, 169, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
  202, 158, 176, 215, 63, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
  203, 124, 0, 93, 242, 105, 0, 0, 0, 0, 0, 0, 0, 0, 0;
  202, 141, 0, 0, 52, 225, 150, 3, 0, 0, 0, 0, 0, 0, 0;
  202, 141, 0, 0, 0, 24, 213, 177, 22, 0, 0, 0, 0, 0, 0;
  202, 141, 0, 0, 0, 0, 10, 153, 222, 32, 0, 0, 0, 0, 0;
  181, 156, 0, 0, 0, 0, 0, 1, 135, 231, 77, 0, 0, 0, 0;
  176, 167, 0, 0, 0, 0, 0, 0, 0, 76, 225, 161, 7, 0, 0;
  159, 167, 0, 0, 0, 0, 0, 0, 0, 0, 19, 195, 185, 5, 0;
  196, 137, 0, 0, 0, 0, 0, 0, 0, 0, 0, 17, 208, 180, 28;
  12, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 141, 121]

Here is how I declare the matrix:

Mat downSize = Mat(20,15,CV_32F,Scalar(0));

The matrix gets filled up with non-zero values and still displays properly, as I have shown above. Now I am trying to get the value of a pixel, so I use Mat::at, but it's giving me huge numbers, really small numbers, and everything in between. Isn't CV_32F a float? I was using float, but when that didn't work I tried every other type (unsigned short, signed short, ..., double) and I still get these outrageous values. Does anyone know what my problem is?

Edit: The image values get set by:

resize(image,downSize,downSize.size(),0,0,INTER_AREA);

image is a larger grayscale image. The above grid of values is what gets output after this resize, with:

cout << downSize << endl;

By the way, I am also getting extremely small values, or values like -2.14381e+09.

2014-12-09 13:44:00 -0600 marked best answer Why doesn't Qt get detected when I want to build OpenCV?

I am trying to build OpenCV with Qt, so I installed libqt4-dev, but when I run cmake I get this line:

 QT 4.x:                      NO

I ran ldconfig, so I do not know why it isn't being detected. Does anyone know?
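A configuration sketch worth trying (assuming an Ubuntu-style system; CMake caches its detection results, so reconfiguring from a clean build directory is often what actually fixes this):

```shell
# Make sure the Qt 4 development headers are installed, then
# reconfigure from a clean build directory so CMake re-detects Qt.
sudo apt-get install libqt4-dev
rm -rf build && mkdir build && cd build
cmake -D WITH_QT=ON ..
```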

2014-12-09 13:43:23 -0600 marked best answer How can I extract handwritten text from lined paper without the noise caused by the lines to use in a text detection algorithm?

I have been using just a test piece of paper while learning OpenCV. So far I have taken an image like this: image description

I identify the corners of the page and perform a perspective transform, then subtract a mean-shift-filtered version of the image from the original: image description

This gives me a white page with very little shadow: image description

This fixed any problems I was having with adaptive threshold. The paper can be any size and in any lighting, so I think this step really cleaned up a lot of the problems I was having.

The big problem I have now is that sometimes images will have lines in them, which can cause a lot of noise, as shown: image description

I think I can get rid of all the other noise easily, but when a lot of the letters on the page are very small (in between the lines on the page) it really makes it hard to separate everything. In an ideal world I would like to have a blank background with all of the written letters and symbols on it, and none of the noise and lines. I do not know if this is possible, but if anyone has an idea on how I can get closer to that, I would be extremely grateful. The letters will always have whitespace separating them from one another, so a complex text recognition algorithm would not be needed in this case if I manage to get all the noise gone. Thanks for your time!

2014-11-05 06:00:02 -0600 received badge  Notable Question (source)
2014-10-01 18:28:40 -0600 received badge  Notable Question (source)
2014-08-14 17:46:43 -0600 received badge  Popular Question (source)
2014-04-08 06:28:04 -0600 received badge  Famous Question (source)
2014-02-11 08:33:37 -0600 received badge  Popular Question (source)
2014-02-05 17:06:33 -0600 received badge  Notable Question (source)
2014-01-14 13:06:09 -0600 received badge  Popular Question (source)
2014-01-06 00:29:13 -0600 marked best answer What is a good thinning algorithm for getting the "skeleton" of characters for OCR?

Hi guys, I have a few thousand training examples for my neural network that look like:

image description

The thickness does vary in my training set. The accuracy of the neural network on the test set isn't bad, at around 97%, but I have problems when the characters are very small with a high thickness. I want to normalize the characters to a standard thickness, if possible, using a thinning algorithm. I have found many papers that talk about them but never explain in detail how they work. I was wondering if anyone knew a nice way to do this in OpenCV? I would be very grateful! Thanks.

2014-01-06 00:28:15 -0600 received badge  Favorite Question (source)
2013-09-21 09:23:01 -0600 marked best answer Unresolved inclusions in OpenCV Android tutorial 4.

Hi guys, I am trying to get OpenCV Android tutorial 4 to mix native and Java code. I followed all the steps, but in jni_part.cpp I am getting a bunch of errors:

Unresolved inclusion: <opencv2/core/core.hpp>
Unresolved inclusion: <opencv2/imgproc/imgproc.hpp>
Unresolved inclusion: <opencv2/features2d/features2d.hpp>
Unresolved inclusion: <vector>

Symbol 'std' could not be resolved
Symbol 'cv' could not be resolved

Type 'Mat' could not be resolved
Type 'FastFeatureDetector' could not be resolved

I think you get the point; there is basically one or more of these on every line. I tried cleaning the project and I tried this solution, but unfortunately it did not work. Does anyone know what I might be doing wrong? Thanks!

2013-09-15 13:19:26 -0600 received badge  Notable Question (source)
2013-08-24 06:25:39 -0600 marked best answer Why am I getting assertion failed in cvtColor even when convertTo was called in the line before?

Hi everyone, I am having a pretty small error, but I cannot figure out why I am getting it.

//Setup the input
Mat* image=(Mat*)addrImg;
Mat character = *image;
character.convertTo(character,CV_8UC1);
threshold( character, character, 0, 255,1 );
Mat color_dst;
cvtColor(character,color_dst,CV_GRAY2BGR);

I have this code in the Android JNI. The way I am calling cvtColor should only require a one-channel image, since I am going from gray to BGR, but I keep getting this error:

01-19 00:37:44.269: E/cv::error()(32379): OpenCV Error: Assertion failed
 (scn == 1 && (dcn == 3 || dcn == 4)) in void 
cv::cvtColor(cv::InputArray, cv::OutputArray, int, int), 
file /home/reports/ci/slave/opencv/modules/imgproc/src/color.cpp, line 3355

I really cannot figure out why I am getting the error; can anyone help me out here? It would be greatly appreciated.