
Junglee's profile - activity

2018-01-05 14:10:58 -0500 received badge  Popular Question (source)
2016-11-17 06:27:56 -0500 received badge  Notable Question (source)
2016-08-31 01:36:51 -0500 commented question Text Recognition

@berak, I tried it first and the results were very bad: only a few words were recognized, and I want to recognize at least 70% of the text in the input image. The input image is a white background with black text. Here's an image for your reference, which is the output of the end_to_end recognition sample. This is not the exact image I want to recognize, but it is similar; the results on my images are even poorer.

2016-08-31 00:52:10 -0500 asked a question Text Recognition

I have done the text detection half using the program below, so the next part is the text recognition. I would like to know how to integrate end_to_end_recognition with the code below.

#include <opencv2/core/core.hpp>
#include <opencv2/text/ocr.hpp>
#include <opencv2/text.hpp>
#include <opencv2/core/utility.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;
using namespace cv::text;

#define INPUT_FILE              "Example3.png"
#define OUTPUT_FOLDER_PATH      string("C:\\Users\\<UserEx>\\Desktop\\Reco_Pics\\")

int main(int argc, char* argv[])
{
#pragma region Text Detection(OpenCV)

Mat large = imread(INPUT_FILE);
imshow("Original Image", large);
Mat rgb;
// downsample and use it for processing -- Grayscale
//pyrDown(large, rgb);
Mat small;
//cvtColor(rgb, small, CV_BGR2GRAY);
cvtColor(large, small, CV_BGR2GRAY);
imshow("BGR2GRAY", small);

// morphological gradient -- Morph
Mat grad;
Mat morphKernel = getStructuringElement(MORPH_ELLIPSE, Size(2, 2));
morphologyEx(small, grad, MORPH_GRADIENT, morphKernel);
imshow("MORPH_GRADIENT", grad);

// binarize -- Threshold
Mat bw;
threshold(grad, bw, 255.0, 255.0, THRESH_BINARY | THRESH_OTSU);
imshow("Threshold", bw);

// connect horizontally oriented regions -- dilate
Mat connected;
morphKernel = getStructuringElement(MORPH_RECT, Size(9, 1));
morphologyEx(bw, connected, MORPH_CLOSE, morphKernel);
imshow("MORPH_CLOSE", connected);

// find contours
Mat mask = Mat::zeros(bw.size(), CV_8UC1);
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;

try
{
    findContours(connected, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
    imshow("Contours", connected);
}
catch (const Exception& ex)
{
    cout << ex.what();
}

// filter contours (guard against an empty result)
for (int idx = 0; idx >= 0 && !hierarchy.empty(); idx = hierarchy[idx][0])
{
    Rect rect = boundingRect(contours[idx]);
    Mat maskROI(mask, rect);
    maskROI = Scalar(0, 0, 0);
    // fill the contour
    drawContours(mask, contours, idx, Scalar(255, 255, 255), CV_FILLED);
    // ratio of non-zero pixels in the filled region
    double r = (double)countNonZero(maskROI) / (rect.width*rect.height);

    if (r > .05 /* assume at least 5% of the area is filled if it contains text */
        &&
        (rect.height > 8 && rect.width > 8) /* constraints on region size */
        /* these two conditions alone are not very robust. better to use something
        like the number of significant peaks in a horizontal projection as a third condition */
        )
    {
        //rectangle(rgb, rect, Scalar(0, 255, 0), 2);
        rectangle(large, rect, Scalar(0, 255, 0), 2);
    }
}
//imshow("Final_Output", rgb);
imshow("Final_Output", large);
//imwrite(OUTPUT_FOLDER_PATH + string("_Output.jpg"), rgb);

#pragma endregion

waitKey();

return 0;
}
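For the recognition part, here is a rough sketch of what I have in mind (not verified): instead of the full end_to_end_recognition pipeline it simply runs OCRTesseract from the opencv_text module on a grayscale region, so it needs a Tesseract-enabled build. The file name and the whole-image ROI are only placeholders for the rectangles produced by the contour filter above.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/text/ocr.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
// Placeholder input: in the program above this would be one of the
// rectangles accepted by the contour filter, cropped from 'small'.
Mat gray = imread("Example3.png", IMREAD_GRAYSCALE);
if (gray.empty()) return 1;

Rect rect(0, 0, gray.cols, gray.rows); // stand-in for one detected region
Mat roi = gray(rect);

// OCRTesseract requires the text module to be built against Tesseract
Ptr<text::OCRTesseract> ocr = text::OCRTesseract::create();
string recognized;
ocr->run(roi, recognized);
cout << recognized << endl;

return 0;
}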

Any suggestions would be helpful.

2016-08-31 00:42:17 -0500 answered a question Solution for Multiple Object Detection and Tracking

I know this is a very old post of mine, but I have been working in this field for a long time and have moved on to some other libraries. I first achieved my goal using Metaio, but unfortunately they shut the company down without notice, so I was back to square one and had to repeat the whole thing. I then used Vuforia with Unity3D; the results were not great, but OK.

Look at Vuforia and Wikitude for this functionality.

2016-08-24 02:48:51 -0500 commented answer ERFilter_Train in Visual studio C++

My bad, I have never built things with GCC, so I didn't get that; I'll look for some GCC tutorials. Anyway, thanks @berak, that fixed everything. I'll share the fixed code via Drive, which should help others.

2016-08-24 02:36:11 -0500 commented answer ERFilter_Train in Visual studio C++

@berak, I am stuck at line 66 of this: https://github.com/lluisgomez/erfilte...

The error is "Error 1 error C2057: expected constant expression".

2016-08-24 02:34:40 -0500 received badge  Supporter (source)
2016-08-24 02:08:40 -0500 commented answer ERFilter_Train in Visual studio C++

Thanks for the answer; I'll fix it and update the complete answer shortly.

2016-08-24 01:22:24 -0500 asked a question ERFilter_Train in Visual studio C++

Hi All,

I found this library for extending and training our own NM classifiers, but how do I build it in Visual Studio C++? CMake was successful, but I am facing problems while building, as below:

error C2039: 'Params' : is not a member of 'cv::ml::Boost'

I tried to find a solution. This syntax was used in older versions of OpenCV but not in OpenCV 3.1; can anyone please tell me how to fix this?
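From what I have found so far (not verified), the 3.x replacement would look roughly like this: the removed cv::ml::Boost::Params struct becomes setter calls on the model returned by ml::Boost::create(); all the parameter values below are only illustrative.

#include <opencv2/ml.hpp>

using namespace cv;
using namespace cv::ml;

// Rough sketch: configure a Boost model with the OpenCV 3.x setter API
// instead of the removed Boost::Params struct.
Ptr<Boost> createBoostModel()
{
Ptr<Boost> model = Boost::create();
model->setBoostType(Boost::REAL);   // was params.boost_type
model->setWeakCount(100);           // was params.weak_count
model->setWeightTrimRate(0.95);     // was params.weight_trim_rate
model->setMaxDepth(1);              // was params.max_depth
model->setUseSurrogates(false);     // was params.use_surrogates
// training would then be:
// model->train(TrainData::create(samples, ROW_SAMPLE, responses));
return model;
}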

2016-01-01 10:31:22 -0500 received badge  Popular Question (source)
2014-05-28 07:23:58 -0500 asked a question Getting fewer stages in haartraining.exe output

Hi All,

I have read how to create an .xml file for Haar training and have gone through the steps below:

1. Successfully created the Info.txt file using positive images.
2. Successfully created the bg.txt file using negative images.
3. Successfully created the .vec file using createsamples.exe.
4. Successfully created the stage folders in the Cascade folder using haartraining.exe.
5. Finally, successfully created the .xml file using haarconv.exe.

Regarding step 4: since I am using 10 stages, there should be folders 0 to 9 (i.e., up to N-1, where N is the number of stages given) in the Cascade folder, but I am getting only 6. I am afraid this output is wrong, because the tutorial I am following clearly explains that the folders should go up to N-1.

Please help me.

Regards, Jithendra.

2014-05-25 23:46:31 -0500 commented question BackgroundSubtract

@isarandi, I didn't get you; can you please elaborate?

2014-05-23 07:30:15 -0500 asked a question BackgroundSubtract

Hi All,

I have tried the example below to subtract an image's background. It works well and updates the position of the object, but when the camera first starts, if I move an object from its initial position to another position, the blob at its initial position does not get erased. I have attached an image for your reference. What am I missing to clear the initial position of the original object? Here's my code:

//opencv
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>
#include <opencv2/video/video.hpp>
//C++
#include <iostream>
#include <sstream>
//namespace
using namespace cv;
using namespace std;

//global variables
Mat frame; //current frame
Mat fgMaskMOG; //fg mask generated by MOG method
Mat fgMaskMOG2; //fg mask fg mask generated by MOG2 method
Ptr<BackgroundSubtractor> pMOG; //MOG Background subtractor
Ptr<BackgroundSubtractor> pMOG2; //MOG2 Background subtractor
int keyboard = 0; //initialize so the first loop test is well-defined

//function declarations
void processVideo();

int main(int argc, char* argv[])
{
//create GUI windows
namedWindow("Frame");
namedWindow("FG Mask MOG");
namedWindow("FG Mask MOG 2");

//create Background Subtractor objects
pMOG = new BackgroundSubtractorMOG(); //MOG approach
pMOG2 = new BackgroundSubtractorMOG2(); //MOG2 approach

//input data coming from a video
processVideo();

//destroy GUI windows
destroyAllWindows();
return EXIT_SUCCESS;
}

void processVideo() 
{
//create the capture object
VideoCapture capture;
capture.open(0);
if(!capture.isOpened()){
    //error in opening the video input
    cerr << "Unable to open video Camera " << endl;
    exit(EXIT_FAILURE);
}
//read input data. ESC or 'q' for quitting
while( (char)keyboard != 'q' && (char)keyboard != 27 ){
    //read the current frame
    if(!capture.read(frame)) {
        cerr << "Unable to read next frame." << endl;
        cerr << "Exiting..." << endl;
        exit(EXIT_FAILURE);
    }
    //update the background model
    pMOG->operator()(frame, fgMaskMOG);
    pMOG2->operator()(frame, fgMaskMOG2);
    //get the frame number and write it on the current frame
    stringstream ss;
    rectangle(frame, cv::Point(10, 2), cv::Point(100,20),cv::Scalar(255,255,255), -1);
    ss << capture.get(CV_CAP_PROP_POS_FRAMES);
    string frameNumberString = ss.str();
    putText(frame, frameNumberString.c_str(), cv::Point(15, 15),
                    FONT_HERSHEY_SIMPLEX, 0.5 , cv::Scalar(0,0,0));
    //show the current frame and the fg masks
    imshow("Frame", frame);
    imshow("FG Mask MOG", fgMaskMOG);
    imshow("FG Mask MOG 2", fgMaskMOG2);
    //get the input from the keyboard
    keyboard = waitKey( 30 );
}
//delete capture object
capture.release();
}

Here's my Image:

(image attachment)
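One thing I have been looking at (not verified yet) is passing an explicit learning rate as the third argument of operator(), so the model adapts and the blob left at the object's starting position fades out; the 0.01 below is only an illustrative value.

//Rough sketch (OpenCV 2.4 API, same as the code above): update both models
//with an explicit learning rate instead of the default.
#include <opencv2/video/background_segm.hpp>

using namespace cv;

void updateModels(Ptr<BackgroundSubtractor>& pMOG, Ptr<BackgroundSubtractor>& pMOG2,
                  const Mat& frame, Mat& fgMaskMOG, Mat& fgMaskMOG2)
{
double learningRate = 0.01; //illustrative value, tune for the scene
pMOG->operator()(frame, fgMaskMOG, learningRate);
pMOG2->operator()(frame, fgMaskMOG2, learningRate);
}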

2014-05-23 01:28:13 -0500 commented question Solution for Multiple Object Detection and Tracking

@Witek, thanks for your valuable suggestion. I'll try this, and if I face any problem I'll post it.

2014-05-22 23:10:16 -0500 commented question Prob. in Multi Object Detection using Kalman Filter

@berak, thanks for your suggestion. It's not a belt; these objects are kept on a table.

2014-05-22 07:20:44 -0500 commented question Prob. in Multi Object Detection using Kalman Filter

@berak, can you just post the changes? I am a newbie to OpenCV. Please find the attached image.

2014-05-22 07:10:55 -0500 asked a question Prob. in Multi Object Detection using Kalman Filter

Hi All, I have succeeded in recognizing a single object using KalmanFilter and am trying to detect multiple objects, but I am facing a problem. Here's my code and attachment. The code works, but instead of detecting the whole object as one, it detects sub-objects within it. As you can see in the image, I just want to draw a single rectangle on the nut, but unfortunately I have failed. One more thing: I downloaded this code from the net and made some modifications.

#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>

using namespace std;
using namespace cv;

int H_MIN = 0;
int H_MAX = 256;
int S_MIN = 0;
int S_MAX = 256;
int V_MIN = 0;
int V_MAX = 256;

const string trackbarWindowName = "Trackbars";

#define drawCross( img, center, color, d )\
line(img, Point(center.x - d, center.y - d), Point(center.x + d, center.y + d), color, 2, CV_AA, 0);\
line(img, Point(center.x + d, center.y - d), Point(center.x - d, center.y + d), color, 2, CV_AA, 0 )\

int main()
{
Mat frame, thresh_frame;
vector<Mat> channels;
VideoCapture capture;
vector<Vec4i> hierarchy;
vector<vector<Point> > contours;

capture.open(0);

if(!capture.isOpened())
    cerr << "Problem opening video source" << endl;

KalmanFilter KF(4, 2, 0);
Mat_<float> state(4, 1);
Mat_<float> processNoise(4, 1, CV_32F);
Mat_<float> measurement(2,1);
measurement.setTo(Scalar(0));

KF.statePre.at<float>(0) = 0;
KF.statePre.at<float>(1) = 0;
KF.statePre.at<float>(2) = 0;
KF.statePre.at<float>(3) = 0;

KF.transitionMatrix = *(Mat_<float>(4, 4) << 1,0,1,0,   0,1,0,1,  0,0,1,0,  0,0,0,1); // Including velocity
KF.processNoiseCov = *(cv::Mat_<float>(4,4) << 0.2,0,0.2,0,  0,0.2,0,0.2,  0,0,0.3,0,  0,0,0,0.3);

setIdentity(KF.measurementMatrix);
setIdentity(KF.processNoiseCov, Scalar::all(1e-4));
setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
setIdentity(KF.errorCovPost, Scalar::all(.1));

while((char)waitKey(1) != 'q' && capture.grab())
{
    capture.retrieve(frame);

    cvtColor(frame,thresh_frame,COLOR_BGR2GRAY);

    Canny(thresh_frame,thresh_frame,100,200,3);//Detect Edges.
    imshow("Edge Detection",thresh_frame);

    medianBlur(thresh_frame, thresh_frame, 5);

    findContours(thresh_frame, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    Mat drawing = Mat::zeros(thresh_frame.size(), CV_8UC1);
    for(size_t i = 0; i < contours.size(); i++)
    {
        //          cout << contourArea(contours[i]) << endl;
        if(contourArea(contours[i]) > 500)
            drawContours(drawing, contours, i, Scalar::all(255), CV_FILLED, 8, vector<Vec4i>(), 0, Point());
    }
    thresh_frame = drawing;     

    // Get the moments
    vector<Moments> mu(contours.size() );
    for( size_t i = 0; i < contours.size(); i++ )
    { mu[i] = moments( contours[i], false ); }

    //  Get the mass centers:
    vector<Point2f> mc( contours.size() );
    for( size_t i = 0; i < contours.size(); i++ )
    { mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }

    Mat prediction = KF.predict();
    Point predictPt(prediction.at<float>(0),prediction.at<float>(1));

    for(size_t i = 0; i < mc.size(); i++)
    {
        drawCross(frame, mc[i], Scalar(255, 0, 0), 5);//Scalar is color for predicted ...
(post truncated)
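For reference, here is a rough sketch (not the cut-off remainder of the post) of the direction I am trying: merge all contour bounding boxes into a single rectangle and correct the filter with its centre. It reuses KF, measurement, contours, frame and the drawCross macro declared above.

//Rough sketch, reusing the variables declared above.
if (!contours.empty())
{
    //merge every contour's bounding box into one rectangle
    Rect merged = boundingRect(contours[0]);
    for (size_t i = 1; i < contours.size(); i++)
        merged |= boundingRect(contours[i]);

    //a single measurement per frame: the centre of the merged rectangle
    measurement(0) = merged.x + merged.width * 0.5f;
    measurement(1) = merged.y + merged.height * 0.5f;

    Mat estimated = KF.correct(measurement);
    Point statePt((int)estimated.at<float>(0), (int)estimated.at<float>(1));

    rectangle(frame, merged, Scalar(0, 255, 0), 2);
    drawCross(frame, statePt, Scalar(0, 0, 255), 5);
}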
2014-05-22 06:58:58 -0500 commented question Solution for Multiple Object Detection and Tracking

@Witek, OK, I'll answer you in detail. Let's say I keep parts of my computer in front of the camera; then it should recognize (not simply detect) what is what. The parts will stay the same, they won't change; only their orientation and their place on the tray may change. I want to recognize the objects I showed above. There are 6 objects in the image above and I want to recognize all of them at the same time, or I may place only 4 and show each one's name when it is recognized (in future I may want to display a description). But let me recognize them first.

2014-05-21 23:53:05 -0500 received badge  Editor (source)
2014-05-21 23:52:11 -0500 commented question Solution for Multiple Object Detection and Tracking

@Witek, the tray is nothing but a plain moving sheet. It is black with no texture (I missed that in the attachment). Yes, it moves continuously in one direction, either back or forth. Lighting conditions will be normal, or we could make adjustments above the tray to avoid shadows beside the objects. Yes, they stay in a fixed place, but their orientation may change. The objects will be the same, they won't differ; for example, I got some parts from my garage, see the attached image.

2014-05-21 07:37:25 -0500 asked a question Solution for Multiple Object Detection and Tracking

Hi All, I am a newbie, so sorry if I say anything irrelevant; I have only started learning recently. I want to detect multiple objects that are in motion in front of the camera, and in parallel, when an object is detected, display text on screen saying that the object was detected. I have tried the single-object detection without color that comes with the OpenCV source, and I have tried the examples, but I didn't figure out how to handle many objects. The objects come on a tray, may be in different orientations, and may be the same color. Can anyone please suggest the best way to do this?

Objects that I want to detect

Regards, Junglee.