OpenCV_Learner's profile - activity

2017-02-28 23:09:42 -0600 commented question Camera Calibration

Well, it's outdoors and a one-time deployment. Once deployed on site, regular maintenance is not feasible, so I need an approach that keeps performing in any given situation for years together.

2017-02-28 04:22:40 -0600 asked a question Camera Calibration

For traffic surveillance applications using static cameras installed on highways, which is the better approach to calibration, manual or automatic? Can anybody give an idea of how to start working on it?
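For reference, the manual route I am aware of is an offline chessboard calibration with cv::calibrateCamera done once before deployment (a minimal sketch, not a recommendation; the board size, frame count and file names are placeholders):

    #include <opencv2/opencv.hpp>
    using namespace cv;
    using namespace std;

    int main()
    {
        Size boardSize(9, 6);                  // inner corners of the chessboard
        vector<vector<Point2f> > imagePoints;  // detected corners per frame
        vector<vector<Point3f> > objectPoints;
        Size imageSize;

        vector<Point3f> board;                 // chessboard model, square size = 1 unit
        for (int y = 0; y < boardSize.height; y++)
            for (int x = 0; x < boardSize.width; x++)
                board.push_back(Point3f((float)x, (float)y, 0));

        for (int i = 1; i <= 15; i++)          // calib1.jpg ... calib15.jpg (placeholder names)
        {
            Mat img = imread(format("calib%d.jpg", i), 0);   // load as grayscale
            if (img.empty()) continue;
            imageSize = img.size();
            vector<Point2f> corners;
            if (findChessboardCorners(img, boardSize, corners))
            {
                cornerSubPix(img, corners, Size(11, 11), Size(-1, -1),
                             TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 30, 0.1));
                imagePoints.push_back(corners);
                objectPoints.push_back(board);
            }
        }

        Mat cameraMatrix, distCoeffs;
        vector<Mat> rvecs, tvecs;
        double rms = calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);
        cout << "RMS reprojection error: " << rms << endl;
        return 0;
    }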

Thanks in advance.

2017-01-16 23:06:30 -0600 asked a question opencv 3.1.0 building source with OpenCL disabled

I would like to build the OpenCV 3.1.0 source for Windows 8.0 (Intel i7) using VS2013, with OpenCL disabled, because while running my application I continuously get the following exception:

C:\fakepath\Error.png

But I have never built OpenCV from source and have no idea how to use CMake, so I need guidance on that.
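For what it's worth, the route I have read about is to configure the build with CMake, switch OpenCL off via the WITH_OPENCL option, and then build the generated solution with VS2013 (a sketch of the commands, assuming the OpenCV 3.1.0 sources with their top-level CMakeLists.txt sit in C:\opencv and the build goes into C:\opencv\build; paths are placeholders):

    mkdir C:\opencv\build
    cd C:\opencv\build
    cmake -G "Visual Studio 12 2013 Win64" -DWITH_OPENCL=OFF -DBUILD_EXAMPLES=OFF ..
    cmake --build . --config Release

After that the application would have to be linked against the freshly built libraries instead of the prebuilt ones.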

2017-01-02 06:34:04 -0600 commented question Vehicle Tracking, Labelling

Automated labelling. I store the bounding-box Rect info in a vector, but sometimes the same car gets two bounding boxes that are labelled with two different labels, and sometimes when two cars come close the ROIs overlap and the two connected blobs get a single label. Which mechanism can sort out this scenario at a basic level? I am looking into non-maximum suppression for clubbing multiple bounding boxes into a single one and am still working on it, but I am unable to get a general idea of how to label at a basic level.
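For the multiple-boxes-per-car part, one mechanism I am experimenting with is cv::groupRectangles, which clusters overlapping rectangles into one (a minimal sketch, not my actual code; the eps value is a placeholder):

    #include <opencv2/opencv.hpp>
    #include <vector>
    using namespace cv;
    using namespace std;

    // Merge overlapping detections into one box per car.
    // Note: with groupThreshold = 1, clusters containing only a single rectangle
    // are discarded, so each box is pushed twice to keep singletons alive.
    vector<Rect> mergeBoxes(const vector<Rect>& detections)
    {
        vector<Rect> merged;
        for (size_t i = 0; i < detections.size(); i++)
        {
            merged.push_back(detections[i]);
            merged.push_back(detections[i]);
        }
        groupRectangles(merged, 1, 0.3);   // eps controls how similar boxes must be to merge
        return merged;
    }

It does not help with the opposite case of two cars merging into one blob, which is why I am still looking for a more general labelling idea.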

2017-01-01 22:57:27 -0600 asked a question Vehicle Tracking, Labelling

Hi,

I am doing vehicle tracking, and my vehicles are detected using the CamShift algorithm. Now, when I draw a bounding box around each vehicle, I want to label them in a unique way so I can track and count each vehicle. Can anyone suggest a criterion for labelling each vehicle with a unique number? I tried labelling them, but the labels keep shifting between the first and the next vehicle.
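For illustration, the kind of criterion I have been considering is nearest-centroid matching between consecutive frames, where a box keeps the ID of the closest box from the previous frame and unmatched boxes get a fresh ID (a rough sketch with a placeholder distance threshold, not my actual code):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <map>
    #include <vector>
    using namespace cv;
    using namespace std;

    // Give each bounding box a persistent ID by matching it to the nearest
    // centroid from the previous frame; boxes with no close match get a new ID.
    map<int, Point2f> prevCentroids;   // id -> centroid seen in the previous frame
    int nextId = 0;

    vector<int> assignIds(const vector<Rect>& boxes, float maxDist = 50.f)
    {
        vector<int> ids(boxes.size(), -1);
        map<int, Point2f> current;
        for (size_t i = 0; i < boxes.size(); i++)
        {
            Point2f c(boxes[i].x + boxes[i].width * 0.5f,
                      boxes[i].y + boxes[i].height * 0.5f);

            int bestId = -1;
            float bestDist = maxDist;          // placeholder threshold in pixels
            for (map<int, Point2f>::iterator it = prevCentroids.begin();
                 it != prevCentroids.end(); ++it)
            {
                float dx = c.x - it->second.x, dy = c.y - it->second.y;
                float d = std::sqrt(dx * dx + dy * dy);
                if (d < bestDist) { bestDist = d; bestId = it->first; }
            }
            // Note: two boxes can grab the same previous ID; a proper assignment
            // (e.g. greedy over sorted distances) would avoid that.
            ids[i] = (bestId >= 0) ? bestId : nextId++;
            current[ids[i]] = c;
        }
        prevCentroids = current;               // remember this frame for the next call
        return ids;
    }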

Any suggestions how to achieve this?

Thanks in advance.

2016-08-23 04:07:43 -0600 commented question Tesseract user patterns

OK, sorry, I was using the Tesseract API in an OpenCV environment.

2016-08-23 03:43:13 -0600 asked a question Tesseract user patterns

Hi,

I am using Tesseract for OCR in my project. I would like to recognize words that follow specific patterns: patterns that can take any digit or character, but in a certain order. I would like to use the user-patterns feature in Tesseract. I tried implementing it, but Tesseract is not reading the patterns from the file.

Is there anybody who could give some insight into it?

My patterns are of the format:

ARM110001
TCP67893
LTR676666

Here spacings may also vary in the image.
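For reference, the user-patterns file I am trying to load looks roughly like this, assuming the meta-character syntax from Tesseract's trie documentation where \A stands for an upper-case letter and \d for a digit (I am not certain this is the exact syntax my Tesseract version expects, which may be part of the problem):

    \A\A\A\d\d\d\d\d\d
    \A\A\A\d\d\d\d\d

and I point Tesseract at the file through the user_patterns_suffix / user_patterns_file parameters, which also seem to be version-dependent.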

Thanks in Advance

2016-07-11 03:15:17 -0600 asked a question What is the effect of the border type parameter of the Gaussian blur function?

Hi,

I would like to know how the borderType parameter of OpenCV's GaussianBlur function works on a grayscale image. Is it applied to the edge of the entire image, or to each kernel window? Can anybody explain how changing this parameter affects the output?
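For context, this is the kind of experiment I have in mind, running the same blur with two different border modes and looking at where the results differ (a minimal sketch; the file name is a placeholder):

    #include <opencv2/opencv.hpp>
    using namespace cv;

    int main()
    {
        Mat gray = imread("frame.png", 0);   // load as grayscale
        Mat blurReplicate, blurConstant;

        // Same kernel and sigma, only the border handling differs.
        GaussianBlur(gray, blurReplicate, Size(9, 9), 2.0, 2.0, BORDER_REPLICATE);
        GaussianBlur(gray, blurConstant,  Size(9, 9), 2.0, 2.0, BORDER_CONSTANT);

        // Differences show up only near the image edges, where the kernel
        // window needs pixels from outside the image.
        Mat diff;
        absdiff(blurReplicate, blurConstant, diff);
        imshow("difference", diff);
        waitKey(0);
        return 0;
    }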

Thanks in advance.

2016-02-02 03:30:28 -0600 asked a question Wide Angle Dataset

Are there any computer vision websites where I can download wide-angle images taken around a car, so that I can work on a 360-degree bird's-eye view? If not, will taking images with a mobile camera give similar results? I tried taking images with a mobile camera for a 360 view, but I am unable to get enough overlap and hence cannot stitch them into a wide-angle view. Kindly suggest.

2016-01-11 23:47:18 -0600 answered a question Stitching 2 image with findHomography

I guess using the OpenCV Stitcher class will help stitch multiple images:

    vector<Mat> images;
    images.push_back(imread("Set1/stretched1.jpg"));   // read first image
    images.push_back(imread("Set1/stretched2.jpg"));   // read second image
    Mat img;

    Stitcher stitcher = Stitcher::createDefault();
    Stitcher::Status stitcherStatus = stitcher.stitch(images, img);

    imshow("First", img);
    images.clear();
    images.push_back(img);
    images.push_back(imread("Set1/pic3 - Copy.jpg"));  // read third image
    stitcherStatus = stitcher.stitch(images, img);
    imshow("Second", img);
    waitKey(0);

In this way I tried to stitch 5 images, though it takes a lot of time. Is there a better way to do it for frames captured from video?

2015-12-17 00:52:31 -0600 received badge  Supporter (source)
2015-10-16 06:55:11 -0600 commented answer detecting car from moving vehicle

Thank you for these steps. I shall implement them and share the output.

2015-10-16 03:39:50 -0600 received badge  Editor (source)
2015-10-15 06:44:54 -0600 asked a question detecting car from moving vehicle

I want to detect a car from a moving vehicle by finding its rear lights. I have extracted the red colour and detected the lights with findContours; now how do I pair the lights? Is it by thresholding the distance between the detected red contours? Can anyone suggest a method for this thresholding?

My code is as follows:

    Mat src = imread("773.jpg");
    Mat roi = src(Rect(5, 400, 1350, 400));   // same width, lower half of the frame
    Mat src1 = roi.clone();
    cvtColor(roi, roi, CV_BGR2HSV);
    cv::Scalar min(129, 150, 10);             // red hue is roughly 140-179 on OpenCV's 0-179 scale
    cv::Scalar max(179, 255, 255);
    Mat dst;
    inRange(roi, min, max, dst);
    imshow("roi", dst);

    Mat gray_roi;
    cvtColor(src1, gray_roi, CV_BGR2GRAY);    // roi is HSV by now, so convert the BGR copy instead
    bitwise_and(gray_roi, dst, gray_roi);
    imshow("gray", gray_roi);
    dilate(gray_roi, gray_roi, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
    imshow("gray_dilate", gray_roi);

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    // Find contours of the red blobs
    findContours(gray_roi, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    vector<Rect> boundRect(contours.size());
    vector<int> index;
    int count = 0;
    for (int i = 0; i < (int)contours.size(); i++)
    {
        boundRect[i] = boundingRect(Mat(contours[i]));
        int area = boundRect[i].width * boundRect[i].height;
        if (area > 40)                        // drop tiny blobs
        {
            count++;
            rectangle(src1, boundRect[i].tl(), boundRect[i].br(), Scalar(255, 0, 0), 2, 8, 0);
            imshow("Contours1", src1);
            waitKey(0);
            std::cout << "index values " << boundRect[i].x << "  " << boundRect[i].y << std::endl;
            index.push_back(i);
        }
    }

    // Crude pairing attempt: boxes whose top edges are within one pixel of each other
    for (int i = 0; i < (int)boundRect.size() - 1; i++)
    {
        if (abs(boundRect[i + 1].y - boundRect[i].y) <= 1)
        {
            rectangle(src1, boundRect[i + 1].tl(), boundRect[i].br(), Scalar(255, 0, 0), 2, 8, 0);
            imshow("Contours1", src1);
        }
    }

    imshow("Contours", src1);
    waitKey(0);

Kindly tell me how to pair the contours of the rear lights and detect the vehicle.
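For illustration, the kind of pairing rule I have been considering is to accept two bounding boxes as a light pair when they sit at roughly the same height, have a similar size, and are separated horizontally by a plausible car width (a rough sketch with placeholder thresholds, not a tested solution):

    #include <opencv2/opencv.hpp>
    #include <cstdlib>
    #include <utility>
    #include <vector>
    using namespace cv;
    using namespace std;

    // Return index pairs of bounding boxes that plausibly form one pair of rear lights.
    vector<pair<int, int> > pairLights(const vector<Rect>& boxes)
    {
        vector<pair<int, int> > pairs;
        for (size_t i = 0; i < boxes.size(); i++)
        {
            for (size_t j = i + 1; j < boxes.size(); j++)
            {
                int dy    = abs(boxes[i].y - boxes[j].y);            // vertical offset
                int dx    = abs(boxes[i].x - boxes[j].x);            // horizontal gap
                int hDiff = abs(boxes[i].height - boxes[j].height);  // size similarity

                // Placeholder thresholds: similar height and size, and a horizontal
                // separation within a rough car-width range (in pixels).
                if (dy < 10 && hDiff < 10 && dx > 30 && dx < 300)
                    pairs.push_back(make_pair((int)i, (int)j));
            }
        }
        return pairs;
    }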

2015-10-12 23:22:52 -0600 commented answer vector Mat

But this is just a normal C++ program written in Qt, which I try to run on Linux using the Raspberry Pi camera. Can anyone help me or suggest how to make the above code run in 50 ms?

2015-10-11 22:57:49 -0600 answered a question vector Mat
 Here is my code:  
#include <iostream>
#include <time.h>
#include <stdio.h>
#include <raspicam/raspicam_cv.h>
#include <raspicam/raspicam.h>
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp> 
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include <cstdlib>
#include <fstream>
#include <string>
#include <sstream>
#include <opencv2/stitching/stitcher.hpp>
#include "camera.h"
#include <pthread.h>

using namespace std;
using namespace cv;

Mat dst;

int main(int argc,char **argv) 
{
    time_t start,end;
    Mat template1 = imread( "100_1.jpg");//, CV_LOAD_IMAGE_GRAYSCALE ); // loading object to be detected (img_object)
    Mat template2 = imread( "NoEntry.jpg");
    Mat template3 = imread( "Speed80.png");

    raspicam::RaspiCam_Cv Camera;
    OpenPiCamera(Camera);

cv::Mat rawMat;
    IplImage *frame;

    cvNamedWindow("Capture Frame",1);

    time(&start);
    int counter=0;

    while(1)
    {
        //if(counter%5==0)
        {

        Camera.grab();
        Camera.retrieve(rawMat);
        Mat img_scene=rawMat.clone();
                cvtColor(rawMat,rawMat,CV_RGB2HSV);
        cv::Scalar   min(220/2, 0, 0);
        cv::Scalar   max(260/2,255, 255);
        inRange(rawMat,min,max,dst);
                vector<cv::Vec3f> circles;
        HoughCircles( dst, circles, CV_HOUGH_GRADIENT, 1, dst.rows/4, 70, 20, 1, 40 ); // detect circles in the thresholded mask (other values tried: 3,5,400,10,0,80; 1,img_scene.rows/6,100,70,0,50)
        cv::Rect borders(Point(0,0), dst.size());
for( size_t i = 0; i < circles.size(); i++ ) 
        {
            Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
            int radius = cvRound(circles[i][2]);
            cv::circle( dst, center, 3, Scalar(255,255,255), -1);
            cv::circle( dst, center, radius, Scalar(255,255,255), 1 );
            imshow("hough",dst);

            int x=cvRound(circles[i][0])-cvRound(circles[i][2]);
            int y=cvRound(circles[i][1])-cvRound(circles[i][2]);
            Rect r(abs(x),abs(y),radius*2,radius*2);
            if(r.area()>100)
            {
                Mat roi( radius*2, radius*2, CV_8UC1);

                roi=img_scene( Rect(abs(x),abs(y),radius*2,radius*2)& borders);//save ROI
                Mat mask( roi.size(),roi.type(),Scalar::all(0));
                circle(mask,Point(radius,radius),radius,Scalar::all(255),-1);//circle(mask,Point(radius,radius),radius,Scalar::all(255),-1);
                Mat roi_cropped=roi & mask;

                int W=0,H=0;
                W=dst.cols;
                H=dst.rows;
                int w=0,h=0;
                w=roi_cropped.cols;
                h=roi_cropped.rows;
                Mat res_32f1(W - w + 1, H - h + 1, CV_32FC3);
                Mat res_32f2(W - w + 1, H - h + 1, CV_32FC3);
                Mat res_32f3(W - w + 1, H - h + 1, CV_32FC3);

                Mat resizedTemplate1,resizedTemplate2,resizedTemplate3;
                //resizedTemplate1=template1.clone();
                resize(template1, resizedTemplate1, roi_cropped.size());//resizing template into ROI
                resize(template2, resizedTemplate2, roi_cropped.size());
                resize(template3, resizedTemplate3, roi_cropped.size());
                matchTemplate(resizedTemplate1, roi_cropped, res_32f1, CV_TM_CCORR_NORMED);
                matchTemplate(resizedTemplate2, roi_cropped, res_32f2, CV_TM_CCORR_NORMED);
                matchTemplate(resizedTemplate3, roi_cropped, res_32f3, CV_TM_CCORR_NORMED);


                double minval1, maxval1, minval2, maxval2,minval3, maxval3, minval4, maxval4,minval5, maxval5,threshold1 = 0.74;//.78
                Point minloc1, maxloc1, minloc2, maxloc2,minloc3, maxloc3,minloc4, maxloc4,minloc5, maxloc5;
                minMaxLoc(res_32f1, &minval1, &maxval1, &minloc1, &maxloc1);
                minMaxLoc(res_32f2, &minval2, &maxval2, &minloc2, &maxloc2);
                minMaxLoc(res_32f3, &minval3, &maxval3, &minloc3, &maxloc3);
                double maxInddex;
                double init[]={maxval1, maxval2,maxval3};//,maxval4,maxval5
                valarray<double> myvalarray (init,3);
                double max=(double)myvalarray.max();

                if (max >= threshold1)
                {

                    if(max==maxval1)rectangle(img_scene,    Point(abs(x),abs(y ...
(more)
2015-10-11 11:18:53 -0600 received badge  Enthusiast
2015-10-10 11:06:24 -0600 commented answer vector Mat

Hey, thanks for the code.

2015-10-10 09:57:10 -0600 commented question vector Mat

Actually, I am trying to do traffic sign detection, in which I extract ROIs using Hough circles, so I get a lot of ROIs from a single image (frame, in the case of a camera). For each ROI I then do template matching against 3 signs. As Hough circles are already time-consuming, running matchTemplate 3 times for each ROI makes it even slower. I thought of using threads to run the 3 matchTemplate calls in parallel, for which I want to send the images, so I created the struct.

The purpose is to make it faster and closer to real time.
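For illustration, the parallel matching I have in mind looks roughly like this, with one worker thread per template (a C++11 std::thread sketch rather than my actual pthread code; the template and ROI types are assumed to already match):

    #include <opencv2/opencv.hpp>
    #include <functional>
    #include <thread>
    #include <vector>
    using namespace cv;
    using namespace std;

    // Run matchTemplate for one template and record the best correlation score.
    static void matchOne(const Mat& roi, const Mat& templ, double* bestScore)
    {
        Mat result;
        matchTemplate(roi, templ, result, CV_TM_CCORR_NORMED);
        minMaxLoc(result, 0, bestScore);          // only the maximum is needed here
    }

    // Match one ROI against the three sign templates in parallel.
    int bestTemplateIndex(const Mat& roi, const vector<Mat>& templates)
    {
        vector<double> scores(templates.size(), 0.0);
        vector<thread> workers;
        for (size_t i = 0; i < templates.size(); i++)
            workers.push_back(thread(matchOne, cref(roi), cref(templates[i]), &scores[i]));
        for (size_t i = 0; i < workers.size(); i++)
            workers[i].join();

        int best = 0;
        for (size_t i = 1; i < scores.size(); i++)
            if (scores[i] > scores[best]) best = (int)i;
        return best;                              // caller can still threshold scores[best]
    }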

2015-10-09 05:48:22 -0600 commented question vector Mat

I want to send the pointer to the thread function match1: pthread_t tid0, tid1, tid2; pthread_create(&tid0, NULL, match1, images); hence, to pass the images, I push them into obj->images. The purpose is: I am trying to match templates for 3 sign boards, and I want to do this using threads. In each thread function I want to call OpenCV's matchTemplate to speed up processing.

2015-10-09 01:35:42 -0600 asked a question vector Mat

Is it possible to insert images of different formats into a vector<Mat>?

I created a struct as:

struct MatImage
{
vector<Mat> images;
};
Inside main I declare MatImage *obj; and try to insert 3 images into obj as:
obj->images.push_back(res_32f1);
obj->images.push_back(roi_cropped);
obj->images.push_back(resizedTemplate1);

where res_32f1 is CV_32FC3, and roi_cropped and resizedTemplate1 are CV_8UC3. But on running it I am getting a segmentation fault, as only the first image gets inserted.
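For comparison, a minimal standalone test of the same struct looks like this (a sketch; note that here the pointer is given a real object to point at before anything is pushed into the vector, which is one difference from my code above):

    #include <opencv2/opencv.hpp>
    #include <vector>
    using namespace cv;
    using namespace std;

    struct MatImage
    {
        vector<Mat> images;   // Mats of different types can coexist in one vector
    };

    int main()
    {
        Mat a(10, 10, CV_32FC3, Scalar::all(0));
        Mat b(20, 20, CV_8UC3,  Scalar::all(0));
        Mat c(30, 30, CV_8UC3,  Scalar::all(0));

        MatImage holder;                  // a real object, not just an uninitialized pointer
        MatImage* obj = &holder;
        obj->images.push_back(a);
        obj->images.push_back(b);
        obj->images.push_back(c);

        cout << "stored " << obj->images.size() << " images" << endl;
        return 0;
    }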

2015-10-08 03:24:01 -0600 commented question raspi camera and pyrMeanShiftFiltering

Is there any way we can use pyrMeanShiftFiltering on the Raspberry Pi with Ubuntu + Qt 5.2 + OpenCV 2.4.10?

2015-10-07 06:11:27 -0600 answered a question How do I approach training dataset for HoG, using images which are larger than 64 x 128 pixels ?

Resize the image to 100x100 and use the following parameters:

    HOGDescriptor d( Size(32,64), Size(32,32), Size(16,16), Size(16,16), 9 );
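For completeness, computing the descriptor with those parameters might look like this (a short sketch; the image name and winStride are placeholders):

    #include <opencv2/opencv.hpp>
    using namespace cv;
    using namespace std;

    int main()
    {
        Mat img = imread("sample.jpg", 0);          // load as grayscale
        resize(img, img, Size(100, 100));

        HOGDescriptor d(Size(32, 64), Size(32, 32), Size(16, 16), Size(16, 16), 9);
        vector<float> descriptors;
        d.compute(img, descriptors, Size(8, 8));    // winStride of 8x8 within the 100x100 image
        cout << "descriptor length: " << descriptors.size() << endl;
        return 0;
    }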

2015-10-07 06:08:52 -0600 asked a question HOG training using ANN

How do I train and predict from multiple XML files?

I created 2 XML files (positive and negative) for a sign board (the 100-speed sign). The positive file contains the HOG descriptor values of that sign (100 speed) and the negative file the HOG descriptor values of the other two signs (80 speed and stop). Now I want to train an ANN with these 2 files to detect the 100-speed sign board. How should I train for all 3 signs and predict the sign for any given sign?
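For illustration, the kind of training call I am looking at is OpenCV 2.4's CvANN_MLP with one-hot outputs for the 3 signs (a rough sketch, assuming the HOG descriptors have already been loaded from the XML files into a single samples matrix; layer sizes and learning parameters are placeholders):

    #include <opencv2/opencv.hpp>
    using namespace cv;

    // samples:   N x D  CV_32FC1, one HOG descriptor per row (all 3 classes together)
    // responses: N x 3  CV_32FC1, one-hot row per sample (100 / 80 / stop)
    void trainSignAnn(const Mat& samples, const Mat& responses)
    {
        Mat layers = (Mat_<int>(1, 3) << samples.cols, 64, 3);   // input, hidden, output

        CvANN_MLP ann;
        ann.create(layers, CvANN_MLP::SIGMOID_SYM, 1.0, 1.0);

        CvANN_MLP_TrainParams params;
        params.train_method    = CvANN_MLP_TrainParams::BACKPROP;
        params.bp_dw_scale     = 0.05;
        params.bp_moment_scale = 0.05;
        params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 1000, 0.0001);

        ann.train(samples, responses, Mat(), Mat(), params);
        ann.save("sign_ann.xml");

        // Prediction: the output column with the largest value is the predicted sign.
        // Mat out; ann.predict(oneDescriptorRow, out);
    }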

2015-10-06 07:14:43 -0600 received badge  Student (source)
2015-10-06 06:54:37 -0600 asked a question raspi camera and pyrMeanShiftFiltering

I am trying to perform pyrMeanShiftFiltering on a frame captured with the camera. It works fine in VS2010 for a colour image, but when I apply the same operation to a frame captured with the Raspberry Pi camera it gets stuck, with no error reported. Is it possible to run OpenCV's pyrMeanShiftFiltering(frame, frame, 30, 30, 3) on the Raspberry Pi?