RBishnoi's profile - activity

2013-04-10 06:25:16 -0600 received badge  Student (source)
2013-04-09 22:52:40 -0600 asked a question How can I generate Java bindings for the non-free module?

How can I generate Java bindings for the non-free module? I want to use SURFFeatureDetector so that I can run SURF with different parameters. Also, is there any way to set SURF parameters while creating the detector by algorithm name?
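For the second part of the question, one possible workaround is a sketch under these assumptions: in the OpenCV 2.4 Java bindings, `FeatureDetector.create(FeatureDetector.SURF)` returns a detector whose parameters can be loaded from a YAML file via `detector.read(path)`. The parameter names below are assumptions based on the C++ SURF implementation and may differ between versions:

```yaml
%YAML:1.0
hessianThreshold: 400.
nOctaves: 3
nOctaveLayers: 4
extended: 0
upright: 0
```

Saved to a (hypothetical) file such as `surf_params.yaml`, this could then be applied with `FeatureDetector detector = FeatureDetector.create(FeatureDetector.SURF); detector.read("surf_params.yaml");` — worth verifying against the exact names your build writes out via `detector.write(...)`.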

2013-04-03 03:31:12 -0600 received badge  Supporter (source)
2013-04-03 03:30:03 -0600 received badge  Scholar (source)
2013-04-03 03:29:41 -0600 commented answer createsamples.exe crashing

Thanks Steven!! The image in which I have to detect the object is 4960x3506, so I gave those values. When I gave small values, the generated sample images had an unreasonable object-to-scene ratio that will never occur in practice, which may be why the createsamples utility does not give accurate results. One more thing: should the size of the object image file (which is 4644x342) also be reduced? As you suggested, I will train with a set of already-marked images. I am new to OpenCV, so I would like to know how many marked images I should use for proper object detection. The scene images are expected to be noisy... will this method still be able to detect the object correctly? Big thanks!!

2013-04-02 22:20:23 -0600 asked a question createsamples.exe crashing

I am trying to create samples with createsamples.exe. I am able to generate samples for small values of num (num < 20), but it crashes for higher values (num > 20). I am running the utility with these arguments:

-vec "D:\train.vec" -img "D:\object.png" -bg "D:\files.txt" -maxxangle 0 -maxyangle 0 -maxzangle 0 -maxidev 100 -bgcolor 0 -bgthresh 0 -w 4960 -h 3506 -num 50

object.png is 4644x342 pixels.

I am getting this exception:

Unhandled exception at 0x000000013FCBB284 in opencv_createsamples.exe: 0xC0000005: Access violation reading location 0x0000000002AB4000.

While debugging, the exception occurs at line 340 or 345 of cvsamples.cpp, inside the function void cvWarpPerspective( CvArr* src, CvArr* dst, double quad[4][2] ). The value of src_step at the time of the exception is 4644.

    if( isrc_x >= 0 && isrc_x <= src_size.width &&   // line no. 337
        isrc_y >= -1 && isrc_y < src_size.height )   // line no. 338
    {
        i01 = s[src_step];                           // line no. 340
    }

    if( isrc_x >= -1 && isrc_x < src_size.width &&   // line no. 342
        isrc_y >= -1 && isrc_y < src_size.height )   // line no. 343
    {
        i11 = s[src_step+1];                         // line no. 345
    }
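As an observation (an assumption about the crash, not a confirmed diagnosis): the quoted check on line 337 uses `<=` for the x upper bound, so an index equal to `src_size.width` passes the test even though valid column indices run from 0 to width-1. A standalone sketch of the difference, in plain Java with hypothetical helper names:

```java
// Hedged illustration only: compares the bounds check as quoted from
// cvsamples.cpp line 337 against a conventional exclusive upper bound.
public class BoundsDemo {
    // Mirrors the quoted condition: x may equal width.
    static boolean passesQuotedCheck(int x, int width) {
        return x >= 0 && x <= width;
    }

    // Conventional check: valid column indices are 0 .. width-1.
    static boolean passesStrictCheck(int x, int width) {
        return x >= 0 && x < width;
    }

    public static void main(String[] args) {
        int width = 4644; // the object image width from the post
        System.out.println(passesQuotedCheck(width, width)); // prints true
        System.out.println(passesStrictCheck(width, width)); // prints false
    }
}
```

If the quoted `<=` is really in the shipped source, an out-of-range read one row past the buffer would match the access-violation symptom, but that would need to be confirmed against the actual cvsamples.cpp.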

It also gives the same exception for non-zero values of max?angle. Is there an issue with the way I am using this utility, or is there an issue in the code? Using OpenCV 2.4.9 (source code from the 30th March build, with TBB).

Thanks for any help!!
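A sketch of a possibly corrected invocation, under the assumption (not verified against this setup) that -w and -h specify the width and height of the *generated training samples*, not of the scene images, and should therefore be small. The 48x4 below roughly preserves the 4644x342 aspect ratio of object.png and is an illustrative guess, not a recommendation:

```
opencv_createsamples -vec "D:\train.vec" -img "D:\object.png" -bg "D:\files.txt" ^
  -maxxangle 0 -maxyangle 0 -maxzangle 0 -maxidev 100 ^
  -bgcolor 0 -bgthresh 0 -w 48 -h 4 -num 50
```

With -w 4960 -h 3506 each warped sample requires very large buffers, which is consistent with the crash appearing only as num grows, though this remains an assumption.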

2013-04-01 22:41:22 -0600 commented answer Which is the best way to detect object in this case?

Thanks for your suggestions!! The company logo will always be in the title block. I am trying the cascade classifier; I want to know whether cascade classifiers are invariant to scaling, rotation, and brightness. You are right that the title block will be known in advance, but the drawing can have any orientation. The title block can have slightly different brightness, but it will have the same structure. I will also try the hierarchical template matching you suggested. I didn't understand the second approach you mentioned... which two long lines are you referring to? And yes, you are right, we are doing OCR to retrieve the values!! Thanks again!!

2013-04-01 00:50:38 -0600 received badge  Editor (source)
2013-04-01 00:45:38 -0600 asked a question Which is the best way to detect object in this case?

I am new to OpenCV and will need some help. I want to detect the title block in technical drawings using OpenCV. There is a sample title block, and a set of drawing files in which I have to detect the location of the title block. The drawings can be rotated by (90n +/- 5) degrees, and they contain title blocks of different dimensions. I have tried SURF, FREAK, and template matching, but none of these gives perfect results. I would also like to know in which cases an object should be detected via feature detection such as SURF, and in which cases via LBP or Haar-like cascade training. Sample drawing file, sample title block: title.png

The "approved by", "date", and "title" fields are filled with different values in different drawings, but the overall structure and some of the text are the same in every title block.

Any suggestions would be of great help!! Thanks

2013-03-30 03:19:34 -0600 answered a question The homography tutorial in java

I have been trying to write this in Java but couldn't get correct results. Even after using the above code and the changes from the comments, I am not getting correct results (see oDetection1.png). Please tell me what mistake I am making.

Here is the code, with the above changes applied:

import java.util.LinkedList;
import java.util.List;

import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.*;
import org.opencv.highgui.Highgui;

public class FindO {
    public void run() {

    System.out.println("\nRunning FindObject");
    System.loadLibrary("opencv_java244");


    String object_filename = "D:\\box.png";
    String scene_filename = "D:\\box_in_scene.png";

    Mat img_object = Highgui.imread(object_filename, Highgui.CV_LOAD_IMAGE_GRAYSCALE); // 0 = CV_LOAD_IMAGE_GRAYSCALE
    Mat img_scene = Highgui.imread(scene_filename, Highgui.CV_LOAD_IMAGE_GRAYSCALE);

    FeatureDetector detector = FeatureDetector.create(FeatureDetector.SURF); // 4 = SURF

    MatOfKeyPoint keypoints_object = new MatOfKeyPoint();
    MatOfKeyPoint keypoints_scene = new MatOfKeyPoint();

    detector.detect(img_object, keypoints_object);
    detector.detect(img_scene, keypoints_scene);

    DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.SURF); // 2 = SURF

    Mat descriptor_object = new Mat();
    Mat descriptor_scene = new Mat();

    extractor.compute(img_object, keypoints_object, descriptor_object);
    extractor.compute(img_scene, keypoints_scene, descriptor_scene);

    DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED); // 1 = FLANNBASED
    MatOfDMatch matches = new MatOfDMatch();

    matcher.match(descriptor_object, descriptor_scene, matches);
    List<DMatch> matchesList = matches.toList();

    double max_dist = 0.0;
    double min_dist = 100.0;

    for (int i = 0; i < matchesList.size(); i++) {
        double dist = matchesList.get(i).distance;
        if (dist < min_dist)
            min_dist = dist;
        if (dist > max_dist)
            max_dist = dist;
    }

    System.out.println("-- Max dist : " + max_dist);
    System.out.println("-- Min dist : " + min_dist);

    LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
    MatOfDMatch gm = new MatOfDMatch();

    for (int i = 0; i < matchesList.size(); i++) {
        if (matchesList.get(i).distance < (3 * min_dist)) {
            good_matches.addLast(matchesList.get(i));
        }
    }


    gm.fromList(good_matches);

    Mat img_matches = new Mat();
    Features2d.drawMatches(img_object, keypoints_object, img_scene,
            keypoints_scene, gm, img_matches, new Scalar(255, 0, 0),
            new Scalar(0, 0, 255), new MatOfByte(), 2);

    String filename = "D:\\oDetection.png";

    System.out.println(String.format("Writing %s", filename));
    Highgui.imwrite(filename, img_matches);

    LinkedList<Point> objList = new LinkedList<Point>();
    LinkedList<Point> sceneList = new LinkedList<Point>();

    List<KeyPoint> keypoints_objectList = keypoints_object.toList();
    List<KeyPoint> keypoints_sceneList = keypoints_scene.toList();

    for (int i = 0; i < good_matches.size(); i++) {
        objList.addLast(keypoints_objectList.get(good_matches.get(i).queryIdx).pt);
        sceneList
                .addLast(keypoints_sceneList.get(good_matches.get(i).trainIdx).pt);
    }

    MatOfPoint2f obj = new MatOfPoint2f();
    obj.fromList(objList);

    MatOfPoint2f scene = new MatOfPoint2f();
    scene.fromList(sceneList);

    Mat H = Calib3d.findHomography(obj, scene);

    Mat obj_corners = new Mat(4, 1, CvType.CV_32FC2);
    Mat scene_corners = new Mat(4, 1, CvType.CV_32FC2);

    obj_corners.put(0, 0, new double[] {0,0});
    obj_corners.put(1, 0, new double[] {img_object.cols(),0});
    obj_corners.put(2, 0, new double[] {img_object.cols(),img_object.rows()});
    obj_corners.put(3, 0, new double[] {0,img_object.rows()});

    Core.perspectiveTransform(obj_corners,scene_corners, H);

    Mat img = Highgui.imread(scene_filename, Highgui.CV_LOAD_IMAGE_COLOR);

    Core.line(img, new Point(scene_corners.get(0,0)), new Point(scene_corners.get(1,0)), new Scalar(0, 255, 0),4);
    Core.line(img, new Point(scene_corners.get(1,0)), new Point(scene_corners.get(2,0)), new Scalar(0, 255, 0),4);
    Core.line(img, new Point(scene_corners.get(2,0)), new Point(scene_corners.get(3,0)), new Scalar(0, 255, 0),4);
    Core.line(img, new Point(scene_corners.get(3,0)), new Point(scene_corners.get(0,0)), new Scalar(0, 255, 0),4);

    filename = "D:\\oDetection1.png ...
(more)
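The match-filtering step in the listing above (keeping matches whose descriptor distance is below 3 * min_dist, as in the C++ tutorial this was ported from) can be illustrated without any OpenCV dependency. This is a standalone sketch of that heuristic only; `FilterDemo` and `goodMatchCount` are hypothetical names:

```java
// Standalone sketch of the "good matches" filter: keep every match whose
// descriptor distance is less than 3 times the minimum observed distance.
public class FilterDemo {
    public static int goodMatchCount(double[] distances) {
        // First pass: find the minimum distance.
        double min = Double.MAX_VALUE;
        for (double d : distances) {
            min = Math.min(min, d);
        }
        // Second pass: count matches under the 3 * min threshold.
        int count = 0;
        for (double d : distances) {
            if (d < 3 * min) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        double[] dists = {10.0, 25.0, 90.0}; // min = 10, threshold = 30
        System.out.println(goodMatchCount(dists)); // prints 2
    }
}
```

The 3 * min_dist factor is a heuristic from the original tutorial; a ratio test against the second-best match (Lowe's ratio test) is a common alternative when results are poor.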