patri's profile - activity

2016-07-02 14:41:47 -0600 asked a question Get size of ArrayList<DMatch> in Android

I'm working in Android and I need to use the size of goodMatches for further processing:

ArrayList<DMatch> goodMatches=new ArrayList<DMatch>();

In C++ I've done it like this:

std::vector<DMatch> good_matches;
// ... (matching code that fills good_matches) ...
int i = (int)good_matches.size();

But I don't know the proper way to do this in Android. Please help me find a solution.
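What I've tried so far (not sure it's the proper way) is simply ArrayList's size() method, which seems to play the same role as std::vector::size(); DMatch here is the org.opencv.features2d.DMatch class from the 2.4.x Java bindings:

ArrayList<DMatch> goodMatches = new ArrayList<DMatch>();
// ... fill goodMatches from the matcher output ...
int i = goodMatches.size();   // same role as (int)good_matches.size() in the C++ code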

2016-07-02 12:05:18 -0600 commented question ORB save video frame if object is detected

I'm thinking of adding something like: if (goodMatches.size() > some value) then save the image... but what value should I test against?

2016-07-02 11:42:43 -0600 commented question ORB save video frame if object is detected

Version 2.4.11. The problem is that I can save the frames, but I'm saving all of them, even when the object is not there. I want to store in the bitmap only the video frames that contain the object.

2016-07-02 11:34:47 -0600 received badge  Editor (source)
2016-07-02 11:28:45 -0600 asked a question ORB save video frame if object is detected

I'm working on an Android app that uses the ORB algorithm to find an object in the video stream received from the camera. What I'm trying to do is save the video frame where the object appears. I don't know what I'm doing wrong, but I save all the video frames. Here is my code:

//feature description and matching part
    int maximumNumberOfMatches=10;
    Mat greyInputImage=new Mat();
    Mat frameToMatch=cameraFrameRgba;

    Imgproc.cvtColor(mInputImage, greyInputImage, Imgproc.COLOR_RGB2GRAY);
    Imgproc.cvtColor(frameToMatch, cameraFrameGray, Imgproc.COLOR_RGB2GRAY);
    MatOfKeyPoint keyPoints=new MatOfKeyPoint();
    MatOfKeyPoint keyPointsToMatch=new MatOfKeyPoint();

    FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
    detector.detect(greyInputImage, keyPoints);
    Features2d.drawKeypoints(greyInputImage, keyPoints, greyInputImage);
    // displayImage(greyImage);

    detector.detect(greyInputImage, keyPoints);
    detector.detect(cameraFrameGray, keyPointsToMatch);

    DescriptorExtractor dExtractor = DescriptorExtractor.create(DescriptorExtractor.ORB);

    Mat descriptors=new Mat();
    Mat descriptorsToMatch=new Mat();

    dExtractor.compute(greyInputImage, keyPoints, descriptors);
    dExtractor.compute(cameraFrameGray, keyPointsToMatch, descriptorsToMatch);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
MatOfDMatch matches=new MatOfDMatch();
matcher.match(descriptorsToMatch,descriptors,matches);
ArrayList<DMatch> goodMatches=new ArrayList<DMatch>();
List<DMatch> allMatches=matches.toList();

double minDist = 100;
for( int i = 0; i < descriptorsToMatch.rows(); i++ )
{
    double dist = allMatches.get(i).distance;
    if( dist < minDist ) minDist = dist;
}
for( int i = 0; i < descriptorsToMatch.rows() && goodMatches.size()
        <maximumNumberOfMatches; i++ )
{
    if(allMatches.get(i).distance<= 2*minDist)
    {
        goodMatches.add(allMatches.get(i));
    }
}
MatOfDMatch goodEnough=new MatOfDMatch();
goodEnough.fromList(goodMatches);
Mat finalImg=new Mat();
Features2d.drawMatches(cameraFrameGray, keyPointsToMatch, mInputImage, keyPoints, goodEnough, finalImg, Scalar.all(-1),Scalar.all(-1),new MatOfByte(), Features2d.DRAW_RICH_KEYPOINTS + Features2d.NOT_DRAW_SINGLE_POINTS);

int w = 1920, h = 1080;

Bitmap.Config conf = Bitmap.Config.ARGB_8888; // see other conf types
Bitmap imageToStore = Bitmap.createBitmap(w, h, conf); // this creates a MUTABLE bitmap

Utils.matToBitmap(cameraFrameGray,imageToStore);

I've included the matching part because I assume that's where I'm making the mistake. The Bitmap imageToStore is the image where I want to save the video frame.

Please help me find the problem.
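What I'm thinking of trying is to only convert and store the frame when enough good matches survive the filtering, so frames without the object get skipped. This is only a rough sketch of that idea, not code from my app: MIN_GOOD_MATCHES is a guessed threshold that still needs tuning, the output file name and location are made up, it assumes the code runs inside the Activity (for getExternalFilesDir), and it needs java.io.File, java.io.FileOutputStream and java.io.IOException imports.

// Only keep the frame when the match count suggests the object is present.
// MIN_GOOD_MATCHES is an assumed value that has to be tuned experimentally.
final int MIN_GOOD_MATCHES = 7;

if (goodMatches.size() >= MIN_GOOD_MATCHES) {
    // size the bitmap to the Mat, since Utils.matToBitmap expects matching dimensions
    Bitmap imageToStore = Bitmap.createBitmap(cameraFrameGray.cols(),
            cameraFrameGray.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(cameraFrameGray, imageToStore);

    // hypothetical output location; any writable file or stream would do
    File outFile = new File(getExternalFilesDir(null), "detected_frame.png");
    FileOutputStream out = null;
    try {
        out = new FileOutputStream(outFile);
        imageToStore.compress(Bitmap.CompressFormat.PNG, 100, out);
    } catch (IOException e) {
        Log.e("SaveFrame", "Could not save the frame", e);
    } finally {
        if (out != null) try { out.close(); } catch (IOException ignored) {}
    }
}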

2016-06-30 09:23:01 -0600 asked a question Android ORB not showing both image and video

Hey guys!

I'm working on an Android application that should get an input image from my database and compare it to the video stream from my phone's camera. I've tried opening just the image and it shows. I've tried opening just the video and it also shows. I don't know what I'm doing wrong, but when I open both the image and the video, only the video shows on my screen. Please help me find the problem.

MainActivity

package com.example.patri.orbimagini;


import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.app.Activity;
import android.graphics.Bitmap;

import android.util.Log;
import android.widget.ImageView;

import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.android.Utils;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.features2d.DMatch;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;
import org.opencv.features2d.Features2d;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

import android.view.SurfaceView;
import android.view.WindowManager;

import org.opencv.android.JavaCameraView;

public class MainActivity extends Activity implements CameraBridgeViewBase.CvCameraViewListener2{

private Bitmap inputImage,imgMatch;
private Mat cadru;

private static final String TAG = "HelloVisionWorld";
//A class used to implement the interaction between OpenCV and the
//device camera.
private CameraBridgeViewBase mOpenCvCameraView;
//This is the callback object used when we initialize the OpenCV
//library asynchronously
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
            @Override
//This is the callback method called once the OpenCV manager is connected
            public void onManagerConnected(int status) {
                switch (status) {
//Once the OpenCV manager is successfully connected, we can enable the camera interaction with the defined OpenCV camera view
                    case LoaderCallbackInterface.SUCCESS:
                    {
                        Log.i(TAG, "OpenCV successfully loaded");
                        mOpenCvCameraView.enableView();
                    } break;
                    default:
                    {
                        super.onManagerConnected(status);
                    } break;
                }
            }
        };

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Log.i(TAG, "called onCreate");


    getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);

    setContentView(R.layout.activity_main);

    inputImage= BitmapFactory.decodeResource(getResources(),R.drawable.test1);
    //imgMatch=BitmapFactory.decodeResource(getResources(),R.drawable.test2);

    mOpenCvCameraView=(JavaCameraView) findViewById(R.id.videoView);

    //Set the view as visible
    mOpenCvCameraView.setVisibility(SurfaceView.VISIBLE);
    //Register your activity as the callback object to handle camera frames
    mOpenCvCameraView.setCvCameraViewListener(this);
}

@Override
public void onResume() {
    super.onResume();
    OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_11, this, mLoaderCallback);
    // you may be tempted to do something here, but it's *async* and may take some time,
    // so any OpenCV call here will lead to unresolved native errors.
}

public void helloworld() {
    // make a mat and draw something
    Mat ...
(more)
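One idea I want to try (just a sketch, I'm not sure it's the right approach): since the JavaCameraView fills the layout, the ImageView with the input image probably gets covered, so I could draw the input image directly into the Mat returned from onCameraFrame and have both in one view. Here mInputImageMat is a name I'm inventing for an RGBA Mat created once from inputImage with Utils.bitmapToMat, and org.opencv.core.Size would also need to be imported.

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat frame = inputFrame.rgba();

    // shrink the reference image to a quarter of the frame size
    Mat thumb = new Mat();
    Size thumbSize = new Size(frame.cols() / 4, frame.rows() / 4);
    Imgproc.resize(mInputImageMat, thumb, thumbSize);

    // copy the thumbnail into the top-left corner of the camera frame,
    // so the reference image and the live video share one view
    thumb.copyTo(frame.submat(0, (int) thumbSize.height, 0, (int) thumbSize.width));
    thumb.release();

    return frame;
}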
2016-06-22 06:40:04 -0600 asked a question ROI/ Bounding Box selection of Mat images in OpenCV

I'm trying to write a program that opens my laptop's camera, gets the video stream, and writes it to a video file on my computer. You should be able to pause and unpause the video; the pause option is there because I want to select an object within the video, and the location of the object might differ. The issue I'm facing is that every time I run the program in Visual C++ (with OpenCV 3.1) and press the r key (for pause), the video only appears to be paused: if the object moves between the moment I press pause and the moment I've selected the ROI, the new window that contains the ROI shows whatever is in that area at that later moment, even though the object has moved. So the video is not actually paused; only the writing to the video file stops. Another problem is that after selecting the ROI, I can't pause/unpause the writing to the video file (it works just fine when I'm not selecting the ROI). I really need a still image because I have to use it for further processing.

Please help me find the problem.

#include <iostream>     // std::cout, std::endl
#include <iomanip>      // std::setfill, std::setw
#include <opencv/cv.h>
#include <opencv2/highgui/highgui.hpp>
#include <time.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <sstream>      // std::stringstream (used in intToString)

using namespace std;
using namespace cv;

Point point1, point2; /* corner points of the bounding box */
int drag = 0;
Rect rect; /* bounding box */
Mat img, roiImg; /* roiImg - the part of the image in the bounding box */
int select_flag = 0;

void mouseHandler(int event, int x, int y, int flags, void* param)
{
    if (event == CV_EVENT_LBUTTONDOWN && !drag)
    {
        /* left button clicked. ROI selection begins */
        point1 = Point(x, y);
        drag = 1;
    }

    if (event == CV_EVENT_MOUSEMOVE && drag)
    {
        /* mouse dragged. ROI being selected */
        Mat img1 = img.clone();
        point2 = Point(x, y);
        rectangle(img1, point1, point2, CV_RGB(255, 0, 0), 3, 8, 0);
        imshow("PausedVideo", img1);
    }

    if (event == CV_EVENT_LBUTTONUP && drag)
    {
        point2 = Point(x, y);
        rect = Rect(point1.x, point1.y, x - point1.x, y - point1.y);
        drag = 0;
        roiImg = img(rect);
    }

    if (event == CV_EVENT_LBUTTONUP)
    {
        /* ROI selected */
        select_flag = 1;
        drag = 0;
    }
}

string intToString(int number) {

    std::stringstream ss;
    ss << number;
    return ss.str();
}

int main(int argc, char* argv[])
{
    bool recording = false;
    bool startNewRecording = false;
    int inc = 0;
    bool firstRun = true;

    VideoCapture cap(0); // open the video camera no. 0
    VideoWriter oVideoWriter; //create videoWriter object, not initialized yet

    if (!cap.isOpened())  // if not success, exit program
    {
        cout << "ERROR: Cannot open the video file" << endl;
        return -1;
    }

    namedWindow("MyVideo", CV_WINDOW_AUTOSIZE); //create a window called "MyVideo"

    double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH); //get the width of frames of the video
    double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT); //get the height of frames of the video

    cout << "Frame Size = " << dWidth << "x" << dHeight << endl;

    //set framesize for use with videoWriter
    Size frameSize(static_cast<int>(dWidth), static_cast<int ...
(more)
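What I suspect is happening is that the mouse handler keeps reading img while the capture loop keeps running, so the ROI never comes from a truly still image. Below is only a rough sketch of the loop I want to try, where frames are grabbed only while not paused and the handler gets a deep copy of the last frame; paused is a new bool flag I'm introducing, and it assumes oVideoWriter has already been opened elsewhere.

// sketch only: grab frames only while not paused; on pause, hand the mouse
// handler a deep copy (clone) so later frames cannot change the ROI source
bool paused = false;
Mat frame, frozen;

for (;;)
{
    if (!paused)
    {
        cap >> frame;                             // read a new frame from the camera
        if (frame.empty()) break;
        if (recording) oVideoWriter.write(frame); // assumes the writer was opened
        imshow("MyVideo", frame);
    }

    int key = waitKey(10);
    if (key == 27) break;                         // Esc quits
    if (key == 'r')                               // 'r' toggles pause
    {
        paused = !paused;
        if (paused)
        {
            frozen = frame.clone();               // deep copy, not a shared header
            img = frozen;                         // mouseHandler now selects on the frozen copy
            imshow("PausedVideo", frozen);
            setMouseCallback("PausedVideo", mouseHandler, NULL);
        }
    }
}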
2016-05-26 00:38:29 -0600 received badge  Enthusiast
2016-05-18 12:32:03 -0600 commented question ORB/BruteForce-drawing matches when there are none

So I've changed the value of min_dist to 15 (not 100) and it seems to work very well for patterns... but this was done by trial and error. Now I have one question: when I'm trying to recognize my face, the keypoints found in the video aren't uniformly distributed; they are mostly where there are strong changes in intensity... Do you have any idea how I can make their distribution more uniform?

2016-05-17 13:53:40 -0600 asked a question ORB/BruteForce-drawing matches when there are none

I'm trying to write a program that uses the ORB algorithm to detect and compute the keypoints of an image and a video, and matches the descriptor vectors using the BruteForce matcher. The issue I'm facing is that every time I run the program in Visual C++, when the object that I'm trying to detect is not visible, the algorithm still draws matching lines between the detected keypoints (it matches all the keypoints). When the object that I'm trying to detect appears in the image I don't face this issue; in fact, I hardly get any mismatches.

This is a brief sequence of the main test:

• convert input image to grayscale

• convert input videos to grayscale

• detect keypoints and extract descriptors from input grayscale image

• detect keypoints and extract descriptors from input grayscale videos

• match descriptors (see below)

BFMatcher matcher(NORM_HAMMING);
vector<DMatch> matches;
matcher.match(descriptors_1, descriptors_2, matches);

double max_dist = 0; double min_dist = 100;

////compute the max and min distances between the keypoints
for (int i = 0; i < descriptors_1.rows; i++)
{
    double dist = matches[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
}

printf("-- Max dist : %f \n", max_dist);
printf("-- Min dist : %f \n", min_dist);


std::vector< DMatch > good_matches;

for (int i = 0; i < descriptors_1.rows; i++)
{
    if (matches[i].distance <= max(2 * min_dist, 0.02))
    {
        good_matches.push_back(matches[i]);
    }
}

////-- Draw only the "good" matches
Mat img_matches;
drawMatches(img1, keypoints_1, cadruProcesat, keypoints_2,
    good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
    vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

////-- Show the matches
imshow("good matches", img_matches);
int gm = 0;
for (int i = 0; i < (int)good_matches.size(); i++)
{
    printf("-- Good Match [%d] Keypoint 1: %d  -- Keypoint 2: %d  \n", i, good_matches[i].queryIdx, good_matches[i].trainIdx);
    gm += 1;
}

printf("%d",gm);
///////////////////////////////////////////////////////////////////
//imshow(windowName2, cadruProcesat);

switch (waitKey(10)) {

case 27:
    //the 'Esc' key was pressed (ASCII 27)
    return 0;
}

Please help me find the problem.
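One thing I plan to try (not sure it is the right fix): since ORB descriptors are binary, matches[i].distance is a Hamming distance, an integer roughly in 0..256, so the 0.02 floor in max(2 * min_dist, 0.02) practically never matters. The sketch below adds an absolute cap on the distance and only draws when enough good matches survive; both constants are guesses I would still have to tune, and std::min needs <algorithm>.

// sketch: absolute cap on the Hamming distance plus a minimum match count
const double MAX_HAMMING_DIST = 40.0;   // assumed cap, to be tuned
const size_t MIN_GOOD_MATCHES = 8;      // assumed minimum count, to be tuned

std::vector<DMatch> good_matches;
for (int i = 0; i < descriptors_1.rows; i++)
{
    if (matches[i].distance <= std::min(2 * min_dist, MAX_HAMMING_DIST))
    {
        good_matches.push_back(matches[i]);
    }
}

if (good_matches.size() >= MIN_GOOD_MATCHES)
{
    Mat img_matches;
    drawMatches(img1, keypoints_1, cadruProcesat, keypoints_2,
        good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
        vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    imshow("good matches", img_matches);
}
// otherwise skip drawing: the object is probably not in this frame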

2016-05-14 11:22:42 -0600 received badge  Student (source)
2016-05-13 07:16:03 -0600 asked a question AKAZE and ORB planar tracking

Hey! I'm new and I'm really trying to learn, but I can't run the code from here http://docs.opencv.org/3.0-beta/doc/t... . It seems that it's because of stats.h and utils.h. I've also tried to remove them, but I get an error here

if (matched1.size() >= 4) {
    homography = findHomography(Points(matched1), Points(matched2),
                                RANSAC, ransac_thresh, inlier_mask);
}

saying that

identifier Points is undefined

Could you please help me?
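From what I can tell, Points is just a small helper defined in the tutorial's utils.h that collects the pixel coordinates (the .pt field) of each KeyPoint so they can be passed to findHomography. This is only my guess at a minimal stand-in for it, in case I keep the extra headers removed:

#include <vector>
#include <opencv2/core.hpp>

// guess at the missing helper: extract the pixel coordinates of each KeyPoint
std::vector<cv::Point2f> Points(const std::vector<cv::KeyPoint>& keypoints)
{
    std::vector<cv::Point2f> res;
    for (size_t i = 0; i < keypoints.size(); i++)
    {
        res.push_back(keypoints[i].pt);
    }
    return res;
}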