
Vintez's profile - activity

2020-11-02 15:45:16 -0600 received badge  Famous Question (source)
2018-09-24 06:07:56 -0600 received badge  Notable Question (source)
2018-02-15 07:39:29 -0600 received badge  Popular Question (source)
2016-12-19 04:55:21 -0600 received badge  Nice Answer (source)
2016-12-14 08:07:23 -0600 commented question How to use my cpp project on Android ?

I don't know how to achieve what you want in the first question, but for the second question you have to build OpenCV natively for Android. Here I explained how to do that and how you can link it with the old Android.mk and Application.mk style.

2016-12-13 08:19:08 -0600 commented answer Calibration from Images results in 0 Successes

@Ice_T02 I think I know what you want to say. If I resize the image and do the calibration, but then want to use the camera matrix and everything else at a larger image scale, I have to multiply the values of the camera matrix by the scale factor and they should be 'ok', right?
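Roughly what I mean, as a sketch (the helper name is made up; it assumes the same scale factor in x and y):

#include <opencv2/core.hpp>

// Hypothetical helper: adapt an intrinsic matrix estimated at the calibration
// resolution to an image that is `scale` times larger.
cv::Mat scaleCameraMatrix(const cv::Mat& cameraMatrix, double scale)
{
    cv::Mat scaled = cameraMatrix.clone();
    scaled.at<double>(0, 0) *= scale; // fx
    scaled.at<double>(1, 1) *= scale; // fy
    scaled.at<double>(0, 2) *= scale; // cx
    scaled.at<double>(1, 2) *= scale; // cy
    // the distortion coefficients are defined in normalized image
    // coordinates, so they stay the same
    return scaled;
}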

2016-12-12 09:46:07 -0600 commented answer Calibration from Images results in 0 Successes

Well, it solves my problem: I now get 17 results with my image set. But just one question for clarification: couldn't this approach also affect the result of the calibration?

2016-12-12 09:02:38 -0600 commented question Calibration from Images results in 0 Successes

Yes, I already checked that the list is correct (the method which generates the list scans the files of a folder with dirent.h). Also, the cv::Mat image is not empty and has (as far as I can see) good values.

2016-12-12 08:17:10 -0600 commented question Calibration from Images results in 0 Successes

Oh, so it was a typo again! :X However, I still get 0 successes when the method finishes. :/

2016-12-12 07:34:46 -0600 commented question Calibration from Images results in 0 Successes

No, I don't clear it anywhere. It is filled once and never touched after that. The reference points to objectCorners. Also, the error occurs in the first iteration of the code.

2016-12-12 07:02:07 -0600 commented question Calibration from Images results in 0 Successes

Unfortunately I now get an error at this line: corners.push_back(cv::Point3f((2*j + i % 2) * squareSize, i*squareSize, 0)); The error is signal SIGKILL. The last task in the thread list is this: template<typename _Tp> inline Point3_<_Tp>::Point3_(const Point3_& pt) : x(pt.x), y(pt.y), z(pt.z) {}

2016-12-12 06:42:08 -0600 commented question Calibration from Images results in 0 Successes

Ok, I was not sure about that! And the unit of measurement? mm would be okay, am I right?

2016-12-12 06:17:51 -0600 commented question Calibration from Images results in 0 Successes

Ok, I will implement that method now, but one question left: what should I put there as squareSize? I printed the circle grid from the docs on a DIN A4 paper.

2016-12-12 04:13:58 -0600 commented question Calibration from Images results in 0 Successes

I also noticed this a couple of minutes ago (shame on me!). But when I change the boardSize I don't get a different result :/. I also changed the flag to cv::CALIB_CB_ASYMMETRIC_GRID, but with both (11,4) and (4,11) I still get 0 successes. Could it be that my images are not good?

2016-12-12 03:21:55 -0600 answered a question how i use opencv on android studio

If you want to use the OpenCV4Android SDK, follow these steps:

  1. Download the latest version of the OpenCV4Android SDK and decompress the archive.

  2. Import it into an Android Studio project (e.g. one of its samples) with File -> New -> Import Module. Then choose the sdk/java folder from the decompressed folder.

  3. Update the Gradle files so that compileSdkVersion, buildToolsVersion, minSdkVersion and targetSdkVersion of your app and the openCVLibrary module match.

  4. Add the module dependency to your app: go to the Project Structure, choose your app's module, go to "Dependencies" and add the dependency via the "+" button as a module dependency.

  5. Copy the libs folder from sdk/native of your decompressed folder into your app's main folder app/src/main. After that, rename libs to jniLibs.

When you have finished these steps successfully, everything should work.

2016-12-12 03:01:38 -0600 asked a question Calibration from Images results in 0 Successes

I am trying to calibrate the camera of a smartphone with a console application, because the Android example is very slow and it is hard to take proper images with it.

The console application is inspired by a book I found: OpenCV 2 Computer Vision Application Programming Cookbook.

Instead of a chessboard as used in the book, I use the circle grid from OpenCV, which can be found in the "data" folder.

Unfortunately, when I use the "addChessboardPoints" method I get a result of 0 successes and no points are added to my calibration. Here is the code:

int Calibrator::addChessboardPoints(const std::vector<std::string>& fileList, cv::Size& boardSize){

    double onePerc = fileList.size() / 100.0;

    std::vector<cv::Point2f> imageCorners;
    std::vector<cv::Point3f> objectCorners;

    calcBoardCornerPositions(boardSize, 13.40, objectCorners, ASYMCIRCLE);

    cv::Mat image;
    int successes = 0;

    for(size_t i = 0; i < fileList.size(); i++){

        double state = i + 1.0;
        double progress = state / onePerc;

        image = cv::imread(fileList[i], 0);

        bool found = cv::findCirclesGrid(image, boardSize, imageCorners, cv::CALIB_CB_ASYMMETRIC_GRID);

        if(!found){
            std::cout << "Progress: " << progress << "%" << std::endl;
            continue;
        }

        cv::cornerSubPix(image, imageCorners, cv::Size(5,5), cv::Size(-1,-1),
                         cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 30, 0.1));

        if(imageCorners.size() == boardSize.area()){
            addPoints(imageCorners, objectCorners);
            successes++;
        }
        std::cout << "Progress: " << progress << "%" << std::endl;
    }
    return successes;
}

The images I took with my camera can be looked up here. The boardSize used here is cv::Size(11,4).

Does anyone know what I did wrong?

UPDATE: corrected boardSize and the findCirclesGrid flag. The problem still occurs.

UPDATE: added calcBoardCornerPositions and its usage in addChessboardPoints.

void Calibrator::calcBoardCornerPositions(cv::Size boardSize, float squareSize, std::vector<cv::Point3f>& corners, PatternType flag){
    // boardSize is 11,4
    switch(flag){
        case CHESSBOARD:
        case CIRCLEGRID:
            for(int i = 0; i < boardSize.height; ++i)
                for(int j = 0; j < boardSize.width; ++j)
                    corners.push_back(cv::Point3f(j*squareSize, i*squareSize, 0));
            break;
        case ASYMCIRCLE:
            for(int i = 0; i < boardSize.height; ++i)
                // incrementing i instead of j here was most likely the typo behind the SIGKILL mentioned in the comments
                for(int j = 0; j < boardSize.width; ++j)
                    corners.push_back(cv::Point3f((2*j + i % 2) * squareSize, i*squareSize, 0));
            break;
        default:
            break;
    }
}
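For context, after collecting the points they are handed to cv::calibrateCamera roughly like this (the member names objectPoints and imagePoints are placeholders for whatever addPoints() stores into):

#include <vector>
#include <opencv2/calib3d.hpp>

// Rough sketch of the calibration step that follows the point collection.
double Calibrator::calibrate(const cv::Size& imageSize)
{
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;

    // needs several successful views, otherwise the result is unusable;
    // the return value is the RMS reprojection error
    return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                               cameraMatrix, distCoeffs, rvecs, tvecs);
}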
2016-12-09 06:51:31 -0600 asked a question Camera Calibration with Nexus 5x

I am trying to get the intrinsic parameters of the Nexus 5X camera. I used the camera calibration sample from the Android samples and modified the manifest so that the screen orientation matches the sensor orientation.

When I start the app everything seems to be fine, but I can't capture any images and I can't see the menu with the calibrate option. Does anyone know a solution for this?

I am also a little bit confused that there is no documentation or tutorial which guides the user on how to use this sample.

I also started the app on my Samsung Galaxy S5; there I also can't see a menu or anything.

2016-12-08 03:36:50 -0600 answered a question Image processing with openCv in android ?

"Would it help?"

You probably want to know if OpenCV could help, right?

OpenCV could help you there. It has text recognition with OCRTesseract. But I'm not quite sure if it is in the OpenCV4Android SDK, you have to check that! - Look at the update -

If it is not, you can also link native C++ OpenCV to your Android project and make JNI calls to use the OpenCV library on an Android device (plus, it would probably be faster than in plain Java). You can link native C++ OpenCV with the explanation given in this answer.

UPDATE: Please notice berak's comment on this answer:

(no java bindings for this so far, and even if you can use JNI, it would require rebuilding the opencv4android sdk locally with opencv_contrib repo)
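For reference, a minimal native C++ sketch of how OCRTesseract is used (this assumes OpenCV was built with the opencv_contrib text module and that Tesseract and its language data are installed; the file name is a placeholder):

#include <iostream>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/text.hpp>

int main()
{
    cv::Mat image = cv::imread("document.png");   // placeholder file name

    cv::Ptr<cv::text::OCRTesseract> ocr = cv::text::OCRTesseract::create();

    std::string output;
    ocr->run(image, output);                       // recognized text ends up in `output`
    std::cout << output << std::endl;
    return 0;
}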

2016-12-07 08:00:48 -0600 commented answer There is an optimal size for images used with SIFT in object recognition?

Ok, and do you just save the descriptor? What about the (3x3x4) 36-element variant? According to Lowe it is only about 10% worse than the original 128-element vector, so with it you could get somewhat lower memory usage. Also, what about a vocabulary tree or spill forest solution? (I don't have memory problems in my application, because I just use a single reference image -> BFM.)

2016-12-07 01:57:50 -0600 commented answer There is an optimal size for images used with SIFT in object recognition?

(Just to be clear) You are trying to build a KD-tree with that number of keypoints and that descriptor dimension? Or do you also save the images? Because you only have to build one KD-tree with the descriptors in it, nothing else. If you already do this, try to ask a new question, because I'm no expert in memory usage.
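By "build one KD-tree with the descriptors" I mean roughly the following sketch (using OpenCV's FLANN wrapper; the descriptors have to be CV_32F):

#include <opencv2/core.hpp>
#include <opencv2/flann.hpp>

// refDescriptors / queryDescriptors: one CV_32F row per keypoint
// (128 columns for the standard SIFT descriptor)
void matchWithKdTree(const cv::Mat& refDescriptors, const cv::Mat& queryDescriptors)
{
    // only the descriptor matrix goes into the tree, not the images or keypoints
    cv::flann::Index tree(refDescriptors, cv::flann::KDTreeIndexParams(4));

    cv::Mat indices, dists;
    tree.knnSearch(queryDescriptors, indices, dists, 2, cv::flann::SearchParams(32));
    // indices/dists now hold, per query descriptor, its 2 nearest reference descriptors
}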

2016-12-05 05:01:03 -0600 answered a question There is an optimal size for images used with SIFT in object recognition?

I can't give you an answer backed by a paper or anything similar (I don't even know if someone has tested something like this). But I can tell you the advantages and disadvantages of a small or a large image size.

You probably already know that SIFT is not the fastest feature descriptor in existence. If you want to accelerate the recognition, you can try smaller image sizes; the processing will be far faster then. For example, in an Android app I used a combined FAST detector and SIFT descriptor algorithm (so an already sped-up variant of SIFT). The initial size I used was 3264 x 2448 (a large resolution for a smartphone), and the algorithm for training an image ran in ~10 seconds (on a Nexus 5 device). After that I tested the 1920 x 1080 resolution on the same device, and the training of the image ran in ~4 seconds.

So if you want to process the recognition or training as fast as possible, you should use a small image size.

But if you take a small image size, you will probably get a smaller image pyramid, which also means you will have a smaller range of scales that your algorithm can cover. E.g. the pyramid I created with the 3264 x 2448 resolution was slower to build, but it also had more images in it, so my object could be detected over a bigger range of distances than with the image pyramid built from the 1920 x 1080 image.

So if you want to support a large set of distances (scales), it is better to use a large image size, which results in slower processing. If you want fast recognition and training of an image, you should pick a smaller size.

A good combination would be: use a rather large image to train your object for recognition, so you gain a good range over which the object can be detected, and for recognition use a smaller image (but not too small!) which you compare to your reference image; that way you get good performance while tracking your object.
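As a rough illustration of the combined FAST detector / SIFT descriptor setup mentioned above (SIFT lives in the opencv_contrib xfeatures2d module in OpenCV 3.x; the 0.5 scale factor is only an example):

#include <vector>
#include <opencv2/imgproc.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>

void detectAndDescribe(const cv::Mat& input, std::vector<cv::KeyPoint>& keypoints,
                       cv::Mat& descriptors)
{
    // downscale first: fewer pixels -> smaller pyramid -> faster processing,
    // but also a smaller range of scales at which the object can be recognized
    cv::Mat scaled, gray;
    cv::resize(input, scaled, cv::Size(), 0.5, 0.5);
    cv::cvtColor(scaled, gray, cv::COLOR_BGR2GRAY);

    cv::Ptr<cv::FastFeatureDetector> fast = cv::FastFeatureDetector::create();
    fast->detect(gray, keypoints);

    cv::Ptr<cv::xfeatures2d::SIFT> sift = cv::xfeatures2d::SIFT::create();
    sift->compute(gray, keypoints, descriptors);
}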

2016-12-05 04:02:11 -0600 answered a question What is the best format to use with SIFT?

Not really, because you always have to convert the image into a grayscale image for the SIFT algorithm. But besides that, a format which is a lossless conversion of the raw camera data would be good (e.g. JPEG is not a lossless conversion of the original raw data).

On the other hand, if you use a JPEG as the reference image and also do the recognition on JPEGs, the JPEG conversion is irrelevant.
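For example, loading the reference image directly as grayscale (a minimal sketch with OpenCV 3.x flag names):

#include <string>
#include <opencv2/imgcodecs.hpp>

// Load the image as a single-channel 8-bit grayscale image,
// which is the only depth SIFT accepts (see the check quoted below).
cv::Mat loadForSift(const std::string& path)
{
    return cv::imread(path, cv::IMREAD_GRAYSCALE);
}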

For some clarification, OpenCV's SIFT checks for a CV_8U type image and throws an error if the image is not of that type:

void SIFT_Impl::detectAndCompute(InputArray _image, InputArray _mask,
                                 std::vector<KeyPoint>& keypoints,
                                 OutputArray _descriptors,
                                 bool useProvidedKeypoints)
{
    int firstOctave = -1, actualNOctaves = 0, actualNLayers = 0;
    Mat image = _image.getMat(), mask = _mask.getMat();

    if( image.empty() || image.depth() != CV_8U )
        CV_Error( Error::StsBadArg, "image is empty or has incorrect depth (!=CV_8U)" );
    // ... (rest of the function omitted)
2016-12-05 03:58:56 -0600 commented question calculate the bottles in fridge

Hmm. I'm just asking because even for a human it is not that easy to locate every bottle inside there. But if the camera could be lifted a little bit more, you could see every bottle cap. Then you could try something to detect them (I think object recognition wouldn't work for counting). I also think separating the image into different brands would be useful (e.g. one image stripe for 7Up, one for Pepsi Zero, etc.) and then starting the counting.
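Just as a rough idea of what detecting the caps could look like (a sketch only; the Hough parameters are guesses and would need tuning on the real images):

#include <vector>
#include <opencv2/imgproc.hpp>

std::vector<cv::Vec3f> findCaps(const cv::Mat& fridgeImage)
{
    cv::Mat gray;
    cv::cvtColor(fridgeImage, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2);

    std::vector<cv::Vec3f> circles;              // (x, y, radius) per candidate cap
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                     1,        // accumulator resolution
                     20,       // minimum distance between circle centres
                     100, 30,  // Canny / accumulator thresholds (guesses)
                     10, 40);  // min / max radius in pixels (guesses)
    return circles;
}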

2016-12-05 03:52:51 -0600 commented answer How to remove bad lighting conditions or shadow effects in images using opencv for processing face images

The Y component of a YUV image holds the luminance information. Normalizing the luminance has the effect that you normalize the light or darkness in an image; the effect is shown in the answer by Aurelius. I can't tell you exactly how it differs from the regular equalization you are mentioning (as said, it is not actually my answer). If you want to know more, try commenting on the answer on Stack Overflow so Aurelius can answer it.
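The general idea, as a sketch (equalizing only the luminance channel; whether this is exactly what the linked answer does is best checked there):

#include <vector>
#include <opencv2/imgproc.hpp>

// Equalize only the luminance channel and leave the chroma untouched.
cv::Mat normalizeLuminance(const cv::Mat& bgr)
{
    cv::Mat ycrcb;
    cv::cvtColor(bgr, ycrcb, cv::COLOR_BGR2YCrCb);

    std::vector<cv::Mat> channels;
    cv::split(ycrcb, channels);
    cv::equalizeHist(channels[0], channels[0]);  // channel 0 holds the luminance
    cv::merge(channels, ycrcb);

    cv::Mat result;
    cv::cvtColor(ycrcb, result, cv::COLOR_YCrCb2BGR);
    return result;
}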

2016-12-05 02:40:58 -0600 answered a question SIFT implementation in openCV

Well, the first loop for( i = 0; i < d+2; i++ ) {...} is used to fill the histogram of the descriptor, where the loop runs over the descriptor width + 2 (e.g. 4 + 2 for the 128-element vector). It is initially filled with zeroes, so a histogram bin can never point to nothing instead of a value.

The second loop for( i = -radius, k = 0; i <= radius; i++ ) {...} is used to pick the pixel values around the given keypoint and to calculate their orientation and magnitude. The radius value determines how large the window of pixels that will be looked up is. E.g. if radius is 15, the algorithm's lookup window for the keypoint (100,100) would be a rectangle with upper-left and bottom-right coordinates (85, 85) and (115, 115).

If you still don't know what is meant, I strongly suggest you read a good tutorial about the theory of the algorithm.

2016-12-05 02:23:38 -0600 commented question calculate the bottles in fridge

Will the images all be like the one provided in your link? Isn't it possible for you to position the camera above the bottles?