
ThomasB's profile - activity

2017-11-11 15:37:20 -0600 received badge  Good Answer (source)
2014-07-22 08:36:28 -0600 commented question iPhone 4(S) vs iPad2 computer vision performance problems

Hey,

I'm having similar issues. We have code running on Android and iOS and it works fine everywhere except on the iPhone 4 and 4S. As far as I could track the issue down, the very slow part is the parallel_for loop used in various methods, such as the very common cvtColor method.

Do you have any findings on this issue already? Could you resolve it?

2014-03-26 03:23:36 -0600 answered a question Problem with imread in debug mode

You have to escape backslashes in C/C++ and most other languages (I have no clue which one you use, but it looks like C++).

Thus your string should be

const string ImageName("D:\\SomeImage.tif");

Since you state it does not work in Debug mode, did it work in Release ("normal") mode?
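
A minimal sketch to verify the path handling (the path is just the one from your post; forward slashes avoid the escaping issue on Windows):

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;

int main()
{
    const string ImageName("D:\\SomeImage.tif");   // escaped backslashes
    // const string ImageName("D:/SomeImage.tif"); // forward slashes work on Windows as well

    Mat img = imread(ImageName, CV_LOAD_IMAGE_UNCHANGED);
    if (img.empty())
    {
        cerr << "Could not load " << ImageName << endl;
        return -1;
    }
    cout << "Loaded " << img.cols << "x" << img.rows << endl;
    return 0;
}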

2013-12-18 05:55:30 -0600 commented question opencv 2.4.3

This is not related to your program. It is an Eclipse-internal problem, which is obvious from the error message (java.lang.NullPointerException). Your program is written in C++, so the error message is definitely caused elsewhere and your program most likely did not even run. If you're familiar with it, I'd recommend compiling on the command line to verify your program is OK; alternatively, add a cout << "hello" << endl; as the first line of the main function to see whether your program is run at all.
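
For example, as a pure sanity check (nothing OpenCV-specific):

#include <iostream>

int main(int argc, char** argv)
{
    std::cout << "hello" << std::endl; // if this never prints, the program was not run at all
    // ... the rest of your OpenCV code ...
    return 0;
}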

2013-11-10 04:34:59 -0600 commented question Change resolution of extracted frame in Opencv

Well, you might not be able to extract something better than what is encoded in the video. Normally VideoCapture delivers the frames at their full encoded resolution, so if your extracted frames look poor, the video itself most likely has that poor resolution.

H264 is a compression format which uses the relation between frames to cut down on file size. I'm not entirely sure how VideoCapture behaves here; I'd expect it to deliver usable, fully decoded frames, but it might also be that it just delivers the "difference" frames. Does your code perform well on uncompressed video data?
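
If in doubt, you can query the resolution VideoCapture actually delivers; a small sketch (the file name is hypothetical):

#include <iostream>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap("input_video.mp4"); // hypothetical file name
    if (!cap.isOpened()) return -1;

    cout << "Capture resolution: "
         << cap.get(CV_CAP_PROP_FRAME_WIDTH) << "x"
         << cap.get(CV_CAP_PROP_FRAME_HEIGHT) << endl;
    return 0;
}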

2013-11-08 07:43:28 -0600 commented answer I can't compile JNI file

Your error "description" is not very helpful.

2013-11-02 12:43:22 -0600 commented answer I can't compile JNI file

Haha, you're welcome - in the end you helped yourself ;) Please mark this question as solved so that anybody who finds it knows a solution exists.

2013-11-01 10:40:48 -0600 commented answer I can't compile JNI file

Hey, here's the project. I'll remove the download again in 2 or 3 days, so make sure to download and save it ;) http://thomasbergmueller.com/share/testApp.zip Have you already checked what the origin of the include error is? Does the file (algorithm.h???) exist on your filesystem?

2013-10-30 16:47:00 -0600 commented answer I can't compile JNI file

Have you used my other files as well? Probably something went wrong whilst adding the C++ nature (did you add the C++ nature or maybe the C nature?). However, it seems to be a quite uncommon error; I don't think I can help you from scratch, sorry. Such uncommon behaviour is often observed when there are syntax errors somewhere in the headers or include files - could that have happened? I hope you removed the extra bracket in the prototype as well - by the way, you don't need the function prototype here ;)

2013-10-30 05:59:10 -0600 answered a question I can't compile JNI file

Hey,

don't give up that quickly, it might be a simple error. In your CPP file:

JNIEXPORT void JNICALL Java_com_slani_tracker_OpenCamera_findObject((JNIEnv *env, jlong addRgba, jlong addHsv)

There is one ( too many at the beginning of the argument list. Furthermore, I'd recommend configuring the OpenCV build a bit more (see the Android.mk later). I did a quick demo application that calculates the HSV value of an RGBA value and prints it to logcat as a fatal error. Your project settings seem to be correct (as soon as ndk-build is invoked, everything is fine).
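
For reference, here is that line with the extra parenthesis removed (depending on how the native method is declared on the Java side, a jobject or jclass parameter is normally also expected right after the JNIEnv pointer):

JNIEXPORT void JNICALL Java_com_slani_tracker_OpenCamera_findObject(JNIEnv *env, jlong addRgba, jlong addHsv)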

Build looks something like:

    11:56:03 **** Incremental Build of configuration Default for project testApp ****
/home/tbergmueller/bin/avEclipse/android-ndk-r8d/ndk-build 

Compile++ thumb  : testLibrary <= detect_jni.cpp
Prebuilt       : libopencv_contrib.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_legacy.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_ml.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_stitching.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_objdetect.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_ts.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_videostab.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_calib3d.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_photo.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_video.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_features2d.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_highgui.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_androidcamera.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_flann.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_imgproc.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : libopencv_core.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../libs/armeabi/
Prebuilt       : liblibjpeg.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../3rdparty/libs/armeabi/
Prebuilt       : liblibpng.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../3rdparty/libs/armeabi/
Prebuilt       : liblibtiff.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../3rdparty/libs/armeabi/
Prebuilt       : liblibjasper.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../3rdparty/libs/armeabi/
Prebuilt       : libIlmImf.a <= /home/tbergmueller/bin/avEclipse/opencv-2-4-5-android-sdk/sdk/native/jni/../3rdparty/libs/armeabi/
Prebuilt       : libgnustl_static.a <= <NDK>/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi/
SharedLibrary  : libtestLibrary.so
Install        : libtestLibrary.so => libs/armeabi/libtestLibrary.so

11:56:05 Build Finished (took 1s.817ms)

As I stated, you might want to configure OpenCV in the makefile a bit (e.g. disable the camera modules). Furthermore, I'd recommend using the static lib type for OpenCV AND, most importantly, setting OPENCV_INSTALL_MODULES, otherwise the modules might not be installed to your device when you install the app.

To build the application, I used the following Android.mk

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

OPENCV_INSTALL_MODULES:=on
OPENCV_CAMERA_MODULES:=off
OPENCV_LIB_TYPE:=STATIC

$(info $(NDK_MODULE_PATH))

include ${OPENCVROOT}/sdk/native/jni/OpenCV.mk ...
2013-10-28 02:46:04 -0600 answered a question how to write video at 150f/s

I'm not entirely sure that I get your question, but perhaps this works for you:

Have a look at VideoWriter. In the .open() function you can define the frame rate. With the normal VideoCapture (which you already use, as far as I understand) you can read images at 30 fps and write them at 150 fps. But that would not slow the video down - it would speed it up by a factor of 150/30 = 5.

So simply (Pseudocode)

VideoCapture in;
VideoWriter out;

in.open(...);            // open the input video
out.open(..., 150, ...); // specify 150 fps here

Mat m;
in >> m;  // reads at the input video's frame rate

// possibly process the frame

out << m; // writes at 150 fps, because we specified that in the open() call
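
A slightly more complete sketch of the same idea (codec, file names and the loop are my assumptions; adjust them to your setup):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    VideoCapture in("input.avi");                       // hypothetical input file
    if (!in.isOpened()) return -1;

    Size frameSize((int)in.get(CV_CAP_PROP_FRAME_WIDTH),
                   (int)in.get(CV_CAP_PROP_FRAME_HEIGHT));

    VideoWriter out;
    out.open("output.avi", CV_FOURCC('M','J','P','G'), 150, frameSize); // 150 fps
    if (!out.isOpened()) return -1;

    Mat m;
    while (true)
    {
        in >> m;               // reads at the input video's frame rate
        if (m.empty()) break;  // end of input video

        // possibly process the frame here

        out << m;              // written stream is tagged with 150 fps
    }
    return 0;
}
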
2013-10-09 01:06:15 -0600 answered a question Where is the source codes of cv::getRectSubPix?
2013-10-06 10:07:36 -0600 answered a question human recognition

How about using Google?! The first or second hit is this:

http://stackoverflow.com/questions/2188646/how-can-i-detect-and-track-people-using-opencv

OpenCV-Doc: http://docs.opencv.org/modules/gpu/doc/object_detection.html

Also, how about mentioning that you already made another post on that topic and got answers there? (Even the same answer as mine...) Why doesn't that work for you?
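
If it helps, the gist of both links is OpenCV's built-in HOG people detector; a minimal sketch (the image name and the detectMultiScale parameters are my assumptions, not tested):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/objdetect/objdetect.hpp>

using namespace cv;

int main()
{
    Mat img = imread("people.jpg"); // hypothetical input image
    if (img.empty()) return -1;

    HOGDescriptor hog;
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());

    std::vector<Rect> found;
    hog.detectMultiScale(img, found, 0, Size(8,8), Size(32,32), 1.05, 2);

    for (size_t i = 0; i < found.size(); i++)
        rectangle(img, found[i], Scalar(0,255,0), 2); // draw each detection

    imshow("detections", img);
    waitKey();
    return 0;
}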

2013-10-04 03:42:24 -0600 received badge  Autobiographer
2013-10-03 13:07:10 -0600 received badge  Critic (source)
2013-10-03 12:54:26 -0600 commented question Very simple application crashing on close

Have a look at this: http://answers.opencv.org/question/6495/visual-studio-2012-and-rtlfreeheap-error/ It seems to be a common issue with Microsoft IDEs, also with older ones and older versions of OpenCV, for example here: http://opencv-users.1802565.n2.nabble.com/new-to-openCV-have-question-about-cvReleaseImage-error-in-in-VC-2003-td2268910.html . I develop on Linux and have never experienced these errors so far :)

2013-10-03 08:22:08 -0600 answered a question How to use opencv to find circles

Adapt this code: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html#code

To operate on YUV422 images, try the following modification of the mentioned code sample:

Mat src, src_gray;

src = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);

if( !src.data )
{ return -1; }

// A YUV422 frame read as grayscale has 1.5 times the height of the captured frame, thus you can extract the grayscale part by using the upper two thirds of the image only
Rect grayscaleImageArea(0, 0, src.cols, src.rows*2/3);
src_gray = src(grayscaleImageArea); // crop the grayscale part from the YUV422 image


// FROM THIS LINE UNCHANGED CODE FROM http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html#code
/// Reduce the noise so we avoid false circle detection
GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );

// ..... //
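
For completeness, the detection step from that tutorial continues roughly as follows (continuing with src_gray from above; the parameters are the tutorial's defaults and may need tuning for your images):

vector<Vec3f> circles;

/// Apply the Hough Transform to find the circles
HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 1, src_gray.rows/8, 200, 100, 0, 0 );

/// Draw the detected circles into the grayscale image
for( size_t i = 0; i < circles.size(); i++ )
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    circle( src_gray, center, 3, Scalar(255), -1, 8, 0 );     // circle center
    circle( src_gray, center, radius, Scalar(255), 3, 8, 0 ); // circle outline
}

imshow( "Hough Circle Transform Demo", src_gray );
waitKey(0);
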
2013-10-03 07:55:18 -0600 commented question Very simple application crashing on close

Do you use exactly this code? Could you post the assertion / exception / stack trace or whatever additional info you have? Maybe try calling destroyAllWindows() before return 0; it closes the windows created with imshow.

2013-10-03 02:48:29 -0600 commented answer Clustring white pixels in binary image

It's actually "just" a pre-processing step combined with feature detection (recognition of the black dots of the random dot markers). The LLAH uses the coordinates retrieved with this method to calculate the ID of a random dot marker based on the nearest neighbours of one, two or more points. So it's just a very small but important part of the random dot marker identification process.

2013-10-02 09:05:48 -0600 commented answer Clustring white pixels in binary image

Good =) Note that Uchiyama, from whose code parts of this are taken, only licenses his code for non-commercial projects.

2013-10-02 07:21:42 -0600 commented answer Clustring white pixels in binary image

I guess you just tried to run the code from the post here? Download the complete source from the link I provided (http://thomasbergmueller.com/share/src.zip); it contains 4 source files (avtypes.h, avKeyExtraction.c/h and the main file, which is included in the post here). Don't forget to link against the libraries opencv_core, opencv_highgui and opencv_imgproc.

2013-10-02 05:34:18 -0600 commented answer Clustring white pixels in binary image

I never profiled it in detail, but it's way faster (and simpler) than a contour detection, since it just accumulates pixels by applying a small kernel (5 pixels as far as I remember) instead of the rather complex contour detection, which has far more logic behind it. By the way, you might want to skip the thresholding process and some other parts of my implementation, or simply adopt the mylabel.cpp/.h files from Uchiyama's code, where I got the labelling process from. His code is available here: http://hvrl.ics.keio.ac.jp/uchiyama/me/code/UCHIYAMARKERS/index.html

2013-10-02 03:56:37 -0600 received badge  Nice Answer (source)
2013-10-01 10:07:46 -0600 answered a question Clustring white pixels in binary image

Uchiyama wrote a paper on his so-called "random dot markers", in which he searches for black blobs (the inverse of your binary image) before applying the LLAH to identify the markers. I'm not entirely sure whether I used parts of his algorithm (source available at http://hvrl.ics.keio.ac.jp/uchiyama/me/code/UCHIYAMARKERS/index.html ) or was unsatisfied and implemented it on my own; at least my comment in the header says it was grabbed from there somewhere.

However, I found a pretty nice implementation I did a year ago - not really tested, but working. Its output is the following:

Cluster results: [image clusters_result.png]

I hope that works for you as well.

#include <iostream>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include "keyExtraction.h"

using namespace cv;
using namespace std;

int main() 
{

    Mat m = imread("/home/tbergmueller/clusters.png", CV_LOAD_IMAGE_GRAYSCALE);

    Mat debug; // for output only
    cvtColor(m,debug, CV_GRAY2BGR); // convert to BGR to allow red numbers printed to this mat


    bitwise_not(m,m); // Invert because my algorithm is for searching black regions

    CvMat cMat = m;
    avPoint* clusters = NULL; // important: init with NULL, otherwise algo crashes

    int nrOfPoints = avExtractKeys(&cMat,&clusters);

    cout << "Found " << nrOfPoints << " points" << endl;

    for(int i=0; i<nrOfPoints; i++)
    {
        stringstream ss;
        ss << i;

        Point p(clusters[i].center.x, clusters[i].center.y);
        putText(debug,ss.str(),p,CV_FONT_HERSHEY_COMPLEX, 0.5, CV_RGB(255,0,0), 1, CV_AA);
    }

    imshow("Debug", debug);
    waitKey();

    return 0;
}

I uploaded the complete source in case you want it: http://thomasbergmueller.com/share/src.zip

2013-09-27 02:18:56 -0600 answered a question chose and tracking object.

You may want to try Good Features to Track: search the neighbourhood of the click location for features, choose one and track it.

In case you know the shape of the clicked object (and you already have some descriptors from an object detection algorithm for it), you can first check whether the correct object was clicked and then track it.
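
A rough sketch of the first idea (the click location, search window and all parameters are assumptions, not tested code):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>

using namespace cv;

int main()
{
    VideoCapture cap(0);                 // hypothetical: default camera
    if (!cap.isOpened()) return -1;

    Point click(320, 240);               // hypothetical click location
    Mat frame, gray, prevGray;
    std::vector<Point2f> points, nextPoints;

    cap >> frame;
    if (frame.empty()) return -1;
    cvtColor(frame, prevGray, CV_BGR2GRAY);

    // search a small neighbourhood around the click for one trackable feature
    Rect roi(click.x - 25, click.y - 25, 50, 50);
    roi &= Rect(0, 0, prevGray.cols, prevGray.rows);   // clip to image bounds
    goodFeaturesToTrack(prevGray(roi), points, 1, 0.01, 10);
    if (points.empty()) return -1;
    points[0] += Point2f((float)roi.x, (float)roi.y);  // back to full-image coordinates

    for (;;)
    {
        cap >> frame;
        if (frame.empty()) break;
        cvtColor(frame, gray, CV_BGR2GRAY);

        std::vector<uchar> status;
        std::vector<float> err;
        calcOpticalFlowPyrLK(prevGray, gray, points, nextPoints, status, err);
        if (status.empty() || !status[0]) break;       // feature lost

        circle(frame, nextPoints[0], 5, Scalar(0, 255, 0), 2); // mark the tracked point
        imshow("tracking", frame);
        if (waitKey(30) >= 0) break;

        points = nextPoints;
        gray.copyTo(prevGray);
    }
    return 0;
}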

2013-09-26 06:57:54 -0600 commented question Bad argument (Array should be CvMat or IplImage)

You need to provide a little more information or working code. Could it be that you load images that do not exist? OpenCV typically does not crash when you try to read a non-existent image, but only when you first work with the Mat / IplImage you thought the image was loaded into. I have no clue about the Java API, but try to check empty() or whether the dimensions / height / width are correct or 0.

2013-09-26 04:40:23 -0600 commented answer Draw the lines detected by cv::HoughLines

These are not pointers, they are POINTS - to be more precise, the start and end point of a line. OpenCV's line-drawing function simply draws a line between two given points. Using the angle theta and r, one can construct the line with some simple geometry and the knowledge that a line (red) defined by (r, theta) is normal to the vector r (blue). Since the polar form does not hold any information on the length of the line, the author of this code used a large enough number (1000) to create the illusion of an "endless" line, since it usually exceeds the image's width and height.

2013-09-23 22:40:32 -0600 received badge  Teacher (source)
2013-09-23 02:05:14 -0600 answered a question Draw the lines detected by cv::HoughLines

I think he is trying to understand what the code in the tutorial does.

double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;

This is just the transformation from polar coordinates to Cartesian coordinates; it gives the point where the blue and the red line meet.

Illustration from the tutorial: [image]

pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));

These next four lines of code "calculate" the points pt1 and pt2, which are then used for drawing. I wrote "calculate" in quotes because they don't really compute the line's true endpoints; they just move 1000 pixels along the line in both directions, starting from (x0, y0) and using the direction vector (-b, a) = (-sin(theta), cos(theta)). If you have an image much larger than a thousand pixels, you'll find that most lines won't reach the outer borders of the image but end somewhere, since the drawn segment is only 2000 pixels long from end to end. However, all lines include the point (x0, y0), the one where the blue and red line meet; pt1 and pt2 each lie exactly 1000 pixels away from it, on opposite sides.
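
If you want the drawn segment to always span the whole image regardless of its size, you can replace the constant 1000 with something derived from the image dimensions, e.g. the image diagonal (a small sketch, assuming the image Mat is called img and <cmath> is included):

// use the image diagonal instead of the fixed 1000, so the segment always leaves the image
double len = std::sqrt((double)img.cols * img.cols + (double)img.rows * img.rows);
pt1.x = cvRound(x0 + len * (-b));
pt1.y = cvRound(y0 + len * (a));
pt2.x = cvRound(x0 - len * (-b));
pt2.y = cvRound(y0 - len * (a));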

2013-09-20 05:57:36 -0600 commented answer Histogram outputs always same picture

The histogram is calculated on the same data - thus it also has the same output. The assignment operator in Mat saturatedImage = grayImage does NOT copy the data; it just creates another Mat header around the same set of data. Since you calculate the histogram of grayImage AFTER you did the saturation-cast stuff, the histogram is already calculated on the saturated data. Try to move the line imshow("calcHist Grey image", histo(grayImage) ); before Mat saturatedImage = grayImage;, then it should also work with your original code (but it's not what you intend to achieve ;))
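
To illustrate the shallow-copy behaviour in isolation (a minimal standalone sketch, not your code):

#include <iostream>
#include <opencv2/core/core.hpp>

using namespace cv;

int main()
{
    Mat a = Mat::zeros(2, 2, CV_8UC1);
    Mat b = a;          // shares the pixel data with a
    Mat c = a.clone();  // real (deep) copy of the pixel data

    a.at<uchar>(0, 0) = 255;

    std::cout << (int)b.at<uchar>(0, 0) << std::endl; // 255 - b sees the change
    std::cout << (int)c.at<uchar>(0, 0) << std::endl; // 0   - c does not
    return 0;
}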

2013-09-20 05:54:52 -0600 received badge  Supporter (source)
2013-09-16 04:10:47 -0600 commented answer remove image borders

I just noticed the desk in the bottom-right corner is missing from my cropped image - I might have made a mistake in the ROI-downscaling policy; I'll check that later.

2013-09-16 03:48:06 -0600 received badge  Editor (source)
2013-09-16 03:42:49 -0600 answered a question remove image borders

Ok, I have no idea whether you have any performance requirements; attached is a straightforward algorithm based on trial and error. It continuously decreases the size of the cropped image and checks whether the current region of interest is valid, by examining the image's borders: if the background colour is contained in a border, the corresponding side of the rectangle has to be moved further towards the image's midpoint.

I'd further recommend using a transparency channel instead of the black background of the image, since you then have a fourth channel (the A channel in BGRA) and don't have to implement a complex decision algorithm to determine whether a detected black pixel belongs to the image or to the background (which could otherwise be done by examining the local neighbourhood, for instance).

//============================================================================
// Name        : panoStitch.cpp
// Author      : Thomas Bergmueller
// Version     :
// Copyright   : Your copyright notice
// Description : Crops the largest possible ROI without background from a stitched panorama
//============================================================================

#include <iostream>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace std;
using namespace cv;


#define BGRA_A      0   // alpha value that marks background (fully transparent) pixels


// check one vertical border column (left or right) of the ROI for background pixels
bool checkColumn(const cv::Mat& roi, int x)
{
    for(int y=0; y<roi.rows; y++)
    {
        if(roi.at<cv::Vec4b>(y,x)[3] ==  BGRA_A) // Check Transparency-Channel
        {
            // Background found..
            return false;
        }
    }

    return true;
}

bool checkLine(const cv::Mat& roi, int y)
{
    // check one horizontal border line (top or bottom) of the ROI for background pixels
    // Mat is BGRA-coded (4 channels)
    for(int x=0; x<roi.cols; x++)
    {

        if(roi.at<cv::Vec4b>(y,x)[3]  == BGRA_A) // Check Transparency-Channel
        {
            // Background found..
            return false;
        }
    }

    return true;
}




bool cropLargestPossibleROI(const cv::Mat& source, cv::Mat& output, cv::Rect startROI)
{
    // evaluate start-ROI
    Mat possibleROI = source(startROI); // crop, writes a new Mat-Header

    bool topOk = checkLine(possibleROI, 0);
    bool leftOk = checkColumn(possibleROI, 0);

    bool bottomOk = checkLine(possibleROI, possibleROI.rows-1);
    bool rightOk = checkColumn(possibleROI, possibleROI.cols-1);


    if(topOk && leftOk && bottomOk && rightOk)
    {
        // Found!!
        output = source(startROI);
        return true;
    }

    // If not, scale ROI down



    Rect newROI(startROI.x, startROI.y, startROI.width, startROI.height);

    if(!leftOk) { newROI.x++; newROI.width--; } // if x is increased, width has to be decreased to compensate
    if(!topOk) { newROI.y++; newROI.height--; } // same is valid for y
    if(!rightOk) {newROI.width--; }
    if(!bottomOk) {newROI.height--; }

    cout << "Try it with ROI = " << newROI << endl;

    if(newROI.width <= 0 || newROI.height <= 0) // the ROI has shrunk to nothing
    {
        cerr << "Sorry no suitable ROI found" << endl;
        return false; // sorry...
    }

    return cropLargestPossibleROI(source,output,newROI);
}


int main()
{

    Mat src = imread("/home/tbergmueller/pano.png", CV_LOAD_IMAGE_UNCHANGED); // Image has BGRA

    assert(src.channels() == 4); // Check if transparency-Channel is there

    Mat roi;
    //Rect startROI(18,57,900, 200); // start as the source image - ROI is the complete SRC-Image
    Rect startROI(0,0,src.cols,src.rows); // start as the source image - ROI is the complete SRC-Image


    //roi = src(startROI);
    cropLargestPossibleROI(src,roi,startROI);

    imshow("ROI", roi);
    imshow("Source", src);
    waitKey();

    return 0;
}

Base image: [image]

Cropped ROI: [image]

2013-09-16 02:44:11 -0600 commented question remove image borders

Would you mind posting the images and how they should be aligned to each other?

2013-09-16 02:42:48 -0600 answered a question remove image borders

Would you mind posting the images and how they should be aligned to each other?