2017-11-11 15:37:20 -0600 | received badge | ● Good Answer (source) |
2014-07-22 08:36:28 -0600 | commented question | iPhone 4(S) vs iPad2 computer vision performance problems Hey, I'm having similar issues. We have code running on Android and iOS and it works fine everywhere except on the iPhone 4 and 4S. As far as I could track down the issue, the very slow part is the parallel_for loop used in various methods, such as the very common cvtColor method. Do you have any findings on this issue already? Could you resolve it? |
2014-03-26 03:23:36 -0600 | answered a question | Problem with imread in debug mode You have to escape backslashes in C/C++ and most other languages (no clue which one you use, but it looks like C++). Thus your string should use \\ instead of \. Since you state it does not work in Debug mode: did it work in "normal" (Release) mode? |
2013-12-18 05:55:30 -0600 | commented question | opencv 2.4.3 This is not related to your program. It is an Eclipse-internal problem, which is obvious from the error message (java.lang.NullPointerException). Your program is written in C++, thus the error message is definitely caused elsewhere and your program most likely did not even run. If you're familiar with it, I'd recommend compiling from the command line to verify your program is OK; alternatively, add cout << "hello" << endl; as the first line of the main function to see whether your program runs at all. |
2013-11-10 04:34:59 -0600 | commented question | Change resolution of extracted frame in Opencv Well, you might not be able to extract anything better than what is encoded in the video. Normally the VideoCapture captures the video frames at their full resolution, so your video's resolution is simply that poor. H264 is a compression format which uses the relation between frames to cut down on file size. I'm not entirely sure how the VideoCapture behaves here; I'd expect it to deliver usable frames, but it might also just deliver the "difference" frames. Does your code perform well on uncompressed video data? |
2013-11-08 07:43:28 -0600 | commented answer | I can't compile JNI file Your error "description" is not very helpful. |
2013-11-02 12:43:22 -0600 | commented answer | I can't compile JNI file Haha, you're welcome - in the end you helped yourself ;) Maybe mark this question as solved so that everybody knows you found a solution. |
2013-11-01 10:40:48 -0600 | commented answer | I can't compile JNI file Hey, here's the project; I'll remove the download again in 2 or 3 days, so make sure to download and save it ;) http://thomasbergmueller.com/share/testApp.zip Have you already checked what the origin of the include error is? Does the file (algorithm.h???) exist on your filesystem? |
2013-10-30 16:47:00 -0600 | commented answer | I can't compile JNI file Have you used my other files as well? Maybe something went wrong while adding the C++ nature (did you add the C++ nature, or perhaps the C nature?). However, it seems to be quite an uncommon error; I don't think I can help you from scratch, sorry. Such uncommon behaviour is often observed if there are syntax errors somewhere in the headers or include files - could that have happened? I hope you removed the bracket in the prototype as well - by the way, you don't need the function prototype here ;) |
2013-10-30 05:59:10 -0600 | answered a question | I can't compile JNI file Hey, don't give up that quickly, it might be a simple error. In your CPP file: there is one ( too many at the beginning of the argument list. Furthermore, I'd recommend configuring the OpenCV build a bit more (see Android.mk later). I did a quick demo application that calculates the HSV value of an RGBA value and prints it to logcat as a fatal error. Your project settings seem to be correct (as soon as ndk-build is invoked, everything is fine). The build looks something like: As I stated, you might want to configure OpenCV in the makefile a bit (no camera modules). Furthermore, I'd recommend using libtype static for OpenCV AND, most importantly, setting OPENCV_INSTALL_MODULES, otherwise the modules might not be exported to your device when you install the app. To build the application, I used the following Android.mk (more) |
2013-10-28 02:46:04 -0600 | answered a question | how to write video at 150f/s I'm not entirely sure that I get your question, but maybe this works for you: have a look at VideoWriter. In its .open() function you can define the framerate. With the normal VideoCapture (which you already use, as far as I understand) you can read images at 30 fps and then write them at 150 fps. But that would not be slowing the video down, that would be speeding it up by a factor of 150/30 = 5. So simply (pseudocode) |
2013-10-09 01:06:15 -0600 | answered a question | Where is the source codes of cv::getRectSubPix? |
2013-10-06 10:07:36 -0600 | answered a question | human recognition How about using Google?! The first or second hit is this: http://stackoverflow.com/questions/2188646/how-can-i-detect-and-track-people-using-opencv OpenCV doc: http://docs.opencv.org/modules/gpu/doc/object_detection.html Also, how about mentioning that you already made another post on that topic and got answers there? (Even the same as mine...) Why does that not work for you? |
2013-10-04 03:42:24 -0600 | received badge | ● Autobiographer |
2013-10-03 13:07:10 -0600 | received badge | ● Critic (source) |
2013-10-03 12:54:26 -0600 | commented question | Very simple application crashing on close Have a look at this: http://answers.opencv.org/question/6495/visual-studio-2012-and-rtlfreeheap-error/ It seems to be a common issue with Microsoft IDEs, also with older ones and older versions of OpenCV, for example here: http://opencv-users.1802565.n2.nabble.com/new-to-openCV-have-question-about-cvReleaseImage-error-in-in-VC-2003-td2268910.html . I develop on Linux and have never experienced these errors so far :) |
2013-10-03 08:22:08 -0600 | answered a question | How to use opencv to find circles Adapt this code: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html#code To operate on YUV422 images, try the following modification of the mentioned code sample; |
2013-10-03 07:55:18 -0600 | commented question | Very simple application crashing on close Do you use exactly this code? Could you post the assertion / exception / stack trace or whatever additional info you have? Maybe try calling destroyAllWindows() before return 0; - this closes the windows created with imshow. |
2013-10-03 02:48:29 -0600 | commented answer | Clustring white pixels in binary image It's actually "just" a pre-processing step combined with feature detection (recognition of the black dots on the random dot markers). The LLAH uses the coordinates retrieved with this method to calculate the ID of a random dot marker based on the nearest neighbours of one, two or more points. So it's just a very small but important part of the random dot marker identification process. |
2013-10-02 09:05:48 -0600 | commented answer | Clustring white pixels in binary image Good =) Note that Uchiyama, from whom parts of the code are taken, only licenses his code for use in non-commercial projects. |
2013-10-02 07:21:42 -0600 | commented answer | Clustring white pixels in binary image I guess you just tried to run the code from the post here? Download the complete source from the link I provided (http://thomasbergmueller.com/share/src.zip); it contains 4 source files (avtypes.h, avKeyExtraction.c/h and the main file, which is included in the post here). Don't forget to link against the libraries opencv_core, opencv_highgui and opencv_imgproc. |
2013-10-02 05:34:18 -0600 | commented answer | Clustring white pixels in binary image I never profiled it in detail, but it's way faster (and simpler) than a contour detection, since it just accumulates pixels by applying a small kernel (5 pixels, as far as I remember) instead of the rather complex contour detection, which has far more logic behind it. By the way, you might want to skip the thresholding process and some other parts of my implementation, or simply adopt the mylabel.cpp/.h files from Uchiyama's code, where I got the labelling process from. His code is available here: http://hvrl.ics.keio.ac.jp/uchiyama/me/code/UCHIYAMARKERS/index.html |
2013-10-02 03:56:37 -0600 | received badge | ● Nice Answer (source) |
2013-10-01 10:07:46 -0600 | answered a question | Clustring white pixels in binary image Uchiyama wrote a paper on his so-called "random dot markers", in which he searches for black blobs (the inverse of your binary image) before applying the LLAH to identify the markers. I'm not entirely sure whether I used parts of his algorithm (source available at http://hvrl.ics.keio.ac.jp/uchiyama/me/code/UCHIYAMARKERS/index.html ) or was unsatisfied and implemented it on my own; at least the comment in my header says it's grabbed from somewhere in there. However, I found a pretty nice implementation I did a year ago - not thoroughly tested, but working. Its output is the following; I hope that works for you as well. I uploaded the complete source if you want it: http://thomasbergmueller.com/share/src.zip |
2013-09-27 02:18:56 -0600 | answered a question | chose and tracking object. You may want to try Good Features to Track and search the neighborhood of the click-location for features, choose one and track it. In case you know the shape of the object that is clicked (and you have some descriptors of an object detection algorithm for it already) you can first check if the correct object was clicked and then track it. |
2013-09-26 06:57:54 -0600 | commented question | Bad argument (Array should be CvMat or IplImage) You need to provide a little more information or working code. Could it be that you load images that do not exist? OpenCV typically does not crash when you try to read a non-existent image, but only when you first work with the Mat / IplImage you thought you had loaded the image into. I have no clue about the Java API, but try to check empty() or whether dimensions / height / width are correct or 0. |
2013-09-26 04:40:23 -0600 | commented answer | Draw the lines detected by cv::HoughLines These are no pointers, they are POINTS - to be more precise, the start and end point of a line. OpenCV's line-drawing function simply draws a line between two given points. Using the angle theta and the radius r, one can construct the line with some simple geometry and the knowledge that a line (red) defined by (r, theta) is normal to the vector r (blue). Since the polar form does not hold any information on the length of the line, the author of this code used a large-enough number (1000) to create the illusion of an "endless" line, since it usually exceeds the image's width and height. |
2013-09-23 22:40:32 -0600 | received badge | ● Teacher (source) |
2013-09-23 02:05:14 -0600 | answered a question | Draw the lines detected by cv::HoughLines I think he's trying to understand what the code in the tutorial does. It is just the transformation from polar coordinates to Cartesian coordinates. This is the point where the blue and the red line meet. The next four lines of code "calculate" the points x and y, which are then used to draw. I wrote "calculate" because they don't really calculate anything; they just move 1000 pixels in both directions, horizontally and vertically. If you have an image much larger than a thousand pixels, you'll find that most lines won't reach the outer borders of the image but end somewhere with a total x-distance of 2000 pixels and a total y-distance of 2000 pixels from end to end. However, all lines include the point (x0,y0), the one where the blue and red lines meet. From this point to x, deltaY is 1000 and deltaX is also 1000; the same is valid for the distance between this point and y, just with deltaY = -1000 and deltaX = -1000. |
2013-09-20 05:57:36 -0600 | commented answer | Histogram outputs always same picture The histogram is calculated on the same data - thus it also has the same output. The assignment operator in Mat saturatedImage=grayImage does NOT copy the data; it just creates another Mat header around the same set of data. Since you calculate the histogram of grayImage AFTER you did the saturation-cast stuff, the histogram is already calculated on the saturated data. Try to move the line imshow("calcHist Grey image", histo(grayImage) ); before Mat saturedImage=grayImage;, then it should also work with your original code (but it's not what you intend to achieve ;)) |
2013-09-20 05:54:52 -0600 | received badge | ● Supporter (source) |
2013-09-16 04:10:47 -0600 | commented answer | remove image borders I just noticed the desk in the bottom-right corner is missing in my cropped image - I might have made a mistake in the ROI-downscaling policies; I'll check that later. |
2013-09-16 03:48:06 -0600 | received badge | ● Editor (source) |
2013-09-16 03:42:49 -0600 | answered a question | remove image borders Ok, I have no idea whether you have any performance requirements; attached is a straightforward algorithm based on trial and error. It continuously decreases the size of the cropped image and checks whether the current region of interest is valid by examining the image's borders: if the background colour is contained in a border, the corresponding side of the rectangle has to be moved further towards the image's midpoint. I'd further recommend using a transparency channel instead of the black background, since you then have a fourth channel (the A channel in BGRA) and don't have to implement a complex decision whether a detected black pixel belongs to the image or to the background (which could otherwise be done by examining the local neighbourhood, for instance). Base image: Cropped ROI: |
2013-09-16 02:44:11 -0600 | commented question | remove image borders Would you mind posting the images and how they should be aligned to each other? |
2013-09-16 02:42:48 -0600 | answered a question | remove image borders Would you mind posting the images and how they should be aligned to each other? |