2020-11-16 15:01:15 -0600 | received badge | ● Great Question (source) |
2018-12-13 05:28:44 -0600 | received badge | ● Good Answer (source) |
2018-05-14 04:10:13 -0600 | received badge | ● Notable Question (source) |
2018-02-05 18:01:02 -0600 | received badge | ● Notable Question (source) |
2016-11-21 06:26:51 -0600 | received badge | ● Popular Question (source) |
2016-03-30 06:30:04 -0600 | received badge | ● Popular Question (source) |
2015-12-16 15:34:45 -0600 | received badge | ● Famous Question (source) |
2015-10-13 08:05:00 -0600 | received badge | ● Taxonomist |
2014-12-09 13:43:04 -0600 | marked best answer | optical flow state of the art in version 2.4.2 hi, I've just updated my OpenCV to version 2.4.2. In the documentation I don't see the Horn and Schunck algorithm anymore, and I'm asking why (I have some snippets that use it). Then I read about an excellent method called "Combining Local and Global" that mixes Lucas-Kanade and Horn-Schunck, and I'm wondering where to find some code for it compatible with OpenCV, or whether it is planned for release in the next versions. |
2014-10-08 06:56:42 -0600 | received badge | ● Nice Answer (source) |
2014-02-22 04:46:51 -0600 | answered a question | Comparing two HOG descriptors vectors A similar question is asked here: http://stackoverflow.com/questions/11626140/extracting-hog-features-using-opencv They just do a HOG-to-HOG distance, accumulating the error.. nothing complicated, just an accumulated error between two float arrays (the two HOG descriptors must be of the same size, of course). |
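The accumulated error mentioned above is just a vector distance; a minimal NumPy sketch (assuming both descriptors come from windows of the same size with identical HOG parameters, so their lengths match; `hog_distance` is a hypothetical helper name):

```python
import numpy as np

def hog_distance(d1, d2):
    """Sum of squared differences between two HOG descriptor vectors.

    Both descriptors must come from windows of the same size and the
    same HOG parameters, otherwise the lengths will differ.
    """
    d1 = np.asarray(d1, dtype=np.float32)
    d2 = np.asarray(d2, dtype=np.float32)
    if d1.shape != d2.shape:
        raise ValueError("descriptors must have the same length")
    return float(np.sum((d1 - d2) ** 2))
```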
2014-01-31 09:24:01 -0600 | commented question | Extract point for calcOpticalFlowPyrLK and cluster yes, but sometimes the objects merge into one big blob and I'd like to split them via clustering and optical flow |
2014-01-29 07:36:54 -0600 | asked a question | Extract point for calcOpticalFlowPyrLK and cluster Hi! I want to use calcOpticalFlowPyrLK to calculate optical flow inside some blobs detected with the MOG background subtractor. Since LK optical flow is a sparse optical flow, I have to give it some points as input. Reading around the net, I have 3 possibilities:
Which method is preferred? Are there benchmarks or comparisons? And then cluster them with, for example, k-means. On what should I cluster? The position of the points and the optical flow magnitude? Or something else? Thanks in advance |
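One hedged way to set up the clustering step described above: build a feature vector per tracked point from its position and flow magnitude, normalize the columns, and run k-means. This sketch uses a small NumPy k-means loop instead of cv2.kmeans so it stands alone; the equal weighting of position and magnitude is an assumption to tune, and `cluster_flow_points` is a hypothetical helper name.

```python
import numpy as np

def cluster_flow_points(pts_prev, pts_next, k=2, iters=20):
    """Cluster tracked points on (x, y, flow magnitude).

    pts_prev, pts_next: (N, 2) point positions before and after the
    flow step (e.g. the inputs/outputs of calcOpticalFlowPyrLK).
    Returns an (N,) array of cluster labels in [0, k).
    """
    pts_prev = np.asarray(pts_prev, dtype=np.float64)
    pts_next = np.asarray(pts_next, dtype=np.float64)
    mag = np.linalg.norm(pts_next - pts_prev, axis=1, keepdims=True)
    feats = np.hstack([pts_prev, mag])
    # normalize columns so position and flow magnitude weigh equally
    feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-9)
    # farthest-point initialization keeps the sketch deterministic
    centers = [feats[0]]
    for _ in range(1, k):
        d = np.min([((feats - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(feats[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers) ** 2).sum(2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(0)
    return labels
```

Clustering on position alone splits blobs spatially; adding the flow magnitude (or the flow vector itself) also separates objects that overlap but move differently.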
2014-01-29 03:07:23 -0600 | commented answer | Unable to build documentation if you are on Mac OS X using Homebrew, consider this for Sphinx: https://gist.github.com/terenceponce/3786784 |
2014-01-22 10:14:42 -0600 | received badge | ● Teacher (source) |
2014-01-22 09:45:44 -0600 | received badge | ● Necromancer (source) |
2014-01-22 08:22:25 -0600 | answered a question | How to training HOG and use my HOGDescriptor? Hi! I have found this repository: https://github.com/DaHoC/trainHOG It seems to be a nice tutorial on how to train a HOG detector using SVMlight 6.02. Even though I haven't tried it myself, I would give it a try! |
2014-01-22 08:07:24 -0600 | asked a question | HOG people detect example image Hi! I'm diving into people detection using HOG. I plan at some point to train my own detector, but first I want to give the standard people detector a try. So I'm starting with the peopledetect.cpp sample in the OpenCV root. I'm using OpenCV 2.4.3, and in sample/cpp I have this example (I think it is the same as in the newest version, but I'm copying it here to be sure): With the image I have it does not find anything, so probably I have to change some parameters, i.e. the size of the detected people in ... |
2013-11-18 08:02:48 -0600 | asked a question | exception in countNonZero (to see if 2 Mats are equal) Hi! I get this exception and I don't understand why. I use it in this function: |
2013-09-03 10:13:16 -0600 | asked a question | camera calibration with partially occluded patterns Hi! In the documentation of the calibrateCamera method I found:
where can I find more information about this? |
2013-08-30 10:09:47 -0600 | asked a question | cv::undistort and values of distortion coefficient Hi! I'm porting to OpenCV a little script for lens distortion correction. The program does the undistortion with the Brown model, and it uses parameters obtained from a (closed-source) camera calibration package that reports values in the so-called photogrammetric representation. The main difference I have noticed so far is that it expresses the focal length and principal point in mm, while the OpenCV undistort function takes values in pixels. OK, I have the pixel dimensions and I can do this conversion. But even after the conversion, cv::undistort still gives me an image that is not correctly undistorted. I think there is some scaling factor that I'm not considering. So I'm asking: what are the units of the distortion coefficients? Are they in radians? Or is there some other conversion I have to do? Any advice? EDIT: I report the fields of the log and how I'm using them (referring to the OpenCV cv::undistort documentation): Camera interior orientation: focal length (mm), principal point (mm). Radial distortion parameters: k1, k2, k3. Decentring distortion parameters: p1, p2. Affinity, non-orthogonality parameters: b1, b2. I'm using the focal length parameter (scaled to pixels) for fx and fy, the principal point (scaled to pixels as well) as cx, cy, and k1, k2, k3, p1, p2 as they are; b1, b2: not used. |
2013-08-20 07:57:11 -0600 | asked a question | Common pre processing in blob extracting Hi! I'm trying to do some blob extraction, and the usual procedure I've seen in a lot of example code is:
I'd like to know if there is a common way of handling the pre-processing before findContours, or whether those steps can be avoided because they are computationally too expensive |
2013-08-09 06:23:27 -0600 | asked a question | Systematically explore all parameters of BackgroundSubtractor objects Hi! I have some videos with illumination changes (and no objects) on which I'd like to test several BackgroundSubtractor objects (MOG, MOG2, and the gpu modules too: FGDStatModel, GMG_GPU) and find out which one is most robust against illumination changes. The problem is that each algorithm has a lot of settings, and tuning the parameters could take ages. For now I've written a simple class that tries out every algorithm, and I'm manually trying different combinations of parameters, but I'm looking for a more systematic way of testing. Any advice? |
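A hedged sketch of the systematic sweep: list each parameter's candidate values, expand the Cartesian product with itertools.product, and score every combination. The scoring function (for instance, counting false-positive foreground pixels on an empty scene) is left to define; `parameter_grid` and `best_setting` are hypothetical helper names.

```python
import itertools

def parameter_grid(param_values):
    """Expand {name: [values]} into a list of {name: value} combinations."""
    names = sorted(param_values)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(param_values[n] for n in names))]

def best_setting(param_values, score):
    """Return (best_params, best_score) minimizing score(params)."""
    scored = [(score(p), p) for p in parameter_grid(param_values)]
    best = min(scored, key=lambda sp: sp[0])
    return best[1], best[0]
```

Grid size grows multiplicatively with each parameter, so coarse value lists first, then a finer sweep around the winner, keeps the run time manageable.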
2013-07-26 05:22:05 -0600 | asked a question | are there some samples of legacy code for tracking? Hi! I'm interested in tracking and I've found some tracking classes in the legacy module (blob tracking, condensation, Kalman post-processing, etc.). The files I'm talking about are listed on GitHub here: https://github.com/Itseez/opencv/tree/master/modules/legacy/src I'd like to test them.. are there any samples? Or just some notes about why those files were moved to the legacy/deprecated module? Thanks. |
2013-07-26 05:16:05 -0600 | commented answer | Best method to track multiple objects? do you have an example of a particle filter for multiple-object tracking? |
2013-07-26 05:08:14 -0600 | commented answer | syntax for particle filter in opencv 2.4.3 thank you, very nice. Is there somewhere I can download your patch? I don't exactly understand your fixes from the diff output |
2013-06-11 06:34:33 -0600 | commented answer | OpenCV on Mac OS X 10.8 Mountain Lion why does this post have so few votes? It is super important, and it worked for me after days of trouble with Mac compilers |
2013-06-10 04:08:29 -0600 | commented answer | reading opencv + qt code not many links are following.. |
2013-06-03 07:22:42 -0600 | commented answer | reading opencv + qt code thanks! I hope some links will follow : ) |
2013-06-03 06:28:23 -0600 | asked a question | reading opencv + qt code Hi! I've managed to get Qt and OpenCV running with threads, and I'm quite proud of it. But I still have some problems and uncertainties, and I think I could learn and clarify a lot by reading the source of some Qt + OpenCV application and seeing how other people make these kinds of decisions. I didn't find much on the net, though.. I'm wondering if someone can suggest an open-source project that makes heavy use of OpenCV. |
2013-05-29 05:22:04 -0600 | commented question | OpenCV + CUDA + OSX (10.8.3) I'm having the same problem (on another machine with Ubuntu it was easy to resolve, but I still want to compile it on my Mac). How did you add that flag? In CMake? |
2013-05-27 04:02:25 -0600 | commented answer | install OpenCV with CUDA on Mac no, it is very annoying because I have written an application under Ubuntu that uses the gpu module a lot, but I can't run it on my Mac. I hope to find a solution.. just in case, let's keep in touch |
2013-04-29 13:17:17 -0600 | asked a question | install OpenCV with CUDA on Mac Hi! I'm trying to install OpenCV 2.4.5 on Mac OS X 10.8 with no success. I'm using OpenCV 2.4.5 because it is the latest version, but if there are issues with it I can downgrade, no problem. (I have already installed the command line tools from Xcode and the CUDA toolkit from NVIDIA.) I'm stuck on setting the right compiler for CUDA. What I've done is: download OpenCV, run CMake, run make, and get this error:
(I get the same error with a manual cmake run and with MacPorts.) So, after some googling, I found that the problem could be CUDA_HOST_COMPILER. I tried changing it to /usr/bin/gcc and /usr/bin/llvm-g++; that gets me further, but after a little while I get another error:
I can post all the output from cmake and make if needed. What can I do? I need to compile OpenCV with CUDA, but I have no other requirements, such as a particular version of OpenCV or gcc or clang or llvm (I normally develop under Ubuntu, so I don't deeply understand the differences between those compilers). These are my system settings: |
2013-03-04 11:17:40 -0600 | marked best answer | finding centroid of a mask hi, I have a mask obtained from the threshold function. I wonder if I can find its centroid with a built-in function; right now I'm doing it manually: but probably there is a better way.. |
2013-01-30 11:27:19 -0600 | asked a question | osx mountain lion and qt Hi! I have been doing OpenCV programming on Ubuntu for a little while, and I just switched to Mac OS X 10.8. I have installed both OpenCV and Qt with Homebrew. The problem is that when I compile and execute an OpenCV program with highgui, it opens a window, but not a Qt window. In particular, when I show some test matrices like 4x4 or 5x5 Mats with imshow, on Ubuntu I had a zoomed view; here I get a small window with only a few unzoomed pixels. I also don't have the upper control menu: well... I don't have Qt in the OpenCV highgui module. I know that Homebrew is not a perfect package manager, so I'd like to make as few modifications as possible, and only from brew (don't recompile, please). Any hints? |
2013-01-14 06:29:49 -0600 | commented answer | differences in histogram equalization between equalizeHist and wikipedia example no, minMaxLoc gives me 4 and 255 and I use |