stfn's profile - activity

2020-05-10 02:37:46 -0600 received badge  Notable Question (source)
2017-10-06 03:14:36 -0600 received badge  Popular Question (source)
2016-05-31 04:59:17 -0600 answered a question Ideal camera placement for vehicle detection

You can't expect a definitive answer to that question. The placement of your camera pretty much depends on the algorithm you use for tracking and on your training data, if any. If you trained a model of any kind with training data, it would be a good idea to match the camera placement to the one(s) used in that data.

2015-03-13 17:02:32 -0600 received badge  Student (source)
2014-10-05 04:58:24 -0600 commented question Where do I find information about the methods of structures?
2014-08-29 19:07:51 -0600 commented question Is there any way to find an ellipse representation of a convex polygon stored as a set of vertex points in a vector?

In what sense does it not work?

2014-08-29 05:02:43 -0600 commented answer TBB parallel_for vs std::thread

Thanks, that was very helpful. But some questions are still open. First: is there a nice way to parametrize the parallel_for differently for each thread, apart from the solution mentioned in the question? And second: is write access to Mats (or any memory write) thread-safe when using parallel_for?

2014-08-27 17:33:55 -0600 commented question segfault with multithreaded gpu calls

So ... would it help to create a CUDA context in each thread? The data processed is independent for each thread.

2014-08-27 17:30:48 -0600 commented answer Is it possible to map the 2D room map of room using opencv in android?

Yes, of course you are right. Translation is better than rotation. Rereading my answer ... it's not done just by projecting 3D points (computed from two or more consecutive RGB frames) onto the ground and then running 2D SLAM on them. The reference frame for those points is needed, of course. That being said ... you need full 3D SLAM to create your 2D map, or at least restrict rotation to the axis pointing up (I guess that is what I meant in the answer). I tested rgbd-slam some time ago in our office kitchen and saw frames that were misaligned by about 10 degrees of rotation. BUT my purpose was to generate a 3D model, so "drifts a lot" might be an overstatement in terms of localization.

2014-08-27 10:32:46 -0600 received badge  Teacher (source)
2014-08-27 07:12:10 -0600 received badge  Necromancer (source)
2014-08-27 07:01:41 -0600 answered a question Is it possible to map the 2D room map of room using opencv in android?

SLAM in 3D is hard and, as far as I know, still considered "unsolved". It's even harder when you only have 2D image data: first you'd have to extract depth information from it. This can be done by extracting 2D feature points from two consecutive frames and matching them with RANSAC and friends in order to get the 6DOF transformation from one frame to the other; in this setting it is called structure from motion. Maybe you can add some constraints here, like not moving the camera but only rotating it about one axis. This would reduce the parameter space and therefore the time complexity. After the registration you'd have 3D data, which could easily be projected onto the ground. The projected points can be treated as laser scanner data and fed into the mapping algorithm. The hardest part here is the error drift. Have a look at http://openslam.org/rgbdslam.html. Although they use additional depth data, their algorithm drifts a lot over time, corrupting the map.

Summary: first, look for a structure-from-motion implementation. Second, be aware that this is not easy and needs some understanding of SLAM, mapping, Bayes filters (resp. particle filters), error relaxation and ... stuff :)

PS: Google is trying something similar but more advanced with Project Tango. I think they use stereo cameras there.

2014-08-26 16:15:19 -0600 asked a question segfault with multithreaded gpu calls

Hey,

I'm using gpu::HOGDescriptor from OpenCV 2.4.9.0 in a multithreaded application, that is, multiple GPU HoGs are running simultaneously. Occasionally, I get the following error:

OpenCV Error: Gpu API call (an illegal memory access was encountered) in extract_descrs_by_cols, file /home/stfn/libs/opencv-2.4.9/modules/gpu/src/cuda/hog.cu, line 545
OpenCV Error: Gpu API call (an illegal memory access was encountered) in mallocPitch, file /home/stfn/libs/opencv-2.4.9/modules/dynamicuda/include/opencv2/dynamicuda/dynamicuda.hpp, line 1134
OpenCV Error: Gpu API call (an illegal memory access was encountered) in normalize_hists, file /home/stfn/libs/opencv-2.4.9/modules/gpu/src/cuda/hog.cu, line 323
OpenCV Error: Gpu API call (an illegal memory access was encountered) in call, file /home/stfn/libs/opencv-2.4.9/modules/gpu/include/opencv2/gpu/device/detail/transform_detail.hpp, line 364
terminate called recursively
terminate called recursively
terminate called after throwing an instance of 'cv::Exception'
Aborted (core dumped)

It seems there is no rule to it; it happens completely randomly. Are there any guidelines for using CUDA, or rather the OpenCV gpu module, in multithreaded applications?

2014-08-25 05:36:37 -0600 answered a question Recording long videos: Memory management

What's wrong with the VideoWriter? Isn't it writing directly to the HDD?

2014-08-20 09:06:20 -0600 commented question TBB parallel_for vs std::thread

Yes, I mentioned that. I also mentioned that this data will be the same for each thread, which is the problem :)

2014-08-20 06:29:05 -0600 received badge  Scholar (source)
2014-08-20 06:03:40 -0600 received badge  Editor (source)
2014-08-20 05:47:43 -0600 asked a question TBB parallel_for vs std::thread

Hi,

I'm starting with parallel processing in OpenCV and wonder why I should use parallel_for (from TBB) instead of just using multiple std::threads. As I understand the parallel_for functionality, you have to create a class extending cv::ParallelLoopBody with a method of signature void operator()(const cv::Range& range). This is where the processing happens. But you cannot pass any arguments to this function, nor can you parametrize your parallel function in any other way. All you have is the range of your thread and the arguments you passed to the cv::ParallelLoopBody instance, which are the same for each thread. So you have to sort out your arguments using that range, e.g. by passing a vector of images to the cv::ParallelLoopBody instance and then using the range to extract the one you need. You'd have to do so for every single parameter that is thread-dependent.

So what's the benefit compared to plain threads? With boost or C++11 I can bind any arbitrary function with (almost) arbitrary parameters, without creating a new class for each task to be parallelized. For this purpose I wrote a very primitive thread pool manager (.hpp, .cpp). Anything wrong with that?

cheers, stfn

P.S. I'm not a threading expert. I know there are memory access concerns when the functions I'm threading use the same memory for writing. Reading is not the problem, but when two functions write simultaneously, e.g. to the same Mat, what happens besides probably corrupted data due to race conditions? Is caching triggered, forcing the data to be up to date before writing? More generally: what do I need to take care of in terms of performance and data safety? Are those pitfalls already taken care of in TBB, and is that why it is used in OpenCV?

EDIT: I ended up using tbb::task_group for parallelization and load balancing. Works like a charm.

2014-08-20 05:22:11 -0600 commented answer cmake configuration for maximum performance

thanks, running cmake with those options led to the same performance. Strangely, I had activated the very same options with ccmake, which resulted in inferior performance. Something odd was going on there ...

2014-08-14 04:15:14 -0600 answered a question Why imshow doesn't always show pixel values in a window after zoom?

I guess OpenCV doesn't have that much to do with it; it's rather the window manager of the operating system. Highgui is meant more for debugging purposes and is a fairly primitive interface to some basic GUI functions. If you need a reliable GUI, I'd suggest you use Qt. It plays well with OpenCV images and vice versa.

2014-08-14 04:06:15 -0600 asked a question cmake configuration for maximum performance

Hi,

normally I use the OpenCV stack coming with ros-hydro. But since I need some GPU implementations right now, I downloaded 2.4.9 and compiled the sources on my own. I noticed that the performance of my custom build is far behind the ROS stack. FAST_MATH, WITH_EIGEN, WITH_TBB, WITH_CUDA and WITH_CUBLAS were activated, and I built the Release configuration. Any ideas what optimization is still missing? I also tried to find the cmake configuration the ROS stack (apt package: ros-hydro-opencv2) is compiled with, but didn't find much apart from the Jenkins log.
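For reference, the configuration described above corresponds roughly to a cmake invocation like the following (flag names as I believe they are in OpenCV 2.4; this is my reconstruction, not the exact ROS build configuration, so double-check CMakeCache.txt to see what was actually picked up):

```shell
cmake -D CMAKE_BUILD_TYPE=Release \
      -D ENABLE_FAST_MATH=ON \
      -D WITH_EIGEN=ON \
      -D WITH_TBB=ON \
      -D WITH_CUDA=ON \
      -D WITH_CUBLAS=ON \
      ..
```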

cheers, stfn

2014-08-10 09:24:55 -0600 received badge  Critic (source)
2014-08-10 08:44:48 -0600 asked a question Virtual Memory Size with the gpu::HOGDescriptor

Hi, I don't know exactly whether this is a problem, but when I run the HoG app from the samples, the program has a virtual memory size of more than 60GB as soon as gpu::HOGDescriptor is instantiated. Any ideas?

thanks and cheers, stfn

p.s. some system specs:

  • "GeForce GTX TITAN Black", 6136 MB, sm_35, 2880 cores, Driver/Runtime ver. 6.0/6.0
  • 32GB Ram
  • i7 Quadcore
  • opencv 2.4.9 stable
2013-12-07 15:27:15 -0600 received badge  Supporter (source)