2019-11-25 04:11:05 -0500 | received badge | ● Nice Answer (source) |
2019-06-20 07:52:17 -0500 | received badge | ● Notable Question (source) |
2018-01-06 03:04:06 -0500 | received badge | ● Good Answer (source) |
2017-06-20 08:36:52 -0500 | received badge | ● Nice Answer (source) |
2017-05-17 10:53:52 -0500 | received badge | ● Popular Question (source) |
2016-12-30 05:02:42 -0500 | commented answer | human detection with HOG. How to improve it? Hi. The "flickering" is normal behavior, since the HoG detector (like all other pedestrian detection algorithms) does not detect every person in every frame of a video. In this video someone runs the opencv_gpu_hog example on the example video from OpenCV. As you can see, there is some "flickering" even in the OpenCV example. The reason the videos play at a slower rate is that HoG needs a lot of computational power. If you want more fps, you can tune the parameters of the detectMultiScale(...) method, reduce the resolution of the video (the OpenCV example video has a resolution of only 768x576 pixels), or use the CUDA HoGDetector. |
2016-11-24 01:46:40 -0500 | commented question | error when read float pixel from a Mat One error is in the last line. You cast a float value to an int. |
2016-11-24 01:22:13 -0500 | edited question | error when read float pixel from a Mat Hi everyone, I just want to read a value from each pixel of the oIMG mat, but it fails. Can you help explain where the errors are in the code below: |
2016-06-16 07:11:28 -0500 | received badge | ● Good Answer (source) |
2016-06-16 07:11:28 -0500 | received badge | ● Enlightened (source) |
2016-06-02 02:50:56 -0500 | received badge | ● Necromancer (source) |
2016-06-02 02:42:11 -0500 | answered a question | geometric models for lane detection and tracking Hi, I agree with the comment from StevenPuttemans. You should not use different models and try to combine them. To detect the lane markers, use a model which approximates them well enough and which can be computed easily (e.g. B-splines). After detecting the lane markers you can use this information to derive the driving lane. At the DARPA Urban Challenge, Team Caltech developed an approach which works very well and runs in real time. See the website of Mohamed Aly here. There he also presents pictures and videos of the results. He also published a paper |
2016-04-22 16:01:59 -0500 | commented answer | Feature detection-based localization using OpenCV Hi, Point Cloud Library is a library quite similar to OpenCV. In other words, what OpenCV is for computer vision, Point Cloud Library is for point cloud processing. It's also an open source project released under the BSD license, so you can use it in commercial applications. Do you really have no initial guess about the scan position? My experience says that robust scan matching with no initial guess and noisy data (as in your example) is very hard. |
2016-04-22 02:08:24 -0500 | answered a question | Feature detection-based localization using OpenCV Hi, your problem is called scan matching or point cloud registration and is a well-known problem in robotics. In applications like yours, an initial guess of the location where the laser scan was taken is typically available (the pose from the last time step, odometry data, ...). So you can use the most widely used algorithm, ICP, or variants of it (see here, here and here). An implementation of ICP (GICP) from the Point Cloud Library can be found here and a full scan matching component from ROS here. If you have no initial guess the problem is much harder; one solution then would be the algorithm presented in this video |
2016-02-12 10:55:33 -0500 | received badge | ● Nice Answer (source) |
2016-02-03 08:21:04 -0500 | answered a question | compareHist Hi, take a look at the cv::compareHist(...) documentation. The documentation of the enum cv::HistCompMethods shows all the math of the comparison methods. |
2016-02-03 05:03:23 -0500 | commented question | Reducing greyscale to monochrome Do you mean reducing greyscale to a binary image (black and white)? Greyscale and monochrome are quite similar. If you want to get a binary image you can use cv::threshold. Here you can find an OpenCV tutorial about thresholding with some examples. |
2016-01-31 14:46:24 -0500 | received badge | ● Nice Question (source) |
2016-01-25 10:31:25 -0500 | received badge | ● Nice Answer (source) |
2015-11-07 16:23:01 -0500 | commented answer | Would like to use different scales with HOG people detection Hi. Can you please tell us what the problem with the found locations is? The found locations are relative to the downscaled image. If you want the locations relative to the input image, you have to upscale the found locations (position and size) by the scale factor. |
2015-11-04 04:48:54 -0500 | received badge | ● Good Answer (source) |
2015-10-29 05:13:44 -0500 | answered a question | Fitting a point-cloud Typically the ICP (Iterative Closest Point) algorithm is used to solve such a problem. As far as I know, there is no ICP implementation in OpenCV. If you can use the Point Cloud Library in your project, you can easily adapt the code from the ICP tutorial on the PCL website (see here) to solve your problem. |
2015-10-29 04:17:16 -0500 | commented question | Is it possible to run a GPU HOG with a custom window size? Hi. We run the GPU HoG with a descriptor size of 18x36 and 48x96 in our application without "stalling". The application was tested on a Tegra K1 and on a Geforce GTX 650 using the master branch of the OpenCV 3.0 repository. |
2015-10-29 03:43:21 -0500 | answered a question | Would like to use different scales with HOG people detection Hi. To skip the first scales you can downscale the image with the resize method before running detectMultiScale. |
2015-10-21 03:35:31 -0500 | commented question | Different output from GPU implementation of HOG Hi, in our tests we also got different results for the GPU and CPU HOG implementations (see my question here). |
2015-09-15 02:25:07 -0500 | commented question | Tuning OpenCV HOG method for reliable pedestrian detection using Thermographic camera I don't think that you have to invert the image, since the HoG implementation in OpenCV 2.4.11 doesn't consider the sign of the gradient. The reason is that in a typical pedestrian detection scenario the color of the pedestrian's clothing is unknown (e.g. bright clothing on a dark background or dark clothing on a bright background). In your scenario the person will typically be brighter than the background. Dalal wrote in his paper (section 6.3) that using signed gradients does help significantly. In the OpenCV 3.0 C++ HoG implementation you can set a flag for whether the descriptor uses signed or unsigned gradients. But then you cannot use the default descriptor anymore and you have to train your own descriptor. |
2015-05-04 03:34:03 -0500 | received badge | ● Nice Answer (source) |
2015-05-04 02:14:19 -0500 | answered a question | HOG descriptor output Hi, it's not only the number of cells in the descriptor window multiplied by the number of bins. You also have to consider the block stride. This image shows the structure of the HoG descriptor and how it is built (source: slide 18 of B. Triggs' talk at ICVSS, http://class.inrialpes.fr/tutorials/triggs-icvss1.pdf). The overlap of blocks is what is missing in your computation. |
2015-04-22 11:29:31 -0500 | received badge | ● Nice Answer (source) |
2015-04-09 02:08:36 -0500 | commented question | Different results of gpu::HOGDescriptor and cv::HOGDescriptor You are right, the differences are minor. In the OpenCV HoG example it is possible to switch between the CPU and the GPU implementation, and both use the same model (default people detector). The detection results look quite similar, so there can be no big differences, but I am wondering what the reason for the "small" difference in the descriptor is. Probably you're right and it's a floating point issue (for example 16-bit float vs. 32-bit float). |
2015-04-08 10:13:14 -0500 | received badge | ● Student (source) |
2015-04-08 06:56:51 -0500 | commented question | HOGDescriptor.computer error Hi. Are you sure that you have checked this source code? The code in your question will not compile; it contains a lot of syntax errors. |
2015-04-08 04:05:53 -0500 | asked a question | Different results of gpu::HOGDescriptor and cv::HOGDescriptor Hi, we evaluated the performance improvement of the GPU implementation of the HOGDescriptor against the CPU implementation. During the tests, we compared the descriptors computed by the GPU and the CPU implementation and saw that they are different. Why are the descriptors different? Is this a bug? The following code shows our test application. We use the default people detector with default parameters on a frame from the OpenCV example video. The OpenCV version is opencv-2.4.10 with CUDA 6.5. The descriptor was computed at Rect(130, 80, 64, 128). Here is the test frame. If we visualize the descriptors with the method from here, they look very similar, but if you take a look at block (0,0) you see differences in magnitude, and in block (3,5) different orientations. CPU HOGDescriptor visualization GPU HOGDescriptor visualization Could anyone give me some hints why the descriptors are different? Best regards, Siegfried |
2015-03-31 04:21:57 -0500 | received badge | ● Nice Answer (source) |
2015-03-30 08:00:04 -0500 | answered a question | Create a mat ROI for each contour blob? Since you already compute the bounding box of the contours, you can do something like this |
2015-03-17 12:50:50 -0500 | received badge | ● Nice Answer (source) |
2015-02-26 13:24:49 -0500 | received badge | ● Nice Answer (source) |
2015-02-25 11:22:53 -0500 | received badge | ● Nice Answer (source) |
2014-12-29 01:11:00 -0500 | received badge | ● Enthusiast |
2014-10-10 00:32:59 -0500 | answered a question | What is: vector<vector<Point2f> > imagePoints; You are right, the structure is a standard vector containing standard vectors of cv::Point2f. And cv::Point2f is a 2D point with data type float. OpenCV has a template class for 2D points (see here). For convenience there are some aliases |
2014-10-06 09:29:38 -0500 | answered a question | SIFT detector returns duplicate keypoints Hi, this is the behavior described in the original SIFT paper, Distinctive Image Features from Scale-Invariant Keypoints by David Lowe, in Section 5, "Orientation assignment", on page 13. |
2014-09-27 06:35:18 -0500 | edited question | Use hog.detectMultiScale in multiple threads? When I do pedestrian detection with HOG + SVM in OpenCV, I try to use it in two threads to improve the detection speed. But I get an OpenCV error: My code: Can anyone help me? Thanks. |
2014-09-25 17:16:40 -0500 | received badge | ● Nice Answer (source) |
2014-09-10 07:45:13 -0500 | answered a question | fillPoly crashes Hi, check the parameters you pass to cv::fillPoly(...). The documentation of fillPoly uses other types than your code does. An example of how to use fillPoly can be found in the OpenCV tutorials section here. There is also a question on stackoverflow about how to draw a contour / polygon (using cv::fillPoly(...)). |
2014-08-01 01:00:15 -0500 | commented question | Sir, while coding in C language using VS 2013, an OpenCV programme with the code given below compiles successfully, but while showing output it shows the image given below as an error You should use the C++ API and avoid the old (deprecated) C API, especially when starting a new project. |