2020-10-05 12:50:01 -0600 | received badge | ● Popular Question (source) |
2019-08-05 21:50:26 -0600 | received badge | ● Famous Question (source) |
2019-06-03 03:58:58 -0600 | received badge | ● Notable Question (source) |
2017-04-13 01:33:10 -0600 | received badge | ● Popular Question (source) |
2016-12-18 11:08:22 -0600 | received badge | ● Notable Question (source) |
2016-04-15 06:56:29 -0600 | asked a question | cv::Mat Encoding (RGB vs BGR) How is the color-order information stored in a cv::Mat? How can it be known whether it is an RGB or a BGR image? (there are other possibilities) As far as I know, only the element type and channel count are stored. (like 32FC1) |
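In fact a cv::Mat carries only depth and channel count, no color semantics; BGR is just the convention cv::imread() follows, and the caller has to track it. A minimal plain-C++ sketch of the channel swap that cv::cvtColor(src, dst, cv::COLOR_BGR2RGB) performs on 8-bit data:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Swap channels 0 and 2 of an interleaved 3-channel image in place.
// This is the operation cv::cvtColor(src, dst, cv::COLOR_BGR2RGB)
// applies to an 8-bit BGR buffer.
void swapRB(uint8_t* data, std::size_t pixelCount) {
    for (std::size_t i = 0; i < pixelCount; ++i) {
        uint8_t* px = data + 3 * i;
        uint8_t tmp = px[0];
        px[0] = px[2];
        px[2] = tmp;
    }
}
```

Usage: `swapRB(mat.data, mat.total());` would convert a continuous 8-bit BGR cv::Mat to RGB in place, though calling cvtColor is the idiomatic route.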
2016-01-19 13:25:02 -0600 | received badge | ● Popular Question (source) |
2015-07-24 04:16:29 -0600 | asked a question | Get [R|t] of two Kinect cameras My aim is to determine the correct transformation between the coordinate systems of two Kinect cameras, based on chessboard patterns. (the base unit would be meters) I am basically using the stereo_calib.cpp sample. (with the chessboard unit set correctly) Using 5 pairs, the reprojection error is 3.32. Given a valid point pair, P_k1 = {0.01334677, -0.3134326, 2.604} and P_k2 = {-0.9516979, -0.3950531, 2.483}, the returned R and T do not seem to be right. I am assuming the P_k2 = R*P_k1 + T formula. Any idea where the error comes from, or how to improve my results? |
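The P_k2 = R*P_k1 + T convention can be sanity-checked by applying the returned extrinsics to one measured point and comparing against the other camera's measurement; with only 5 pairs and a reprojection error of 3.32 the extrinsics are likely noisy, so more views usually help. A sketch of the check in plain C++ (no OpenCV types, assuming row-major R):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Apply the rigid transform P_k2 = R * P_k1 + T, where R and T map
// camera-1 coordinates into the camera-2 frame (the convention
// cv::stereoCalibrate() uses for its R, T outputs).
Vec3 transformPoint(const Mat3& R, const Vec3& T, const Vec3& p) {
    Vec3 q{};
    for (int i = 0; i < 3; ++i)
        q[i] = R[i][0] * p[0] + R[i][1] * p[1] + R[i][2] * p[2] + T[i];
    return q;
}
```

If the transformed P_k1 lands far from the measured P_k2, either the convention is inverted (try R^T and -R^T*T) or the calibration itself is off.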
2015-07-13 06:03:12 -0600 | asked a question | HoughCircles: Getting the biggest one Using the built-in circle detection algorithm, I have trouble finding the biggest circle. For example:
I only draw the best detection, but it seems to be the average of the actually detected circles. How can I detect the biggest one? |
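Drawing only the first result gives the best-voted circle, not the biggest; selecting by radius over the full output is the usual fix (tightening minRadius/maxRadius also helps). A sketch, where Circle mirrors the (x, y, radius) layout of the cv::Vec3f triples cv::HoughCircles() returns:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Same layout as the (x, y, radius) cv::Vec3f entries that
// cv::HoughCircles() writes into its output vector.
struct Circle { float x, y, r; };

// Pick the circle with the largest radius instead of the first
// (best-voted) detection.
Circle biggestCircle(const std::vector<Circle>& circles) {
    return *std::max_element(
        circles.begin(), circles.end(),
        [](const Circle& a, const Circle& b) { return a.r < b.r; });
}
```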
2015-01-16 03:58:16 -0600 | commented question | Normalized standard deviation In the meantime, I looked up the exact definition which I hadn't known before, that's why I asked the question. In fact, it is only a division by the squared mean. |
2015-01-16 02:39:31 -0600 | asked a question | Normalized standard deviation What is the easiest way to calculate normalized standard deviation for a certain region of an image? |
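One common definition is the coefficient of variation, sigma/mu; with OpenCV, cv::meanStdDev() over the ROI gives both terms and one division finishes the job. A plain sketch of the computation on a raw sample:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Coefficient of variation (one "normalized standard deviation"):
// population standard deviation divided by the mean. With OpenCV the
// two terms come straight from cv::meanStdDev() over the region.
double coefficientOfVariation(const std::vector<double>& v) {
    double mean = 0.0;
    for (double x : v) mean += x;
    mean /= v.size();
    double var = 0.0;
    for (double x : v) var += (x - mean) * (x - mean);
    var /= v.size();  // population variance
    return std::sqrt(var) / mean;
}
```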
2015-01-08 09:51:14 -0600 | asked a question | HDR: Precalibrate http://docs.opencv.org/trunk/doc/tuto... Is it possible to generate the camera response matrix in advance? (for a given sensor) For example, can I use different exposure times when generating the response matrix than when merging the HDR image? |
2014-12-09 13:49:32 -0600 | marked best answer | Detector for FREAK Which feature detector works best with the FREAK extractor? (with good performance) And how should I set the parameters? (for example: threshold) |
2014-11-02 13:10:06 -0600 | asked a question | Pixelwise subtract, with negative numbers I would like to implement the following pixelwise operation between two images:
But the problem is that cv::subtract() and the operator - on cv::Mat do not produce negative values; they saturate to 0 instead. How can I easily implement the behavior I need? |
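For CV_8U inputs cv::subtract() saturates; requesting a signed destination depth, e.g. cv::subtract(a, b, dst, cv::noArray(), CV_16S), keeps the negatives. The two behaviors side by side on single 8-bit values:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Saturating difference, as cv::subtract() computes for two CV_8U
// inputs: results below 0 are clamped to 0.
uint8_t subSaturate(uint8_t a, uint8_t b) {
    int d = int(a) - int(b);
    return uint8_t(std::max(d, 0));
}

// Signed difference, as cv::subtract(a, b, dst, cv::noArray(), CV_16S)
// produces: negatives survive in the wider signed destination type.
int16_t subSigned(uint8_t a, uint8_t b) {
    return int16_t(int(a) - int(b));
}
```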
2014-10-20 19:29:35 -0600 | asked a question | cv::xfeatures2d::SURF abstract? With an all-default opencv + opencv_contrib build (vs2013), it seems that the mentioned class is abstract. Test code: Error: The nonfree.hpp: Any idea what the problem could be? |
2014-10-10 18:00:40 -0600 | asked a question | Build error under Windows With the following cmake settings:
Compilation fails: Any idea how to fix it? EDIT: CMake output: http://pastebin.com/3HVJAqwX MinGW output: http://pastebin.com/X3fZFPdk (4.8.1-4 version) opencv: 55f490485bd58dc972de9e0333cdff005fce1251 (master latest) opencv_contrib: 49102c7e7a44dd7c0cc992c27e52a9547aad745e (master latest) |
2014-10-02 17:28:21 -0600 | asked a question | Speeding up pixelwise operations I would like to compute an average LoG (Laplacian of Gaussian) score over a given AOI. The problem with the naive approach is that it takes a really long time, and I have a strong feeling that it could be done faster. Any tip to make the code run faster? (it will run on a mobile Ivy Bridge CPU) |
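Most of the time in a naive version usually goes to per-pixel accessor calls; letting cv::Laplacian() process the whole ROI and averaging the absolute result is typically far faster than a hand-written at<>() loop. If the loop must stay by hand, raw row pointers help; a sketch of a mean absolute 4-neighbor Laplacian over a plain grayscale buffer:

```cpp
#include <cassert>
#include <cstdint>

// Mean absolute response of the 4-neighbour Laplacian kernel
//   0  1  0
//   1 -4  1
//   0  1  0
// over the interior of a w x h 8-bit grayscale buffer. Row pointers and
// plain integer arithmetic avoid per-pixel bounds-checked accessors.
double meanAbsLaplacian(const uint8_t* img, int w, int h) {
    double sum = 0.0;
    for (int y = 1; y < h - 1; ++y) {
        const uint8_t* up  = img + (y - 1) * w;
        const uint8_t* row = img + y * w;
        const uint8_t* dn  = img + (y + 1) * w;
        for (int x = 1; x < w - 1; ++x) {
            int lap = up[x] + dn[x] + row[x - 1] + row[x + 1] - 4 * row[x];
            sum += lap < 0 ? -lap : lap;
        }
    }
    return sum / ((w - 2) * (h - 2));
}
```

The same structure applies with cv::Mat via mat.ptr<uchar>(y); a Gaussian blur beforehand turns this into the LoG score.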
2014-09-03 14:45:43 -0600 | asked a question | VideoWriter problem with Ubuntu (3.0 alpha) With OpenCV 3.0 (master branch): With the libopencv-dev package installed from the Ubuntu repo: Any idea what's wrong (no console error)?
UPDATE: I tried the stable 2.4 branch, but it's broken as well. So there must be some package that comes with libopencv-dev that is needed. |
2014-09-01 07:45:33 -0600 | asked a question | CV_FOURCC missing? In the 3.0 version (post-videoio split), which header contains the CV_FOURCC macro? Or how can I avoid using it? |
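In 3.0 the macro's replacement is the static method cv::VideoWriter::fourcc('M','J','P','G'), declared in opencv2/videoio.hpp. The value itself is just four characters packed little-endian:

```cpp
#include <cassert>
#include <cstdint>

// Four-character code packed into an int, byte by byte: the same value
// the old CV_FOURCC macro and cv::VideoWriter::fourcc() produce.
int32_t fourcc(char c1, char c2, char c3, char c4) {
    return int32_t(uint8_t(c1)) | (int32_t(uint8_t(c2)) << 8) |
           (int32_t(uint8_t(c3)) << 16) | (int32_t(uint8_t(c4)) << 24);
}
```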
2014-08-29 12:52:23 -0600 | commented question | How can I get the old nonfree functionality back? I use the SURF feature points without CUDA. Is this bug expected to be fixed soon? |
2014-08-29 12:20:06 -0600 | asked a question | How can I get the old nonfree functionality back? I know it's moved to opencv_contrib, but if I build from the git repo with OPENCV_EXTRA_MODULES_PATH set correctly, I have no xfeatures2d at all. (but I have xphoto, ximgproc, xobjdetect) Any idea? |
2014-08-25 04:47:49 -0600 | asked a question | Recording long videos: Memory management Hi, I get the frames as raw RGB888 data (I can convert it to cv::Mat), and I want to store it in a compressed video file, without keeping all the frames in memory. Is it possible to use the hard drive to store the temporary data, or is there any easy way to achieve my goal? |
2014-07-22 06:36:08 -0600 | asked a question | module.hpp vs module/module.hpp I'm working with the latest OpenCV from the master branch. What is the difference between (for example): #include <opencv2/imgproc.hpp> and #include <opencv2/imgproc/imgproc.hpp> Is it safe to use both a stable OpenCV from the Ubuntu repo and a self-built one? |
2014-07-22 06:25:14 -0600 | commented question | Apply infinite homography to image Look at the documentation of gemm(): dst = alpha*src1.t()*src2 + beta*src3.t(); In C++ I would use the operator *; there must be a Java equivalent to that. |
2014-07-20 11:48:27 -0600 | commented answer | Apply rotation matrix to image Just multiply the matrices, and pass the result to the warpPerspective() function. |
2014-07-19 20:25:32 -0600 | commented question | Build from git repo fails under Linux, but ok with Windows I found it as a bug report: http://code.opencv.org/issues/3821 |
2014-07-19 20:19:47 -0600 | answered a question | Apply rotation matrix to image The R matrix transforms from the Cam1 system to the Cam2 system. It's a 3D->3D transformation. warpPerspective() expects a 2D image -> 2D image transformation (in normalized space). Luckily there is a function for that: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=findhomography#findhomography |
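When R and the intrinsics are already known, findHomography() is not even needed: for a pure rotation the image-to-image mapping is the infinite homography H = K2 * R * K1^-1, which can be handed straight to cv::warpPerspective(). A plain 3x3 sketch of the composition (K1inv is assumed to be the already-inverted first intrinsic matrix):

```cpp
#include <array>
#include <cassert>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Plain 3x3 matrix product.
Mat3 matmul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Infinite homography for a pure rotation R between two cameras with
// intrinsics K1, K2 (K1inv = K1 inverted): H = K2 * R * K1^-1.
// The result is the 3x3 matrix cv::warpPerspective() expects.
Mat3 infiniteHomography(const Mat3& K2, const Mat3& R, const Mat3& K1inv) {
    return matmul(matmul(K2, R), K1inv);
}
```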
2014-07-19 19:46:14 -0600 | asked a question | Build from git repo fails under Linux, but ok with Windows Any idea what the source of this error is? Distro: Linux Mint 17 (all dependencies up to date) I used this tutorial: http://docs.opencv.org/doc/tutorials/introduction/linux_install/linux_install.html |
2014-06-23 19:24:28 -0600 | asked a question | Blending images with different focal lengths I'm trying to implement an automatic focus stacking algorithm. The scene and the camera position are static, but the view angle changes with the focus distance. I can take several images, and I have direct API control over the focus lenses. Is there any robust automatic method to align the images? Or any method to calibrate the setup? (measure the angles) |
2014-06-08 12:34:30 -0600 | received badge | ● Teacher (source) |
2014-06-07 14:20:39 -0600 | commented question | Except for OpticalFlow,Is there other way to calculate the new position of the corners points? You need to specify what to track, because the majority of the image (the background) is still, and a hand is not easy to track. |
2014-06-07 14:04:06 -0600 | answered a question | Get smoothing point using B-spline curve C++ The p1 = 11; determines the number of evaluated points. But if only a fixed number of points is needed, it is wasteful to use a generic B-spline; the exact behavior can be achieved by weighting the four points with precalculated B-spline basis function values. And k = 2; means that the segments are simple lines which just connect the control points, so it should be 3 to be continuous in tangent, and 4 to be continuous in curvature. So the quick fix is p1 = 4; and k = 3; But this is not the best way to filter out points. An easy one: a moving average. A harder one: a Kalman filter. |
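The moving average mentioned above can be as small as this: a centered window of radius r that shrinks at the borders (a plain sketch, not tied to any OpenCV API):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Centered moving average with window radius r, a simple way to smooth
// a noisy 1-D point track. Near the edges the window shrinks so every
// output sample averages only existing neighbours.
std::vector<double> movingAverage(const std::vector<double>& v, std::size_t r) {
    std::vector<double> out(v.size());
    for (std::size_t i = 0; i < v.size(); ++i) {
        std::size_t lo = i >= r ? i - r : 0;
        std::size_t hi = std::min(v.size() - 1, i + r);
        double sum = 0.0;
        for (std::size_t j = lo; j <= hi; ++j) sum += v[j];
        out[i] = sum / double(hi - lo + 1);
    }
    return out;
}
```

For a 2-D track, run it separately on the x and y coordinate sequences.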
2013-08-29 04:09:53 -0600 | asked a question | Superresolution using feature points instead of optical flow I'd like to use feature points to detect a rigid transform instead of optical flow, which is much slower, but I can't find any documentation about what interface I must implement. (to make my algorithm act like an optical flow) Does it even work at all, or is there a better solution? |
2013-08-09 17:38:09 -0600 | received badge | ● Editor (source) |
2013-08-09 09:55:10 -0600 | asked a question | SuperResolution nextFrame bug In the superresolution sample (built with the vc11 compiler) the following line: //Ptr<SuperResolution> superRes; superRes->nextFrame(result); results in the following error (tried with multiple test videos): http://i.imgbox.com/abwNaL3z.jpg And if I change the optical flow method to simple, it takes forever to run (30 min with an i7 2600K). Any idea? Update: The program used 3.5 GB of memory before it stopped. That is simply unreasonable. It must be a memory leak. |