2020-07-29 10:48:34 -0600 | received badge | ● Student (source) |
2019-07-01 02:45:17 -0600 | received badge | ● Famous Question (source) |
2018-01-26 02:53:02 -0600 | received badge | ● Notable Question (source) |
2017-12-29 09:23:24 -0600 | received badge | ● Necromancer (source) |
2017-07-19 21:36:51 -0600 | received badge | ● Popular Question (source) |
2016-08-10 11:09:14 -0600 | commented answer | How to know what to #include for a given function? That's definitely true, and I know that. The thing is that a lot of devs just google for a problem or documentation; it's very convenient. For OpenCV, it's easy to stumble upon the outdated documentation. |
2016-08-10 10:28:55 -0600 | received badge | ● Teacher (source) |
2016-08-10 10:27:20 -0600 | received badge | ● Self-Learner (source) |
2016-08-10 10:27:03 -0600 | received badge | ● Necromancer (source) |
2016-08-10 10:27:03 -0600 | received badge | ● Self-Learner (source) |
2016-08-10 06:59:20 -0600 | commented answer | How to know what to #include for a given function? It's a real problem that Google ranks the OpenCV 2.4 docs higher than the up-to-date documentation. |
2016-08-10 06:57:55 -0600 | answered a question | Stitch images by Translation OpenCV has no support for translation in stitching, since that's highly use-case specific. You'll have to implement your own Stitcher that does alignment, warping, seam finding and blending. Have a look at the detailed stitching example. The OpenCV example does a lot of tuning (like calculating seam finding on smaller images) which you can throw out. If you really need bundle adjustment for translational alignment, you will have to implement your own version of BA; usually, simpler methods will do for most use-cases. You will also need to implement your own warper to project the images into your panorama's target space (probably a plane). |
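The translational alignment mentioned above can be illustrated with plain phase correlation (OpenCV offers cv::phaseCorrelate for this). The following is a minimal NumPy sketch of the idea, not the stitching pipeline itself, and it assumes exact circular shifts for simplicity:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the (dy, dx) translation such that b ~= np.roll(a, (dy, dx))."""
    R = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    R /= np.abs(R) + 1e-12            # keep only the phase difference
    corr = np.fft.ifft2(R).real       # a single sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts past the halfway point wrap around to negative offsets
    h, w = a.shape
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)
```

For real overlapping photographs you would window the images and correlate only the overlap region, but the peak-finding principle is the same.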
2016-08-10 06:53:14 -0600 | answered a question | Where to report Bug in OpenCV Android Create a new issue here. |
2016-08-10 06:07:07 -0600 | asked a question | Calculate Gunnar Farnebäck flow on a masked image Hello, I'm wondering whether it's possible to use masked images for flow calculation. Assume a non-rectangular image (though connected and convex, without holes): some portions should be ignored for flow calculation, as they lead to wrong matches. What would be the best approach to calculate the flow here, preferably a Gunnar Farnebäck flow? Removing all flow vectors that point "outside" of the image does not work. |
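Since calcOpticalFlowFarneback has no mask parameter, one common workaround is to compute the flow on the full image and afterwards invalidate every vector whose start or end point leaves the valid region. A hypothetical NumPy sketch of that post-filtering step (the function name and the NaN-marking convention are my own, not OpenCV API):

```python
import numpy as np

def mask_flow(flow, mask):
    """Invalidate (set to NaN) flow vectors whose start or end point
    lies outside the valid region given by a boolean mask (H x W).
    flow is an H x W x 2 array of (dx, dy) displacements."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # end points of each flow vector, rounded and clipped to image bounds
    xe = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ye = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    valid = mask & mask[ye, xe]
    out = flow.astype(float).copy()
    out[~valid] = np.nan
    return out
```

Note this does not stop masked-out pixels from influencing the polynomial expansion of their neighbours; it only discards vectors that start or end in the ignored region.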
2016-08-10 05:21:15 -0600 | answered a question | Restrict flow field to horizontal disparity It should be quite doable to modify the Gunnar Farnebäck algorithm implemented in OpenCV to find flows along a fixed axis. The equation that finds the displacement from the calculated polynomials can be changed to solve along any given dimension. Please have a look at the paper for reference: http://www.diva-portal.org/smash/get/... Sadly, the OpenCV implementation has virtually no comments, thus I'm not able to identify the corresponding lines in the code quickly. |
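The constrained solve described above can be sketched as a scalar least-squares problem: each neighbourhood's polynomial expansion yields a 2x2 matrix A and a vector b (sign conventions vary between implementations), and restricting the displacement d to a fixed axis u reduces the 2D solve to one unknown. An illustrative sketch, not OpenCV code:

```python
import numpy as np

def constrained_displacement(A, b, u):
    """Least-squares displacement d = t * u (t scalar) minimising ||A d - b||,
    i.e. a Farnebäck-style displacement solve restricted to the axis u."""
    Au = A @ u
    t = (Au @ b) / (Au @ Au)
    return t * u
```

For a non-diagonal A this gives a different (and better-fitting) result than solving the full 2D system and dropping the Y component, which matches the objection raised in the comment below.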
2016-08-10 05:17:45 -0600 | commented question | Restrict flow field to horizontal disparity ROI limiting does not work; the flow algorithms implemented in OpenCV require some margin along the image borders. It should be possible to adjust the Gunnar Farnebäck algorithm quite easily: the equation that finds the displacement from the calculated polynomials can be modified to solve along any given dimension. However, I've found another approach, so I don't need the modification anymore. |
2016-07-21 02:04:03 -0600 | commented question | Restrict flow field to horizontal disparity Thanks for your comment, but that does not work. For diagonal edges in the image, a diagonal flow might be detected. Dropping the Y component will just give an X component that's too short. |
2016-07-20 18:17:30 -0600 | asked a question | Restrict flow field to horizontal disparity Hello, I was wondering if any of the flow field implementations in OpenCV can be restricted to only allow disparity along the horizontal axis. Background: I know which linear movements are occurring, and I only want to measure disparity along this axis. If not, any hints on how such a feature could be implemented would be highly appreciated. |
2015-12-09 05:23:43 -0600 | received badge | ● Enthusiast |
2015-12-08 04:34:11 -0600 | commented question | Inverse cv::linearPolar causes missing patch in output |
2015-12-07 12:50:32 -0600 | commented answer | Selective hole filling Brilliant out-of-the-box answer! |
2015-12-07 10:01:38 -0600 | received badge | ● Editor (source) |
2015-12-07 10:01:05 -0600 | asked a question | Inverse cv::linearPolar causes missing patch in output I'm trying to apply an inverse linear polar transform to some square images: The result I'd expect is the following (done using Photoshop): However, the result I get from OpenCV's linearPolar function is the following: some part of the image is missing and can be noticed as a black slice. The code I am using is: Am I doing something wrong here? |
2015-12-02 09:16:26 -0600 | answered a question | Utilise known extrinsic parameters when stitching panoramas With regard to the OpenCV stitching pipeline, it's fairly easy: deactivate the (homography-based) rotation estimator, which would overwrite the existing extrinsic parameters. Then use bundle adjustment (ray-based or reprojection-based) to refine the extrinsic and intrinsic parameters. The documentation is somewhat difficult to understand without background knowledge; I can suggest "Computer Vision: Algorithms and Applications" by Richard Szeliski as a read here. |
2015-12-02 09:12:39 -0600 | asked a question | Using OpenGL in an OpenCV application causes imread to malfunction I'm currently building some visualisation using the Irrlicht Engine (built upon OpenGL) for an OpenCV application I'm developing. However, as soon as I start to execute OpenGL commands, I can no longer use OpenCV's imread. I would be very glad about any advice on how I could hunt down that issue. |
2015-11-19 06:41:50 -0600 | answered a question | What is the best practise for passing cv::Mats around Since it's not explicitly stated yet: the following is dangerous, since the default copy constructor and assignment operator of cv::Mat only copy the header; the underlying pixel data is shared between the copies. |
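The shallow-copy behaviour can be demonstrated without OpenCV, since NumPy assignment shares buffers the same way cv::Mat's copy constructor does (with clone()/copyTo() as the deep-copy counterparts):

```python
import numpy as np

# cv::Mat's copy constructor copies only the header; the pixel data is shared.
# NumPy assignment behaves the same way, so the pitfall is easy to show:
a = np.zeros((2, 2), dtype=np.uint8)
b = a            # shares the underlying buffer, like cv::Mat m2 = m1;
b[0, 0] = 255
assert a[0, 0] == 255   # the "original" changed too

c = a.copy()     # an explicit deep copy, like m1.clone() / m1.copyTo(c)
c[0, 0] = 0
assert a[0, 0] == 255   # a is unaffected this time
```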
2015-03-25 08:06:37 -0600 | received badge | ● Supporter (source) |
2015-03-25 08:06:36 -0600 | received badge | ● Scholar (source) |
2015-03-25 08:06:34 -0600 | commented answer | Convert CV_16UC3 color image to CV_8U grayscale Works like a charm. Thank you. Just change CV_8U3 to CV_8UC3. |
2015-03-24 22:30:51 -0600 | asked a question | Convert CV_16UC3 color image to CV_8U grayscale Hi, I'm trying to convert a CV_16UC3 color image to a CV_8U grayscale image. However, the cvtColor function does not accept CV_16UC3 input images. How could I do the desired conversion? |
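The usual route here is two steps: scale to 8 bit first, e.g. img16.convertTo(tmp, CV_8UC3, 1/257.0), then call cvtColor with COLOR_BGR2GRAY. The same arithmetic as a NumPy sketch (assuming BGR channel order and the standard Rec.601 grayscale weights):

```python
import numpy as np

def to_gray8(img16):
    """Scale a 16-bit 3-channel (BGR) image to 8 bit, then convert to
    grayscale with Rec.601 weights, mirroring
    convertTo(tmp, CV_8UC3, 1/257.0) followed by COLOR_BGR2GRAY."""
    img8 = (img16 / 257.0).round().astype(np.uint8)   # 65535 -> 255 exactly
    b, g, r = img8[..., 0], img8[..., 1], img8[..., 2]
    gray = 0.114 * b + 0.587 * g + 0.299 * r
    return gray.round().astype(np.uint8)
```

The factor 1/257 maps the full 16-bit range onto the full 8-bit range (257 * 255 = 65535); a plain right-shift by 8 would work almost as well.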
2015-01-08 06:04:36 -0600 | asked a question | Utilise known extrinsic parameters when stitching panoramas Dear OpenCV Community, I am currently designing a mobile 360° panorama stitching app using OpenCV. Since a 360° panorama needs a lot of source images (I use 62 at the moment), the adjustment (especially finding the extrinsic parameters) of the images is quite slow. Luckily, I can utilize orientation data derived from the smartphone's sensors to calculate the extrinsic camera parameters for each image. This way, I do not need to detect and match features at all. However, those parameters are subject to slight drift. This means that a few images are slightly displaced in the result: Is it possible to optimize those parameters I already know, by, for example, matching image features? I'm thinking here about only matching adjacent images to gain some performance, but I have no idea how this fits into the stitching pipeline. TL/DR: I already know extrinsic camera parameters, but I would like to optimize them based on image features for a better result. |