2020-02-22 11:03:38 -0600 | received badge | ● Popular Question (source) |
2018-02-28 06:34:44 -0600 | received badge | ● Famous Question (source) |
2016-07-20 08:08:31 -0600 | received badge | ● Notable Question (source) |
2015-06-15 21:03:29 -0600 | received badge | ● Popular Question (source) |
2013-05-08 05:58:28 -0600 | received badge | ● Critic (source) |
2013-05-08 05:40:59 -0600 | asked a question | How can I compute the SVD and verify that the ratio of the first-to-last singular value is sane with OpenCV? I want to verify that a homography matrix will give good results. This question has an answer for it, but I don't know how to implement that answer. So how can I compute the SVD and verify that the ratio of the first-to-last singular value is sane with OpenCV? |
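The check asked about above can be sketched in a few lines. This uses NumPy's SVD (OpenCV's `cv::SVD` / `cv2.SVDecomp` returns the same singular values); the `max_ratio` cutoff is an assumed, tunable threshold, not a value from the original answer:

```python
import numpy as np

def homography_is_sane(H, max_ratio=1e7):
    """Return True if the ratio of the first-to-last singular
    value of the 3x3 homography H is below max_ratio."""
    # Singular values come back sorted in descending order.
    singular_values = np.linalg.svd(H, compute_uv=False)
    if singular_values[-1] == 0:
        return False  # degenerate (rank-deficient) homography
    return singular_values[0] / singular_values[-1] < max_ratio

# A near-identity homography is well conditioned...
good_H = np.eye(3)
# ...while a nearly rank-deficient one is not
# (the second row is twice the first).
bad_H = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [0.0, 0.0, 1e-9]])
print(homography_is_sane(good_H))  # True
print(homography_is_sane(bad_H))   # False
```

A huge first-to-last ratio means the homography is nearly singular, which is exactly the case that produces the heavily distorted warps described below.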
2013-05-07 17:04:36 -0600 | commented answer | good result or bad result for findHomography How can I compute the SVD and verify that the ratio of the first-to-last singular value is sane with OpenCV? |
2013-05-07 14:21:47 -0600 | commented answer | How to know if findHomography + warpPerspective will give good result beforehand? Thanks for your answer; I should have explained that this is an Android app. I am trying to align photos taken by users, and most likely these photos will contain a person whose position changes between shots. The app works fine on most photos, but when the background is some kind of solid color and only the person in the image has keypoints, the alignment result gets really distorted. I have tried to filter out matches by their actual distances, but it still gets some false matches from the person in the photo. This is why I am trying to examine the homography matrix and be sure the images will be aligned. |
2013-05-05 07:50:49 -0600 | asked a question | How to know if findHomography + warpPerspective will give good result beforehand? I am trying to align some photos using findHomography + warpPerspective, which works great on most photos. However, for some photos it gives a really distorted result, or one entirely washed out with gray. I want to eliminate these photos beforehand so I won't apply warpPerspective to them. My question is: how can I examine the result of findHomography so I can skip warpPerspective if it is going to give an entirely distorted result? |
2013-05-03 16:44:59 -0600 | asked a question | How to solve lighting differences on stitching images? I am trying to copy a small image onto a bigger image; both images are from the same scene and are aligned very well. I am using Laplacian blending, which makes it look seamless. One problem I couldn't solve yet is illumination. Both photos are from the same scene and were taken with a very small time difference, yet there are still some color changes because of lighting differences. I tried to solve this with the ExposureCompensator class from the OpenCV stitching module, but unfortunately I couldn't make it work; it is poorly documented, and when I search for it I find similar problems asked on Stack Overflow, none of them answered yet. So it seems I need to develop my own solution for this illumination problem, and I don't know where to start. Please tell me where to start. Source Images Destination Image Result Image with problem |
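When ExposureCompensator is too opaque, a simple starting point is per-channel gain compensation in the overlap region, which is essentially what the stitching module's GainCompensator does. A minimal NumPy sketch (the function name and the mean-ratio gain model are my own simplification, not the OpenCV implementation):

```python
import numpy as np

def gain_compensate(src, src_overlap, dst_overlap):
    """Scale src so its overlap region matches the destination's
    mean intensity, channel by channel. A very simple stand-in
    for OpenCV's GainCompensator."""
    src = src.astype(np.float64)
    # Per-channel gains from the mean ratio over the overlap area.
    gains = (dst_overlap.reshape(-1, 3).mean(axis=0) /
             np.maximum(src_overlap.reshape(-1, 3).mean(axis=0), 1e-6))
    return np.clip(src * gains, 0, 255).astype(np.uint8)

# Synthetic example: the source patch is uniformly darker
# than the destination it must blend into.
dst_patch = np.full((10, 10, 3), 120, np.uint8)
src_patch = np.full((10, 10, 3), 60, np.uint8)
corrected = gain_compensate(src_patch, src_patch, dst_patch)
print(corrected.mean())  # 120.0
```

Applying the gain before Laplacian blending removes the large-scale brightness mismatch, and the blend then hides the residual seam.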
2013-05-02 06:35:16 -0600 | asked a question | Why does ExposureCompensator give a black image as result? I am trying to use the ExposureCompensator class. I am pasting a small image onto a bigger image, but before pasting I want to balance the brightness of the two images. When I run the code, smallImage comes back as a black image. PS: Also, if I use ExposureCompensator::GAIN_BLOCK it gives an error: integer division by zero. |
2013-04-25 06:59:46 -0600 | received badge | ● Editor (source) |
2013-04-23 06:57:38 -0600 | asked a question | How to replace SURF with FREAK in Features2D + Homography example? I am trying to run the Features2D + Homography example on Android; however, SURF is excluded from the OpenCV4Android distribution. My question is simple: how can I modify this example to replace SurfFeatureDetector and SurfDescriptorExtractor with something that will work on Android? |
2013-04-21 18:05:48 -0600 | received badge | ● Scholar (source) |
2013-04-18 15:22:44 -0600 | commented answer | Why am I getting this OpenCV error Assertion Failed? I did, but it didn't work. The code works now because I changed the source image, which was oddly loaded as single-channel. |
2013-04-18 05:56:29 -0600 | commented answer | Why am I getting this OpenCV error Assertion Failed? I am trying to run a sample code for pyramid blending. However, I got this error in this part of the code. |
2013-04-17 18:24:05 -0600 | asked a question | Why am I getting this OpenCV error Assertion Failed? Here is the code: Here is the error message: OpenCV Error: Assertion failed (!fixedType() || ((Mat*)obj)->type() == mtype) in unknown function. PS: I am using Visual Studio 2012 and OpenCV 2.4.4 |
2013-03-30 19:55:46 -0600 | received badge | ● Supporter (source) |
2013-03-30 03:38:53 -0600 | received badge | ● Student (source) |
2013-03-29 16:46:52 -0600 | commented answer | How can I blend multiple images that have the same scene except one object in different positions in every image? Actually this code will eventually be used on an Android phone, so the photos won't be perfectly aligned, and I believe there will be some exposure differences as well. |
2013-03-28 17:30:59 -0600 | asked a question | How can I blend multiple images that have the same scene except one object in different positions in every image? I want to blend multiple photo shots of the same scene where only one object is in a different position in every shot. I want to know what kind of algorithm would give the desired results. Here is an example |
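For aligned shots where only one object moves, a per-pixel median over the stack is a standard answer to this kind of question: at each pixel the moving object appears in only a minority of frames, so the median recovers the static background. A minimal sketch, assuming the images are already registered (which, per the later comment, would need alignment and exposure compensation first on phone photos):

```python
import numpy as np

def median_blend(images):
    """Per-pixel median over a stack of aligned images.
    A transient object occupies each pixel in only a minority
    of frames, so the median keeps the static background."""
    stack = np.stack(images).astype(np.float64)
    return np.median(stack, axis=0).astype(np.uint8)

# Three aligned "shots": a static gray background with a
# white square at a different spot in each shot.
shots = []
for x in (10, 40, 70):
    img = np.full((100, 100), 128, np.uint8)
    img[20:40, x:x + 20] = 255
    shots.append(img)

result = median_blend(shots)
print(result.max())  # 128 (the moving square is gone)
```

With more frames the approach also tolerates mild misalignment and noise, though uneven exposure across shots should be compensated before taking the median.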