2020-05-25 16:00:44 -0600 | received badge | ● Necromancer (source) |
2018-01-04 21:04:17 -0600 | received badge | ● Popular Question (source) |
2015-04-21 13:30:54 -0600 | received badge | ● Good Answer (source) |
2015-03-02 09:41:24 -0600 | received badge | ● Nice Question (source) |
2015-02-24 11:23:56 -0600 | answered a question | Is it possible to have ALL header files referenced by OpenCV included in the standard download? Probably not. Firstly, there would be legal issues: the headers in Visual Studio are Microsoft's property and you can't just bundle them (it's not even legal to distribute debug versions of their standard runtime redistributables). It would also be a problem for people using different versions of Visual Studio, perhaps even different locales or languages. The CMake system does a pretty good job of finding and building external versions of libraries like tiff. |
2015-02-03 16:16:03 -0600 | answered a question | georectifying oblique digital images can be done in opencv ? Yes, it's (relatively) simple. See chapter 12 of the OpenCV book, the "bird's-eye view" transform. |
2015-01-30 14:58:02 -0600 | answered a question | read camera image incorrect cap.set() isn't very reliable because a lot of simple camera drivers don't respond properly under Windows. If it is a simple webcam you might be limited to 640x480. Working out what data format is returned is always a bit of a challenge. Try the various BAYERxxxx options in cvtColor(). |
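To see why trying the different BAYERxxxx codes matters, here is a toy sketch of how one 2x2 cell of a raw Bayer mosaic packs the three channels. This assumes a BGGR layout; the function is made up for illustration and real demosaicing interpolates across neighbouring cells:

```python
# Toy illustration of one 2x2 cell of a Bayer mosaic, assuming a
# BGGR layout (one of several layouts the BAYERxxxx codes cover).
# Picking the wrong variant in cvtColor() swaps red and blue, which
# is why the returned data format is "a bit of a challenge".
def demosaic_cell(cell):
    """cell is a 2x2 raw block [[B, G], [G, R]]; returns (b, g, r)."""
    b = cell[0][0]                     # top-left sensel is blue
    g = (cell[0][1] + cell[1][0]) / 2  # average the two green sensels
    r = cell[1][1]                     # bottom-right sensel is red
    return (b, g, r)

print(demosaic_cell([[10, 20], [30, 40]]))  # (10, 25.0, 40)
```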
2015-01-28 18:46:54 -0600 | asked a question | UMat get values Is there any way of accessing the pixel value at a given coordinate in a UMat image without copying the entire image back to a Mat? I.e. from C++, not in an OpenCL kernel. |
2015-01-23 12:01:32 -0600 | asked a question | OpenCV 3.0.0 roadmap Are there any published plans for 3.0 in the near and long term? We are looking at a major rewrite of our product for 3.0 and especially want to use the OpenCL functionality. Is there any roadmap for which areas will be implemented in OpenCL? Currently there isn't even a list of which functions have GPU versions (other than looking through the OpenCL sources). We don't want to explore using a particular technique which would only be practical at OpenCL speeds if it cannot be implemented on the GPU for reasons we don't know. Similarly, we don't want to invest in implementing missing functionality, e.g. HoughCircles, if others are working on it. |
2014-03-20 17:07:04 -0600 | received badge | ● Nice Answer (source) |
2013-07-05 11:19:28 -0600 | received badge | ● Supporter (source) |
2013-07-05 11:19:27 -0600 | received badge | ● Scholar (source) |
2013-07-03 10:48:22 -0600 | received badge | ● Editor (source) |
2013-07-03 10:47:24 -0600 | asked a question | Contour ordering Is there any guarantee about the ordering of contours returned by FindContours()? The results are 'almost' ordered by image row of the first point - except for a few percent which aren't! |
2013-05-12 13:41:54 -0600 | answered a question | extruct 3D points I think you are misunderstanding - you must supply the known 3D points along with the 2D image points. The purpose of solvePnP is to find the orientation of a known object. |
2013-02-03 11:05:13 -0600 | answered a question | Opitmization with TBB You don't need the 30-day evaluation version of TBB; it is available under the GPL. TBB essentially allows you to call the same function in parallel on each CPU core/hyperthread. It's useful when you have a function that does the same thing to different data, doesn't depend on the output of other calls of itself, and can be split into 4/8/16 blocks. CUDA (since you have an NVIDIA card) runs instructions on the GPU; it can process 100-1000 tasks in parallel, but it takes time to get the data onto and off the card - so it is useful when you want to process an entire image and have functions that operate only on a local section of the image. Yes, the CUDA build of OpenCV takes a long time because CUDA pre-builds the GPU code for different cards ahead of time. |
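The split-into-blocks pattern described above can be sketched with Python's standard library as a stand-in for TBB's parallel_for (this is an illustration of the idea, not OpenCV or TBB code; `process_block` is a placeholder workload):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the data-parallel pattern TBB expresses: the same
# function applied to independent blocks of the data, one block per
# worker, with the per-block results stitched back in order.
def process_block(block):
    # placeholder per-block work; independent of every other block
    return [x * x for x in block]

def parallel_map(data, n_workers=4):
    size = max(1, len(data) // n_workers)
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    out = []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # pool.map preserves block order, so the output order matches
        # a plain sequential loop over the data
        for result in pool.map(process_block, blocks):
            out.extend(result)
    return out

print(parallel_map([1, 2, 3, 4, 5, 6, 7, 8]))
```

The key property is the one named in the answer: each block's work must not depend on the output of any other block, otherwise the split is not safe.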
2013-02-01 23:42:27 -0600 | commented answer | Tips on how to build opencv 2.4.3 (VC10) OpenCV no longer needs TBB on Windows; it uses the Concurrency runtime. Or at least the pre-built DLLs do - there is no info on how to enable it, or whether you have to. |
2013-01-30 23:04:37 -0600 | answered a question | Resolution and camera calibration Yes, if you scale down the calibration images and the final images in the same way then everything cancels out. You can also do the calibration at full resolution and then scale down the images - you just have to scale the calibration factors fx, fy, cx and cy by the same factor (the distortion coefficients are defined on normalized coordinates, so they stay the same). Trickier is using a sub-region of the calibrated image: the undistortion is based on radial terms from the center, so you need a little more 'finesse'. |
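The rescaling step can be sketched in a few lines. This is a minimal sketch assuming the standard pinhole intrinsics fx, fy (focal lengths in pixels) and cx, cy (principal point in pixels); the example resolution and focal values are made up:

```python
# Scale pinhole intrinsics for an image resized by factor s.
# fx, fy, cx, cy are all measured in pixels, so they scale with the
# image; the normalized distortion coefficients are left untouched.
def scale_intrinsics(fx, fy, cx, cy, s):
    return fx * s, fy * s, cx * s, cy * s

# e.g. calibration done at 2592x1944, capture done at half size:
print(scale_intrinsics(2800.0, 2800.0, 1296.0, 972.0, 0.5))
# (1400.0, 1400.0, 648.0, 486.0)
```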
2012-11-16 15:37:48 -0600 | asked a question | Can only read cv::Mat from filestorage? I can write cv::Matx, cv::Pointx etc. to an XML file with "<<", but I can't find a way to read them back, either directly or with a cv::FileNodeIterator. Neither works. Am I missing something? |
2012-11-04 12:57:40 -0600 | answered a question | 5-points algorithm in opencv ? I haven't tested it, but Nghia Ho's page lists an implementation. |
2012-11-04 12:49:01 -0600 | commented answer | 5-points algorithm in opencv ? I would suggest you post the code on GitHub, or simply a tarball on Dropbox - then we can work on getting it into an OpenCV submission. Then everyone benefits! |
2012-11-02 18:18:07 -0600 | commented question | Problem with running OpenCV with GPU support.. Make sure you have one (and only one!) CUDA SDK installed and build OpenCV from source. We hit some weird errors because we had parts of an old CUDA install left behind. |
2012-11-02 17:38:55 -0600 | asked a question | Stereo calibration accuracy Is there any way of getting a goodness-of-fit for the stereo calibration result? Using the asymmetric circle pattern and a pair of already-calibrated high-quality cameras, we took >200 images at different rotations and inclinations to get a "gold standard" calibration. We then selected subsets of 25-32 images from this set to look at the statistical variation in the results, and we get a variation in the baseline length of around 1%. This seems very high for simply fitting stereo correspondences from 25+ images of 44 points. I would have thought you should get a good answer from a single image! (Uniform lighting, no saturation, all markers found in all frames, target filled the frame and was held flat on a glass plate, etc.) |
2012-11-02 17:11:47 -0600 | commented answer | camera calibration depth coverage Unless you have to refocus. On cheap lenses the focal length can change by 10-20% over the focus range. Even on high-end SLR lenses it will change by 1%. |
2012-10-26 14:33:26 -0600 | commented answer | Is triangulatePoints outputing rubish ? @Guido - here is a good place to start http://www.morethantechnical.com/2012/01/04/simple-triangulation-with-opencv-from-harley-zisserman-w-code/#more-1023 |
2012-10-25 14:11:16 -0600 | commented question | findCirclesGrid NOT Working The circles grid is a lot more sensitive to lighting differences than the chessboard. We find that 25-35% of our images in a calibration sequence fail for no obvious reason. |
2012-10-20 22:59:55 -0600 | answered a question | Is triangulatePoints outputing rubish ? I've used triangulatePoints successfully, although I got better results with an iterative solution based on the code in Hartley and Zisserman. The different formats used by the functions are a little annoying. Call cv::undistortPoints() passing in the screen points as cv::Mat(1,4,CV_64FC2), channel [0]=x, [1]=y, then pass the result to cv::correctMatches(). You then need to convert the result to a cv::Mat(2,n,CV_64FC1) for the 'n' points in left, and similarly in right. Remember you need to generate the projection matrix for each camera with cv::stereoRectify(). The result is a 4D value in each column - x,y,z,w - so you need to divide x, y, z by w. |
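The final divide-by-w step looks like this. A minimal sketch on plain floats, standing in for one column of the 4xN matrix that triangulatePoints returns:

```python
# Convert one homogeneous 4-vector (x, y, z, w), as returned per
# column by triangulatePoints, into a Euclidean 3D point.
def from_homogeneous(x, y, z, w):
    if w == 0:
        # w == 0 represents a point at infinity; no finite 3D point
        raise ValueError("point at infinity: w == 0")
    return (x / w, y / w, z / w)

print(from_homogeneous(2.0, 4.0, 6.0, 2.0))  # (1.0, 2.0, 3.0)
```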
2012-10-20 20:42:59 -0600 | answered a question | camera calibration depth coverage Are you sure it has a focal length of 1500mm? That's a very large telescope! Ideally you would calibrate your camera at the distance at which it is going to be capturing a scene - the lens characteristics may change when you refocus. This isn't always possible - if you are looking at something a long way off - in which case use the closest target to the real position you can. There are online depth-of-field calculators that will tell you the range of distances over which you are in focus without changing the lens focus. |
2012-10-20 20:36:21 -0600 | answered a question | link opencv 2.4.2 with QT Do you mean use the Qt-enhanced highgui window, or use OpenCV in your Qt app? If you are programming with Qt - simply add the OpenCV include and lib dirs, include the OpenCV library files as with any other library, and make sure the OpenCV DLLs are in the path. If you want OpenCV to use the Qt version of highgui and none of the downloads are suitable, then you need to build OpenCV from source. This depends on your OS, but there are walkthroughs on the docs.opencv.org site. |
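For the "use OpenCV in your Qt app" case, the qmake side is just paths and libraries. A hedged sketch of a .pro fragment, assuming a Windows/VC10 OpenCV 2.4.2 install; the paths and library names are examples and must match your actual install:

```
# Example qmake .pro fragment - adjust paths and versions to your
# own OpenCV install location and build (x86/x64, compiler version)
INCLUDEPATH += C:/opencv/build/include
LIBS += -LC:/opencv/build/x86/vc10/lib \
        -lopencv_core242 -lopencv_highgui242 -lopencv_imgproc242
```

With this in place the DLL directory (e.g. C:/opencv/build/x86/vc10/bin) still has to be on PATH at run time, as the answer notes.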
2012-10-19 19:15:42 -0600 | asked a question | findCirclesGrid bug? I need a little more help checking this before I file it as a bug. Running cv::findCirclesGrid I get a "Windows has triggered a breakpoint in calibrate.exe." error. I can step into the function up to the end of ... The sample app works, except when I try to debug that I can't step into SimpleBlobDetector; instead it just returns from findCirclesGrid. The arguments to the function are identical, they are both linking against 2.4.2, I removed any other versions from the system, and both are being compiled with the same settings in VS2010 on Windows 7 64-bit. |
2012-10-17 22:55:21 -0600 | received badge | ● Citizen Patrol (source) |
2012-10-17 22:54:16 -0600 | answered a question | OpenCV Camera Calibration for telecentric lenses A pinhole model certainly isn't valid! I've never done it myself, but the first thing to try would be to use the regular chessboard/circles target flat against your object. Record the positions of the markers and then use this to fit a warp/distortion to the scene. Telecentric lenses tend to be bad near the edges, so try to use just the center of the field. |
2012-10-15 03:35:15 -0600 | received badge | ● Nice Answer (source) |
2012-10-15 03:32:40 -0600 | received badge | ● Teacher (source) |
2012-10-14 14:23:31 -0600 | commented question | operations on cv::Point Sorry, missed dot/cross. IIRC /= and *= didn't work with a scalar. Normalize() and Length() would be useful, as would an easy way of converting between cv::Point3 and a row/column of a 3x3 Mat. I do a lot of 3D positioning! |
2012-10-13 18:57:57 -0600 | asked a question | Fast squares detection I need to find the location of squares in a 5 MP video image - quickly and accurately. The current implementation is canny->contours->simplify->pick ones with 4 edges, but the Canny step is far too slow and the result is sensitive to lighting and obstructions. I considered a Hough transform and then solving for intersections - I know roughly where the squares are from the previous frame, so I could implement my own restricted rho,theta search. How fast is the Harris corner detector in GoodFeaturesToTrack? Any other approaches to consider? |
2012-10-13 18:33:16 -0600 | answered a question | How to calculate the distance from the camera origin to any of the corners? (square chessboard calibration) See solvePnP. Supply the image coordinates of each square's corners (or just the corners of the board) along with their real-world positions on the board - just like you did for the camera calibration. The rvec and tvec returned will contain the rotation of the camera relative to the chessboard and the position (translation) of the chessboard in camera coords. A single flat symmetric chessboard target isn't really the best choice for this if you care about the angle. |
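Given the tvec described above, the distance from the camera origin to the board origin is just the Euclidean norm of the translation. A minimal sketch, with the translation passed as three plain floats rather than an OpenCV vector:

```python
import math

# Distance from the camera origin to the chessboard origin: the
# length of the translation vector tvec returned by solvePnP,
# expressed in the same units as the board's real-world coordinates.
def distance_from_tvec(tx, ty, tz):
    return math.sqrt(tx * tx + ty * ty + tz * tz)

print(distance_from_tvec(3.0, 4.0, 12.0))  # 13.0
```

Note the units follow the object points you supplied: if the board squares were given in millimetres, the distance comes out in millimetres.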
2012-10-10 12:51:35 -0600 | commented answer | findCirclesGrid @Ilya - but it's not terribly clear what is a row and what is a column! |
2012-10-04 13:38:16 -0600 | asked a question | operations on cv::Point There aren't many operations defined on the cv::Point2/3 data types. Is this deliberate? Am I supposed to convert everything to a cv::Mat? Or is there some technical reason? I'm starting to build a library of dot & cross product, length, isEmpty(), etc., but this seems like a duplication of effort. |
2012-10-04 13:09:45 -0600 | commented answer | estimateAffine3D result Yes, that's why affine for 3D doesn't really make much sense - I wondered why the function was there. |
2012-10-04 13:08:09 -0600 | answered a question | findCirclesGrid Width and height must be the correct way around. |
2012-09-17 01:45:00 -0600 | received badge | ● Student (source) |
2012-09-14 10:16:02 -0600 | asked a question | estimateAffine3D result I am trying to measure the 3D relative position between two markers detected with cvTriangulatePoints() and a stereo rig. I get reasonably accurate 3D positions for the markers. I then try to find the rigid-body transform between them. Using my own SVD-based solution I get a reasonable answer. With estimateAffine3D() I get: Not sure how to interpret the output of estimateAffine3D; some of the values seem to line up (give or take a sign change), but how can a rotation matrix have a value of "-2"? I'm not even sure what an affine transform means in 3D! |
2012-09-14 09:39:24 -0600 | commented answer | findCirclesGrid Is there an SVG version of this? The PNG is badly pixelated, and the pattern-generator Python script is very difficult to install on Windows. |