2017-01-07 09:51:30 -0600 | received badge | ● Scholar (source) |
2017-01-07 07:07:47 -0600 | asked a question | Matrix multiplication without memory allocation Is it possible to speed up the overloaded matrix multiplication operator (*) in OpenCV by using a preallocated cv::Mat instance with the correct dimensions as a placeholder for the result? Something like the existing function: only simpler. I would like to have something like this: My concern is performance. Is it possible that this is as fast as the hypothetical function: ? |
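For reference, OpenCV's C++ API already offers this through cv::gemm(A, B, 1.0, cv::noArray(), 0.0, C): the caller supplies the destination matrix, and a C that already has the right size and type is reused rather than reallocated. A minimal NumPy sketch of the same preallocation idea (illustrative only, not the OpenCV API):

```python
import numpy as np

# Preallocate the result once, with the correct shape and dtype.
a = np.random.rand(3, 4)
b = np.random.rand(4, 5)
c = np.empty((3, 5), dtype=np.float64)

# Each call writes the product into the preallocated buffer
# instead of allocating a fresh result matrix.
np.matmul(a, b, out=c)

assert np.allclose(c, a @ b)
```

The payoff of this pattern shows up in loops: reusing one output buffer across many multiplications avoids repeated allocation and deallocation.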
2016-12-23 04:25:51 -0600 | asked a question | VideoCapture Inappropriate ioctl for device On the line the following error message shows up Platform is Arch Linux. gives VIDEO I/O section |
2016-05-20 10:06:30 -0600 | commented question | segmentation fault on cv::imshow("windowName", cvImagePtr->image) any news on this issue? |
2016-04-25 10:05:49 -0600 | asked a question | findEssentialMat for coplanar points [this is a copy of a [question I just posted on StackOverflow](http://stackoverflow.com/questions/36844139/opencv-findessentialmat)] I have come to the conclusion that OpenCV's findEssentialMat is not working properly for coplanar points. The documentation specifies that it uses Nister's 5-point algorithm, and the corresponding paper states that the algorithm works fine for coplanar points. The points are generated like this: Excerpt from This shows that the algorithm sometimes performs well (error == 0), and sometimes something weird happens (error == 0.199337). Is there any other explanation for this? The algorithm is obviously deterministic, so error 0.199337 must appear for some specific configuration of points; I wasn't able to figure out which one. I also experimented with different prob and threshold parameters for findEssentialMat, and I tried using more/fewer points and different camera poses... the same thing happens. |
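The original point-generation snippet was lost in extraction. A hypothetical sketch of how coplanar correspondences of this kind might be set up (pinhole model; the intrinsics, plane, and poses below are made up for illustration, not the author's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# 3D points on the plane Z = 5, so they are coplanar by construction.
n = 50
pts3d = np.column_stack([rng.uniform(-1, 1, n),
                         rng.uniform(-1, 1, n),
                         np.full(n, 5.0)])

# Made-up pinhole intrinsics.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(pts, R, t):
    """Pinhole projection of Nx3 world points with pose (R, t)."""
    cam = pts @ R.T + t          # world -> camera coordinates
    img = cam @ K.T              # apply intrinsics
    return img[:, :2] / img[:, 2:3]

# First camera at the origin; second shifted along +X (pure baseline).
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([-0.5, 0.0, 0.0])

pts1 = project(pts3d, R1, t1)
pts2 = project(pts3d, R2, t2)
# pts1 and pts2 are the pixel correspondences one would feed to
# cv2.findEssentialMat(pts1, pts2, K) to reproduce the experiment.
```

With a fixed RNG seed the configuration is reproducible, which makes it easier to isolate the specific point sets where the error jumps.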
2016-03-24 10:09:31 -0600 | answered a question | index matrix into |
2016-03-24 09:54:17 -0600 | asked a question | Viz3d removeWidget RtlValidateHeap An error occurs when trying to remove a widget from a Viz3d window. The error only occurs with a Release configuration in Visual Studio; with a Debug configuration everything works fine. The call stack at the moment of the exception looks like this: At the bottom, we can see In an altered version of the code, if I perform before calling then the error is delayed. It is delayed up until the point when execution leaves the current scope. The scope in which |
2015-10-01 07:50:59 -0600 | answered a question | triangulatePoints() function To the best of my knowledge the |
2015-10-01 07:22:14 -0600 | commented question | Given a pair of stereo-calibrated cameras and a set of 2D point correspondences, what would be a proper way to obtain 3D coordinates of those points through triangulation? Any update on this? I wonder what your PL and PR look like. |
2015-10-01 06:39:37 -0600 | commented question | triangulate 3d points from stereo images? Any update on this? Can you tell me why you chose to insert a negative value for translation in your projection matrix (-3.682632)? |
2015-09-30 11:32:12 -0600 | commented question | Coordinate axis with triangulatePoints Is the question unclear in some way? I would consider this to be basic stuff... I just cannot find a definite answer. Specifically, when I use Instead I get some wild values, indicating that my camera moved about 5 meters in some random direction. |
2015-09-25 12:59:36 -0600 | received badge | ● Student (source) |
2015-09-25 04:12:15 -0600 | received badge | ● Editor (source) |
2015-09-25 04:11:22 -0600 | asked a question | Coordinate axis with triangulatePoints So, I have the projection matrix of the left camera: and the projection matrix of my right camera: And when I perform My assumption was that OpenCV uses a right-handed coordinate system like this: So, when I positioned my cameras with these projection matrices, the complete picture would look like this: But my experiment leads me to believe that OpenCV uses a left-handed coordinate system: And that my projection matrices have effectively swapped the left and right concept: Is everything I've said correct? Is the latter coordinate system really the one used by OpenCV? If I assume that it is, everything seems to work fine. But when I want to visualize things using |
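One detail worth checking in situations like this: in the convention OpenCV uses, P = K[R|t] maps world points into camera coordinates, so the translation column t is not the camera position; the camera centre is C = -R^T t. A sign slip here mirrors the cameras and can look exactly like a left-handed coordinate system. A small NumPy check (made-up intrinsics and baseline, not the author's matrices):

```python
import numpy as np

# Made-up pinhole intrinsics.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Left camera at the origin; right camera a baseline b to its right (+X).
# The extrinsics map world -> camera, so t = -R @ C, not the position itself.
b = 0.1
R = np.eye(3)
C_right = np.array([b, 0.0, 0.0])   # intended camera position
t = -R @ C_right                    # extrinsic translation column

P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([R, t.reshape(3, 1)])

# Sanity check: recover the camera centre from P via C = -M^{-1} p4.
M, p4 = P_right[:, :3], P_right[:, 3]
C_recovered = -np.linalg.solve(M, p4)
assert np.allclose(C_recovered, C_right)
```

If the recovered centre lands on the wrong side of the left camera, the projection matrices (not the handedness of OpenCV's coordinate system) are the likely culprit.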
2015-09-15 07:45:41 -0600 | received badge | ● Enthusiast |
2015-09-02 03:46:56 -0600 | commented answer | calculate distance using disparity map How come pixel size isn't included in the formula? Or is the pixel size implicitly included if we state disparity in the same measurement unit as focal length? |
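A worked example of why pixel size cancels in the standard stereo depth formula: with both the focal length f and the disparity d expressed in pixels, Z = f * B / d yields depth in the same unit as the baseline B, so the physical pixel pitch never has to appear (all values below are made up):

```python
# Depth from disparity: Z = f * B / d.
# f and d are both in pixels, so the pixel size cancels out and
# Z comes out in the unit of the baseline B.
f_px = 700.0    # focal length in pixels (hypothetical)
B_m  = 0.12     # baseline in metres (hypothetical)
d_px = 42.0     # disparity in pixels

Z_m = f_px * B_m / d_px
assert abs(Z_m - 2.0) < 1e-9   # 700 * 0.12 / 42 = 2.0 m
```

Equivalently, writing f in millimetres would require dividing the disparity (in pixels) by the pixel pitch to convert it to millimetres too, and the pitch cancels either way.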
2015-08-04 05:34:24 -0600 | received badge | ● Supporter (source) |