2016-12-07 04:16:46 -0600 | received badge | ● Teacher |
2015-11-02 03:19:25 -0600 | asked a question | Copy cropped image Hi, I undistort an image with OpenCV. Afterwards I want to remove the black regions at the periphery that are caused by the undistortion, so I define a border region to crop away. Because I need the image as a raw pointer as input for a texture in Ogre, the image data must lie in one contiguous memory block; therefore I can't use the imgRoi directly. I tried to achieve this via "copyTo", but the result image is wrong: it looks shifted. What is wrong with my code? Best regards Pellaeon
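The question's code is not included in this log entry, so the exact bug can't be diagnosed here. A "shifted" result when copying an ROI into a packed buffer is classically a row-stride mistake: the ROI shares the full image's row stride, so reading its data pointer as if rows were `width * 4` bytes apart shifts every row. Below is a minimal pure-C++ sketch (no OpenCV dependency; `cropPacked` is a hypothetical name) of the manual equivalent of `cv::Mat(img, roi).copyTo(dst)`, copying row by row with the correct strides:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Copy an ROI (x, y, w, h) out of a strided BGRA image into one tightly
// packed buffer. Each destination row is w * 4 bytes; each source row
// starts srcStride bytes after the previous one. Confusing the two
// strides is what makes the copied image look "shifted".
std::vector<unsigned char> cropPacked(const unsigned char* src,
                                      std::size_t srcStride, // bytes per source row
                                      int x, int y, int w, int h)
{
    const int bpp = 4; // BGRA: 4 bytes per pixel
    std::vector<unsigned char> dst(static_cast<std::size_t>(w) * h * bpp);
    for (int row = 0; row < h; ++row) {
        const unsigned char* s = src + (y + row) * srcStride
                                     + static_cast<std::size_t>(x) * bpp;
        std::memcpy(dst.data() + static_cast<std::size_t>(row) * w * bpp,
                    s, static_cast<std::size_t>(w) * bpp);
    }
    return dst;
}
```

In OpenCV itself, `roi.copyTo(dst)` (or `roi.clone()`) already produces a continuous matrix when `dst` is freshly allocated; checking `dst.isContinuous()` before handing `dst.data` to Ogre is a cheap safeguard.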
2015-06-17 08:46:47 -0600 | asked a question | undistortion of a camera image with remap Hi, I need an undistorted camera image for an AR application. cv::undistort is too slow for my purpose, so I want to try initUndistortRectifyMap and remap, so that the initialization is done only once and computation time is saved. Here is my first test: first I create an OpenCV matrix with my image (format is BGRA), then I create the camera and distortion matrices. After this, I call initUndistortRectifyMap and then remap. As you can see in screen.jpg, the camera image is wrong. I have no idea what the problem is. Any suggestions? What's wrong in my code? Best regards Pellaeon
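The test code referenced by this entry is not preserved in the log. Conceptually, `initUndistortRectifyMap` precomputes, for every destination pixel, the source coordinate to sample, and `remap` then just performs that gather (common failure modes are maps built for a different image size, or map/image type mismatches). As a hedged illustration of that data flow only, here is a nearest-neighbour gather in pure C++ (real code should call `cv::remap`, which also does bilinear interpolation; `remapNearest` is a hypothetical name):

```cpp
#include <cmath>
#include <vector>

// Nearest-neighbour version of the gather that cv::remap performs with
// the two float maps from cv::initUndistortRectifyMap: for every
// destination pixel (x, y), fetch the source pixel at
// (mapX[y*w + x], mapY[y*w + x]). Out-of-range lookups stay black,
// which is exactly where the dark border after undistortion comes from.
void remapNearest(const std::vector<unsigned char>& src, int w, int h, int channels,
                  const std::vector<float>& mapX, const std::vector<float>& mapY,
                  std::vector<unsigned char>& dst)
{
    dst.assign(src.size(), 0);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int sx = static_cast<int>(std::lround(mapX[y * w + x]));
            int sy = static_cast<int>(std::lround(mapY[y * w + x]));
            if (sx < 0 || sx >= w || sy < 0 || sy >= h)
                continue; // border: leave black
            for (int c = 0; c < channels; ++c)
                dst[(y * w + x) * channels + c] = src[(sy * w + sx) * channels + c];
        }
    }
}
```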
2015-05-11 08:12:30 -0600 | commented question | how to use undistort proper push to top |
2015-05-07 09:12:14 -0600 | received badge | ● Editor (source) |
2015-05-07 08:56:17 -0600 | asked a question | how to use undistort proper I want to show an undistorted camera image in my application and found the OpenCV function "undistort". I created a matrix and filled it with the intrinsic parameters, but when I start my application the image is wrong: it looks as if the color channels are shifted. Also, what is the proper matrix size for the distortion coefficients, e.g. a (4,1) matrix or a (1,4) matrix?
2015-01-30 02:35:41 -0600 | asked a question | Play a video file with the right timing Hi, I use VideoCapture to read an AVI file and copy the frames to a video texture within my Ogre3D application. I want the video to play with the correct timing (not too slow, not too fast). My first approach was to use the elapsed time to calculate the number of frames passed (fps * elapsedTime [in seconds]) and, if the passed frame count is > 1, do a videocapture.read().
Unfortunately, the video is too slow with this code. Therefore, I use the following command to set the video stream to the right position:
Now my question about this solution: is there a performance drawback? Surely there is some buffer mechanism in the background so that the read call can grab the next frame efficiently. But when I set the position every time, perhaps the buffering is disturbed? Best regards Pellaeon
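The command mentioned in the entry is not preserved in the log, but seeking (e.g. via the stream-position property) on every tick can indeed defeat the decoder's sequential read-ahead. An alternative that keeps reads sequential is to compute the frame index that should currently be on screen and call `read()`/`grab()` only for the frames the playback has fallen behind by. A minimal sketch of that bookkeeping, in pure C++ with hypothetical function names:

```cpp
#include <cmath>

// Frame index that should be on screen after elapsedSeconds of playback.
long targetFrame(double fps, double elapsedSeconds)
{
    return static_cast<long>(std::floor(fps * elapsedSeconds));
}

// Number of sequential read()/grab() calls needed this tick, given the
// index of the last frame actually decoded. Zero means: keep showing
// the current frame, do not touch the decoder.
long framesToRead(double fps, double elapsedSeconds, long lastDecodedFrame)
{
    long pending = targetFrame(fps, elapsedSeconds) - lastDecodedFrame;
    return pending > 0 ? pending : 0;
}
```

When the render loop stalls badly, `framesToRead` returns a burst of frames; calling the cheaper `grab()` for all but the last one, then `retrieve()` once, is a common way to catch up without decoding images that are never displayed.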