2021-05-26 09:53:05 -0600 | received badge | ● Famous Question (source) |
2020-12-01 03:30:29 -0600 | received badge | ● Popular Question (source) |
2020-08-12 19:51:13 -0600 | marked best answer | Scale-adaptive object tracking The KCF tracker built into OpenCV does a very good job of tracking an object as it moves relative to the camera, but the size of the bounding box is fixed and does not adapt as the object's scale changes. Are there algorithms with similar performance that can adapt to the object's gradually changing scale?
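A minimal sketch of one option along these lines, assuming an opencv-contrib build: the CSRT tracker re-estimates the bounding box size on each update, unlike KCF. The video path and ROI selection are placeholders.

```python
import cv2

cap = cv2.VideoCapture("input.mp4")              # hypothetical input video
ok, frame = cap.read()
bbox = cv2.selectROI("init", frame, False)       # draw the initial bounding box

# CSRT estimates scale, so the box should grow/shrink with the object.
# In some 4.x builds the factory lives under cv2.legacy instead of cv2.
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                     # Esc to quit
        break
```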
2020-08-10 13:16:17 -0600 | asked a question | Scale-adaptive object tracking The KCF tracker built into OpenCV does a very good job of tracking an object as it moves
2020-08-03 10:56:29 -0600 | asked a question | Specify CUDA stream for DNN evaluation I have a CUDA-accelerated pipeline for processing an image. At the end of the pip
2020-08-03 10:54:32 -0600 | marked best answer | Parallelizing GPU processing of multiple images For each frame of a video, I apply some transformations and then write the frame out to an image file. I am using OpenCV's CUDA API for this, so it looks something like this, in a loop: Since I send a single frame to the GPU and then wait for its completion at the end of the loop, I can only process one frame at a time. What I would like to do is send multiple frames (in multiple streams) to the GPU to be processed at the same time, then save them to disk as the work finishes. What is the best way to do this?
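A hedged sketch of the multi-stream idea, assuming a CUDA-enabled OpenCV Python build; cv2.cuda.resize stands in for the actual per-frame transformations, and the file names are placeholders.

```python
import cv2

cap = cv2.VideoCapture("input.mp4")                   # hypothetical input video
streams = [cv2.cuda.Stream() for _ in range(4)]       # a few independent streams

frame_idx = 0
done = False
while not done:
    in_flight = []
    # Enqueue one frame per stream without waiting for results.
    for s in streams:
        ok, frame = cap.read()
        if not ok:
            done = True
            break
        gpu = cv2.cuda_GpuMat()
        gpu.upload(frame, s)                               # async host-to-device copy
        out = cv2.cuda.resize(gpu, (1280, 720), stream=s)  # placeholder GPU work
        in_flight.append((out, s, frame_idx))
        frame_idx += 1
    # Drain: wait on each stream, then write its frame to disk.
    for out, s, idx in in_flight:
        s.waitForCompletion()
        cv2.imwrite(f"frame_{idx:06d}.png", out.download())
```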
2020-08-03 10:54:32 -0600 | received badge | ● Scholar (source) |
2020-07-30 10:34:25 -0600 | edited question | Parallelizing GPU processing of multiple images For each frame of a video, I apply some transformations and then write t
2020-07-30 10:32:48 -0600 | asked a question | Parallelizing GPU processing of multiple images For each frame of a video, I apply some transformations and then write t
2020-07-21 12:52:56 -0600 | commented question | Assign to a single channel of GpuMat Seems that cv2.cuda.merge is probably on the right track. |
2020-07-21 12:05:21 -0600 | asked a question | Assign to a single channel of GpuMat I have some code which generates separate grayscale images, and then I composite th
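A hedged sketch of the split/merge route mentioned in the comment above, assuming a CUDA-enabled build; the arrays are placeholders for the generated grayscale planes, and merge behavior with a Python tuple of GpuMats may vary across versions.

```python
import cv2
import numpy as np

# Placeholder data: a 3-channel composite and a new single-channel plane.
composite = np.zeros((480, 640, 3), np.uint8)
new_plane = np.full((480, 640), 128, np.uint8)

composite_gpu = cv2.cuda_GpuMat()
composite_gpu.upload(composite)
new_plane_gpu = cv2.cuda_GpuMat()
new_plane_gpu.upload(new_plane)

# Split into single-channel GpuMats, swap one out, and merge back on the GPU.
b, g, r = cv2.cuda.split(composite_gpu)
result_gpu = cv2.cuda.merge((b, new_plane_gpu, r))

result = result_gpu.download()
```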
2020-05-24 13:30:29 -0600 | received badge | ● Supporter (source) |
2020-05-20 12:08:30 -0600 | commented question | Building DNN module with cuDNN backend According to this guide, I should also pass -DOPENCV_DNN_CUDA=ON, so I'm trying that now.
2020-05-20 12:06:43 -0600 | edited question | Building DNN module with cuDNN backend I am building OpenCV 4.3.0-dev with cuDNN support. My cuDNN version is the latest
2020-05-20 12:05:50 -0600 | edited question | Building DNN module with cuDNN backend I am building OpenCV 4.3.0-dev with cuDNN support. My cuDNN version is the latest
2020-05-20 12:04:19 -0600 | asked a question | Building DNN module with cuDNN backend I am building OpenCV 4.3.0-dev with cuDNN support. I pass these options to CMake:
2020-05-18 19:27:35 -0600 | edited question | Build and install the Python 3 module I built OpenCV 4.x from a git checkout with the necessary options to build the Pyt
2020-05-18 19:27:11 -0600 | asked a question | Build and install the Python 3 module I built OpenCV 4.x from a git checkout with the necessary options to build the Pyt
2020-02-09 00:15:14 -0600 | asked a question | Error using grayscale input on YOLOv3 network I have a YOLOv3 network based on this config file with a notable change to
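A hedged sketch of feeding a grayscale image to a Darknet model whose cfg was changed to channels=1; the file names are hypothetical. The point is that blobFromImage on a single-channel image produces an N x 1 x H x W blob, which has to match the cfg.

```python
import cv2

# Hypothetical single-channel YOLOv3 cfg/weights pair.
net = cv2.dnn.readNetFromDarknet("yolov3-gray.cfg", "yolov3-gray.weights")

gray = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)
blob = cv2.dnn.blobFromImage(gray, 1 / 255.0, (416, 416), swapRB=False, crop=False)

net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
```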
2019-08-01 12:38:15 -0600 | received badge | ● Enthusiast |
2019-07-31 16:23:24 -0600 | commented question | Improving an algorithm for detecting fish in a canal Thanks for your comment. Yes, they can swim anywhere vertically in the image, though I'm sure that if you looked at
2019-07-30 13:22:09 -0600 | edited question | Improving an algorithm for detecting fish in a canal I have many hours of video captured by an infrared camera placed by
2019-07-30 13:20:23 -0600 | asked a question | Improving an algorithm for detecting fish in a canal I have many hours of video captured by an infrared camera placed by
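Not the algorithm from the question (its description is cut off above), just a hedged baseline for this kind of fixed-camera infrared footage: background subtraction plus contour filtering, with all thresholds guessed.

```python
import cv2

cap = cv2.VideoCapture("canal_ir.mp4")                # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:                        # guessed minimum area
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```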
2019-06-18 10:48:02 -0600 | received badge | ● Notable Question (source) |
2018-11-25 09:56:00 -0600 | received badge | ● Popular Question (source) |
2017-05-17 15:22:20 -0600 | commented question | Stereo rectification with dissimilar cameras The calibration matrices were provided to me by the people who built the system, and one or both of the cameras may be changed in the future, so I don't want to have to recalibrate on my own each time.
2017-05-17 14:12:54 -0600 | asked a question | Stereo rectification with dissimilar cameras I have a stereo camera system with two different cameras, with different focal lengths, optical centers, and image resolutions. They are positioned horizontally, and the relative rotation between them is negligible. I've been given the intrinsic matrix and distortion coefficients for each camera, as well as the rotation matrix and translation vector describing their relationship. I want to rectify a pair of photos taken by the two cameras at the same time. However, the results have been complete garbage. I first tried ignoring that the image resolutions are different and using
However, the output is again garbage. I've re-read the code to make sure I didn't make any copy-paste errors, compared it with similar implementations, and consulted the relevant chapters in the "Learning OpenCV 3" book. I've also written out image files at each step to make sure the undistortion and scaling are correct. Are there any sanity checks I can do to verify that the camera matrices I'm receiving are correct?
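A hedged sanity-check sketch along the lines described above, with placeholder calibration data; stereoRectify only accepts a single image size, so the shared working resolution here is an assumption.

```python
import cv2
import numpy as np

# Placeholder calibration data; substitute the matrices supplied with the rig.
K1, D1 = np.eye(3), np.zeros(5)
K2, D2 = np.eye(3), np.zeros(5)
R, T = np.eye(3), np.array([[-0.1], [0.0], [0.0]])

size = (1280, 720)   # assumed common working resolution for both cameras

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, D1, K2, D2, size, R, T, flags=cv2.CALIB_ZERO_DISPARITY, alpha=0)

map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)

left = cv2.imread("left.png")      # hypothetical image pair
right = cv2.imread("right.png")
rect_left = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
rect_right = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)

# After rectification, corresponding features should lie on the same row,
# which is a quick visual check on the supplied camera matrices.
```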
2017-05-11 00:04:36 -0600 | received badge | ● Critic (source) |
2017-04-27 09:15:27 -0600 | received badge | ● Student (source) |
2017-04-27 08:29:35 -0600 | received badge | ● Editor (source) |
2017-04-27 07:57:47 -0600 | asked a question | Perspective transform without crop I have two images, src and dst. I'm trying to perform some transformations on src to make it align better with dst. One of the first transformations I'm applying is a perspective transform. I have some landmark points on both images, and I'm assuming that the landmarks fall on a plane and that all that has changed is the camera's perspective. I'm using However, if I then apply this transformation to src, part of the image might be transformed outside of my viewport, causing it to be cropped. For instance, the top-left corner (0, 0) might be transformed to (-10, 10), which means that part of the image is lost. So I'm trying to perform the transformation and get an uncropped image. I've played around with using If I translate src before I apply the transformation, then I think I've "invalidated" the transformation. So I think I have to modify the transformation matrix itself to apply the translation at the same time. An answer on StackOverflow from Matt Freeman, on a question titled "OpenCV warpperspective" (I cannot link to it due to stupid karma rules), seemed promising, but didn't quite work. If this is correct, then I should be able to find the bounding rectangle again and it should be (0, 0, Instead I get Ugh, due to more stupid karma rules I can't even answer my own question. Matt Freeman wrote,
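A hedged sketch of the approach the question describes: warp the source corners with the homography, take their bounding rectangle, and pre-multiply a translation so nothing lands at negative coordinates. The landmark points are placeholders, and the homography estimator (findHomography here) is an assumption, since the original function name is cut off above.

```python
import cv2
import numpy as np

src = cv2.imread("src.png")                       # hypothetical source image

# Placeholder landmark correspondences between src and dst.
src_pts = np.float32([[50, 50], [400, 60], [420, 300], [60, 310]])
dst_pts = np.float32([[40, 40], [410, 40], [410, 320], [40, 320]])
H, _ = cv2.findHomography(src_pts, dst_pts)

# Transform the four image corners to see where the warp would put them.
h, w = src.shape[:2]
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
x, y, bw, bh = cv2.boundingRect(cv2.perspectiveTransform(corners, H))

# Fold a translation into the homography so the top-left corner maps to (0, 0),
# then warp into an output canvas large enough to hold the whole result.
T = np.array([[1, 0, -x],
              [0, 1, -y],
              [0, 0, 1]], dtype=np.float64)
uncropped = cv2.warpPerspective(src, T @ H, (bw, bh))
```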