
rgov's profile - activity

2021-05-26 09:53:05 -0600 received badge  Famous Question (source)
2020-12-01 03:30:29 -0600 received badge  Popular Question (source)
2020-08-12 19:51:13 -0600 marked best answer Scale-adaptive object tracking

The KCF tracker built into OpenCV does a very good job of tracking an object as it moves relative to the camera, but the size of the bounding box is fixed and does not adapt as the scale of the object changes.

Are there algorithms with similar performance that are able to adapt to the gradually changing scale of the object?
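
One candidate seems to be the CSRT tracker from opencv-contrib, which estimates scale as part of each update. A minimal sketch of swapping it in (the factory function lives under cv2.legacy on some 4.x builds, and the video path and initial box are placeholders):

import cv2

video = cv2.VideoCapture("input.mp4")  # placeholder path
_, frame = video.read()

# Unlike KCF, CSRT adapts the bounding box as the object's scale changes
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, (100, 100, 64, 64))  # placeholder (x, y, w, h)

while True:
    ok, frame = video.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)  # bbox width/height track the scale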

2020-08-10 13:16:17 -0600 asked a question Scale-adaptive object tracking

Scale-adaptive object tracking The KCF tracker built into OpenCV does a very good job of tracking an object as it moves

2020-08-03 10:56:29 -0600 asked a question Specify CUDA stream for DNN evaluation

Specify CUDA stream for DNN evaluation I have a CUDA-accelerated pipeline for processing an image. At the end of the pip

2020-08-03 10:54:32 -0600 marked best answer Parallelizing GPU processing of multiple images

For each frame of a video, I apply some transformations and then write the frame out to an image file. I am using OpenCV's CUDA API for this, so it looks something like this, in a loop:

# read frame from video
_, frame = video.read()

# upload frame to GPU
frame = cv2.cuda_GpuMat(frame)

# create a CUDA stream
stream = cv2.cuda_Stream()

# do things to the frame
# ...

# download the frame to CPU memory
frame = frame.download(stream=stream)

# wait for the stream to complete (CPU memory available)
stream.waitForCompletion()

# save frame out to disk
# ...

Since I send a single frame to the GPU and then wait for its completion at the end of the loop, I can only process one frame at a time.

What I would like to do is send multiple frames (in multiple streams) to the GPU to be processed at the same time, then save them to disk as the work gets finished.

What is the best way to do this?
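
For illustration, a minimal sketch of the round-robin structure I have in mind, assuming the per-frame work consists of stream-aware cv2.cuda calls (cv2.cuda.resize below is just a stand-in for the real processing, and N = 4 frames in flight is a guess):

import cv2

N = 4  # frames in flight at once (illustrative)
streams = [cv2.cuda_Stream() for _ in range(N)]

done = False
while not done:
    batch = []

    # enqueue up to N frames, each on its own stream
    for i in range(N):
        ok, frame = video.read()
        if not ok:
            done = True
            break
        gpu = cv2.cuda_GpuMat()
        gpu.upload(frame, streams[i])  # asynchronous upload
        gpu = cv2.cuda.resize(gpu, (1280, 720), stream=streams[i])  # stand-in op
        batch.append((gpu.download(streams[i]), streams[i]))  # asynchronous download

    # collect results; each array is valid once its stream has completed
    for result, stream in batch:
        stream.waitForCompletion()
        # save `result` out to disk here

Note that for the downloads to be truly asynchronous the host buffers would have to be page-locked (cv2.cuda_HostMem); with plain numpy arrays the copies may still serialize, so this shows the structure rather than a tuned implementation.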

2020-08-03 10:54:32 -0600 received badge  Scholar (source)
2020-07-30 10:34:25 -0600 edited question Parallelizing GPU processing of multiple images

Parallelizing GPU processing of multiple images For each frame of a video, I apply some transformations and then write t

2020-07-30 10:32:48 -0600 asked a question Parallelizing GPU processing of multiple images

Parallelizing GPU processing of multiple images For each frame of a video, I apply some transformations and then write t

2020-07-21 12:52:56 -0600 commented question Assign to a single channel of GpuMat

Seems that cv2.cuda.merge is probably on the right track.
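
For example, something like the following, assuming cv2.cuda.merge accepts a Python list of single-channel GpuMats (untested):

import cv2
import numpy as np

# three hypothetical single-channel (grayscale) images to composite
b = cv2.cuda_GpuMat(np.zeros((480, 640), np.uint8))
g = cv2.cuda_GpuMat(np.full((480, 640), 128, np.uint8))
r = cv2.cuda_GpuMat(np.full((480, 640), 255, np.uint8))

merged = cv2.cuda.merge([b, g, r])  # one 3-channel GpuMat, BGR order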

2020-07-21 12:05:21 -0600 asked a question Assign to a single channel of GpuMat

Assign to a single channel of GpuMat I have some code which generates separate grayscale images, and then I composite th

2020-05-24 13:30:29 -0600 received badge  Supporter (source)
2020-05-20 12:08:30 -0600 commented question Building DNN module with cuDNN backend

According to this guide, I should also pass -DOPENCV_DNN_CUDA=ON, so I'm trying that now.

2020-05-20 12:06:43 -0600 edited question Building DNN module with cuDNN backend

Building DNN module with cuDNN backend I am building OpenCV 4.3.0-dev with cuDNN support. My cuDNN version is the latest

2020-05-20 12:05:50 -0600 edited question Building DNN module with cuDNN backend

Building DNN module with cuDNN backend I am building OpenCV 4.3.0-dev with cuDNN support. My cuDNN version is the latest

2020-05-20 12:04:19 -0600 asked a question Building DNN module with cuDNN backend

Building DNN module with cuDNN backend I am building OpenCV 4.3.0-dev with cuDNN support. I pass these options to CMake:

2020-05-18 19:27:35 -0600 edited question Build and install the Python 3 module

Build and install the Python 3 module I built OpenCV 4.x from a git checkout with the necessary options to build the Pyt

2020-05-18 19:27:11 -0600 asked a question Build and install the Python 3 module

Build and install the Python 3 module I built OpenCV 4.x from a git checkout with the necessary options to build the Pyt

2020-02-09 00:15:14 -0600 asked a question Error using grayscale input on YOLOv3 network

Error using grayscale input on YOLOv3 network I have a YOLOv3 network based on this config file with a notable change to

2019-08-01 12:38:15 -0600 received badge  Enthusiast
2019-07-31 16:23:24 -0600 commented question Improving an algorithm for detecting fish in a canal

Thanks for your comment. Yes, they can swim at any place vertically in the image. Though I'm sure that if you looked at

2019-07-30 13:22:09 -0600 edited question Improving an algorithm for detecting fish in a canal

Improving an algorithm for detecting fish in a canal I have many hours of video captured by an infrared camera placed by

2019-07-30 13:20:23 -0600 asked a question Improving an algorithm for detecting fish in a canal

Improving an algorithm for detecting fish in a canal I have many hours of video captured by an infrared camera placed by

2019-06-18 10:48:02 -0600 received badge  Notable Question (source)
2018-11-25 09:56:00 -0600 received badge  Popular Question (source)
2017-05-17 15:22:20 -0600 commented question Stereo rectification with dissimilar cameras

The calibration matrices were provided to me by the people who built the system, and one or both of the cameras may be changed in the future, so I don't want to have to recalibrate on my own each time.

2017-05-17 14:12:54 -0600 asked a question Stereo rectification with dissimilar cameras

I have a stereo camera system with two different cameras, with different focal lengths, optical center points, and image resolutions. They are positioned horizontally and the relative rotation is negligible.

I've been given the intrinsic matrices for each camera, their distortion coefficients, as well as the rotation matrix and translation vector describing their relationship.

I want to rectify a pair of photos taken by the cameras at the same time. However, the results have been complete garbage.

I first tried ignoring that the image resolutions are different and using cv2.stereoRectify, then cv2.initUndistortRectifyMap, then cv2.remap. Since this didn't work, I added a preprocessing step to scale both images to the same dimensions. The algorithm is now:

  1. Remove distortion from each image using cv2.undistort
  2. Scale the images to the same width and height with cv2.resize
  3. Transform the focal length and optical center points of the camera matrices accordingly (per this answer)
  4. Perform cv2.stereoRectify with the new camera matrices and zero distortion
  5. Compute the rectification map with cv2.initUndistortRectifyMap for each camera
  6. Apply the rectification map with cv2.remap on each image

However, the output is again garbage. I've re-read the code to make sure I didn't make any copy-paste errors, compared it with similar implementations, and consulted the relevant chapters of the "Learning OpenCV 3" book. I've also written out image files at each step to make sure the undistortion and scaling are correct.

Are there any sanity checks I can do to make sure that the camera matrices I'm receiving are correct?

# Undistort the images without rectifying them
left_ud = cv2.undistort(left, left_camera_matrix, left_distortion_coeffs)
right_ud  = cv2.undistort(right, right_camera_matrix, right_distortion_coeffs)

# Now scale the images to the same width and height
mw = max(left_ud.shape[1], right_ud.shape[1])
mh = max(left_ud.shape[0], right_ud.shape[0])

left_s = cv2.resize(left_ud, (mw, mh), interpolation=cv2.INTER_CUBIC)
right_s  = cv2.resize(right_ud, (mw, mh), interpolation=cv2.INTER_CUBIC)

# Adjust the camera matrices for the scaling; the order must be (transform) * (camera), not (camera) * (transform)
left_camera_matrix = np.array([
  [float(mw) / left_ud.shape[1], 0, 0],
  [0, float(mh) / left_ud.shape[0], 0],
  [0, 0, 1]
]).dot(left_camera_matrix)
right_camera_matrix = np.array([
  [float(mw) / right_ud.shape[1], 0, 0],
  [0, float(mh) / right_ud.shape[0], 0],
  [0, 0, 1]
]).dot(right_camera_matrix)

# Clear the distortion coefficients
left_distortion_coeffs = right_distortion_coeffs = np.zeros((1, 5))


# Rectify both cameras
R1, R2, P1, P2, Q, left_roi, right_roi = cv2.stereoRectify(
  left_camera_matrix, left_distortion_coeffs,
  right_camera_matrix, right_distortion_coeffs,
  (mw, mh),
  R, T,
  flags=cv2.CALIB_ZERO_DISPARITY,
  alpha=0.0  # tried different values here
)

# Now compute the rectification maps and apply them
map1, map2 = cv2.initUndistortRectifyMap(
  left_camera_matrix, left_distortion_coeffs,
  R1, P1[:,:3],
  (mw, mh),
  m1type = cv2.CV_32FC1
)

left_out = cv2.remap(left_s, map1, map2, cv2.INTER_LINEAR)
cv2.rectangle(left_out, (left_roi[0], left_roi[1]), (left_roi[0]+left_roi[2], left_roi[1]+left_roi[3]), (0, 255, 0), thickness=3)

map1, map2 = cv2.initUndistortRectifyMap ...
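
One sanity check I can think of (my own sketch, not from an answer): with the given R, T, and intrinsics, hand-picked corresponding pixels in the two images should satisfy the epipolar constraint x_r^T F x_l ≈ 0, where F = K_r^-T [T]_x R K_l^-1. Here pts_left and pts_right are hypothetical hand-labeled matches:

import numpy as np

def skew(t):
    # cross-product matrix [t]_x, such that skew(t).dot(v) == np.cross(t, v)
    return np.array([[    0, -t[2],  t[1]],
                     [ t[2],     0, -t[0]],
                     [-t[1],  t[0],     0]])

# fundamental matrix implied by the provided calibration
E = skew(T.ravel()).dot(R)
F = np.linalg.inv(right_camera_matrix).T.dot(E).dot(np.linalg.inv(left_camera_matrix))

# pts_left, pts_right: hypothetical Nx2 arrays of matched pixel coordinates
for pl, pr in zip(pts_left, pts_right):
    xl = np.append(pl, 1.0)
    xr = np.append(pr, 1.0)
    print(abs(xr.dot(F).dot(xl)))  # near zero (relative to the scale of F) if consistent
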
2017-05-11 00:04:36 -0600 received badge  Critic (source)
2017-04-27 09:15:27 -0600 received badge  Student (source)
2017-04-27 08:29:35 -0600 received badge  Editor (source)
2017-04-27 07:57:47 -0600 asked a question Perspective transform without crop

I have two images, src and dst. I'm trying to perform some transformations on src to make it align better with dst.

One of the first transformations I'm applying is a perspective transform. I have some landmark points on both images, and I'm assuming that the landmarks fall on a plane and that all that has changed is the camera's perspective. I'm using cv2.findHomography to find the transformation matrix which represents the change in the camera.

However, if I then apply this transformation to src, some of the image might be transformed outside of my viewport, causing the image to be cropped. For instance, the top left corner (0, 0) might be transformed to (-10, 10), which means this part of the image is lost.

So I'm trying to perform the transformation and get an uncropped image.

I've played around with using cv2.perspectiveTransform on a list of points representing the corners of src, and then using cv2.boundingRect, which tells me the size of the array I need to store the uncropped image. But I can't figure out how to translate the image so that none of its points get transformed out of bounds.

If I translate src before I apply the transformation, then I think I've "invalidated" the transformation. So I think I have to modify the transformation matrix somehow to apply the translation at the same time.


An answer on StackOverflow from Matt Freeman, on a question titled "OpenCV warpperspective" (I cannot link to it due to stupid karma rules), seemed promising, but didn't quite work.

# Compute the perspective transform matrix for the points
ph, _ = cv2.findHomography(pts_src, pts_dst)

# Find the corners after the transform has been applied
height, width = src.shape[:2]
corners = np.array([
  [0, 0],
  [0, height - 1],
  [width - 1, height - 1],
  [width - 1, 0]
])
corners = cv2.perspectiveTransform(np.float32([corners]), ph)[0]

# Find the bounding rectangle
bx, by, bwidth, bheight = cv2.boundingRect(corners)

# Compute the translation homography that will move (bx, by) to (0, 0)
th = np.array([
  [ 1, 0, -bx ],
  [ 0, 1, -by ],
  [ 0, 0,   1 ]
])

# Combine the homographies
pth = ph.dot(th)

# Apply the transformation to the image
warped = cv2.warpPerspective(src, pth, (bwidth, bheight), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_CONSTANT)

If this is correct, then I should be able to find the bounding rectangle again and it should be (0, 0, bwidth, bheight):

corners = np.array([
  [0, 0],
  [0, height - 1],
  [width - 1, height - 1],
  [width - 1, 0]
])
corners = cv2.perspectiveTransform(np.float32([corners]), pth)[0]    
bx2, by2, bwidth2, bheight2 = cv2.boundingRect(corners)

print(bx, by, bwidth, bheight)
print(bx2, by2, bwidth2, bheight2)

Instead I get

-229 -82 947 1270
265 134 671 1096

Ugh, due to more stupid karma rules I can't even answer my own question.

Matt Freeman wrote,

Homographies can be combined using matrix multiplication (which is why they are so powerful). If A and B are homographies, then AB represents the homography ...
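
If that is right, then I believe the bug above is the order of the composition: pth = ph.dot(th) applies the translation before the warp, but it needs to happen after. A hedged sketch of the correction:

# translate *after* warping: (th . ph) x == th (ph x), i.e. ph is applied first
pth = th.dot(ph)

warped = cv2.warpPerspective(src, pth, (bwidth, bheight),
                             flags=cv2.INTER_CUBIC,
                             borderMode=cv2.BORDER_CONSTANT)

With this ordering, the bounding-rect check above should report an origin of (0, 0) and the same width and height on both lines.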
